ArFrame
Per-frame state.
Summary
Enumerations
ArCoordinates2dType {
  AR_COORDINATES_2D_TEXTURE_TEXELS = 0,
  AR_COORDINATES_2D_TEXTURE_NORMALIZED = 1,
  AR_COORDINATES_2D_IMAGE_PIXELS = 2,
  AR_COORDINATES_2D_IMAGE_NORMALIZED = 3,
  AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES = 6,
  AR_COORDINATES_2D_VIEW = 7,
  AR_COORDINATES_2D_VIEW_NORMALIZED = 8
}
ArCoordinates3dType {
  AR_COORDINATES_3D_EIS_TEXTURE_NORMALIZED = 0,
  AR_COORDINATES_3D_EIS_NORMALIZED_DEVICE_COORDINATES = 1
}
Typedefs
Functions
void ArFrame_acquireCamera(const ArSession *session, const ArFrame *frame, ArCamera **out_camera)
ArStatus ArFrame_acquireCameraImage(ArSession *session, ArFrame *frame, ArImage **out_image)
ArStatus ArFrame_acquireDepthImage(const ArSession *session, const ArFrame *frame, ArImage **out_depth_image)
Deprecated. Use ArFrame_acquireDepthImage16Bits instead, which expands the depth range from 8191mm to 65535mm. This deprecated version may be slower than ArFrame_acquireDepthImage16Bits due to the clearing of the top 3 bits per pixel.
ArStatus ArFrame_acquireDepthImage16Bits(const ArSession *session, const ArFrame *frame, ArImage **out_depth_image)
ArStatus ArFrame_acquireImageMetadata(const ArSession *session, const ArFrame *frame, ArImageMetadata **out_metadata)
ArStatus ArFrame_acquirePointCloud(const ArSession *session, const ArFrame *frame, ArPointCloud **out_point_cloud)
ArStatus ArFrame_acquireRawDepthConfidenceImage(const ArSession *session, const ArFrame *frame, ArImage **out_confidence_image)
ArStatus ArFrame_acquireRawDepthImage(const ArSession *session, const ArFrame *frame, ArImage **out_depth_image)
Deprecated. Use ArFrame_acquireRawDepthImage16Bits instead, which expands the depth range from 8191mm to 65535mm. This deprecated version may be slower than ArFrame_acquireRawDepthImage16Bits due to the clearing of the top 3 bits per pixel.
ArStatus ArFrame_acquireRawDepthImage16Bits(const ArSession *session, const ArFrame *frame, ArImage **out_depth_image)
ArStatus ArFrame_acquireSemanticConfidenceImage(const ArSession *session, const ArFrame *frame, ArImage **out_semantic_confidence_image)
ArStatus ArFrame_acquireSemanticImage(const ArSession *session, const ArFrame *frame, ArImage **out_semantic_image)
void ArFrame_create(const ArSession *session, ArFrame **out_frame)
void ArFrame_destroy(ArFrame *frame)
void ArFrame_getAndroidSensorPose(const ArSession *session, const ArFrame *frame, ArPose *out_pose)
Sets out_pose to the pose of the Android Sensor Coordinate System in the world coordinate space for this frame.
void ArFrame_getCameraTextureName(const ArSession *session, const ArFrame *frame, uint32_t *out_texture_id)
void ArFrame_getDisplayGeometryChanged(const ArSession *session, const ArFrame *frame, int32_t *out_geometry_changed)
Checks if the display rotation or viewport geometry changed since the previous call to ArSession_update.
ArStatus ArFrame_getHardwareBuffer(const ArSession *session, const ArFrame *frame, void **out_hardware_buffer)
void ArFrame_getLightEstimate(const ArSession *session, const ArFrame *frame, ArLightEstimate *out_light_estimate)
ArStatus ArFrame_getSemanticLabelFraction(const ArSession *session, const ArFrame *frame, ArSemanticLabel query_label, float *out_fraction)
Retrieves the fraction of the most recent semantics frame that are query_label.
void ArFrame_getTimestamp(const ArSession *session, const ArFrame *frame, int64_t *out_timestamp_ns)
void ArFrame_getUpdatedAnchors(const ArSession *session, const ArFrame *frame, ArAnchorList *out_anchor_list)
void ArFrame_getUpdatedTrackData(const ArSession *session, const ArFrame *frame, const uint8_t *track_id_uuid_16, ArTrackDataList *out_track_data_list)
void ArFrame_getUpdatedTrackables(const ArSession *session, const ArFrame *frame, ArTrackableType filter_type, ArTrackableList *out_trackable_list)
Gets the set of trackables changed by the ArSession_update call that produced this Frame.
void ArFrame_hitTest(const ArSession *session, const ArFrame *frame, float pixel_x, float pixel_y, ArHitResultList *hit_result_list)
void ArFrame_hitTestInstantPlacement(const ArSession *session, const ArFrame *frame, float pixel_x, float pixel_y, float approximate_distance_meters, ArHitResultList *hit_result_list)
void ArFrame_hitTestRay(const ArSession *session, const ArFrame *frame, const float *ray_origin_3, const float *ray_direction_3, ArHitResultList *hit_result_list)
Similar to ArFrame_hitTest, but takes an arbitrary ray in world space coordinates instead of a screen space point.
ArStatus ArFrame_recordTrackData(ArSession *session, const ArFrame *frame, const uint8_t *track_id_uuid_16, const void *payload, size_t payload_size)
void ArFrame_transformCoordinates2d(const ArSession *session, const ArFrame *frame, ArCoordinates2dType input_coordinates, int32_t number_of_vertices, const float *vertices_2d, ArCoordinates2dType output_coordinates, float *out_vertices_2d)
void ArFrame_transformCoordinates3d(const ArSession *session, const ArFrame *frame, ArCoordinates2dType input_coordinates, int32_t number_of_vertices, const float *vertices_2d, ArCoordinates3dType output_coordinates, float *out_vertices_3d)
void ArFrame_transformDisplayUvCoords(const ArSession *session, const ArFrame *frame, int32_t num_elements, const float *uvs_in, float *uvs_out)
Deprecated. Use ArFrame_transformCoordinates2d instead.
Enumerations
ArCoordinates2dType
2d coordinate systems supported by ARCore.
AR_COORDINATES_2D_IMAGE_NORMALIZED
CPU image, (x,y) normalized to [0.0f, 1.0f] range.
AR_COORDINATES_2D_IMAGE_PIXELS
CPU image, (x,y) in pixels. The range of x and y is determined by the CPU image resolution.
AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES
OpenGL Normalized Device Coordinates, display-rotated, (x,y) normalized to [-1.0f, 1.0f] range.
AR_COORDINATES_2D_TEXTURE_NORMALIZED
GPU texture coordinates, (s,t) normalized to [0.0f, 1.0f] range.
AR_COORDINATES_2D_TEXTURE_TEXELS
GPU texture, (x,y) in pixels.
AR_COORDINATES_2D_VIEW
Android view, display-rotated, (x,y) in pixels.
AR_COORDINATES_2D_VIEW_NORMALIZED
Android view, display-rotated, (x,y) normalized to [0.0f, 1.0f] range.
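To illustrate the relationship between two of these conventions, the sketch below maps (x, y) from a [0.0f, 1.0f]-normalized, y-down space into y-up OpenGL NDC. This is a simplified stand-in, not ARCore's implementation: ArFrame_transformCoordinates2d should be used in practice because it also accounts for display rotation and camera cropping, and the y-axis flip here is an assumption about axis direction rather than something stated on this page.

```c
#include <stdint.h>

/* Illustrative only: converts packed (x, y) pairs from a [0,1]-normalized,
 * y-down space to OpenGL NDC [-1,1], y-up. The real
 * ArFrame_transformCoordinates2d additionally handles display rotation
 * and cropping, so prefer it in application code. */
static void normalized_to_ndc(const float *in_xy, float *out_xy,
                              int32_t n_vertices) {
  for (int32_t i = 0; i < n_vertices; ++i) {
    out_xy[2 * i] = in_xy[2 * i] * 2.0f - 1.0f;       /* x: [0,1] -> [-1,1] */
    out_xy[2 * i + 1] = 1.0f - in_xy[2 * i + 1] * 2.0f; /* y: flip direction */
  }
}
```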
ArCoordinates3dType
3d coordinate systems supported by ARCore.
AR_COORDINATES_3D_EIS_NORMALIZED_DEVICE_COORDINATES
Normalized Device Coordinates (NDC), display-rotated, (x,y) normalized to [-1.0f, 1.0f] range to compensate for perspective shift for EIS. Use with ArFrame_transformCoordinates3d. See the Electronic Image Stabilization developer guide for more information.
AR_COORDINATES_3D_EIS_TEXTURE_NORMALIZED
GPU texture coordinates, using the Z component to compensate for perspective shift when using Electronic Image Stabilization (EIS). Use with ArFrame_transformCoordinates3d. See the Electronic Image Stabilization developer guide for more information.
Typedefs
ArFrame
typedef struct ArFrame_ ArFrame
The world state resulting from an update (value type).
- Create with:
ArFrame_create
- Allocate with:
ArSession_update
- Release with:
ArFrame_destroy
Functions
ArFrame_acquireCamera
void ArFrame_acquireCamera ( const ArSession * session , const ArFrame * frame , ArCamera ** out_camera )
Returns the camera object for the session.
Note that this Camera instance is long-lived so the same instance is returned regardless of the frame object this function was called on.
ArFrame_acquireCameraImage
ArStatus ArFrame_acquireCameraImage( ArSession *session, ArFrame *frame, ArImage **out_image )
Returns the CPU image for the current frame.
Caller is responsible for later releasing the image with ArImage_release. Not supported on all devices (see https://developers.google.com/ar/devices).
Return values:
AR_SUCCESS or any of:
- AR_ERROR_INVALID_ARGUMENT - one or more input arguments are invalid.
- AR_ERROR_DEADLINE_EXCEEDED - the input frame is not the current frame.
- AR_ERROR_RESOURCE_EXHAUSTED - the caller app has exceeded maximum number of images that it can hold without releasing.
- AR_ERROR_NOT_YET_AVAILABLE - image with the timestamp of the input frame was not found within a bounded amount of time, or the camera failed to produce the image.
ArFrame_acquireDepthImage
ArStatus ArFrame_acquireDepthImage ( const ArSession * session , const ArFrame * frame , ArImage ** out_depth_image )
Attempts to acquire a depth image that corresponds to the current frame.
The depth image has a single 16-bit plane at index 0, stored in little-endian format. Each pixel contains the distance in millimeters along the camera principal axis. Currently, the three most significant bits are always set to 000. The remaining thirteen bits express values from 0 to 8191, representing depth in millimeters. To extract distance from a depth map, see the Depth API developer guide .
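The 13-bit packing described above can be decoded with plain bit operations. The sketch below is an illustration, not ARCore code: the `plane` pointer and `row_stride` stand in for values an application would obtain from ArImage_getPlaneData and ArImage_getPlaneRowStride on plane index 0.

```c
#include <stdint.h>

/* Decodes one pixel of the (deprecated) ArFrame_acquireDepthImage format:
 * a little-endian 16-bit value whose low 13 bits hold depth in millimeters
 * (0..8191). The top 3 bits are documented as always 000; masking them is
 * defensive. `plane` and `row_stride` stand in for ArImage_getPlaneData /
 * ArImage_getPlaneRowStride results. */
static uint16_t depth_mm_at(const uint8_t *plane, int32_t row_stride,
                            int32_t x, int32_t y) {
  const uint8_t *px = plane + y * row_stride + x * 2;
  uint16_t raw = (uint16_t)(px[0] | (px[1] << 8)); /* assemble little-endian */
  return raw & 0x1FFF;                             /* keep 13 depth bits */
}
```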
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage
, ArFrame_acquireRawDepthImage
and ArFrame_acquireRawDepthConfidenceImage
will all have the exact same size.
Optimal depth accuracy is achieved between 500 millimeters (50 centimeters) and 5000 millimeters (5 meters) from the camera. Error increases quadratically as distance from the camera increases.
Depth is estimated using data from the world-facing cameras, user motion, and hardware depth sensors such as a time-of-flight sensor (or ToF sensor) if available. As the user moves their device through the environment, 3D depth data is collected and cached, which improves the quality of subsequent depth images and reduces the error introduced by camera distance.
If an up-to-date depth image isn't ready for the current frame, the most recent depth image available from an earlier frame will be returned instead. This is expected only to occur on compute-constrained devices. An up-to-date depth image should typically become available again within a few frames.
The image must be released with ArImage_release
once it is no longer needed.
Deprecated.
Deprecated in release 1.31.0. Please use ArFrame_acquireDepthImage16Bits
instead, which expands the depth range from 8191mm to 65535mm. This deprecated version may be slower than ArFrame_acquireDepthImage16Bits
due to the clearing of the top 3 bits per pixel.
session | The ARCore session.
frame | The current frame.
out_depth_image | On successful return, this is filled out with a pointer to an ArImage. On error return, this is filled out with nullptr.
Return values:
AR_SUCCESS or any of:
- AR_ERROR_INVALID_ARGUMENT if the session, frame, or depth image arguments are invalid.
- AR_ERROR_NOT_YET_AVAILABLE if the number of observed camera frames is not yet sufficient for depth estimation; or depth estimation was not possible due to poor lighting, camera occlusion, or insufficient motion observed.
- AR_ERROR_NOT_TRACKING if the session is not in the AR_TRACKING_STATE_TRACKING state, which is required to acquire depth images.
- AR_ERROR_ILLEGAL_STATE if a supported depth mode was not enabled in Session configuration.
- AR_ERROR_RESOURCE_EXHAUSTED if the caller app has exceeded maximum number of depth images that it can hold without releasing.
- AR_ERROR_DEADLINE_EXCEEDED if the provided Frame is not the current one.
ArFrame_acquireDepthImage16Bits
ArStatus ArFrame_acquireDepthImage16Bits ( const ArSession * session , const ArFrame * frame , ArImage ** out_depth_image )
Attempts to acquire a depth image that corresponds to the current frame.
The depth image has format HardwareBuffer.D_16 , which is a single 16-bit plane at index 0, stored in little-endian format. Each pixel contains the distance in millimeters along the camera principal axis, with the representable depth range between 0 millimeters and 65535 millimeters, or about 65 meters.
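Because this format uses the full 16-bit range, decoding is a straight little-endian read. The sketch below is an illustration rather than ARCore code; `plane` and `row_stride` stand in for results of ArImage_getPlaneData and ArImage_getPlaneRowStride on plane index 0.

```c
#include <stdint.h>

/* Decodes one pixel of the ArFrame_acquireDepthImage16Bits format
 * (HardwareBuffer D_16): a little-endian uint16 distance in millimeters,
 * full 0..65535 range, converted here to meters for convenience. */
static float depth_meters_at(const uint8_t *plane, int32_t row_stride,
                             int32_t x, int32_t y) {
  const uint8_t *px = plane + y * row_stride + x * 2;
  uint16_t mm = (uint16_t)(px[0] | (px[1] << 8)); /* assemble little-endian */
  return mm / 1000.0f;                            /* millimeters -> meters */
}
```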
To extract distance from a depth map, see the Depth API developer guide .
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage16Bits
, ArFrame_acquireRawDepthImage16Bits
and ArFrame_acquireRawDepthConfidenceImage
will all have the exact same size.
Optimal depth accuracy is achieved between 500 millimeters (50 centimeters) and 15000 millimeters (15 meters) from the camera, with depth reliably observed up to 25000 millimeters (25 meters). Error increases quadratically as distance from the camera increases.
Depth is estimated using data from the world-facing cameras, user motion, and hardware depth sensors such as a time-of-flight sensor (or ToF sensor) if available. As the user moves their device through the environment, 3D depth data is collected and cached, which improves the quality of subsequent depth images and reduces the error introduced by camera distance.
If an up-to-date depth image isn't ready for the current frame, the most recent depth image available from an earlier frame will be returned instead. This is expected only to occur on compute-constrained devices. An up-to-date depth image should typically become available again within a few frames.
When the Geospatial API and the Depth API are enabled, output images from the Depth API will include terrain and building geometry when in a location with VPS coverage. See the Geospatial Depth Developer Guide for more information.
The image must be released with ArImage_release
once it is no longer needed.
session | The ARCore session.
frame | The current frame.
out_depth_image | On successful return, this is filled out with a pointer to an ArImage. On error return, this is filled out with nullptr.
Return values:
AR_SUCCESS or any of:
- AR_ERROR_INVALID_ARGUMENT if the session, frame, or depth image arguments are invalid.
- AR_ERROR_NOT_YET_AVAILABLE if the number of observed camera frames is not yet sufficient for depth estimation; or depth estimation was not possible due to poor lighting, camera occlusion, or insufficient motion observed.
- AR_ERROR_NOT_TRACKING if the session is not in the AR_TRACKING_STATE_TRACKING state, which is required to acquire depth images.
- AR_ERROR_ILLEGAL_STATE if a supported depth mode was not enabled in Session configuration.
- AR_ERROR_RESOURCE_EXHAUSTED if the caller app has exceeded maximum number of depth images that it can hold without releasing.
- AR_ERROR_DEADLINE_EXCEEDED if the provided Frame is not the current one.
ArFrame_acquireImageMetadata
ArStatus ArFrame_acquireImageMetadata ( const ArSession * session , const ArFrame * frame , ArImageMetadata ** out_metadata )
Gets the camera metadata for the current camera image.
Return values:
AR_SUCCESS or any of:
- AR_ERROR_DEADLINE_EXCEEDED if frame is not the latest frame from ArSession_update.
- AR_ERROR_RESOURCE_EXHAUSTED if too many metadata objects are currently held.
- AR_ERROR_NOT_YET_AVAILABLE if the camera failed to produce metadata for the given frame. Note: this commonly happens for a few frames right after ArSession_resume due to the camera stack bringup.
ArFrame_acquirePointCloud
ArStatus ArFrame_acquirePointCloud ( const ArSession * session , const ArFrame * frame , ArPointCloud ** out_point_cloud )
Acquires the current set of estimated 3d points attached to real-world geometry.
A matching call to ArPointCloud_release
must be made when the application is done accessing the Point Cloud.
Note: This information is for visualization and debugging purposes only. Its characteristics and format are subject to change in subsequent versions of the API.
session | The ARCore session.
frame | The current frame.
out_point_cloud |
Return values:
AR_SUCCESS or any of:
- AR_ERROR_DEADLINE_EXCEEDED if frame is not the latest frame from ArSession_update.
- AR_ERROR_RESOURCE_EXHAUSTED if too many Point Clouds are currently held.
ArFrame_acquireRawDepthConfidenceImage
ArStatus ArFrame_acquireRawDepthConfidenceImage ( const ArSession * session , const ArFrame * frame , ArImage ** out_confidence_image )
Attempts to acquire the confidence image corresponding to the raw depth image of the current frame.
The image must be released via ArImage_release
once it is no longer needed.
Each pixel is an 8-bit unsigned integer representing the estimated confidence of the corresponding pixel in the raw depth image. The confidence value is between 0 and 255, inclusive, with 0 representing the lowest confidence and 255 representing the highest confidence in the measured depth value. Pixels without a valid depth estimate have a confidence value of 0 and a corresponding depth value of 0 (see ArFrame_acquireRawDepthImage16Bits
).
The scaling of confidence values is linear and continuous within this range. Expect to see confidence values represented across the full range of 0 to 255, with values increasing as better observations are made of each location. If an application requires filtering out low-confidence pixels, removing depth pixels below a confidence threshold of half confidence (128) tends to work well.
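The thresholding suggested above is a simple element-wise pass. The sketch below is an illustration under the assumption that both buffers are tightly packed and the same size, which the documentation guarantees for the raw depth and confidence images; the function and parameter names are hypothetical.

```c
#include <stdint.h>

/* Zeroes out depth pixels whose raw-depth confidence is below a threshold.
 * Per the guidance above, 128 (half confidence) tends to work well. The
 * raw depth and confidence images always have the same resolution, so one
 * pixel count covers both buffers. */
static void filter_low_confidence(uint16_t *depth_mm,
                                  const uint8_t *confidence,
                                  int32_t n_pixels, uint8_t threshold) {
  for (int32_t i = 0; i < n_pixels; ++i) {
    if (confidence[i] < threshold) depth_mm[i] = 0; /* discard low-confidence depth */
  }
}
```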
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage16Bits
, ArFrame_acquireRawDepthImage16Bits
and ArFrame_acquireRawDepthConfidenceImage
will all have the exact same size.
session | The ARCore session.
frame | The current frame.
out_confidence_image | On successful return, this is filled out with a pointer to an ArImage. On error return, this is filled out with nullptr.
Return values:
AR_SUCCESS or any of:
- AR_ERROR_INVALID_ARGUMENT if the session, frame, or depth image arguments are invalid.
- AR_ERROR_NOT_YET_AVAILABLE if the number of observed camera frames is not yet sufficient for depth estimation; or depth estimation was not possible due to poor lighting, camera occlusion, or insufficient motion observed.
- AR_ERROR_NOT_TRACKING if the session is not in the AR_TRACKING_STATE_TRACKING state, which is required to acquire depth images.
- AR_ERROR_ILLEGAL_STATE if a supported depth mode was not enabled in Session configuration.
- AR_ERROR_RESOURCE_EXHAUSTED if the caller app has exceeded maximum number of depth images that it can hold without releasing.
- AR_ERROR_DEADLINE_EXCEEDED if the provided ArFrame is not the current one.
ArFrame_acquireRawDepthImage
ArStatus ArFrame_acquireRawDepthImage ( const ArSession * session , const ArFrame * frame , ArImage ** out_depth_image )
Attempts to acquire a "raw", mostly unfiltered, depth image that corresponds to the current frame.
The raw depth image is sparse and does not provide valid depth for all pixels. Pixels without a valid depth estimate have a pixel value of 0 and a corresponding confidence value of 0 (see ArFrame_acquireRawDepthConfidenceImage
).
The depth image has a single 16-bit plane at index 0, stored in little-endian format. Each pixel contains the distance in millimeters along the camera principal axis. Currently, the three most significant bits are always set to 000. The remaining thirteen bits express values from 0 to 8191, representing depth in millimeters. To extract distance from a depth map, see the Depth API developer guide .
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage
, ArFrame_acquireRawDepthImage
and ArFrame_acquireRawDepthConfidenceImage
will all have the exact same size.
Optimal depth accuracy occurs between 500 millimeters (50 centimeters) and 5000 millimeters (5 meters) from the camera. Error increases quadratically as distance from the camera increases.
Depth is primarily estimated using data from the motion of world-facing cameras. As the user moves their device through the environment, 3D depth data is collected and cached, improving the quality of subsequent depth images and reducing the error introduced by camera distance. Depth accuracy and robustness improve if the device has a hardware depth sensor, such as a time-of-flight (ToF) camera.
Not every raw depth image contains a new depth estimate. Typically there are about 10 updates to the raw depth data per second. The depth images between those updates are a 3D reprojection which transforms each depth pixel into a 3D point in space and renders those 3D points into a new raw depth image based on the current camera pose. This effectively transforms raw depth image data from a previous frame to account for device movement since the depth data was calculated. For some applications it may be important to know whether the raw depth image contains new depth data or is a 3D reprojection (for example, to reduce the runtime cost of 3D reconstruction). To do that, compare the current raw depth image timestamp, obtained via ArImage_getTimestamp, with the previously recorded raw depth image timestamp. If they are different, the depth image contains new information.
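The timestamp comparison described above amounts to keeping one piece of state between frames. A minimal sketch, with hypothetical names; the timestamp value would come from ArImage_getTimestamp on the acquired raw depth image:

```c
#include <stdbool.h>
#include <stdint.h>

/* Remembers the last-seen raw depth timestamp so each frame can be
 * classified as carrying new depth data or a 3D reprojection of older
 * data, per the comparison described above. */
typedef struct {
  int64_t last_ns; /* previously recorded raw depth image timestamp */
} RawDepthTracker;

static bool has_new_depth(RawDepthTracker *t, int64_t image_timestamp_ns) {
  bool is_new = image_timestamp_ns != t->last_ns; /* changed => new data */
  t->last_ns = image_timestamp_ns;                /* record for next frame */
  return is_new;
}
```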
The image must be released via ArImage_release
once it is no longer needed.
Deprecated.
Deprecated in release 1.31.0. Please use ArFrame_acquireRawDepthImage16Bits
instead, which expands the depth range from 8191mm to 65535mm. This deprecated version may be slower than ArFrame_acquireRawDepthImage16Bits
due to the clearing of the top 3 bits per pixel.
session | The ARCore session.
frame | The current frame.
out_depth_image | On successful return, this is filled out with a pointer to an ArImage. On error return, this is filled out with nullptr.
Return values:
AR_SUCCESS or any of:
- AR_ERROR_INVALID_ARGUMENT if the session, frame, or depth image arguments are invalid.
- AR_ERROR_NOT_YET_AVAILABLE if the number of observed camera frames is not yet sufficient for depth estimation; or depth estimation was not possible due to poor lighting, camera occlusion, or insufficient motion observed.
- AR_ERROR_NOT_TRACKING if the session is not in the AR_TRACKING_STATE_TRACKING state, which is required to acquire depth images.
- AR_ERROR_ILLEGAL_STATE if a supported depth mode was not enabled in Session configuration.
- AR_ERROR_RESOURCE_EXHAUSTED if the caller app has exceeded maximum number of depth images that it can hold without releasing.
- AR_ERROR_DEADLINE_EXCEEDED if the provided ArFrame is not the current one.
ArFrame_acquireRawDepthImage16Bits
ArStatus ArFrame_acquireRawDepthImage16Bits ( const ArSession * session , const ArFrame * frame , ArImage ** out_depth_image )
Attempts to acquire a "raw", mostly unfiltered, depth image that corresponds to the current frame.
The raw depth image is sparse and does not provide valid depth for all pixels. Pixels without a valid depth estimate have a pixel value of 0 and a corresponding confidence value of 0 (see ArFrame_acquireRawDepthConfidenceImage
).
The depth image has format HardwareBuffer.D_16 , which is a single 16-bit plane at index 0, stored in little-endian format. Each pixel contains the distance in millimeters along the camera principal axis, with the representable depth range between 0 millimeters and 65535 millimeters, or about 65 meters.
To extract distance from a depth map, see the Depth API developer guide .
The actual size of the depth image depends on the device and its display aspect ratio. The size of the depth image is typically around 160x120 pixels, with higher resolutions up to 640x480 on some devices. These sizes may change in the future. The outputs of ArFrame_acquireDepthImage16Bits
, ArFrame_acquireRawDepthImage16Bits
and ArFrame_acquireRawDepthConfidenceImage
will all have the exact same size.
Optimal depth accuracy is achieved between 500 millimeters (50 centimeters) and 15000 millimeters (15 meters) from the camera, with depth reliably observed up to 25000 millimeters (25 meters). Error increases quadratically as distance from the camera increases.
Depth is primarily estimated using data from the motion of world-facing cameras. As the user moves their device through the environment, 3D depth data is collected and cached, improving the quality of subsequent depth images and reducing the error introduced by camera distance. Depth accuracy and robustness improve if the device has a hardware depth sensor, such as a time-of-flight (ToF) camera.
Not every raw depth image contains a new depth estimate. Typically there are about 10 updates to the raw depth data per second. The depth images between those updates are a 3D reprojection which transforms each depth pixel into a 3D point in space and renders those 3D points into a new raw depth image based on the current camera pose. This effectively transforms raw depth image data from a previous frame to account for device movement since the depth data was calculated. For some applications it may be important to know whether the raw depth image contains new depth data or is a 3D reprojection (for example, to reduce the runtime cost of 3D reconstruction). To do that, compare the current raw depth image timestamp, obtained via ArImage_getTimestamp
, with the previously recorded raw depth image timestamp. If they are different, the depth image contains new information.
When the Geospatial API and the Depth API are enabled, output images from the Depth API will include terrain and building geometry when in a location with VPS coverage. See the Geospatial Depth Developer Guide for more information.
The image must be released via ArImage_release
once it is no longer needed.
session | The ARCore session.
frame | The current frame.
out_depth_image | On successful return, this is filled out with a pointer to an ArImage. On error return, this is filled out with nullptr.
Return values:
AR_SUCCESS or any of:
- AR_ERROR_INVALID_ARGUMENT if the session, frame, or depth image arguments are invalid.
- AR_ERROR_NOT_YET_AVAILABLE if the number of observed camera frames is not yet sufficient for depth estimation; or depth estimation was not possible due to poor lighting, camera occlusion, or insufficient motion observed.
- AR_ERROR_NOT_TRACKING if the session is not in the AR_TRACKING_STATE_TRACKING state, which is required to acquire depth images.
- AR_ERROR_ILLEGAL_STATE if a supported depth mode was not enabled in Session configuration.
- AR_ERROR_RESOURCE_EXHAUSTED if the caller app has exceeded maximum number of depth images that it can hold without releasing.
- AR_ERROR_DEADLINE_EXCEEDED if the provided ArFrame is not the current one.
ArFrame_acquireSemanticConfidenceImage
ArStatus ArFrame_acquireSemanticConfidenceImage ( const ArSession * session , const ArFrame * frame , ArImage ** out_semantic_confidence_image )
Attempts to acquire the semantic confidence image corresponding to the current frame.
Each pixel is an 8-bit integer representing the estimated confidence of the corresponding pixel in the semantic image. See the Scene Semantics Developer Guide for more information.
The confidence value is between 0 and 255, inclusive, with 0 representing the lowest confidence and 255 representing the highest confidence in the semantic class prediction (see ArFrame_acquireSemanticImage
).
The image must be released via ArImage_release
once it is no longer needed.
In order to obtain a valid result from this function, you must set the session's ArSemanticMode
to AR_SEMANTIC_MODE_ENABLED
. Use ArSession_isSemanticModeSupported
to query for support for Scene Semantics.
The size of the semantic confidence image is the same size as the image obtained by ArFrame_acquireSemanticImage
.
session | The ARCore session.
frame | The current frame.
out_semantic_confidence_image | On successful return, this is filled out with a pointer to an ArImage, where each pixel denotes the confidence corresponding to the semantic label. On error return, this is filled out with nullptr.
Return values:
AR_SUCCESS or any of:
- AR_ERROR_INVALID_ARGUMENT if the session, frame, or out_semantic_confidence_image arguments are invalid.
- AR_ERROR_NOT_YET_AVAILABLE if no semantic image is available that corresponds to the frame.
- AR_ERROR_RESOURCE_EXHAUSTED if the caller app has exceeded maximum number of images that it can hold without releasing.
- AR_ERROR_DEADLINE_EXCEEDED if the provided ArFrame is not the current one.
ArFrame_acquireSemanticImage
ArStatus ArFrame_acquireSemanticImage ( const ArSession * session , const ArFrame * frame , ArImage ** out_semantic_image )
Attempts to acquire the semantic image corresponding to the current frame.
Each pixel in the image is an 8-bit unsigned integer representing a semantic class label: see ArSemanticLabel
for a list of pixel labels and the Scene Semantics Developer Guide
for more information.
The image must be released via ArImage_release
once it is no longer needed.
In order to obtain a valid result from this function, you must set the session's ArSemanticMode
to AR_SEMANTIC_MODE_ENABLED
. Use ArSession_isSemanticModeSupported
to query for support for Scene Semantics.
The width of the semantic image is currently 256 pixels. The height of the image depends on the device and will match its display aspect ratio.
session | The ARCore session.
frame | The current frame.
out_semantic_image | On successful return, this is filled out with a pointer to an ArImage formatted as UINT8, where each pixel denotes the semantic class. On error return, this is filled out with NULL.
Return values:
AR_SUCCESS or any of:
- AR_ERROR_INVALID_ARGUMENT if the session, frame, or out_semantic_image arguments are invalid.
- AR_ERROR_NOT_YET_AVAILABLE if no semantic image is available that corresponds to the frame.
- AR_ERROR_RESOURCE_EXHAUSTED if the caller app has exceeded maximum number of images that it can hold without releasing.
- AR_ERROR_DEADLINE_EXCEEDED if the provided ArFrame is not the current one.
ArFrame_create
void ArFrame_create ( const ArSession * session , ArFrame ** out_frame )
Allocates a new ArFrame
object, storing the pointer into *out_frame
.
Note: the same ArFrame
can be used repeatedly when calling ArSession_update
.
ArFrame_destroy
void ArFrame_destroy( ArFrame *frame )
Releases an ArFrame
and any references it holds.
ArFrame_getAndroidSensorPose
void ArFrame_getAndroidSensorPose ( const ArSession * session , const ArFrame * frame , ArPose * out_pose )
Sets out_pose
to the pose of the Android Sensor Coordinate System
in the world coordinate space for this frame.
The orientation follows the device's "native" orientation (it is not affected by display rotation) with all axes corresponding to those of the Android sensor coordinates.
See Also:
-
ArCamera_getDisplayOrientedPose
for the pose of the virtual camera. -
ArCamera_getPose
for the pose of the physical camera. -
ArFrame_getTimestamp
for the system time that this pose was estimated for.
Note: This pose is only useful when ArCamera_getTrackingState
returns AR_TRACKING_STATE_TRACKING
and otherwise should not be used.
session | The ARCore session.
frame | The current frame.
out_pose |
ArFrame_getCameraTextureName
void ArFrame_getCameraTextureName ( const ArSession * session , const ArFrame * frame , uint32_t * out_texture_id )
Returns the OpenGL ES camera texture name (ID) associated with this frame.
This is guaranteed to be one of the texture names previously set via ArSession_setCameraTextureNames
or ArSession_setCameraTextureName
. Texture names (IDs) are returned in a round robin fashion in sequential frames.
session | The ARCore session.
frame | The current frame.
out_texture_id | Where to store the texture name (ID).
ArFrame_getDisplayGeometryChanged
void ArFrame_getDisplayGeometryChanged ( const ArSession * session , const ArFrame * frame , int32_t * out_geometry_changed )
Checks if the display rotation or viewport geometry changed since the previous call to ArSession_update
.
The application should re-query ArCamera_getProjectionMatrix
and ArFrame_transformCoordinates2d
whenever this emits non-zero.
ArFrame_getHardwareBuffer
ArStatus ArFrame_getHardwareBuffer ( const ArSession * session , const ArFrame * frame , void ** out_hardware_buffer )
Gets the AHardwareBuffer
for this frame.
See Vulkan Rendering developer guide for more information.
The result in out_hardware_buffer
is only valid when a configuration is active that uses AR_TEXTURE_UPDATE_MODE_EXPOSE_HARDWARE_BUFFER
.
This hardware buffer is only guaranteed to be valid until the next call to ArSession_update()
. If you want to use the hardware buffer beyond that, such as for rendering, you must call AHardwareBuffer_acquire
and then call AHardwareBuffer_release
after your rendering is complete.
session
|
The ARCore session.
|
frame
|
The current frame.
|
out_hardware_buffer
|
The destination
AHardwareBuffer
representing a memory chunk of a camera image. |
AR_SUCCESS
or any of: -
AR_ERROR_INVALID_ARGUMENT
- one or more input arguments are invalid. -
AR_ERROR_DEADLINE_EXCEEDED
- the input frame is not the current frame. -
AR_ERROR_NOT_YET_AVAILABLE
- the camera failed to produce the image.
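The acquire/release pattern described above might be sketched as follows (assuming a session configuration with AR_TEXTURE_UPDATE_MODE_EXPOSE_HARDWARE_BUFFER is active):

```c
// Sketch: keep the frame's AHardwareBuffer alive past the next ArSession_update.
#include "arcore_c_api.h"             // ARCore NDK header
#include <android/hardware_buffer.h>  // AHardwareBuffer_acquire/release
#include <stddef.h>

AHardwareBuffer* acquire_frame_buffer(const ArSession* session,
                                      const ArFrame* frame) {
  void* hardware_buffer = NULL;
  if (ArFrame_getHardwareBuffer(session, frame, &hardware_buffer) != AR_SUCCESS ||
      hardware_buffer == NULL) {
    return NULL;
  }
  AHardwareBuffer* buffer = (AHardwareBuffer*)hardware_buffer;
  AHardwareBuffer_acquire(buffer);  // hold a reference for rendering
  return buffer;  // caller must call AHardwareBuffer_release() when done
}
```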
ArFrame_getLightEstimate
void ArFrame_getLightEstimate ( const ArSession * session , const ArFrame * frame , ArLightEstimate * out_light_estimate )
Gets the current ArLightEstimate
, if Lighting Estimation is enabled.
session
|
The ARCore session.
|
frame
|
The current frame.
|
out_light_estimate
|
The
ArLightEstimate
to fill. This object must have been previously created with ArLightEstimate_create
. |
ArFrame_getSemanticLabelFraction
ArStatus ArFrame_getSemanticLabelFraction ( const ArSession * session , const ArFrame * frame , ArSemanticLabel query_label , float * out_fraction )
Retrieves the fraction of pixels in the most recent semantic frame that are labeled query_label
.
Queries the semantic image provided by ArFrame_acquireSemanticImage
for pixels labeled by query_label
. This call is more efficient than retrieving the ArImage
and performing a pixel-wise search for the detected labels.
session
|
The ARCore session.
|
frame
|
The current frame.
|
query_label
|
The label to search for within the semantic image for this frame.
|
out_fraction
|
The fraction of pixels in the most recent semantic image that contain the query label. This value is in the range 0 to 1. If no pixels are present with that label, or if an invalid label is provided, this call returns 0.
|
AR_SUCCESS
or any of: -
AR_ERROR_INVALID_ARGUMENT
if session, frame, or query_label are invalid. -
AR_ERROR_NOT_YET_AVAILABLE
if no semantic image has been generated yet.
ArFrame_getTimestamp
void ArFrame_getTimestamp ( const ArSession * session , const ArFrame * frame , int64_t * out_timestamp_ns )
Returns the timestamp in nanoseconds when this image was captured.
This can be used to detect dropped frames or measure the camera frame rate. The time base of this value is specifically not
defined, but it is likely similar to clock_gettime(CLOCK_BOOTTIME)
.
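For example, the delta between consecutive frame timestamps yields an instantaneous frame-rate estimate. The helper below is a sketch; in an app the two values would come from ArFrame_getTimestamp on successive frames:

```c
// Sketch: derive frames-per-second from two capture timestamps (nanoseconds).
#include <stdint.h>

double frames_per_second(int64_t prev_timestamp_ns, int64_t curr_timestamp_ns) {
  int64_t delta_ns = curr_timestamp_ns - prev_timestamp_ns;
  if (delta_ns <= 0) return 0.0;   // duplicate or out-of-order timestamp
  return 1e9 / (double)delta_ns;   // nanoseconds per second / per frame
}
```

A delta of roughly 33.3 ms corresponds to about 30 frames per second.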
ArFrame_getUpdatedAnchors
void ArFrame_getUpdatedAnchors ( const ArSession * session , const ArFrame * frame , ArAnchorList * out_anchor_list )
Gets the set of anchors that were changed by the ArSession_update
that produced this Frame.
session
|
The ARCore session
|
frame
|
The current frame.
|
out_anchor_list
|
The list to fill. This list must have already been allocated with
ArAnchorList_create
. If previously used, the list is cleared first. |
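A sketch of the create/fill/iterate/destroy pattern for the anchor list (what to do per anchor is left as a placeholder):

```c
// Sketch: enumerate anchors changed by the most recent ArSession_update.
#include "arcore_c_api.h"  // ARCore NDK header
#include <stddef.h>
#include <stdint.h>

void handle_updated_anchors(const ArSession* session, const ArFrame* frame) {
  ArAnchorList* anchors = NULL;
  ArAnchorList_create(session, &anchors);
  ArFrame_getUpdatedAnchors(session, frame, anchors);

  int32_t count = 0;
  ArAnchorList_getSize(session, anchors, &count);
  for (int32_t i = 0; i < count; ++i) {
    ArAnchor* anchor = NULL;
    ArAnchorList_acquireItem(session, anchors, i, &anchor);
    ArTrackingState state;
    ArAnchor_getTrackingState(session, anchor, &state);
    // ... react to the anchor's new state or pose here ...
    ArAnchor_release(anchor);
  }
  ArAnchorList_destroy(anchors);
}
```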
ArFrame_getUpdatedTrackData
void ArFrame_getUpdatedTrackData ( const ArSession * session , const ArFrame * frame , const uint8_t * track_id_uuid_16 , ArTrackDataList * out_track_data_list )
Gets the set of data recorded to the given track available during playback on this ArFrame
.
If frames are skipped during playback, which can happen when the device is under load, track data from the skipped frames is attached to a later frame, in order.
Note: playback currently continues internally while the session is paused; track data from frames that were processed while the session was paused will be discarded.
session
|
The ARCore session
|
frame
|
The current frame
|
track_id_uuid_16
|
The track ID, as a UUID represented by a 16-byte array.
|
out_track_data_list
|
The list to fill. This list must have already been allocated with
ArTrackDataList_create
. If previously used, the list is cleared first. |
ArFrame_getUpdatedTrackables
void ArFrame_getUpdatedTrackables ( const ArSession * session , const ArFrame * frame , ArTrackableType filter_type , ArTrackableList * out_trackable_list )
Gets the set of trackables of a particular type that were changed by the ArSession_update
call that produced this Frame.
session
|
The ARCore session
|
frame
|
The current frame.
|
filter_type
|
The type(s) of trackables to return.
|
out_trackable_list
|
The list to fill. This list must have already been allocated with
ArTrackableList_create
. If previously used, the list is cleared first. |
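A similar sketch for trackables, filtered to planes (the extent query is just one example of a per-plane action):

```c
// Sketch: enumerate planes changed by the most recent ArSession_update.
#include "arcore_c_api.h"  // ARCore NDK header
#include <stddef.h>
#include <stdint.h>

void handle_updated_planes(const ArSession* session, const ArFrame* frame) {
  ArTrackableList* trackables = NULL;
  ArTrackableList_create(session, &trackables);
  ArFrame_getUpdatedTrackables(session, frame, AR_TRACKABLE_PLANE, trackables);

  int32_t count = 0;
  ArTrackableList_getSize(session, trackables, &count);
  for (int32_t i = 0; i < count; ++i) {
    ArTrackable* trackable = NULL;
    ArTrackableList_acquireItem(session, trackables, i, &trackable);
    // Safe to cast: the list was filtered to AR_TRACKABLE_PLANE above.
    ArPlane* plane = (ArPlane*)trackable;
    float extent_x = 0.0f;
    ArPlane_getExtentX(session, plane, &extent_x);
    // ... update app-side plane state here ...
    ArTrackable_release(trackable);
  }
  ArTrackableList_destroy(trackables);
}
```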
ArFrame_hitTest
void ArFrame_hitTest ( const ArSession * session , const ArFrame * frame , float pixel_x , float pixel_y , ArHitResultList * hit_result_list )
Performs a ray cast from the user's device in the direction of the given location in the camera view.
Intersections with detected scene geometry are returned, sorted by distance from the device; the nearest intersection is returned first.
Note: Significant geometric leeway is given when returning hit results. For example, a plane hit may be generated if the ray came close, but did not actually hit within the plane extents or plane bounds ( ArPlane_isPoseInExtents
and ArPlane_isPoseInPolygon
can be used to determine these cases). A point (Point Cloud) hit is generated when a point is roughly within one finger-width of the provided screen coordinates.
The resulting list is ordered by distance, with the nearest hit first.
Note: If not tracking, the hit_result_list
will be empty.
Note: If called on an old frame (not the latest produced by ArSession_update
), the hit_result_list
will be empty.
Note: When using the front-facing (selfie) camera, the returned hit result list will always be empty, as the camera is not AR_TRACKING_STATE_TRACKING
. Hit testing against tracked faces is not currently supported.
Note: In ARCore 1.24.0 or later on supported devices, if the ArDepthMode
is enabled by calling ArConfig_setDepthMode
, the hit_result_list
includes ArDepthPoint
values that are sampled from the latest computed depth image.
session
|
The ARCore session.
|
frame
|
The current frame.
|
pixel_x
|
Logical X position within the view, as from an Android UI event.
|
pixel_y
|
Logical Y position within the view, as from an Android UI event.
|
hit_result_list
|
The list to fill. This list must have been previously allocated using
ArHitResultList_create
. If the list has been previously used, it will first be cleared. |
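A sketch of a typical tap handler that takes the nearest hit and anchors content there (tap_x and tap_y are assumed to come from an Android MotionEvent):

```c
// Sketch: hit-test a screen tap and create an anchor at the nearest hit.
#include "arcore_c_api.h"  // ARCore NDK header
#include <stddef.h>
#include <stdint.h>

ArAnchor* anchor_from_tap(ArSession* session, const ArFrame* frame,
                          float tap_x, float tap_y) {
  ArHitResultList* hits = NULL;
  ArHitResultList_create(session, &hits);
  ArFrame_hitTest(session, frame, tap_x, tap_y, hits);

  int32_t count = 0;
  ArHitResultList_getSize(session, hits, &count);
  ArAnchor* anchor = NULL;
  if (count > 0) {
    ArHitResult* hit = NULL;
    ArHitResult_create(session, &hit);
    ArHitResultList_getItem(session, hits, 0, hit);  // nearest hit first
    ArHitResult_acquireNewAnchor(session, hit, &anchor);
    ArHitResult_destroy(hit);
  }
  ArHitResultList_destroy(hits);
  return anchor;  // NULL if nothing was hit; caller releases the anchor
}
```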
ArFrame_hitTestInstantPlacement
void ArFrame_hitTestInstantPlacement ( const ArSession * session , const ArFrame * frame , float pixel_x , float pixel_y , float approximate_distance_meters , ArHitResultList * hit_result_list )
Performs a ray cast that can return a result before ARCore establishes full tracking.
The pose and apparent scale of attached objects depends on the ArInstantPlacementPoint
tracking method and the provided approximate_distance_meters
. A discussion of the different tracking methods and the effects of apparent object scale are described in ArInstantPlacementPoint
.
This function will succeed only if ArInstantPlacementMode
is AR_INSTANT_PLACEMENT_MODE_LOCAL_Y_UP
in the ARCore session configuration, the ARCore session tracking state is AR_TRACKING_STATE_TRACKING
, and there are sufficient feature points to track the point in screen space.
session
|
The ARCore session.
|
frame
|
The current frame.
|
pixel_x
|
Logical X position within the view, as from an Android UI event.
|
pixel_y
|
Logical Y position within the view, as from an Android UI event.
|
approximate_distance_meters
|
The distance at which to create an
ArInstantPlacementPoint
. This is only used while the tracking method for the returned point is AR_INSTANT_PLACEMENT_POINT_TRACKING_METHOD_SCREENSPACE_WITH_APPROXIMATE_DISTANCE
. |
hit_result_list
|
The list to fill. If successful the list will contain a single
ArHitResult
, otherwise it will be cleared. The ArHitResult
will have a trackable of type ArInstantPlacementPoint
. The list must have been previously allocated using ArHitResultList_create
. |
ArFrame_hitTestRay
void ArFrame_hitTestRay ( const ArSession * session , const ArFrame * frame , const float * ray_origin_3 , const float * ray_direction_3 , ArHitResultList * hit_result_list )
Similar to ArFrame_hitTest
, but takes an arbitrary ray in world space coordinates instead of a screen space point.
session
|
The ARCore session.
|
frame
|
The current frame.
|
ray_origin_3
|
A pointer to a float[3] array containing the ray origin in world space coordinates.
|
ray_direction_3
|
A pointer to a float[3] array containing the ray direction in world space coordinates. Does not have to be normalized.
|
hit_result_list
|
The list to fill. This list must have been previously allocated using
ArHitResultList_create
. If the list has been previously used, it will first be cleared. |
ArFrame_recordTrackData
ArStatus ArFrame_recordTrackData ( ArSession * session , const ArFrame * frame , const uint8_t * track_id_uuid_16 , const void * payload , size_t payload_size )
Writes a data sample in the specified track.
The samples recorded using this API will be muxed into the recorded MP4 dataset in a corresponding additional MP4 stream.
For smooth playback of the MP4 on video players, and for future compatibility of the MP4 datasets with ARCore's playback of tracks, it is recommended that the samples are recorded at a frequency no higher than 90kHz.
Additionally, if the samples are recorded at a frequency lower than 1Hz, empty padding samples will be automatically recorded at approximately one second intervals to fill in the gaps.
Recording samples introduces additional CPU and/or I/O overhead and may affect app performance.
session
|
The ARCore session
|
frame
|
The current
ArFrame
|
track_id_uuid_16
|
The external track ID, as a UUID represented by a 16-byte array.
|
payload
|
The byte array payload to record
|
payload_size
|
Size in bytes of the payload
|
AR_SUCCESS
or any of: -
AR_ERROR_ILLEGAL_STATE
when either ArSession_getRecordingStatus
is not currently AR_RECORDING_OK
or the system is currently under excess load for images to be produced. The system should not be under such excess load for more than a few frames and an app should try to record the data again during the next frame. -
AR_ERROR_INVALID_ARGUMENT
when any argument is invalid, e.g. null. -
AR_ERROR_DEADLINE_EXCEEDED
when the frame is not the current frame.
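A minimal sketch of recording a small payload on a custom track. The 16-byte track ID below is an arbitrary example, and the same ID is assumed to have been registered on the recording config (via ArRecordingConfig_addTrack) before recording started:

```c
// Sketch: write an app-defined byte payload to a custom MP4 track.
#include "arcore_c_api.h"  // ARCore NDK header
#include <stdint.h>
#include <string.h>

ArStatus record_note(ArSession* session, const ArFrame* frame,
                     const char* note) {
  // Arbitrary example UUID; must match the track added to the recording config.
  static const uint8_t kTrackId[16] = {
      0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0,
      0x12, 0x34, 0x56, 0x78, 0x9a, 0xbc, 0xde, 0xf0};
  return ArFrame_recordTrackData(session, frame, kTrackId,
                                 note, strlen(note));
}
```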
ArFrame_transformCoordinates2d
void ArFrame_transformCoordinates2d ( const ArSession * session , const ArFrame * frame , ArCoordinates2dType input_coordinates , int32_t number_of_vertices , const float * vertices_2d , ArCoordinates2dType output_coordinates , float * out_vertices_2d )
Transforms a list of 2D coordinates from one 2D coordinate system to another 2D coordinate system.
For Android view coordinates ( AR_COORDINATES_2D_VIEW
, AR_COORDINATES_2D_VIEW_NORMALIZED
), the view information is taken from the most recent call to ArSession_setDisplayGeometry
.
Must be called on the most recently obtained ArFrame
object. If this function is called on an older frame, a log message will be printed and out_vertices_2d
will remain unchanged.
Some examples of useful conversions:
- To transform from [0,1] range to screen-quad coordinates for rendering:
AR_COORDINATES_2D_VIEW_NORMALIZED
->AR_COORDINATES_2D_TEXTURE_NORMALIZED
- To transform from [-1,1] range to screen-quad coordinates for rendering:
AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES
->AR_COORDINATES_2D_TEXTURE_NORMALIZED
- To transform a point found by a computer vision algorithm in a cpu image into a point on the screen that can be used to place an Android View (e.g. Button) at that location:
AR_COORDINATES_2D_IMAGE_PIXELS
->AR_COORDINATES_2D_VIEW
- To transform a point found by a computer vision algorithm in a CPU image into a point to be rendered using GL in clip-space ([-1,1] range):
AR_COORDINATES_2D_IMAGE_PIXELS
->AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES
If input_coordinates
is the same as output_coordinates
, the input vertices will be copied to the output vertices unmodified.
session
|
The ARCore session.
|
frame
|
The current frame.
|
input_coordinates
|
The coordinate system used by
vertices_2d
. |
number_of_vertices
|
The number of 2D vertices to transform.
vertices_2d
and out_vertices_2d
must point to arrays of size at least number_of_vertices
* 2. |
vertices_2d
|
Input 2D vertices to transform.
|
output_coordinates
|
The coordinate system to convert to.
|
out_vertices_2d
|
Transformed 2d vertices, can be the same array as
vertices_2d
for in-place transform. |
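For instance, the second conversion listed above (OpenGL NDC to normalized texture coordinates, used to texture a full-screen background quad) can be sketched as:

```c
// Sketch: map a full-screen quad's NDC corners to camera texture coordinates.
#include "arcore_c_api.h"  // ARCore NDK header

// Four vertices, two floats each, in OpenGL normalized device coordinates.
static const float kNdcQuad[8] = {-1.f, -1.f, +1.f, -1.f,
                                  -1.f, +1.f, +1.f, +1.f};

void update_background_uvs(const ArSession* session, const ArFrame* frame,
                           float out_uvs[8]) {
  ArFrame_transformCoordinates2d(
      session, frame,
      AR_COORDINATES_2D_OPENGL_NORMALIZED_DEVICE_COORDINATES,
      4,  // number of vertices (out_uvs holds 4 * 2 floats)
      kNdcQuad,
      AR_COORDINATES_2D_TEXTURE_NORMALIZED,
      out_uvs);
}
```

Calling this only when ArFrame_getDisplayGeometryChanged reports a change avoids redundant work.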
ArFrame_transformCoordinates3d
void ArFrame_transformCoordinates3d ( const ArSession * session , const ArFrame * frame , ArCoordinates2dType input_coordinates , int32_t number_of_vertices , const float * vertices_2d , ArCoordinates3dType output_coordinates , float * out_vertices_3d )
Transforms a list of 2D coordinates from a 2D coordinate space into a 3D coordinate space.
See the Electronic Image Stabilization Developer Guide for more information.
The view information is taken from the most recent call to ArSession_setDisplayGeometry
.
If Electronic Image Stabilization is off, the device coordinates return (-1, -1, 0) -> (1, 1, 0) and texture coordinates return the same coordinates as ArFrame_transformCoordinates2d
with the Z component set to 1.0f.
In order to use EIS, your app should use EIS compensated screen coordinates and camera texture coordinates to pass on to shaders. Use the 2D NDC space coordinates as input to obtain EIS compensated 3D screen coordinates and matching camera texture coordinates.
session
|
The ARCore session.
|
frame
|
The current frame.
|
input_coordinates
|
The coordinate system used by
vertices_2d
. |
number_of_vertices
|
The number of 2D vertices to transform.
vertices_2d
must point to an array of size at least number_of_vertices
* 2, and out_vertices_3d
must point to an array of size at least number_of_vertices
* 3. |
vertices_2d
|
Input 2D vertices to transform.
|
output_coordinates
|
The 3D coordinate system to convert to.
|
out_vertices_3d
|
Transformed 3d vertices.
|
ArFrame_transformDisplayUvCoords
void ArFrame_transformDisplayUvCoords ( const ArSession * session , const ArFrame * frame , int32_t num_elements , const float * uvs_in , float * uvs_out )
Transforms the given texture coordinates to correctly show the background image.
This accounts for the display rotation, and any additional required adjustment. For performance, this function should be called only if ArFrame_getDisplayGeometryChanged
indicates a change.
Deprecated in release 1.7.0. Use ArFrame_transformCoordinates2d
instead.
session
|
The ARCore session
|
frame
|
The current frame.
|
num_elements
|
The number of floats to transform. Must be a multiple of 2.
uvs_in
and uvs_out
must point to arrays of at least this many floats. |
uvs_in
|
Input UV coordinates in normalized screen space.
|
uvs_out
|
Output UV coordinates in texture coordinates.
|