Machine learning with ARCore
You can use the camera feed that ARCore captures in a machine learning pipeline
with ML Kit and the Google Cloud Vision API to identify real-world objects and create an
intelligent augmented reality experience.
The ARCore ML Kit sample, written in Kotlin for Android, uses a machine learning
model to classify objects in the camera's view and attaches a label to each detected object
in the virtual scene.
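The core of that pipeline is acquiring the CPU image behind the current ARCore frame and passing it to an ML Kit detector. The Kotlin sketch below shows one way to wire this up; the `imageRotationDegrees` parameter and the step that places the label in the scene are illustrative assumptions, not code taken from the sample.

```kotlin
import android.media.Image
import com.google.ar.core.Frame
import com.google.ar.core.exceptions.NotYetAvailableException
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.objects.ObjectDetection
import com.google.mlkit.vision.objects.defaults.ObjectDetectorOptions

// ML Kit object detector that classifies objects in single images.
val objectDetector = ObjectDetection.getClient(
    ObjectDetectorOptions.Builder()
        .setDetectorMode(ObjectDetectorOptions.SINGLE_IMAGE_MODE)
        .enableClassification()
        .build()
)

// Classify the camera image behind the current ARCore frame.
// `imageRotationDegrees` is assumed to come from your display-rotation helper.
fun analyzeFrame(frame: Frame, imageRotationDegrees: Int) {
    val cameraImage: Image = try {
        frame.acquireCameraImage()          // CPU image (YUV_420_888) of the camera feed
    } catch (e: NotYetAvailableException) {
        return                              // No camera image available for this frame yet
    }

    val inputImage = InputImage.fromMediaImage(cameraImage, imageRotationDegrees)
    objectDetector.process(inputImage)
        .addOnSuccessListener { detectedObjects ->
            for (obj in detectedObjects) {
                val label = obj.labels.firstOrNull()?.text ?: "Unknown"
                // Hypothetical: map obj.boundingBox to view coordinates and attach
                // `label` to the corresponding point in the virtual scene.
            }
        }
        .addOnCompleteListener {
            cameraImage.close()             // Release the Image so ARCore can reuse the buffer
        }
}
```

You would typically call a function like this after `Session.update()` returns the current `Frame`, and throttle it so the detector doesn't run on every frame.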
The ML Kit API supports both Android
and iOS development, and the Google Cloud Vision API has both REST and RPC interfaces, so you can achieve the same results as the
ARCore ML Kit sample in your own app built with the Android NDK (C), with iOS, or
with Unity (AR Foundation).
See Use ARCore as input for Machine Learning models for an overview of the patterns you need to implement. Then apply these patterns to your
app built with the Android NDK (C), with iOS, or with Unity (AR Foundation).
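For example, one common pattern for placing a label in the scene is to hit-test the detected object's position and anchor the label to the result. A minimal sketch, assuming the detection's bounding-box center has already been mapped from image coordinates to view coordinates (`viewX` and `viewY` are hypothetical inputs):

```kotlin
import com.google.ar.core.Anchor
import com.google.ar.core.Frame

// Create an anchor at the first hit-test result under the detected object's center.
// Returns null when the hit test finds no geometry to attach to.
fun anchorForDetection(frame: Frame, viewX: Float, viewY: Float): Anchor? {
    val hit = frame.hitTest(viewX, viewY).firstOrNull() ?: return null
    return hit.createAnchor()   // Render the classification label at this anchor's pose
}
```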
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-10-31 UTC."],[],[]]