These classes enable developers to identify faces, facial contours, and landmarks within images, offering detailed facial feature information.
The `MLKFaceDetector` class is central to the process, handling face detection, while the other classes provide data structures for representing detected features.
Developers can customize detection behavior by configuring an `MLKFaceDetectorOptions` instance and passing it to the detector, as in the sketch below.
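A minimal sketch of that configuration step in Objective-C, assuming the standard ML Kit face detection API (the `faceDetectorWithOptions:` factory method and the option constants shown here); exact names may vary by SDK version:

```objc
@import MLKitFaceDetection;

// Configure the detector: favor accuracy and request landmarks and contours.
MLKFaceDetectorOptions *options = [[MLKFaceDetectorOptions alloc] init];
options.performanceMode = MLKFaceDetectorPerformanceModeAccurate;
options.landmarkMode = MLKFaceDetectorLandmarkModeAll;
options.contourMode = MLKFaceDetectorContourModeAll;

// Create a detector instance bound to these options.
MLKFaceDetector *faceDetector = [MLKFaceDetector faceDetectorWithOptions:options];
```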
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Missing the information I need","missingTheInformationINeed","thumb-down"],["Too complicated / too many steps","tooComplicatedTooManySteps","thumb-down"],["Out of date","outOfDate","thumb-down"],["Samples / code issue","samplesCodeIssue","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-07-10 UTC."],[],["The core content details five globally available classes for iOS face detection: `MLKFace`, representing a detected human face; `MLKFaceContour`, defining a face contour; `MLKFaceDetector`, which detects faces; `MLKFaceDetectorOptions`, used to configure the face detector; and `MLKFaceLandmark`, identifying specific landmarks on a face. Each class is an Objective-C interface, inheriting from `NSObject`. These classes collectively enable detailed face detection and feature identification within images.\n"]]