# Cloud AutoML V1 Client - Class ClassificationEvaluationMetrics (2.0.5)

Reference documentation and code samples for the Cloud AutoML V1 Client class ClassificationEvaluationMetrics.

Model evaluation metrics for classification problems.

Note: For Video Classification these metrics only describe the quality of the Video Classification predictions of "segment_classification" type.

Generated from protobuf message `google.cloud.automl.v1.ClassificationEvaluationMetrics`

Namespace
---------

Google \ Cloud \ AutoMl \ V1

Methods
-------

### __construct

Constructor. The `data` array accepts keys including the following:

↳ confidence_metrics_entry
Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE. ROC and precision-recall curves, and other aggregated metrics are derived from them. The confidence metrics entries may also be supplied for additional values of position_threshold, but from these no aggregated metrics are computed.

↳ confusion_matrix
Output only. Confusion matrix of the evaluation. Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for evaluation per label.

↳ annotation_spec_id
array
Output only. The annotation spec ids used for this evaluation.
### getAuPrc

Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.

Returns: `float`
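For intuition on what "micro-averaged" means here: every (example, label) prediction is pooled into a single binary list before the precision-recall curve and its area are computed. The following is an illustrative pure-Python sketch, not part of the PHP client; the data and the step-summation approximation are hypothetical.

```python
def precision_recall_points(y_true, scores):
    """Precision/recall at each pooled score threshold (binary labels)."""
    pairs = sorted(zip(scores, y_true), reverse=True)
    total_pos = sum(y_true)
    points = []  # (recall, precision), one per prediction admitted
    tp = fp = 0
    for _score, label in pairs:
        tp += label
        fp += 1 - label
        points.append((tp / total_pos, tp / (tp + fp)))
    return points

def au_prc(points):
    """Area under the PR curve by left-step summation over recall."""
    area, prev_recall = 0.0, 0.0
    for recall, precision in points:
        area += (recall - prev_recall) * precision
        prev_recall = recall
    return area

# Micro-averaging: per-class (truth, score) pairs from three 3-label
# examples, flattened into one binary problem.
y_true = [1, 0, 0,  0, 1, 0,  0, 0, 1]
scores = [0.8, 0.4, 0.1,  0.3, 0.2, 0.1,  0.2, 0.2, 0.6]
print(round(au_prc(precision_recall_points(y_true, scores)), 3))  # → 0.867
```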
### setAuPrc

Output only. The Area Under Precision-Recall Curve metric. Micro-averaged for the overall evaluation.

Parameter: `var` (`float`)

Returns: `$this`
### getAuRoc

Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.

Returns: `float`
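AUC-ROC has an equivalent rank interpretation: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal Python sketch of that formulation (illustrative only, with made-up scores; not the PHP client API):

```python
def au_roc(y_true, scores):
    """AUC-ROC via the rank (Wilcoxon) formulation: the fraction of
    positive/negative pairs ranked correctly, counting ties as half."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(au_roc([1, 1, 0, 0], [0.9, 0.4, 0.5, 0.1]))  # → 0.75
```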
### setAuRoc

Output only. The Area Under Receiver Operating Characteristic curve metric. Micro-averaged for the overall evaluation.

Parameter: `var` (`float`)

Returns: `$this`
### getLogLoss

Output only. The Log Loss metric.

Returns: `float`
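Log loss is the mean negative log-probability that the model assigned to each example's true label. As a language-agnostic sketch (hypothetical 3-class data; not part of the PHP client):

```python
import math

def log_loss(y_true, probs, eps=1e-15):
    """Multiclass log loss: mean of -log(p) over the true-label
    probabilities, clamped away from 0 to avoid log(0)."""
    total = 0.0
    for label, row in zip(y_true, probs):
        p = min(max(row[label], eps), 1 - eps)
        total += -math.log(p)
    return total / len(y_true)

# Three examples, three classes; true labels 0, 2, 1.
print(round(log_loss([0, 2, 1],
                     [[0.7, 0.2, 0.1],
                      [0.1, 0.2, 0.7],
                      [0.3, 0.4, 0.3]]), 4))  # → 0.5432
```

Lower is better; a perfectly confident, always-correct model scores 0.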
### setLogLoss

Output only. The Log Loss metric.

Parameter: `var` (`float`)

Returns: `$this`
### getConfidenceMetricsEntry

Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE.

ROC and precision-recall curves, and other aggregated metrics, are derived from these entries. Confidence metrics entries may also be supplied for additional values of position_threshold, but no aggregated metrics are computed from those.
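To make the threshold grid concrete: each entry reports classification metrics computed after discarding predictions below one confidence threshold. A pure-Python sketch of that sweep (illustrative data; not the PHP client API, and the grid below simply reproduces the thresholds listed above):

```python
def metrics_at_threshold(y_true, scores, t):
    """Precision, recall, and false positive rate when only predictions
    with confidence >= t are kept (binary case)."""
    tp = sum(1 for y, s in zip(y_true, scores) if y == 1 and s >= t)
    fp = sum(1 for y, s in zip(y_true, scores) if y == 0 and s >= t)
    fn = sum(1 for y, s in zip(y_true, scores) if y == 1 and s < t)
    tn = sum(1 for y, s in zip(y_true, scores) if y == 0 and s < t)
    precision = tp / (tp + fp) if tp + fp else 1.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return precision, recall, fpr

# The grid named above: 0.00..0.95 in steps of 0.05, then 0.96..0.99.
thresholds = [i / 100 for i in range(0, 100, 5)] + [0.96, 0.97, 0.98, 0.99]

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2, 0.15, 0.05]
# One (precision, recall, fpr) point per threshold; plotting recall vs.
# precision gives the PR curve, fpr vs. recall gives the ROC curve.
curve = [metrics_at_threshold(y_true, scores, t) for t in thresholds]
```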
### setConfidenceMetricsEntry

Output only. Metrics for each confidence_threshold in 0.00,0.05,0.10,...,0.95,0.96,0.97,0.98,0.99 and position_threshold = INT32_MAX_VALUE.

ROC and precision-recall curves, and other aggregated metrics, are derived from these entries. Confidence metrics entries may also be supplied for additional values of position_threshold, but no aggregated metrics are computed from those.
### getConfusionMatrix

Output only. Confusion matrix of the evaluation.

Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for evaluation per label.
### hasConfusionMatrix

### clearConfusionMatrix

### setConfusionMatrix

Output only. Confusion matrix of the evaluation.

Only set for MULTICLASS classification problems where the number of labels is no more than 10. Only set for model-level evaluation, not for evaluation per label.
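The confusion matrix itself is a simple structure: one row per true label, one column per predicted label, with cell (i, j) counting examples of true label i predicted as label j. A minimal Python sketch with hypothetical 3-label data (not the PHP client API):

```python
def confusion_matrix(y_true, y_pred, num_labels):
    """rows[i][j]: count of examples with true label i predicted as j.
    Diagonal cells are correct predictions; off-diagonal are errors."""
    rows = [[0] * num_labels for _ in range(num_labels)]
    for t, p in zip(y_true, y_pred):
        rows[t][p] += 1
    return rows

m = confusion_matrix([0, 0, 1, 2, 2, 2], [0, 1, 1, 2, 2, 0], 3)
# m[2][0] == 1: one class-2 example was misclassified as class 0.
```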
### getAnnotationSpecId

Output only. The annotation spec ids used for this evaluation.

### setAnnotationSpecId

Output only. The annotation spec ids used for this evaluation.

Last updated 2025-09-04 UTC.