Reference documentation and code samples for the Google Cloud Datalabeling V1beta1 Client class Evaluation.
Describes an evaluation between a machine learning model's predictions and
ground truth labels. Created when an EvaluationJob runs successfully.
Generated from protobuf message google.cloud.datalabeling.v1beta1.Evaluation
Namespace
Google \ Cloud \ DataLabeling \ V1beta1
Methods
__construct
Constructor.
Parameters
Name
Description
data
array
Optional. Data for populating the Message object.
↳ name
string
Output only. Resource name of an evaluation. The name has the following format: "projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}"
↳ evaluation_metrics
Google\Cloud\DataLabeling\V1beta1\EvaluationMetrics
Output only. Metrics comparing predictions to ground truth labels.
↳ annotation_type
int
Output only. Type of task that the model version being evaluated performs, as defined in the evaluationJobConfig.inputConfig.annotationType field of the evaluation job that created this evaluation.
↳ evaluated_item_count
int|string
Output only. The number of items in the ground truth dataset that were used for this evaluation. Only populated when the evaluation is for certain AnnotationTypes.
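A minimal construction sketch follows; the project, dataset, and evaluation IDs and the annotation type constant are illustrative assumptions, not values from this page.

use Google\Cloud\DataLabeling\V1beta1\AnnotationType;
use Google\Cloud\DataLabeling\V1beta1\Evaluation;

// Hypothetical identifiers used purely for illustration.
$evaluation = new Evaluation([
    'name' => 'projects/my-project/datasets/my-dataset/evaluations/my-evaluation',
    'annotation_type' => AnnotationType::IMAGE_CLASSIFICATION_ANNOTATION,
    'evaluated_item_count' => 1000,
]);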
getName
Output only. Resource name of an evaluation. The name has the following
format:
"projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}'
Returns
Type
Description
string
setName
Output only. Resource name of an evaluation. The name has the following
format:
"projects/{project_id}/datasets/{dataset_id}/evaluations/{evaluation_id}'
Parameter
Name
Description
var
string
Returns
Type
Description
$this
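For illustration, a short sketch of the name accessor pair; the resource name shown is a placeholder following the documented format.

// Placeholder resource name used for illustration only.
$evaluation->setName('projects/my-project/datasets/my-dataset/evaluations/my-evaluation');
echo $evaluation->getName();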
getConfig
Output only. Options used in the evaluation job that created this
evaluation.
Returns
Type
Description
Google\Cloud\DataLabeling\V1beta1\EvaluationConfig|null
getAnnotationType
Output only. Type of task that the model version being evaluated performs,
as defined in the evaluationJobConfig.inputConfig.annotationType field of the evaluation job that created this evaluation.
Returns
Type
Description
int
setAnnotationType
Output only. Type of task that the model version being evaluated performs,
as defined in the evaluationJobConfig.inputConfig.annotationType field of the evaluation job that created this evaluation.
Parameter
Name
Description
var
int
Returns
Type
Description
$this
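As a sketch, the integer returned by getAnnotationType() can be mapped back to its enum constant name, assuming the standard protobuf PHP enum wrapper generated for AnnotationType.

use Google\Cloud\DataLabeling\V1beta1\AnnotationType;

$type = $evaluation->getAnnotationType();
// name() resolves the integer value to its enum constant name,
// for example "IMAGE_CLASSIFICATION_ANNOTATION".
echo AnnotationType::name($type);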
getEvaluatedItemCount
Output only. The number of items in the ground truth dataset that were used
for this evaluation. Only populated when the evaluation is for certain
AnnotationTypes.
Returns
Type
Description
int|string
setEvaluatedItemCount
Output only. The number of items in the ground truth dataset that were used
for this evaluation. Only populated when the evaluation is for certain
AnnotationTypes.
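The int|string return type reflects how protobuf int64 fields are surfaced in PHP, where values outside the platform integer range may be returned as strings. A hedged usage sketch:

// Cast defensively: int64 values can arrive as strings on 32-bit PHP builds.
$count = (int) $evaluation->getEvaluatedItemCount();
printf("Evaluated %d ground truth items\n", $count);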
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Google Cloud Datalabeling V1beta1 Client - Class Evaluation (0.6.3)\n\nVersion latestkeyboard_arrow_down\n\n- [0.6.3 (latest)](/php/docs/reference/cloud-datalabeling/latest/V1beta1.Evaluation)\n- [0.6.2](/php/docs/reference/cloud-datalabeling/0.6.2/V1beta1.Evaluation)\n- [0.5.7](/php/docs/reference/cloud-datalabeling/0.5.7/V1beta1.Evaluation)\n- [0.4.2](/php/docs/reference/cloud-datalabeling/0.4.2/V1beta1.Evaluation)\n- [0.3.1](/php/docs/reference/cloud-datalabeling/0.3.1/V1beta1.Evaluation)\n- [0.2.0](/php/docs/reference/cloud-datalabeling/0.2.0/V1beta1.Evaluation)\n- [0.1.14](/php/docs/reference/cloud-datalabeling/0.1.14/V1beta1.Evaluation) \n| **Beta**\n|\n|\n| This library is covered by the [Pre-GA Offerings Terms](/terms/service-terms#1)\n| of the Terms of Service. Pre-GA libraries might have limited support,\n| and changes to pre-GA libraries might not be compatible with other pre-GA versions.\n| For more information, see the\n[launch stage descriptions](/products#product-launch-stages). \nReference documentation and code samples for the Google Cloud Datalabeling V1beta1 Client class Evaluation.\n\nDescribes an evaluation between a machine learning model's predictions and\nground truth labels. Created when an [EvaluationJob](/php/docs/reference/cloud-datalabeling/latest/V1beta1.EvaluationJob) runs successfully.\n\nGenerated from protobuf message `google.cloud.datalabeling.v1beta1.Evaluation`\n\nNamespace\n---------\n\nGoogle \\\\ Cloud \\\\ DataLabeling \\\\ V1beta1\n\nMethods\n-------\n\n### __construct\n\nConstructor.\n\n### getName\n\nOutput only. Resource name of an evaluation. The name has the following\nformat:\n\"projects/\u003cvar translate=\"no\"\u003e{project_id}\u003c/var\u003e/datasets/\u003cvar translate=\"no\"\u003e{dataset_id}\u003c/var\u003e/evaluations/\u003cvar translate=\"no\"\u003e{evaluation_id\u003c/var\u003e}'\n\n### setName\n\nOutput only. Resource name of an evaluation. The name has the following\nformat:\n\"projects/\u003cvar translate=\"no\"\u003e{project_id}\u003c/var\u003e/datasets/\u003cvar translate=\"no\"\u003e{dataset_id}\u003c/var\u003e/evaluations/\u003cvar translate=\"no\"\u003e{evaluation_id\u003c/var\u003e}'\n\n### getConfig\n\nOutput only. Options used in the evaluation job that created this\nevaluation.\n\n### hasConfig\n\n### clearConfig\n\n### setConfig\n\nOutput only. Options used in the evaluation job that created this\nevaluation.\n\n### getEvaluationJobRunTime\n\nOutput only. Timestamp for when the evaluation job that created this\nevaluation ran.\n\n### hasEvaluationJobRunTime\n\n### clearEvaluationJobRunTime\n\n### setEvaluationJobRunTime\n\nOutput only. Timestamp for when the evaluation job that created this\nevaluation ran.\n\n### getCreateTime\n\nOutput only. Timestamp for when this evaluation was created.\n\n### hasCreateTime\n\n### clearCreateTime\n\n### setCreateTime\n\nOutput only. Timestamp for when this evaluation was created.\n\n### getEvaluationMetrics\n\nOutput only. 
Metrics comparing predictions to ground truth labels.\n\n### hasEvaluationMetrics\n\n### clearEvaluationMetrics\n\n### setEvaluationMetrics\n\nOutput only. Metrics comparing predictions to ground truth labels.\n\n### getAnnotationType\n\nOutput only. Type of task that the model version being evaluated performs,\nas defined in the\n[evaluationJobConfig.inputConfig.annotationType](/php/docs/reference/cloud-datalabeling/latest/V1beta1.EvaluationJobConfig#_Google_Cloud_DataLabeling_V1beta1_EvaluationJobConfig__getInputConfig__)\nfield of the evaluation job that created this evaluation.\n\n### setAnnotationType\n\nOutput only. Type of task that the model version being evaluated performs,\nas defined in the\n[evaluationJobConfig.inputConfig.annotationType](/php/docs/reference/cloud-datalabeling/latest/V1beta1.EvaluationJobConfig#_Google_Cloud_DataLabeling_V1beta1_EvaluationJobConfig__getInputConfig__)\nfield of the evaluation job that created this evaluation.\n\n### getEvaluatedItemCount\n\nOutput only. The number of items in the ground truth dataset that were used\nfor this evaluation. Only populated when the evaulation is for certain\nAnnotationTypes.\n\n### setEvaluatedItemCount\n\nOutput only. The number of items in the ground truth dataset that were used\nfor this evaluation. Only populated when the evaulation is for certain\nAnnotationTypes."]]