Reference documentation and code samples for the Google Cloud Datalabeling V1beta1 Client class EvaluationJob.
Defines an evaluation job that runs periodically to generate Evaluations. Creating an evaluation
job is the starting point for using continuous evaluation.
Generated from protobuf message google.cloud.datalabeling.v1beta1.EvaluationJob
Namespace
Google \ Cloud \ DataLabeling \ V1beta1
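Because creating an evaluation job is the starting point for continuous evaluation, a minimal sketch of that flow is shown below. It assumes the generated DataLabelingServiceClient from this library and its createEvaluationJob method; the project ID and field values are placeholders.

use Google\Cloud\DataLabeling\V1beta1\DataLabelingServiceClient;
use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;

// Uses Application Default Credentials; the project ID below is a placeholder.
$client = new DataLabelingServiceClient();
$job = new EvaluationJob([
    'description' => 'Continuous evaluation for a text classification model',
    'schedule' => '0 10 * * *',
]);

try {
    // Assumed signature: createEvaluationJob(string $parent, EvaluationJob $job).
    $createdJob = $client->createEvaluationJob('projects/example-project', $job);
    printf('Created evaluation job: %s' . PHP_EOL, $createdJob->getName());
} finally {
    $client->close();
}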
Methods
__construct
Constructor.
Parameters
Name
Description
data
array
Optional. Data for populating the Message object.
↳ name
string
Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
↳ description
string
Required. Description of the job. The description can be up to 25,000 characters long.
↳ state
int
Output only. Describes the current state of the job.
↳ schedule
string
Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
↳ model_version
string
Required. The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
↳ evaluation_job_config
Google\Cloud\DataLabeling\V1beta1\EvaluationJobConfig
Required. Configuration details for the evaluation job.
↳ annotation_spec_set
string
Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
↳ label_missing_ground_truth
bool
Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
↳ create_time
Google\Protobuf\Timestamp
Output only. Timestamp of when this evaluation job was created.
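For illustration, the snippet below populates an EvaluationJob through the constructor's data array. The resource IDs are placeholders, and the EvaluationJobConfig is left empty only to keep the sketch short.

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;
use Google\Cloud\DataLabeling\V1beta1\EvaluationJobConfig;

// Keys mirror the field names listed above; all IDs are placeholders.
$job = new EvaluationJob([
    'description' => 'Nightly evaluation of the sentiment model',
    'schedule' => '0 10 * * *', // at least a 1-day interval
    'model_version' => 'projects/example-project/models/sentiment_model/versions/v1',
    'evaluation_job_config' => new EvaluationJobConfig(),
    'annotation_spec_set' => 'projects/example-project/annotationSpecSets/sentiment_labels',
    'label_missing_ground_truth' => false,
]);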
getName
Output only. After you create a job, Data Labeling Service assigns a name
to the job with the following format:
"projects/{project_id}/evaluationJobs/{evaluation_job_id}"
Returns
Type
Description
string
setName
Output only. After you create a job, Data Labeling Service assigns a name
to the job with the following format:
"projects/{project_id}/evaluationJobs/{evaluation_job_id}"
Parameter
Name
Description
var
string
Returns
Type
Description
$this
getDescription
Required. Description of the job. The description can be up to 25,000
characters long.
Returns
Type
Description
string
setDescription
Required. Description of the job. The description can be up to 25,000
characters long.
Parameter
Name
Description
var
string
Returns
Type
Description
$this
getState
Output only. Describes the current state of the job.
Returns
Type
Description
int
setState
Output only. Describes the current state of the job.
Parameter
Name
Description
var
int
Returns
Type
Description
$this
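The state field is an integer backed by the job's State enum. A small sketch, assuming the generated Google\Cloud\DataLabeling\V1beta1\EvaluationJob\State class and constants such as SCHEDULED:

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;
use Google\Cloud\DataLabeling\V1beta1\EvaluationJob\State;

// $job would normally be fetched from the service; a new message defaults
// to STATE_UNSPECIFIED, so this check is illustrative only.
$job = new EvaluationJob();

if ($job->getState() === State::SCHEDULED) {
    echo 'Job is scheduled for its next run.' . PHP_EOL;
}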
getSchedule
Required. Describes the interval at which the job runs. This interval must
be at least 1 day, and it is rounded to the nearest day. For example, if
you specify a 50-hour interval, the job runs every 2 days.
You can provide the schedule in crontab format or in an English-like
format.
Regardless of what you specify, the job will run at 10:00 AM UTC. Only the
interval from this schedule is used, not the specific time of day.
Returns
Type
Description
string
setSchedule
Required. Describes the interval at which the job runs. This interval must
be at least 1 day, and it is rounded to the nearest day. For example, if
you specify a 50-hour interval, the job runs every 2 days.
You can provide the schedule in crontab format or in an English-like
format.
Regardless of what you specify, the job will run at 10:00 AM UTC. Only the
interval from this schedule is used, not the specific time of day.
Parameter
Name
Description
var
string
Returns
Type
Description
$this
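For example, a daily schedule can be expressed in crontab format; only the interval is honored, and the job always runs at 10:00 AM UTC. A resource-free sketch:

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;

$job = new EvaluationJob();

// Crontab form: the 1-day interval is what matters; the run time is fixed at 10:00 AM UTC.
$job->setSchedule('0 10 * * *');

// An English-like schedule such as 'every 24 hours' is also accepted.
$job->setSchedule('every 24 hours');
echo $job->getSchedule() . PHP_EOL;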
getModelVersion
Required. The AI Platform Prediction model
version to be evaluated. Prediction
input and output is sampled from this model version. When creating an
evaluation job, specify the model version in the following format:
"projects/{project_id}/models/{model_name}/versions/{version_name}"
There can only be one evaluation job per model version.
Returns
Type
Description
string
setModelVersion
Required. The AI Platform Prediction model
version to be evaluated. Prediction
input and output is sampled from this model version. When creating an
evaluation job, specify the model version in the following format:
"projects/{project_id}/models/{model_name}/versions/{version_name}"
There can only be one evaluation job per model version.
Parameter
Name
Description
var
string
Returns
Type
Description
$this
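A short sketch of setting the model version in the required format; the project, model, and version names are placeholders:

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;

$job = new EvaluationJob();

// "projects/{project_id}/models/{model_name}/versions/{version_name}" — placeholder IDs.
$job->setModelVersion('projects/example-project/models/sentiment_model/versions/v1');
echo $job->getModelVersion() . PHP_EOL;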
getEvaluationJobConfig
Required. Configuration details for the evaluation job.
Returns
Type
Description
Google\Cloud\DataLabeling\V1beta1\EvaluationJobConfig|null
hasEvaluationJobConfig
clearEvaluationJobConfig
setEvaluationJobConfig
Required. Configuration details for the evaluation job.
Parameter
Name
Description
var
Google\Cloud\DataLabeling\V1beta1\EvaluationJobConfig
Returns
Type
Description
$this
getAnnotationSpecSet
Required. Name of the AnnotationSpecSet describing all the
labels that your machine learning model outputs. You must create this
resource before you create an evaluation job and provide its name in the
following format:
"projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
Returns
Type
Description
string
setAnnotationSpecSet
Required. Name of the AnnotationSpecSet describing all the
labels that your machine learning model outputs. You must create this
resource before you create an evaluation job and provide its name in the
following format:
"projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
Parameter
Name
Description
var
string
Returns
Type
Description
$this
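The sketch below attaches an evaluation job config and the annotation spec set name; the config is left empty only for brevity, and the resource IDs are placeholders.

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;
use Google\Cloud\DataLabeling\V1beta1\EvaluationJobConfig;

$job = new EvaluationJob();

// A real config would specify input, output, and sampling options; left empty here.
$job->setEvaluationJobConfig(new EvaluationJobConfig());

// "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}" — placeholder IDs.
$job->setAnnotationSpecSet('projects/example-project/annotationSpecSets/sentiment_labels');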
getLabelMissingGroundTruth
Required. Whether you want Data Labeling Service to provide ground truth
labels for prediction input. If you want the service to assign human
labelers to annotate your data, set this to true. If you want to provide
your own ground truth labels in the evaluation job's BigQuery table, set
this to false.
Returns
Type
Description
bool
setLabelMissingGroundTruth
Required. Whether you want Data Labeling Service to provide ground truth
labels for prediction input. If you want the service to assign human
labelers to annotate your data, set this to true. If you want to provide
your own ground truth labels in the evaluation job's BigQuery table, set
this to false.
Parameter
Name
Description
var
bool
Returns
Type
Description
$this
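A minimal sketch of the two modes; which one to choose depends on whether you supply your own ground truth in BigQuery:

use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;

$job = new EvaluationJob();

// true: Data Labeling Service assigns human labelers to provide ground truth.
$job->setLabelMissingGroundTruth(true);

// false: you provide ground truth labels yourself in the job's BigQuery table.
$job->setLabelMissingGroundTruth(false);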
getAttempts
Output only. Every time the evaluation job runs and an error occurs, the
failed attempt is appended to this array.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Google Cloud Datalabeling V1beta1 Client - Class EvaluationJob (0.6.3)\n\nVersion latestkeyboard_arrow_down\n\n- [0.6.3 (latest)](/php/docs/reference/cloud-datalabeling/latest/V1beta1.EvaluationJob)\n- [0.6.2](/php/docs/reference/cloud-datalabeling/0.6.2/V1beta1.EvaluationJob)\n- [0.5.7](/php/docs/reference/cloud-datalabeling/0.5.7/V1beta1.EvaluationJob)\n- [0.4.2](/php/docs/reference/cloud-datalabeling/0.4.2/V1beta1.EvaluationJob)\n- [0.3.1](/php/docs/reference/cloud-datalabeling/0.3.1/V1beta1.EvaluationJob)\n- [0.2.0](/php/docs/reference/cloud-datalabeling/0.2.0/V1beta1.EvaluationJob)\n- [0.1.14](/php/docs/reference/cloud-datalabeling/0.1.14/V1beta1.EvaluationJob) \n| **Beta**\n|\n|\n| This library is covered by the [Pre-GA Offerings Terms](/terms/service-terms#1)\n| of the Terms of Service. Pre-GA libraries might have limited support,\n| and changes to pre-GA libraries might not be compatible with other pre-GA versions.\n| For more information, see the\n[launch stage descriptions](/products#product-launch-stages). \nReference documentation and code samples for the Google Cloud Datalabeling V1beta1 Client class EvaluationJob.\n\nDefines an evaluation job that runs periodically to generate\n[Evaluations](/php/docs/reference/cloud-datalabeling/latest/V1beta1.Evaluation). [Creating an evaluation\njob](/ml-engine/docs/continuous-evaluation/create-job) is the starting point\nfor using continuous evaluation.\n\nGenerated from protobuf message `google.cloud.datalabeling.v1beta1.EvaluationJob`\n\nNamespace\n---------\n\nGoogle \\\\ Cloud \\\\ DataLabeling \\\\ V1beta1\n\nMethods\n-------\n\n### __construct\n\nConstructor.\n\n### getName\n\nOutput only. After you create a job, Data Labeling Service assigns a name\nto the job with the following format:\n\"projects/\u003cvar translate=\"no\"\u003e{project_id}\u003c/var\u003e/evaluationJobs/\u003cvar translate=\"no\"\u003e{evaluation_job_id}\u003c/var\u003e\"\n\n### setName\n\nOutput only. After you create a job, Data Labeling Service assigns a name\nto the job with the following format:\n\"projects/\u003cvar translate=\"no\"\u003e{project_id}\u003c/var\u003e/evaluationJobs/\u003cvar translate=\"no\"\u003e{evaluation_job_id}\u003c/var\u003e\"\n\n### getDescription\n\nRequired. Description of the job. The description can be up to 25,000\ncharacters long.\n\n### setDescription\n\nRequired. Description of the job. The description can be up to 25,000\ncharacters long.\n\n### getState\n\nOutput only. Describes the current state of the job.\n\n### setState\n\nOutput only. Describes the current state of the job.\n\n### getSchedule\n\nRequired. Describes the interval at which the job runs. This interval must\nbe at least 1 day, and it is rounded to the nearest day. For example, if\nyou specify a 50-hour interval, the job runs every 2 days.\n\nYou can provide the schedule in\n[crontab format](/scheduler/docs/configuring/cron-job-schedules) or in an\n[English-like\nformat](/appengine/docs/standard/python/config/cronref#schedule_format).\nRegardless of what you specify, the job will run at 10:00 AM UTC. 
Only the\ninterval from this schedule is used, not the specific time of day.\n\n### setSchedule\n\nRequired. Describes the interval at which the job runs. This interval must\nbe at least 1 day, and it is rounded to the nearest day. For example, if\nyou specify a 50-hour interval, the job runs every 2 days.\n\nYou can provide the schedule in\n[crontab format](/scheduler/docs/configuring/cron-job-schedules) or in an\n[English-like\nformat](/appengine/docs/standard/python/config/cronref#schedule_format).\nRegardless of what you specify, the job will run at 10:00 AM UTC. Only the\ninterval from this schedule is used, not the specific time of day.\n\n### getModelVersion\n\nRequired. The [AI Platform Prediction model\nversion](/ml-engine/docs/prediction-overview) to be evaluated. Prediction\ninput and output is sampled from this model version. When creating an\nevaluation job, specify the model version in the following format:\n\"projects/\u003cvar translate=\"no\"\u003e{project_id}\u003c/var\u003e/models/\u003cvar translate=\"no\"\u003e{model_name}\u003c/var\u003e/versions/\u003cvar translate=\"no\"\u003e{version_name}\u003c/var\u003e\"\nThere can only be one evaluation job per model version.\n\n### setModelVersion\n\nRequired. The [AI Platform Prediction model\nversion](/ml-engine/docs/prediction-overview) to be evaluated. Prediction\ninput and output is sampled from this model version. When creating an\nevaluation job, specify the model version in the following format:\n\"projects/\u003cvar translate=\"no\"\u003e{project_id}\u003c/var\u003e/models/\u003cvar translate=\"no\"\u003e{model_name}\u003c/var\u003e/versions/\u003cvar translate=\"no\"\u003e{version_name}\u003c/var\u003e\"\nThere can only be one evaluation job per model version.\n\n### getEvaluationJobConfig\n\nRequired. Configuration details for the evaluation job.\n\n### hasEvaluationJobConfig\n\n### clearEvaluationJobConfig\n\n### setEvaluationJobConfig\n\nRequired. Configuration details for the evaluation job.\n\n### getAnnotationSpecSet\n\nRequired. Name of the [AnnotationSpecSet](/php/docs/reference/cloud-datalabeling/latest/V1beta1.AnnotationSpecSet) describing all the\nlabels that your machine learning model outputs. You must create this\nresource before you create an evaluation job and provide its name in the\nfollowing format:\n\"projects/\u003cvar translate=\"no\"\u003e{project_id}\u003c/var\u003e/annotationSpecSets/\u003cvar translate=\"no\"\u003e{annotation_spec_set_id}\u003c/var\u003e\"\n\n### setAnnotationSpecSet\n\nRequired. Name of the [AnnotationSpecSet](/php/docs/reference/cloud-datalabeling/latest/V1beta1.AnnotationSpecSet) describing all the\nlabels that your machine learning model outputs. You must create this\nresource before you create an evaluation job and provide its name in the\nfollowing format:\n\"projects/\u003cvar translate=\"no\"\u003e{project_id}\u003c/var\u003e/annotationSpecSets/\u003cvar translate=\"no\"\u003e{annotation_spec_set_id}\u003c/var\u003e\"\n\n### getLabelMissingGroundTruth\n\nRequired. Whether you want Data Labeling Service to provide ground truth\nlabels for prediction input. If you want the service to assign human\nlabelers to annotate your data, set this to `true`. If you want to provide\nyour own ground truth labels in the evaluation job's BigQuery table, set\nthis to `false`.\n\n### setLabelMissingGroundTruth\n\nRequired. Whether you want Data Labeling Service to provide ground truth\nlabels for prediction input. 
If you want the service to assign human\nlabelers to annotate your data, set this to `true`. If you want to provide\nyour own ground truth labels in the evaluation job's BigQuery table, set\nthis to `false`.\n\n### getAttempts\n\nOutput only. Every time the evaluation job runs and an error occurs, the\nfailed attempt is appended to this array.\n\n### setAttempts\n\nOutput only. Every time the evaluation job runs and an error occurs, the\nfailed attempt is appended to this array.\n\n### getCreateTime\n\nOutput only. Timestamp of when this evaluation job was created.\n\n### hasCreateTime\n\n### clearCreateTime\n\n### setCreateTime\n\nOutput only. Timestamp of when this evaluation job was created."]]
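Both attempts and create_time are output only; once a job has been retrieved from the service they can be inspected as in the sketch below, where the $job variable is assumed to hold a fetched EvaluationJob.

// $job is assumed to be an EvaluationJob fetched from the service.
if ($job->hasCreateTime()) {
    // Google\Protobuf\Timestamp converts to a native DateTime.
    echo 'Created: ' . $job->getCreateTime()->toDateTime()->format(DATE_ATOM) . PHP_EOL;
}

// Each failed run appends an Attempt to this list.
printf('Failed attempts so far: %d' . PHP_EOL, count($job->getAttempts()));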