Reference documentation and code samples for the Google Cloud Datalabeling V1beta1 Client class EvaluationJob.
Defines an evaluation job that runs periodically to generate Evaluations. Creating an evaluation job is the starting point for using continuous evaluation.
Generated from protobuf message google.cloud.datalabeling.v1beta1.EvaluationJob
Namespace
Google \ Cloud \ DataLabeling \ V1beta1

Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ name
string
Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
↳ description
string
Required. Description of the job. The description can be up to 25,000 characters long.
↳ state
int
Output only. Describes the current state of the job.
↳ schedule
string
Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days. You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
↳ model_version
string
Required. The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
↳ evaluation_job_config
Google\Cloud\DataLabeling\V1beta1\EvaluationJobConfig
Required. Configuration details for the evaluation job.
↳ annotation_spec_set
string
Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
↳ label_missing_ground_truth
bool
Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
↳ attempts
array<Google\Cloud\DataLabeling\V1beta1\Attempt>
Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
↳ create_time
Google\Protobuf\Timestamp
Output only. Timestamp of when this evaluation job was created.
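As an illustrative sketch (not part of the generated reference), the constructor's `data` array uses the field names listed above as keys, following the usual convention for generated protobuf message classes. The project, model, and annotation spec set names below are placeholders:

```php
<?php
use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;
use Google\Cloud\DataLabeling\V1beta1\EvaluationJobConfig;

// Hypothetical example: populate the writable fields at construction time.
// Output-only fields (name, state, attempts, create_time) are set by the
// service and omitted here.
$job = new EvaluationJob([
    'description' => 'Daily evaluation of the sentiment model',
    'schedule' => '0 10 * * *', // rounded to whole-day intervals by the service
    'model_version' => 'projects/my-project/models/my_model/versions/v1',
    'evaluation_job_config' => new EvaluationJobConfig(),
    'annotation_spec_set' => 'projects/my-project/annotationSpecSets/my_spec_set',
    'label_missing_ground_truth' => false,
]);
```

The same fields can instead be set after construction through the setters documented below.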
getName
Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
string
setName
Output only. After you create a job, Data Labeling Service assigns a name to the job with the following format: "projects/{project_id}/evaluationJobs/{evaluation_job_id}"
var
string
$this
getDescription
Required. Description of the job. The description can be up to 25,000 characters long.
string
setDescription
Required. Description of the job. The description can be up to 25,000 characters long.
var
string
$this
getState
Output only. Describes the current state of the job.
int
setState
Output only. Describes the current state of the job.
var
int
$this
getSchedule
Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days.
You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
string
setSchedule
Required. Describes the interval at which the job runs. This interval must be at least 1 day, and it is rounded to the nearest day. For example, if you specify a 50-hour interval, the job runs every 2 days.
You can provide the schedule in crontab format or in an English-like format. Regardless of what you specify, the job will run at 10:00 AM UTC. Only the interval from this schedule is used, not the specific time of day.
var
string
$this
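A hypothetical usage sketch for the schedule accessors; the crontab string below is a placeholder. Note that only the interval portion of the schedule matters, since the service fixes the run time at 10:00 AM UTC:

```php
<?php
use Google\Cloud\DataLabeling\V1beta1\EvaluationJob;

$job = new EvaluationJob();

// Intended as an every-2-days interval; the day-of-month step ("*/2") is
// what the service reads, not the "0 0" time-of-day prefix.
$job->setSchedule('0 0 */2 * *');

echo $job->getSchedule(); // prints the stored crontab string
```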
getModelVersion
Required. The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
string
setModelVersion
Required. The AI Platform Prediction model version to be evaluated. Prediction input and output is sampled from this model version. When creating an evaluation job, specify the model version in the following format: "projects/{project_id}/models/{model_name}/versions/{version_name}" There can only be one evaluation job per model version.
var
string
$this
getEvaluationJobConfig
Required. Configuration details for the evaluation job.
hasEvaluationJobConfig
clearEvaluationJobConfig
setEvaluationJobConfig
Required. Configuration details for the evaluation job.
$this
getAnnotationSpecSet
Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
string
setAnnotationSpecSet
Required. Name of the AnnotationSpecSet describing all the labels that your machine learning model outputs. You must create this resource before you create an evaluation job and provide its name in the following format: "projects/{project_id}/annotationSpecSets/{annotation_spec_set_id}"
var
string
$this
getLabelMissingGroundTruth
Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
bool
setLabelMissingGroundTruth
Required. Whether you want Data Labeling Service to provide ground truth labels for prediction input. If you want the service to assign human labelers to annotate your data, set this to true. If you want to provide your own ground truth labels in the evaluation job's BigQuery table, set this to false.
var
bool
$this
getAttempts
Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
setAttempts
Output only. Every time the evaluation job runs and an error occurs, the failed attempt is appended to this array.
$this
getCreateTime
Output only. Timestamp of when this evaluation job was created.
hasCreateTime
clearCreateTime
setCreateTime
Output only. Timestamp of when this evaluation job was created.
$this
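Because create_time is a message-typed field, presence should be checked with hasCreateTime before reading it. A hypothetical sketch, assuming `$job` is an EvaluationJob returned by the service:

```php
<?php
// create_time is output only and populated by the service; it is absent
// on a locally constructed, not-yet-created job.
if ($job->hasCreateTime()) {
    // The value is a Google\Protobuf\Timestamp; render it as RFC 3339.
    echo $job->getCreateTime()->toDateTime()->format(DATE_ATOM);
}
```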