Cloud AutoML V1 Client - Class ModelEvaluation (2.0.4)

Reference documentation and code samples for the Cloud AutoML V1 Client class ModelEvaluation.

Evaluation results of a model.

Generated from protobuf message google.cloud.automl.v1.ModelEvaluation

Namespace

Google \ Cloud \ AutoMl \ V1

Methods

__construct

Constructor.

Parameters
Name
Description
data
array

Optional. Data for populating the Message object.

↳ classification_evaluation_metrics
ClassificationEvaluationMetrics

Model evaluation metrics for image, text, video and tables classification. Tables problem is considered a classification when the target column is CATEGORY DataType.

↳ translation_evaluation_metrics
TranslationEvaluationMetrics

Model evaluation metrics for translation.

↳ image_object_detection_evaluation_metrics
ImageObjectDetectionEvaluationMetrics

Model evaluation metrics for image object detection.

↳ text_sentiment_evaluation_metrics
TextSentimentEvaluationMetrics

Evaluation metrics for text sentiment models.

↳ text_extraction_evaluation_metrics
TextExtractionEvaluationMetrics

Evaluation metrics for text extraction models.

↳ name
string

Output only. Resource name of the model evaluation. Format: projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}

↳ annotation_spec_id
string

Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation. For Tables, annotation specs do not exist in the dataset and this ID is never set; for CLASSIFICATION prediction_type-s the display_name field is used instead.

↳ display_name
string

Output only. The value of display_name at the moment when the model was trained. Because this field returns a value at model training time, the values may differ for different models trained from the same dataset, since display names could have been changed between the two models' trainings. For Tables CLASSIFICATION prediction_type-s, the distinct values of the target column at the moment of model evaluation are populated here. The display_name is empty for the overall model evaluation.

↳ create_time
Google\Protobuf\Timestamp

Output only. Timestamp when this model evaluation was created.

↳ evaluated_example_count
int

Output only. The number of examples used for model evaluation, i.e. for which ground truth from the time of model creation is compared against the predicted annotations created by the model. For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the given annotation_spec_id.
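
A minimal construction sketch, assuming the google/cloud-automl composer package is installed and autoloaded. The resource name, annotation spec ID, and counts below are hypothetical placeholders; field keys follow the constructor table above, and getters mirror them in camelCase.

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AutoMl\V1\ModelEvaluation;

// Populate the message via the optional data array (all fields are
// output-only on the server side; setting them locally is useful for tests).
$evaluation = new ModelEvaluation([
    'name' => 'projects/my-project/locations/us-central1'
        . '/models/TCN123/modelEvaluations/456',
    'annotation_spec_id' => '789',
    'display_name' => 'positive',
    'evaluated_example_count' => 1200,
]);

echo $evaluation->getDisplayName(), "\n";           // positive
echo $evaluation->getEvaluatedExampleCount(), "\n"; // 1200
```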

getClassificationEvaluationMetrics

Model evaluation metrics for image, text, video and tables classification.

Tables problem is considered a classification when the target column is CATEGORY DataType.

Returns
Type
Description
ClassificationEvaluationMetrics|null

hasClassificationEvaluationMetrics

setClassificationEvaluationMetrics

Model evaluation metrics for image, text, video and tables classification.

Tables problem is considered a classification when the target column is CATEGORY DataType.

Parameter
Name
Description
var
ClassificationEvaluationMetrics
Returns
Type
Description
$this

getTranslationEvaluationMetrics

Model evaluation metrics for translation.

Returns
Type
Description
TranslationEvaluationMetrics|null

hasTranslationEvaluationMetrics

setTranslationEvaluationMetrics

Model evaluation metrics for translation.

Parameter
Name
Description
var
TranslationEvaluationMetrics
Returns
Type
Description
$this

getImageObjectDetectionEvaluationMetrics

Model evaluation metrics for image object detection.

Returns
Type
Description
ImageObjectDetectionEvaluationMetrics|null

hasImageObjectDetectionEvaluationMetrics

setImageObjectDetectionEvaluationMetrics

Model evaluation metrics for image object detection.

Parameter
Name
Description
var
ImageObjectDetectionEvaluationMetrics
Returns
Type
Description
$this

getTextSentimentEvaluationMetrics

Evaluation metrics for text sentiment models.

Returns
Type
Description
TextSentimentEvaluationMetrics|null

hasTextSentimentEvaluationMetrics

setTextSentimentEvaluationMetrics

Evaluation metrics for text sentiment models.

Parameter
Name
Description
var
TextSentimentEvaluationMetrics
Returns
Type
Description
$this

getTextExtractionEvaluationMetrics

Evaluation metrics for text extraction models.

Returns
Type
Description
TextExtractionEvaluationMetrics|null

hasTextExtractionEvaluationMetrics

setTextExtractionEvaluationMetrics

Evaluation metrics for text extraction models.

Parameter
Name
Description
var
TextExtractionEvaluationMetrics
Returns
Type
Description
$this

getName

Output only. Resource name of the model evaluation.

Format: projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}

Returns
Type
Description
string

setName

Output only. Resource name of the model evaluation.

Format: projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}

Parameter
Name
Description
var
string
Returns
Type
Description
$this
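
Since the resource name follows the documented format, its components can be extracted with plain string handling. The helper below is hypothetical (not part of the client library); it is a sketch using only the PHP standard library.

```php
<?php
// Split a ModelEvaluation resource name of the documented form
// projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
// into its components, or return null if the name does not match.
function parseModelEvaluationName(string $name): ?array
{
    $pattern = '#^projects/([^/]+)/locations/([^/]+)'
             . '/models/([^/]+)/modelEvaluations/([^/]+)$#';
    if (!preg_match($pattern, $name, $m)) {
        return null;
    }
    return [
        'project_id'          => $m[1],
        'location_id'         => $m[2],
        'model_id'            => $m[3],
        'model_evaluation_id' => $m[4],
    ];
}
```

For example, `parseModelEvaluationName('projects/p/locations/us-central1/models/m/modelEvaluations/42')` yields `['project_id' => 'p', 'location_id' => 'us-central1', 'model_id' => 'm', 'model_evaluation_id' => '42']`.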

getAnnotationSpecId

Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.

For Tables, annotation specs do not exist in the dataset and this ID is never set; for CLASSIFICATION prediction_type-s the display_name field is used instead.

Returns
Type
Description
string

setAnnotationSpecId

Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.

For Tables, annotation specs do not exist in the dataset and this ID is never set; for CLASSIFICATION prediction_type-s the display_name field is used instead.

Parameter
Name
Description
var
string
Returns
Type
Description
$this

getDisplayName

Output only. The value of display_name at the moment when the model was trained. Because this field returns a value at model training time, the values may differ for different models trained from the same dataset, since display names could have been changed between the two models' trainings. For Tables CLASSIFICATION prediction_type-s, the distinct values of the target column at the moment of model evaluation are populated here.

The display_name is empty for the overall model evaluation.

Returns
Type
Description
string

setDisplayName

Output only. The value of display_name at the moment when the model was trained. Because this field returns a value at model training time, the values may differ for different models trained from the same dataset, since display names could have been changed between the two models' trainings. For Tables CLASSIFICATION prediction_type-s, the distinct values of the target column at the moment of model evaluation are populated here.

The display_name is empty for the overall model evaluation.

Parameter
Name
Description
var
string
Returns
Type
Description
$this

getCreateTime

Output only. Timestamp when this model evaluation was created.

Returns
Type
Description
Google\Protobuf\Timestamp|null

hasCreateTime

clearCreateTime

setCreateTime

Output only. Timestamp when this model evaluation was created.

Parameter
Name
Description
var
Google\Protobuf\Timestamp
Returns
Type
Description
$this

getEvaluatedExampleCount

Output only. The number of examples used for model evaluation, i.e. for which ground truth from the time of model creation is compared against the predicted annotations created by the model.

For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the given annotation_spec_id.

Returns
Type
Description
int

setEvaluatedExampleCount

Output only. The number of examples used for model evaluation, i.e. for which ground truth from the time of model creation is compared against the predicted annotations created by the model.

For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the given annotation_spec_id.

Parameter
Name
Description
var
int
Returns
Type
Description
$this

getMetrics

Returns
Type
Description
string
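
In the PHP protobuf convention, a oneof accessor like getMetrics returns the snake_case name of whichever field in the metrics oneof is set, or an empty string if none is set. A sketch of dispatching on it, assuming the google/cloud-automl package; the helper function name is hypothetical:

```php
<?php
use Google\Cloud\AutoMl\V1\ModelEvaluation;

// Map the set metrics oneof field to a short human-readable label.
function describeMetrics(ModelEvaluation $evaluation): string
{
    switch ($evaluation->getMetrics()) {
        case 'classification_evaluation_metrics':
            return 'classification';
        case 'translation_evaluation_metrics':
            return 'translation';
        case 'image_object_detection_evaluation_metrics':
            return 'image object detection';
        case 'text_sentiment_evaluation_metrics':
            return 'text sentiment';
        case 'text_extraction_evaluation_metrics':
            return 'text extraction';
        default:
            return 'no metrics set';
    }
}
```

The per-field has* methods above answer the same question for a single field, e.g. `hasTranslationEvaluationMetrics()` is true exactly when `getMetrics()` returns `'translation_evaluation_metrics'`.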