Reference documentation and code samples for the Cloud AutoML V1beta1 Client class ModelEvaluation.
Evaluation results of a model.
Generated from protobuf message google.cloud.automl.v1beta1.ModelEvaluation
Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ classification_evaluation_metrics
Google\Cloud\AutoMl\V1beta1\ClassificationEvaluationMetrics
Model evaluation metrics for image, text, video, and Tables classification. A Tables problem is considered classification when the target column has CATEGORY DataType.
↳ regression_evaluation_metrics
Google\Cloud\AutoMl\V1beta1\RegressionEvaluationMetrics
Model evaluation metrics for Tables regression. A Tables problem is considered regression when the target column has FLOAT64 DataType.
↳ translation_evaluation_metrics
Google\Cloud\AutoMl\V1beta1\TranslationEvaluationMetrics
Model evaluation metrics for translation.
↳ image_object_detection_evaluation_metrics
Google\Cloud\AutoMl\V1beta1\ImageObjectDetectionEvaluationMetrics
Model evaluation metrics for image object detection.
↳ video_object_tracking_evaluation_metrics
Google\Cloud\AutoMl\V1beta1\VideoObjectTrackingEvaluationMetrics
Model evaluation metrics for video object tracking.
↳ text_sentiment_evaluation_metrics
Google\Cloud\AutoMl\V1beta1\TextSentimentEvaluationMetrics
Evaluation metrics for text sentiment models.
↳ text_extraction_evaluation_metrics
Google\Cloud\AutoMl\V1beta1\TextExtractionEvaluationMetrics
Evaluation metrics for text extraction models.
↳ name
string
Output only. Resource name of the model evaluation. Format: projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
↳ annotation_spec_id
string
Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation. For Tables, annotation specs do not exist in the dataset and this ID is never set; for CLASSIFICATION prediction_type-s the display_name field is used instead.
↳ display_name
string
Output only. The value of display_name at the moment when the model was trained. Because this field returns a value at model training time, the values may differ for different models trained from the same dataset, since display names could have been changed between the two models' trainings. For Tables CLASSIFICATION prediction_type-s, distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.
↳ create_time
Google\Protobuf\Timestamp
Output only. Timestamp when this model evaluation was created.
↳ evaluated_example_count
int
Output only. The number of examples used for model evaluation, i.e. for which ground truth from the time of model creation is compared against the predicted annotations created by the model. For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the annotation_spec_id.
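The constructor options above can be exercised as in the following sketch. It assumes the google/cloud-automl Composer package is installed and autoloaded; the resource name and metric values are illustrative only.

```php
use Google\Cloud\AutoMl\V1beta1\ModelEvaluation;
use Google\Cloud\AutoMl\V1beta1\RegressionEvaluationMetrics;

// Populate the message via the optional $data array.
$evaluation = new ModelEvaluation([
    'name' => 'projects/my-project/locations/us-central1/models/my-model/modelEvaluations/123',
    'display_name' => 'price',
    'evaluated_example_count' => 150,
    'regression_evaluation_metrics' => new RegressionEvaluationMetrics([
        'root_mean_squared_error' => 4.2,
    ]),
]);

echo $evaluation->getDisplayName();
```

The same fields can also be set after construction with the individual setters listed below.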
getClassificationEvaluationMetrics
Model evaluation metrics for image, text, video, and Tables classification.
A Tables problem is considered classification when the target column has CATEGORY DataType.
hasClassificationEvaluationMetrics
setClassificationEvaluationMetrics
Model evaluation metrics for image, text, video, and Tables classification.
A Tables problem is considered classification when the target column has CATEGORY DataType.
$this
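The per-type metrics fields belong to a single protobuf oneof, so at most one of them is populated on any given evaluation; the has* accessors let callers check before reading. A minimal sketch, assuming the google/cloud-automl package and an illustrative AuPRC value:

```php
use Google\Cloud\AutoMl\V1beta1\ModelEvaluation;
use Google\Cloud\AutoMl\V1beta1\ClassificationEvaluationMetrics;

$evaluation = new ModelEvaluation([
    'classification_evaluation_metrics' => new ClassificationEvaluationMetrics([
        'au_prc' => 0.93,
    ]),
]);

// Guard the read: only one metrics field of the oneof is ever set.
if ($evaluation->hasClassificationEvaluationMetrics()) {
    $metrics = $evaluation->getClassificationEvaluationMetrics();
    printf("AuPRC: %.2f\n", $metrics->getAuPrc());
}
```

The same has/get/set pattern applies to each of the other metrics accessors below.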
getRegressionEvaluationMetrics
Model evaluation metrics for Tables regression.
A Tables problem is considered regression when the target column has FLOAT64 DataType.
hasRegressionEvaluationMetrics
setRegressionEvaluationMetrics
Model evaluation metrics for Tables regression.
A Tables problem is considered regression when the target column has FLOAT64 DataType.
$this
getTranslationEvaluationMetrics
Model evaluation metrics for translation.
hasTranslationEvaluationMetrics
setTranslationEvaluationMetrics
Model evaluation metrics for translation.
$this
getImageObjectDetectionEvaluationMetrics
Model evaluation metrics for image object detection.
hasImageObjectDetectionEvaluationMetrics
setImageObjectDetectionEvaluationMetrics
Model evaluation metrics for image object detection.
$this
getVideoObjectTrackingEvaluationMetrics
Model evaluation metrics for video object tracking.
hasVideoObjectTrackingEvaluationMetrics
setVideoObjectTrackingEvaluationMetrics
Model evaluation metrics for video object tracking.
$this
getTextSentimentEvaluationMetrics
Evaluation metrics for text sentiment models.
hasTextSentimentEvaluationMetrics
setTextSentimentEvaluationMetrics
Evaluation metrics for text sentiment models.
$this
getTextExtractionEvaluationMetrics
Evaluation metrics for text extraction models.
hasTextExtractionEvaluationMetrics
setTextExtractionEvaluationMetrics
Evaluation metrics for text extraction models.
$this
getName
Output only. Resource name of the model evaluation.
Format: projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
string
setName
Output only. Resource name of the model evaluation.
Format: projects/{project_id}/locations/{location_id}/models/{model_id}/modelEvaluations/{model_evaluation_id}
var
string
$this
getAnnotationSpecId
Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.
For Tables, annotation specs do not exist in the dataset and this ID is never set; for CLASSIFICATION prediction_type-s the display_name field is used instead.
string
setAnnotationSpecId
Output only. The ID of the annotation spec that the model evaluation applies to. The ID is empty for the overall model evaluation.
For Tables, annotation specs do not exist in the dataset and this ID is never set; for CLASSIFICATION prediction_type-s the display_name field is used instead.
var
string
$this
getDisplayName
Output only. The value of display_name at the moment when the model was trained. Because this field returns a value at model training time, the values may differ for different models trained from the same dataset, since display names could have been changed between the two models' trainings.
For Tables CLASSIFICATION prediction_type-s, distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.
string
setDisplayName
Output only. The value of display_name at the moment when the model was trained. Because this field returns a value at model training time, the values may differ for different models trained from the same dataset, since display names could have been changed between the two models' trainings.
For Tables CLASSIFICATION prediction_type-s, distinct values of the target column at the moment of the model evaluation are populated here. The display_name is empty for the overall model evaluation.
var
string
$this
getCreateTime
Output only. Timestamp when this model evaluation was created.
hasCreateTime
clearCreateTime
setCreateTime
Output only. Timestamp when this model evaluation was created.
$this
getEvaluatedExampleCount
Output only. The number of examples used for model evaluation, i.e. for which ground truth from the time of model creation is compared against the predicted annotations created by the model.
For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the annotation_spec_id.
int
setEvaluatedExampleCount
Output only. The number of examples used for model evaluation, i.e. for which ground truth from the time of model creation is compared against the predicted annotations created by the model.
For the overall ModelEvaluation (i.e. with annotation_spec_id not set) this is the total number of all examples used for evaluation. Otherwise, this is the count of examples that, according to the ground truth, were annotated by the annotation_spec_id.
var
int
$this
getMetrics
Returns the name of the metrics oneof field that is currently set, or an empty string if no metrics field is populated.
string
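Because the returned field name identifies which metrics variant is populated, it is convenient for dispatching. A sketch, assuming the google/cloud-automl package and an illustrative BLEU score:

```php
use Google\Cloud\AutoMl\V1beta1\ModelEvaluation;
use Google\Cloud\AutoMl\V1beta1\TranslationEvaluationMetrics;

$evaluation = new ModelEvaluation([
    'translation_evaluation_metrics' => new TranslationEvaluationMetrics([
        'bleu_score' => 42.0,
    ]),
]);

// getMetrics() reports which oneof field is populated.
switch ($evaluation->getMetrics()) {
    case 'translation_evaluation_metrics':
        echo $evaluation->getTranslationEvaluationMetrics()->getBleuScore();
        break;
    case '':
        echo 'No metrics populated on this evaluation.';
        break;
    default:
        // Handle the other metric types as needed.
        break;
}
```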