REST Resource: projects.locations.models.evaluations

Resource: ModelEvaluation
-------------------------
A collection of metrics calculated by comparing the Model's predictions on all of the test data against annotations from the test data.
Fields

`name` `string`
Output only. The resource name of the ModelEvaluation.

`displayName` `string`
The display name of the ModelEvaluation.

`metricsSchemaUri` `string`
Points to a YAML file stored on Google Cloud Storage describing the metrics of this ModelEvaluation. The schema is defined as an OpenAPI 3.0.2 Schema Object.
`metrics` `value (Value format)`
Evaluation metrics of the Model. The schema of the metrics is stored in metricsSchemaUri.

`createTime` `string (Timestamp format)`
Output only. Timestamp when this ModelEvaluation was created.

Uses RFC 3339, where generated output will always be Z-normalized and uses 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: `"2014-10-02T15:01:23Z"`, `"2014-10-02T15:01:23.045123456Z"` or `"2014-10-02T15:01:23+05:30"`.
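As an illustrative, non-authoritative sketch, the three example `createTime` formats above can be parsed with only Python's standard library. The helper below is an assumption for illustration (it is not part of any Google client library); it normalizes the trailing "Z" and 9-digit fractions that older `fromisoformat()` versions reject:

```python
import re
from datetime import datetime


def parse_rfc3339(ts: str) -> datetime:
    """Parse the createTime formats shown above using only the stdlib.

    datetime.fromisoformat() rejects a trailing "Z" and fractional
    seconds longer than six digits before Python 3.11, so normalize
    both before delegating to it.
    """
    if ts.endswith("Z"):
        ts = ts[:-1] + "+00:00"
    m = re.match(r"(.*T\d{2}:\d{2}:\d{2})(\.\d+)?(.*)$", ts)
    base, frac, offset = m.group(1), m.group(2) or "", m.group(3)
    # Keep "." plus at most six digits, truncating nanoseconds to microseconds.
    return datetime.fromisoformat(base + frac[:7] + offset)
```

For example, `parse_rfc3339("2014-10-02T15:01:23.045123456Z")` truncates the nanosecond fraction to 45123 microseconds, and the `+05:30` variant keeps its offset.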
`sliceDimensions[]` `string`
All possible dimensions of ModelEvaluationSlices. The dimensions can be used as the filter of the ModelService.ListModelEvaluationSlices request, in the form of `slice.dimension = <dimension>`.

`modelExplanation` `object (ModelExplanation)`
Aggregated explanation metrics for the Model's prediction output over the data this ModelEvaluation uses. This field is populated only if the Model is evaluated with explanations, and only for AutoML tabular Models.

`explanationSpecs[]` `object (ModelEvaluationExplanationSpec)`
Describes the values of ExplanationSpec that are used for explaining the predicted values on the evaluated data.
`metadata` `value (Value format)`
The metadata of the ModelEvaluation. For a ModelEvaluation uploaded from a Managed Pipeline, metadata contains a structured value with the keys "pipelineJobId", "evaluation_dataset_type", "evaluation_dataset_path", and "row_based_metrics_path".
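To make the field layout concrete, here is a minimal sketch of reading the documented metadata keys from a decoded JSON response. The response dict below is entirely hypothetical (the resource name, bucket paths, and dimension value are invented placeholders, not output from a real API call), and the quoting in the filter string is an assumption:

```python
# Hypothetical, hand-written ModelEvaluation response body for illustration;
# every value below is an invented placeholder.
evaluation = {
    "name": "projects/123/locations/us-central1/models/456/evaluations/789",
    "displayName": "my-eval",
    "metadata": {
        "pipelineJobId": "pipeline-job-001",
        "evaluation_dataset_type": "gcs",
        "evaluation_dataset_path": "gs://my-bucket/eval.jsonl",
        "row_based_metrics_path": "gs://my-bucket/row_metrics.jsonl",
    },
    "sliceDimensions": ["annotationSpec"],
}

# The documented metadata keys are only present for evaluations uploaded
# from a Managed Pipeline, so read them defensively.
metadata = evaluation.get("metadata", {})
pipeline_job_id = metadata.get("pipelineJobId")

# sliceDimensions values can become ListModelEvaluationSlices filters of
# the documented form `slice.dimension = <dimension>`; the double quotes
# around the value are an assumption, not taken from this reference.
filters = [f'slice.dimension = "{d}"' for d in evaluation.get("sliceDimensions", [])]
```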
`biasConfigs` `object (BiasConfig)`
Specify the configuration for bias detection.

ModelEvaluationExplanationSpec
------------------------------

Fields

`explanationType` `string`
Explanation type.

For AutoML Image Classification models, possible values are:

- `image-integrated-gradients`
- `image-xrai`

`explanationSpec` `object (ExplanationSpec)`
Explanation spec details.

BiasConfig
----------

Configuration for bias detection.

Fields

`biasSlices` `object (SliceSpec)`
Specification for how the data should be sliced for bias. It contains a list of slices, with a limit of two slices. The first slice in the list is slice_a. The second slice in the list (slice_b) is compared against the first slice. If only a single slice is provided, slice_a is compared against "not slice_a". Below are examples with a feature "education" that takes the values "low", "medium", and "high" in the dataset:

Example 1:

    biasSlices = [{'education': 'low'}]

A single slice is provided. In this case, slice_a is the collection of data where 'education' equals 'low', and slice_b is the collection of data where 'education' equals 'medium' or 'high'.

Example 2:

    biasSlices = [{'education': 'low'},
                  {'education': 'high'}]

Two slices are provided. In this case, slice_a is the collection of data where 'education' equals 'low', and slice_b is the collection of data where 'education' equals 'high'.

`labels[]` `string`
Positive labels selection on the target field.

Last updated 2025-06-27 UTC.
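To make the slice semantics above concrete, here is a small, unofficial sketch of how slice_a and slice_b could be derived from a `biasSlices` list over row dicts. It mirrors the documented rules (at most two slices; with one slice, slice_b is the complement); it is not code from the API:

```python
def split_bias_slices(rows, bias_slices):
    """Partition rows into (slice_a, slice_b) per the documented rules:
    at most two slices; with a single slice, slice_b is "not slice_a"."""
    if not 1 <= len(bias_slices) <= 2:
        raise ValueError("biasSlices is limited to one or two slices")

    def matches(row, spec):
        # A row belongs to a slice when every feature in the spec matches.
        return all(row.get(k) == v for k, v in spec.items())

    slice_a = [r for r in rows if matches(r, bias_slices[0])]
    if len(bias_slices) == 2:
        slice_b = [r for r in rows if matches(r, bias_slices[1])]
    else:
        slice_b = [r for r in rows if not matches(r, bias_slices[0])]
    return slice_a, slice_b


rows = [{"education": "low"}, {"education": "medium"}, {"education": "high"}]

# Example 1: single slice -> slice_b is everything outside slice_a.
a1, b1 = split_bias_slices(rows, [{"education": "low"}])
# Example 2: two slices -> 'medium' rows belong to neither slice.
a2, b2 = split_bias_slices(rows, [{"education": "low"}, {"education": "high"}])
```

Note that with two slices, rows matching neither spec (here, 'education' equals 'medium') are excluded from both slices, which is exactly the difference between the two documented examples.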