ModelExplanation

Aggregated explanation metrics for a Model over a set of instances.

Fields

meanAttributions[] object (Attribution)
Output only. Aggregated attributions explaining the Model's prediction outputs over the set of instances. The attributions are grouped by outputs.
For Models that predict only one output, such as regression Models that predict only one score, there is only one attribution that explains the predicted output. For Models that predict multiple outputs, such as multiclass Models that predict multiple classes, each element explains one specific output. Attribution.output_index can be used to identify which output this attribution is explaining.
The baselineOutputValue, instanceOutputValue, and featureAttributions fields are averaged over the test data.
NOTE: Currently AutoML tabular classification Models produce only one attribution, which averages attributions over all the classes it predicts. Attribution.approximation_error is not populated.
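As a hedged illustration of how attributions are grouped by outputs (field names from this page; class names and values are invented), a multiclass Model's meanAttributions might contain one element per predicted class, distinguishable by outputIndex:

```python
# Illustrative only: meanAttributions for a hypothetical 3-class model,
# one aggregated Attribution per predicted class (values are made up).
mean_attributions = [
    {"outputIndex": [0], "outputDisplayName": "cat", "instanceOutputValue": 0.61},
    {"outputIndex": [1], "outputDisplayName": "dog", "instanceOutputValue": 0.27},
    {"outputIndex": [2], "outputDisplayName": "bird", "instanceOutputValue": 0.12},
]

# outputIndex identifies which output each attribution explains:
by_class = {a["outputIndex"][0]: a["outputDisplayName"] for a in mean_attributions}
print(by_class[1])  # dog
```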
Attribution

Attribution that explains a particular prediction output.
Fields
baselineOutputValue number
Output only. Model predicted output if the input instance is constructed from the baselines of all the features defined in ExplanationMetadata.inputs. The field name of the output is determined by the key in ExplanationMetadata.outputs.
If the Model's predicted output has multiple dimensions (rank > 1), this is the value in the output located by outputIndex.
If there are multiple baselines, their output values are averaged.
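The averaging over multiple baselines can be sketched in a few lines (the baseline outputs below are invented):

```python
# Hypothetical: with multiple baselines, baselineOutputValue is the mean
# of the model's predicted output on each baseline (values invented).
baseline_outputs = [0.10, 0.14, 0.12]
baseline_output_value = sum(baseline_outputs) / len(baseline_outputs)
print(round(baseline_output_value, 6))  # 0.12
```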
instanceOutputValue number
Output only. Model predicted output on the corresponding explanation instance (ExplainRequest.instances). The field name of the output is determined by the key in ExplanationMetadata.outputs.
If the Model's predicted output has multiple dimensions, this is the value in the output located by outputIndex.
featureAttributions value (Value format)
Output only. Attributions of each explained feature. Features are extracted from the prediction instances according to explanation metadata for inputs.
The value is a struct, whose keys are the names of the features. The values are how much the feature in the instance contributed to the predicted result.
The format of the value is determined by the feature's input format:
If the feature is a scalar value, the attribution value is a floating number.
If the feature is an array of scalar values, the attribution value is an array.
If the feature is a struct, the attribution value is a struct. The keys in the attribution value struct are the same as the keys in the feature struct. The formats of the values in the attribution struct are determined by the formats of the values in the feature struct.
The ExplanationMetadata.feature_attributions_schema_uri field, pointed to by the ExplanationSpec field of the Endpoint.deployed_models object, points to the schema file that describes the features and their attribution values (if it is populated).
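The three attribution formats above can be sketched with plain Python values (the feature names and numbers are hypothetical):

```python
# Hypothetical feature attributions illustrating the three formats.
# A scalar feature gets a floating number:
scalar_attr = 0.42

# An array feature gets an array with one attribution per element:
array_attr = [0.10, -0.03, 0.25]

# A struct feature gets a struct whose keys mirror the feature's keys
# and whose value formats mirror the feature's value formats:
feature = {"age": 37, "pixels": [0.1, 0.9]}
struct_attr = {"age": 0.12, "pixels": [0.01, 0.33]}

# The attribution struct has the same keys as the feature struct:
print(struct_attr.keys() == feature.keys())  # True
```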
outputIndex[] integer
Output only. The index that locates the explained prediction output.
If the prediction output is a scalar value, outputIndex is not populated. If the prediction output has multiple dimensions, the length of the outputIndex list is the same as the number of dimensions of the output. The i-th element in outputIndex is the element index of the i-th dimension of the output vector. Indices start from 0.
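The indexing rule above can be sketched for a rank-2 output (the shape and values are hypothetical):

```python
# Hypothetical rank-2 model output (e.g. 2 tasks x 3 classes each).
output = [
    [0.1, 0.7, 0.2],
    [0.6, 0.3, 0.1],
]

# outputIndex has one entry per output dimension; [1, 0] locates the
# explained value at row 1, column 0 (indices start from 0).
output_index = [1, 0]

value = output
for i in output_index:
    value = value[i]

print(value)  # 0.6
```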
outputDisplayName string
Output only. The display name of the output identified by outputIndex. For example, the class name predicted by a multiclass classification Model.
This field is only populated if the Model predicts display names as a separate field along with the explained output. The predicted display names must have the same shape as the explained output, and can be located using outputIndex.
approximationError number
Output only. Error of featureAttributions caused by the approximation used in the explanation method. A lower value means more precise attributions.
For Sampled Shapley attribution, increasing pathCount might reduce the error.
For Integrated Gradients attribution, increasing stepCount might reduce the error.
For XRAI attribution, increasing stepCount might reduce the error.
See the Explainable AI overview for more information.
outputName string
Output only. Name of the explained output. Specified as the key in ExplanationMetadata.outputs.
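For additive attribution methods such as Sampled Shapley and Integrated Gradients, the feature attributions roughly sum to the difference between the instance output and the baseline output; approximationError reflects how far the method strayed from that. A minimal sketch of this intuition, with all field values invented:

```python
# Illustrative only: a hand-built dict mirroring the Attribution fields
# described on this page, for a hypothetical multiclass model.
attribution = {
    "baselineOutputValue": 0.12,   # model output on the feature baselines
    "instanceOutputValue": 0.87,   # model output on the explained instance
    "featureAttributions": {       # keyed by feature name (made up)
        "age": 0.31,
        "income": 0.44,
    },
    "outputIndex": [2],            # locates the explained class score
    "approximationError": 0.015,
}

# Sanity check: attributions should roughly account for the gap between
# the instance output and the baseline output.
attribution_sum = sum(attribution["featureAttributions"].values())
delta = attribution["instanceOutputValue"] - attribution["baselineOutputValue"]
print(abs(attribution_sum - delta) <= attribution["approximationError"])  # True
```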