Class ExplanationConfig (1.15.1)

ExplanationConfig(mapping=None, *, ignore_unknown_fields=False, **kwargs)

The config for integrating with Vertex Explainable AI. Only applicable if the Model has explanation_spec populated.

Attributes

Name Description
enable_feature_attributes bool
Whether to analyze the Vertex Explainable AI feature attribution scores. If set to true, Vertex AI logs the feature attributions from the explain response and runs skew/drift detection on them.
explanation_baseline google.cloud.aiplatform_v1.types.ModelMonitoringObjectiveConfig.ExplanationConfig.ExplanationBaseline
Predictions generated by the BatchPredictionJob using the baseline dataset.
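
As with any proto-plus message, an instance can be built from keyword arguments or from a dict-like mapping. The following is a minimal sketch using only the field documented above; it assumes the google-cloud-aiplatform package is installed and that ExplanationConfig is accessed through its parent ModelMonitoringObjectiveConfig type::

    from google.cloud.aiplatform_v1 import types

    # Keyword-argument construction using the documented field.
    explanation_config = types.ModelMonitoringObjectiveConfig.ExplanationConfig(
        enable_feature_attributes=True,
    )

    # Equivalent construction from a dict-like mapping.
    explanation_config_from_mapping = types.ModelMonitoringObjectiveConfig.ExplanationConfig(
        mapping={"enable_feature_attributes": True},
    )

    print(explanation_config.enable_feature_attributes)  # True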

Inheritance

builtins.object > proto.message.Message > ExplanationConfig

Classes

ExplanationBaseline

ExplanationBaseline(mapping=None, *, ignore_unknown_fields=False, **kwargs)

Output from a BatchPredictionJob run over the Model Monitoring baseline dataset, which can be used to generate baseline attribution scores.

This message has oneof_ fields (mutually exclusive fields). For each oneof, at most one member field can be set at the same time. Setting any member of the oneof automatically clears all other members.

.. _oneof: https://proto-plus-python.readthedocs.io/en/stable/fields.html#oneofs-mutually-exclusive-fields
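
The oneof behavior can be observed directly on ExplanationBaseline. The sketch below assumes the gcs and bigquery destination fields that the v1 proto defines for this message; verify the field names against the installed library version::

    from google.cloud.aiplatform_v1 import types

    ExplanationBaseline = (
        types.ModelMonitoringObjectiveConfig.ExplanationConfig.ExplanationBaseline
    )

    # Assumed oneof members: `gcs` and `bigquery` are the destination fields.
    baseline = ExplanationBaseline(
        gcs=types.GcsDestination(output_uri_prefix="gs://my-bucket/baseline-output"),
    )

    # Assigning another member of the same oneof clears `gcs`.
    baseline.bigquery = types.BigQueryDestination(
        output_uri="bq://my-project.my_dataset.baseline_output"
    )

    print("gcs" in baseline)       # False: cleared when `bigquery` was set
    print("bigquery" in baseline)  # True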