BigQuery Explainable AI overview

This document describes how BigQuery ML supports explainable artificial intelligence (AI), sometimes called XAI.

Explainable AI helps you understand the results that your predictive machine learning model generates for classification and regression tasks by defining how each feature in a row of data contributed to the predicted result. This information is often referred to as feature attribution. You can use this information to verify that the model is behaving as expected, to recognize biases in your models, and to inform ways to improve your model and your training data.

BigQuery ML and Vertex AI both provide Explainable AI offerings with feature-based explanations. You can perform explainability in BigQuery ML, or you can register your model in Vertex AI and perform explainability there.

Local versus global explainability

There are two types of explainability: local explainability and global explainability, also known as local feature importance and global feature importance, respectively.

  • Local explainability returns feature attribution values for each explained example. These values describe how much a particular feature affected the prediction relative to the baseline prediction.
  • Global explainability returns the feature's overall influence on the model and is often obtained by aggregating the feature attributions over the entire dataset. A higher absolute value indicates the feature had a greater influence on the model's predictions.
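To make the distinction concrete, here is a minimal Python sketch for a toy linear model (the feature names, weights, and numbers are invented for illustration; this is not BigQuery ML's implementation). Local attributions measure each feature's weighted deviation from the baseline for one row; the global score aggregates local attributions over a dataset as the mean absolute attribution per feature.

```python
# Toy sketch: local vs. global feature importance for a linear model.
# All names and values below are hypothetical examples.

def local_attributions(weights, row, baseline):
    """Per-feature attribution for one row: how much each feature moved
    the prediction away from the baseline prediction (weight * delta)."""
    return {f: weights[f] * (row[f] - baseline[f]) for f in weights}

def global_importance(weights, rows, baseline):
    """Aggregate local attributions over a dataset by taking the
    mean absolute attribution that each feature receives."""
    totals = {f: 0.0 for f in weights}
    for row in rows:
        for f, a in local_attributions(weights, row, baseline).items():
            totals[f] += abs(a)
    return {f: t / len(rows) for f, t in totals.items()}

weights = {"sqft": 0.5, "age": -0.2}
baseline = {"sqft": 1000.0, "age": 20.0}
rows = [{"sqft": 1200.0, "age": 10.0}, {"sqft": 800.0, "age": 30.0}]

print(local_attributions(weights, rows[0], baseline))
# → {'sqft': 100.0, 'age': 2.0}
print(global_importance(weights, rows, baseline))
# → {'sqft': 100.0, 'age': 2.0}
```

Note that a feature with small local attributions on every row (like `age` here) also gets a small global score, even though its weight is nonzero.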

Explainable AI offerings in BigQuery ML

Explainable AI in BigQuery ML supports a variety of machine learning models, including both time series and non-time series models. Each of the models takes advantage of a different explainability method.

| Model category | Model types | Explainability method | Basic explanation of the method | Local explain functions | Global explain functions |
| --- | --- | --- | --- | --- | --- |
| Supervised models | Linear & logistic regression | Shapley values | Shapley values for linear models are equal to model weight * feature value, where feature values are standardized and model weights are trained with the standardized feature values. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Linear & logistic regression | Standard errors and p-values | Standard errors and p-values are used for significance testing against the model weights. | N/A | ML.ADVANCED_WEIGHTS⁴ |
| Supervised models | Boosted trees & random forests | Tree SHAP | Tree SHAP is an algorithm to compute exact SHAP values for decision tree-based models. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Boosted trees & random forests | Approximate feature contribution | Approximates the feature contribution values. It is faster and simpler compared to Tree SHAP. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | Boosted trees & random forests | Gini index-based feature importance | A global feature importance score that indicates how useful or valuable each feature was in the construction of the boosted tree or random forest model during training. | N/A | ML.FEATURE_IMPORTANCE |
| Supervised models | Deep neural networks (DNN) & Wide-and-Deep | Integrated gradients | A gradients-based method that efficiently computes feature attributions with the same axiomatic properties as the Shapley value. It provides a sampling approximation of exact feature attributions. Its accuracy is controlled by the integrated_gradients_num_steps parameter. | ML.EXPLAIN_PREDICT¹ | ML.GLOBAL_EXPLAIN² |
| Supervised models | AutoML Tables | Sampled Shapley | Sampled Shapley assigns credit for the model's outcome to each feature, and considers different permutations of the features. This method provides a sampling approximation of exact Shapley values. | N/A | ML.GLOBAL_EXPLAIN² |
| Time series models | ARIMA_PLUS | Time series decomposition | Decomposes the time series into multiple components if those components are present in the time series. The components include trend, seasonal, holiday, step changes, and spikes and dips. See the ARIMA_PLUS modeling pipeline for more details. | ML.EXPLAIN_FORECAST³ | N/A |
| Time series models | ARIMA_PLUS_XREG | Time series decomposition and Shapley values | Decomposes the time series into multiple components, including trend, seasonal, holiday, step changes, and spikes and dips (similar to ARIMA_PLUS). Attribution of each external regressor is calculated based on Shapley values, which is equal to model weight * feature value. | ML.EXPLAIN_FORECAST³ | N/A |
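To give a feel for how the sampled Shapley method assigns credit, here is a minimal Python sketch with a hand-rolled sampler and a toy two-feature model (both invented for illustration; this is not BigQuery ML's implementation). Each sampled permutation switches features from their baseline values to the explained row's values one at a time, and a feature's attribution is its average marginal contribution.

```python
import random

def sampled_shapley(f, x, baseline, num_samples=200, seed=0):
    """Approximate Shapley values for model f at point x by averaging
    each feature's marginal contribution over random feature orderings."""
    rng = random.Random(seed)
    n = len(x)
    phi = [0.0] * n
    for _ in range(num_samples):
        order = list(range(n))
        rng.shuffle(order)
        current = list(baseline)
        prev = f(current)
        for i in order:
            current[i] = x[i]        # switch feature i from baseline to x
            val = f(current)
            phi[i] += val - prev     # marginal contribution of feature i
            prev = val
    return [p / num_samples for p in phi]

# Hypothetical toy model with an interaction term (not a BigQuery ML model).
model = lambda v: 2 * v[0] + v[0] * v[1]
x, baseline = [1.0, 3.0], [0.0, 0.0]
attrs = sampled_shapley(model, x, baseline)
# Completeness: attributions sum to model(x) - model(baseline) = 5.0.
print(attrs, sum(attrs))
```

Because each permutation's contributions telescope, the attributions always sum exactly to the difference between the prediction and the baseline prediction; more samples only sharpen how that total is split across features.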

1 ML.EXPLAIN_PREDICT is an extended version of ML.PREDICT.

2 ML.GLOBAL_EXPLAIN returns the global explainability obtained by taking the mean absolute attribution that each feature receives for all the rows in the evaluation dataset.

3 ML.EXPLAIN_FORECAST is an extended version of ML.FORECAST.

4 ML.ADVANCED_WEIGHTS is an extended version of ML.WEIGHTS.
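The integrated gradients method described earlier can also be sketched numerically in a few lines of Python (a toy differentiable function and a forward-difference gradient, invented for illustration; this is not BigQuery ML's implementation). The `num_steps` argument here plays the same role as the integrated_gradients_num_steps parameter: more steps give a finer approximation of the path integral.

```python
def integrated_gradients(f, x, baseline, num_steps=100):
    """Approximate integrated gradients for a scalar function f:
    attribution_i = (x_i - b_i) * integral of df/dx_i along the straight
    path from baseline to x, computed as a midpoint Riemann sum."""
    n = len(x)
    eps = 1e-6
    attrs = [0.0] * n
    for k in range(num_steps):
        alpha = (k + 0.5) / num_steps   # midpoint of step k on the path
        point = [baseline[i] + alpha * (x[i] - baseline[i]) for i in range(n)]
        base_val = f(point)
        for i in range(n):
            bumped = list(point)
            bumped[i] += eps
            grad_i = (f(bumped) - base_val) / eps   # numeric partial derivative
            attrs[i] += grad_i * (x[i] - baseline[i]) / num_steps
    return attrs

# Hypothetical differentiable "model" (not a real BigQuery ML model).
model = lambda v: v[0] ** 2 + 3 * v[1]
x, baseline = [2.0, 1.0], [0.0, 0.0]
attrs = integrated_gradients(model, x, baseline)
# Completeness: attributions sum to approximately model(x) - model(baseline) = 7.0.
print(attrs, sum(attrs))
```

Like Shapley values, integrated gradients satisfy the completeness axiom: the attributions sum (up to discretization error) to the difference between the prediction and the baseline prediction.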

Explainable AI in Vertex AI

Explainable AI is available in Vertex AI for the following subset of exportable supervised learning models:

Model type Explainable AI method
dnn_classifier Integrated gradients
dnn_regressor Integrated gradients
dnn_linear_combined_classifier Integrated gradients
dnn_linear_combined_regressor Integrated gradients
boosted_tree_regressor Sampled Shapley
boosted_tree_classifier Sampled Shapley
random_forest_regressor Sampled Shapley
random_forest_classifier Sampled Shapley

See Feature Attribution Methods to learn more about these methods.

Enable Explainable AI in Model Registry

When your BigQuery ML model is registered in Model Registry, and it is a model type that supports Explainable AI, you can enable Explainable AI on the model when you deploy it to an endpoint. When you register your BigQuery ML model, all of the associated metadata is populated for you.

  1. Register your BigQuery ML model to the Model Registry .
  2. Go to the Model Registry page from the BigQuery section in the Google Cloud console.
  3. From the Model Registry, select the BigQuery ML model and click the model version to redirect to the model detail page.
  4. Select More actions from the model version.
  5. Click Deploy to endpoint.
  6. Define your endpoint: create an endpoint name, and then click Continue.
  7. Select a machine type, for example, n1-standard-2 .
  8. Under Model settings, in the logging section, select the checkbox to enable Explainability options.
  9. Click Done, and then Continue to deploy to the endpoint.

To learn how to use XAI on your models from the Model Registry, see Get an online explanation using your deployed model. To learn more about XAI in Vertex AI, see Get explanations.
