# BigQuery Explainable AI overview
This document describes how BigQuery ML supports
Explainable artificial intelligence (AI), sometimes called XAI.
Explainable AI helps you understand the results that
your predictive machine learning model generates for classification and
regression tasks by defining how each feature in a row of data contributed to
the predicted result. This information is often referred to as feature
attribution. You can use this information to verify that the model is behaving
as expected, to recognize biases in your models, and to inform ways to
improve your model and your training data.
BigQuery ML and Vertex AI both offer Explainable AI
features that provide feature-based explanations. You can perform
explainability in BigQuery ML, or you can
[register your model](/bigquery/docs/managing-models-vertex#register_models)
in Vertex AI and perform explainability there.

For information about the supported SQL statements and functions for each
model type, see
[End-to-end user journey for each model](/bigquery/docs/e2e-journey).
## Local versus global explainability
There are two types of explainability: local explainability and global
explainability. These are also known respectively as *local feature
importance* and *global feature importance*.

- Local explainability returns feature attribution values for each explained
  example. These values describe how much a particular feature affected the
  prediction relative to the baseline prediction.
- Global explainability returns the feature's overall influence on the
  model and is often obtained by aggregating the feature attributions over the
  entire dataset. A higher absolute value indicates the feature had a greater
  influence on the model's predictions.
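The aggregation step behind global explainability can be sketched as follows. This is a minimal illustration with made-up feature names and attribution values, not output from a real model; in BigQuery ML, the same mean-absolute-attribution aggregation is what `ML.GLOBAL_EXPLAIN` performs over the evaluation dataset.

```python
# Sketch: deriving global feature importance from per-row (local)
# attributions. The attribution values below are invented for
# illustration; in BigQuery ML they would come from ML.EXPLAIN_PREDICT.
local_attributions = [
    {"age": 1.2, "income": -0.4},   # row 1
    {"age": -0.8, "income": 0.9},   # row 2
    {"age": 0.5, "income": -1.1},   # row 3
]

def global_importance(rows):
    """Mean absolute attribution per feature across all rows."""
    features = rows[0].keys()
    n = len(rows)
    return {f: sum(abs(r[f]) for r in rows) / n for f in features}

print(global_importance(local_attributions))
# age: (1.2 + 0.8 + 0.5) / 3 ≈ 0.833, income: (0.4 + 0.9 + 1.1) / 3 ≈ 0.8
```

Taking the absolute value before averaging matters: a feature that pushes some predictions up and others down would otherwise cancel to near zero and look unimportant.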
## Explainable AI offerings in BigQuery ML
Explainable AI in BigQuery ML supports a variety of machine
learning models, including both time series and non-time series models. Each of
the models takes advantage of a different explainability method.
- **Shapley values (linear models):** Shapley values for linear models are
  equal to `model weight * feature value`, where feature values are
  standardized and model weights are trained with the standardized feature
  values.
- **Global feature importance (tree models):** a global feature importance
  score that indicates how useful or valuable each feature was in the
  construction of the boosted tree or random forest model during training.
- **Integrated gradients:** a gradients-based method that efficiently computes
  feature attributions with the same axiomatic properties as the Shapley
  value. It provides a sampling approximation of exact feature attributions.
  Its accuracy is controlled by the `integrated_gradients_num_steps`
  parameter.
- **Sampled Shapley:** assigns credit for the model's outcome to each feature
  and considers different permutations of the features. This method provides a
  sampling approximation of exact Shapley values.
- **Time series decomposition (`ARIMA_PLUS`):** decomposes the time series
  into multiple components if those components are present in the time series.
  The components include trend, seasonal, holiday, step changes, and spikes
  and dips. See the `ARIMA_PLUS` modeling pipeline for more details.
- **Time series decomposition with external regressors:** decomposes the time
  series into multiple components, including trend, seasonal, holiday, step
  changes, and spikes and dips (similar to `ARIMA_PLUS`). The attribution of
  each external regressor is calculated based on Shapley values, which are
  equal to `model weight * feature value`.
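To make the `model weight * feature value` formula concrete, here is a minimal sketch. The weights, intercept, and standardized feature values are invented for illustration, not taken from any real model:

```python
# Sketch: local attributions for a linear model, where each feature's
# attribution is model weight * standardized feature value.
# All numbers below are invented for illustration.
weights = {"age": 0.7, "income": -0.3}
intercept = 1.5                      # serves as the baseline prediction
row = {"age": 1.2, "income": 0.4}    # standardized feature values

attributions = {f: weights[f] * row[f] for f in weights}
prediction = intercept + sum(attributions.values())

print(attributions)  # age ≈ 0.84, income ≈ -0.12
print(prediction)    # 1.5 + 0.84 - 0.12 ≈ 2.22
```

A useful property of this decomposition is that the attributions sum exactly to the difference between the prediction and the baseline (the intercept), so each feature's contribution is directly comparable.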
^1^ `ML.EXPLAIN_PREDICT` is an extended version of `ML.PREDICT`.

^2^ `ML.GLOBAL_EXPLAIN` returns the global explainability
obtained by taking the mean absolute attribution that each feature receives for
all the rows in the evaluation dataset.

^3^ `ML.EXPLAIN_FORECAST` is an extended version of `ML.FORECAST`.

^4^ `ML.ADVANCED_WEIGHTS` is an extended version of `ML.WEIGHTS`.
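As a rough illustration of why a parameter like `integrated_gradients_num_steps` controls accuracy, the sketch below approximates integrated gradients for a toy one-variable model `f(x) = x**2`. This is a generic illustration of the method, not BigQuery ML's implementation: the attribution is a Riemann-sum approximation of a path integral of gradients, and more steps bring it closer to the exact value `f(x) - f(baseline)`.

```python
# Sketch: integrated gradients for a toy model f(x) = x**2, approximated
# with a left Riemann sum along the straight path from baseline to x.
# Generic illustration only; not BigQuery ML's implementation.
def grad_f(x):
    return 2 * x  # derivative of f(x) = x**2

def integrated_gradients(x, baseline=0.0, num_steps=50):
    """Approximate the attribution of x relative to the baseline."""
    total = sum(
        grad_f(baseline + (k / num_steps) * (x - baseline))
        for k in range(num_steps)
    )
    return (x - baseline) * total / num_steps

# The exact attribution is f(3) - f(0) = 9; more steps get closer to it.
for steps in (10, 100, 1000):
    print(steps, integrated_gradients(3.0, num_steps=steps))
# ≈ 8.1, ≈ 8.91, ≈ 8.991 (approaching 9.0 as num_steps grows)
```

The error shrinks as the number of steps grows, which is the trade-off the `integrated_gradients_num_steps` parameter exposes: higher values give more accurate attributions at higher compute cost.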
## Explainable AI in Vertex AI
Explainable AI is available in Vertex AI for the following
subset of exportable supervised learning models:

See
[Feature Attribution Methods](/vertex-ai/docs/explainable-ai/overview#feature-attribution-methods)
to learn more about these methods.
### Enable Explainable AI in Model Registry

When your BigQuery ML model is registered in
Model Registry, and it is a type of model that supports
Explainable AI, you can enable Explainable AI on the model when deploying it to
an endpoint. When you register your BigQuery ML model, all of the
associated metadata is populated for you.
> **Note:** Explainable AI incurs a minor additional cost. See
> [Vertex AI pricing](/vertex-ai/pricing) to learn more.

1. [Register your BigQuery ML model to the Model Registry](/bigquery/docs/managing-models-vertex#register_models).
2. Go to the **Model Registry** page from the BigQuery section in the Google Cloud console.
3. From the Model Registry, select the BigQuery ML model and click the model version to go to the model detail page.
4. Select **More actions** from the model version.
5. Click **Deploy to endpoint**.
6. Define your endpoint: create an endpoint name and click **Continue**.
7. Select a machine type, for example, `n1-standard-2`.
8. Under **Model settings**, in the logging section, select the checkbox to enable explainability options.
9. Click **Done**, and then **Continue** to deploy to the endpoint.

To learn how to use XAI on your models from the Model Registry, see
[Get an online explanation using your deployed model](/vertex-ai/docs/tabular-data/classification-regression/get-online-predictions#online-explanation).
To learn more about XAI in Vertex AI, see
[Get explanations](/vertex-ai/docs/explainable-ai/getting-explanations).

## What's next

- Learn how to [manage BigQuery ML models in Vertex AI](/bigquery/docs/managing-models-vertex).

Last updated 2025-09-04 UTC.