The AI.GENERATE_BOOL function
This document describes the AI.GENERATE_BOOL
function, which lets you
analyze any combination of text and unstructured data. For each row
in the table, the function generates a STRUCT
that contains a BOOL
value.
The function works by sending requests to a Vertex AI Gemini model, and then returning that model's response.
You can use the AI.GENERATE_BOOL
function to perform tasks such as
classification and sentiment analysis.
Prompt design can strongly affect the responses returned by the model. For more information, see Introduction to prompting.
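For example, a simple sentiment-analysis call over a hypothetical table of reviews might look like the following sketch; the mydataset.reviews table and review_text column are placeholder names, not objects used elsewhere in this document:
-- Sketch: mydataset.reviews and review_text are placeholder names.
SELECT review_text, AI.GENERATE_BOOL(('Is the sentiment of the following review positive? ', review_text)).result FROM mydataset.reviews;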
Input
The AI.GENERATE_BOOL function accepts the following types of input:
- Text data from standard tables.
- ObjectRefRuntime values that are generated by the OBJ.GET_ACCESS_URL function. You can use ObjectRef values from standard tables as input to the OBJ.GET_ACCESS_URL function. (Preview)
When you analyze unstructured data, that data must meet the following requirements:
- Content must be in one of the supported formats that are described in the Gemini API model mimeType parameter.
- If you are analyzing a video, the maximum supported length is two minutes. If the video is longer than two minutes, AI.GENERATE_BOOL only returns results based on the first two minutes.
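For example, the following sketch asks a question about each video in a hypothetical object table; mydataset.videos is a placeholder name, and ref is the ObjectRef column of the object table, following the same pattern as the Cloud Storage example later in this document. Only the first two minutes of each video are analyzed.
-- Sketch: mydataset.videos is a placeholder object table with an ObjectRef column named ref.
SELECT uri, AI.GENERATE_BOOL(('Does this video show a product demonstration? ', OBJ.GET_ACCESS_URL(ref, 'r'))).result FROM mydataset.videos;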
Syntax
AI.GENERATE_BOOL( [ prompt => ] 'PROMPT' [, endpoint => 'ENDPOINT'] [, model_params => MODEL_PARAMS] [, connection_id => 'CONNECTION'] [, request_type => 'REQUEST_TYPE'] )
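For example, the simplest possible call passes only the required prompt argument and relies on the defaults for every optional argument; the following is a sketch rather than an example tied to a particular dataset:
-- Minimal call: only the prompt argument is required.
SELECT AI.GENERATE_BOOL('Is the Pacific Ocean larger than the Atlantic Ocean?').result;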
Arguments
AI.GENERATE_BOOL
takes the following arguments:
- PROMPT: a STRING or STRUCT value that specifies the prompt to send to the model. The prompt must be the first argument that you specify. You can provide the prompt value in the following ways:
  - Specify a STRING value. For example, 'Is Seattle a US city?'.
  - Specify a STRUCT value that contains one or more fields. You can use the following types of fields, or arrays containing these types, within the STRUCT value:

    | Field type | Description | Examples |
    |---|---|---|
    | STRING | A string literal, or the name of a STRING column. | String literal: 'Is Seattle a US city?' String column name: my_string_column |
    | ObjectRefRuntime | An ObjectRefRuntime value returned by the OBJ.GET_ACCESS_URL function. The OBJ.GET_ACCESS_URL function takes an ObjectRef value as input, which you can provide by either specifying the name of a column that contains ObjectRef values, or by constructing an ObjectRef value. ObjectRefRuntime values must have the access_url.read_url and details.gcs_metadata.content_type elements of the JSON value populated. | Function call with ObjectRef column: OBJ.GET_ACCESS_URL(my_objectref_column, 'r') Function call with constructed ObjectRef value: OBJ.GET_ACCESS_URL(OBJ.MAKE_REF('gs://image.jpg', 'myconnection'), 'r') |

    The function combines STRUCT fields similarly to a CONCAT operation and concatenates the fields in their specified order. The same is true for the elements of any arrays used within the struct. The following table shows some examples of STRUCT prompt values and how they are interpreted:

    | Struct field types | Struct value | Semantic equivalent |
    |---|---|---|
    | STRUCT<STRING, STRING, STRING> | ('Is the city of ', my_city_column, ' in the US?') | 'Is the city of my_city_column_value in the US?' |
    | STRUCT<STRING, ObjectRefRuntime> | ('Is the city in the following image in the US?', OBJ.GET_ACCESS_URL(image_objectref_column, 'r')) | 'Is the city in the following image in the US?' image |
- ENDPOINT: a STRING value that specifies the Vertex AI endpoint to use for the model. You can specify any generally available or preview Gemini model. If you specify the model name, BigQuery ML automatically identifies and uses the full endpoint of the model. If you don't specify an ENDPOINT value, BigQuery ML selects a recent stable version of Gemini to use.
- MODEL_PARAMS: a JSON literal that provides additional parameters to the model. The MODEL_PARAMS value must conform to the generateContent request body format. You can provide a value for any field in the request body except for the contents field; the contents field is populated with the PROMPT argument value.
- CONNECTION: a STRING value specifying the connection to use to communicate with the model, in the format [PROJECT_ID].LOCATION.CONNECTION_ID. For example, myproject.us.myconnection. If you don't specify a connection, then the query uses your end-user credentials.

  For information about configuring permissions, see Set permissions for BigQuery ML generative AI functions that call Vertex AI models.
- REQUEST_TYPE: a STRING value that specifies the type of inference request to send to the Gemini model. The request type determines what quota the request uses. Valid values are as follows:
  - SHARED: The function only uses dynamic shared quota (DSQ).
  - DEDICATED: The function only uses Provisioned Throughput quota. The function returns an invalid query error if Provisioned Throughput quota isn't available. For more information, see Use Vertex AI Provisioned Throughput.
  - UNSPECIFIED: The function uses quota as follows:
    - If you haven't purchased Provisioned Throughput quota, the function uses DSQ quota.
    - If you have purchased Provisioned Throughput quota, the function uses the Provisioned Throughput quota first. If requests exceed the Provisioned Throughput quota, the overflow traffic uses DSQ quota.

  The default value is UNSPECIFIED.
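As a sketch of how these arguments fit together, the following call names the endpoint, connection, and request type explicitly; the myproject.us.myconnection connection is a placeholder, and gemini-2.5-flash is the model used elsewhere in this document:
-- Sketch: myproject.us.myconnection is a placeholder connection.
SELECT AI.GENERATE_BOOL( prompt => 'Is Seattle a US city?', endpoint => 'gemini-2.5-flash', connection_id => 'myproject.us.myconnection', request_type => 'SHARED' ).result;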
Output
AI.GENERATE_BOOL
returns a STRUCT
value for each row in the table. The struct
contains the following fields:
- result: a BOOL value containing the model's response to the prompt. The result is NULL if the request fails or is filtered by responsible AI.
- full_response: a JSON value containing the response from the projects.locations.endpoints.generateContent call to the model. The generated text is in the text element.
- status: a STRING value that contains the API response status for the corresponding row. This value is empty if the operation was successful.
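For troubleshooting, you can keep the whole struct in a column and then reference its fields. The following sketch reuses the city prompt from the examples below:
-- Sketch: inspect the result, status, and full_response fields for each row.
SELECT city, ai_output.result, ai_output.status, ai_output.full_response FROM ( SELECT city, AI.GENERATE_BOOL(('Is ', city, ' a US city?')) AS ai_output FROM UNNEST(["Seattle", "Beijing"]) AS city );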
Examples
The following examples assume that you have granted the Vertex AI User role to your personal account. For more information, see Run generative AI queries with end-user credentials.
Use string input
To determine whether each city is located in the US, call the AI.GENERATE_BOOL
function and select the result
field in the output
by running the following query:
SELECT city, AI.GENERATE_BOOL(('Is ', city, ' a US city?')).result FROM UNNEST(["Seattle", "Beijing", "Paris", "London"]) AS city;
The result is similar to the following:
+---------+--------+ | city | result | +---------+--------+ | Seattle | true | | Beijing | false | | Paris | false | | London | false | +---------+--------+
Process images in a Cloud Storage bucket
The following query creates an external table from images of pet products stored in a publicly available Cloud Storage bucket:
CREATE SCHEMA IF NOT EXISTS bqml_tutorial; CREATE OR REPLACE EXTERNAL TABLE bqml_tutorial.product_images WITH CONNECTION DEFAULT OPTIONS ( object_metadata = 'SIMPLE', uris = ['gs://cloud-samples-data/bigquery/tutorials/cymbal-pets/images/*.png'] );
To determine which product images show an aquarium, call the AI.GENERATE_BOOL function and select the result field in the output by running the following query:
SELECT uri, STRING(OBJ.GET_ACCESS_URL(ref, 'r').access_urls.read_url) AS signed_url, AI.GENERATE_BOOL(("Is ", OBJ.GET_ACCESS_URL(ref, 'r'), " an aquarium?")).result FROM bqml_tutorial.product_images LIMIT 3;
The result is a table with the uri, signed_url, and result columns for the first three images.
Set the thinking budget for a Gemini 2.5 Flash model
The following query shows how to use the model_params argument to set the model's thinking budget to 0 for the request:
SELECT city, AI.GENERATE_BOOL(('Is ', city, ' a US city?'), endpoint => 'gemini-2.5-flash', model_params => JSON '{"generation_config": {"thinking_config": {"thinking_budget": 0}}}') FROM mydataset.cities;
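The model_params argument can carry other generateContent request body fields in the same way. For example, the following sketch also sets the sampling temperature to 0; the field names follow the same snake_case form as the previous query and are otherwise an assumption:
-- Sketch: combines the thinking budget with a sampling temperature in one model_params value.
SELECT city, AI.GENERATE_BOOL(('Is ', city, ' a US city?'), endpoint => 'gemini-2.5-flash', model_params => JSON '{"generation_config": {"thinking_config": {"thinking_budget": 0}, "temperature": 0}}').result FROM mydataset.cities;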
Best practices
This function passes your input to a Gemini model and
incurs charges in Vertex AI each time it's called.
For information about how to view these charges, see Track costs.
To minimize Vertex AI charges when you use AI.GENERATE_BOOL on data that you filter with a WHERE clause, materialize the filtered data to a table first. For example, the first of the following queries is preferable to the second one:
CREATE TABLE mydataset.cities AS ( SELECT city_name AS city FROM mydataset.customers WHERE ... ); SELECT city, AI.GENERATE_BOOL(('Is ', city, ' a US city?')).result FROM mydataset.cities;
SELECT city, AI.GENERATE_BOOL(('Is ', city, ' a US city?')).result FROM ( SELECT city_name AS city FROM mydataset.customers WHERE ... );
Writing the query results to a table beforehand helps you to ensure that you are sending as few rows as possible to the model.
Use Vertex AI Provisioned Throughput
You can use Vertex AI Provisioned Throughput
with the AI.GENERATE_BOOL
function to provide consistent high throughput for
requests. The remote model that you reference in the AI.GENERATE_BOOL
function
must use a supported Gemini model
in order for you to use Provisioned Throughput.
To use Provisioned Throughput, calculate your Provisioned Throughput requirements
and then purchase Provisioned Throughput
quota before running the AI.GENERATE_BOOL
function. When you purchase
Provisioned Throughput, do the following:
- For Model, select the same Gemini model as the one used by the remote model that you reference in the AI.GENERATE_BOOL function.
- For Region, select the same region as the dataset that contains the remote model that you reference in the AI.GENERATE_BOOL function, with the following exceptions:
  - If the dataset is in the US multi-region, select the us-central1 region.
  - If the dataset is in the EU multi-region, select the europe-west4 region.
After you submit the order, wait for the order to be approved and appear on the Orders page.
After you have purchased Provisioned Throughput quota, use the REQUEST_TYPE
argument to determine how the AI.GENERATE_BOOL
function uses
the quota.
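For example, to send requests that only use Provisioned Throughput quota, set request_type to DEDICATED, as in the following sketch; the table and endpoint follow the earlier examples:
-- Sketch: the query returns an invalid query error if Provisioned Throughput quota isn't available.
SELECT city, AI.GENERATE_BOOL(('Is ', city, ' a US city?'), endpoint => 'gemini-2.5-flash', request_type => 'DEDICATED').result FROM mydataset.cities;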
Locations
You can run AI.GENERATE_BOOL
in all of the regions
that support Gemini models, and also in the US
and EU
multi-regions.
Quotas
See Vertex AI and Cloud AI service functions quotas and limits.
What's next
- For more information about using Vertex AI models to generate text and embeddings, see Generative AI overview.
- For more information about using Cloud AI APIs to perform AI tasks, see AI application overview.
- For more information about supported SQL statements and functions for generative AI models, see End-to-end user journeys for generative AI models.

