The AI.EMBED function

This document describes the AI.EMBED function, which lets you create embeddings from text or image data in BigQuery. For example, the following query creates an embedding for a piece of text:

  SELECT AI.EMBED("Some text to embed!", endpoint => 'text-embedding-005');

The function works by sending a request to a stable Vertex AI embedding model, and then returning that model's response. The function incurs charges in Vertex AI each time it's called.

Embeddings

Embeddings are high-dimensional numerical vectors that represent a given entity. Machine learning (ML) models use embeddings to encode semantics about entities to make it easier to reason about and compare them. If two entities are semantically similar, then their respective embeddings are located near each other in the embedding vector space.

Embeddings help you perform the following tasks:

  • Semantic search: search entities ranked by semantic similarity.
  • Recommendation: return entities with attributes similar to a given entity.
  • Classification: return the class of entities whose attributes are similar to the given entity.
  • Clustering: cluster entities whose attributes are similar to a given entity.
  • Outlier detection: return entities whose attributes are least related to the given entity.
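
For example, the following query is a minimal sketch (it is not one of this page's examples) that pairs AI.EMBED with the ML.DISTANCE function to compare two phrases; a smaller cosine distance indicates that the phrases are semantically closer:

  # A minimal sketch: embed two phrases and measure how close their vectors are.
  # The text-embedding-005 endpoint is the one used elsewhere on this page.
  SELECT
    ML.DISTANCE(
      AI.EMBED('How do I reset my password?', endpoint => 'text-embedding-005').result,
      AI.EMBED('I forgot my login credentials', endpoint => 'text-embedding-005').result,
      'COSINE') AS cosine_distance;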

Input

With the AI.EMBED function, you can analyze the following types of input: text data, which you provide as a STRING value, and image data, which you provide as an ObjectRef or ObjectRefRuntime value.

When you analyze image data, the content must be in one of the supported image formats that are described in the Gemini API model mimeType parameter.

Syntax

Text embedding

  AI.EMBED(
    [ content => ] 'content',
    endpoint => 'endpoint'
    [, task_type => 'task_type']
    [, title => 'title']
    [, model_params => model_params]
    [, connection_id => 'connection']
  )

Arguments

AI.EMBED takes the following arguments:

  • content : a STRING value that provides the text to embed.

    The content value can be a string literal, the name of a table column, or the output of an expression that evaluates to a string.

  • endpoint : a STRING value that specifies a supported Vertex AI text embedding model endpoint to use for the text embedding model. The endpoint value that you specify must include the model version, for example text-embedding-005 . If you specify the model name rather than a URL, BigQuery ML automatically identifies the model and uses the model's full endpoint.
  • task_type : a STRING literal that specifies the intended downstream application to help the model produce better quality embeddings. The task_type argument accepts the following values:
    • RETRIEVAL_QUERY : specifies that the given text is a query in a search or retrieval setting.
    • RETRIEVAL_DOCUMENT : specifies that the given text is a document in a search or retrieval setting.
    • SEMANTIC_SIMILARITY : specifies that the given text will be used for Semantic Textual Similarity (STS).
    • CLASSIFICATION : specifies that the embeddings will be used for classification.
    • CLUSTERING : specifies that the embeddings will be used for clustering.
    • QUESTION_ANSWERING : specifies that the embeddings will be used for question answering.
    • FACT_VERIFICATION : specifies that the embeddings will be used for fact verification.
    • CODE_RETRIEVAL_QUERY : specifies that the embeddings will be used for code retrieval.
  • title : a STRING value that specifies the document title, which the model uses to improve embedding quality. You can only use this parameter if you specify RETRIEVAL_DOCUMENT for the task_type value.
  • model_params : a JSON literal that provides additional parameters to the model. You can use any of the parameters object fields. One of these fields, outputDimensionality , lets you specify the number of dimensions to use when generating embeddings. For example, if you specify 256 for the outputDimensionality field, then the model returns a 256-dimensional embedding for each input value. The optional task_type , title , and model_params arguments are shown together in the sketch after this list.
  • connection_id : a STRING value specifying the connection to use to communicate with the model, in the format PROJECT_ID.LOCATION.CONNECTION_ID. For example, myproject.us.myconnection.

    For user-initiated queries, the connection_id argument is optional. When a user initiates a query, BigQuery ML uses the credentials of the user who submitted the query to run it.

    If your query job is expected to run for 48 hours or longer, you should use the connection_id argument to run the query using a service account.

    Replace the following:

    • PROJECT_ID : the project ID of the project that contains the connection.
    • LOCATION : the location used by the connection. The connection must be in the same region in which the query is run.
    • CONNECTION_ID : the connection ID, for example myconnection.

      You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.

    You need to grant the Vertex AI User role to the connection's service account in the project where you run the function.
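
The following query is a minimal sketch (not one of this page's examples) that shows the optional text embedding arguments together; the content string and title are illustrative placeholders:

  # Illustrative values; title is only valid with task_type => 'RETRIEVAL_DOCUMENT'.
  SELECT
    AI.EMBED(
      content => 'BigQuery is a serverless, fully managed data warehouse.',
      endpoint => 'text-embedding-005',
      task_type => 'RETRIEVAL_DOCUMENT',
      title => 'BigQuery overview',
      model_params => JSON '{"outputDimensionality": 256}'
    ).result AS embedding;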

Multimodal embedding

  AI.EMBED(
    [ content => ] 'content',
    connection_id => 'connection',
    endpoint => 'endpoint'
    [, model_params => model_params]
  )

Arguments

AI.EMBED takes the following arguments:

  • content : a STRING , ObjectRef , or ObjectRefRuntime value that provides the data to embed.
    • For text embeddings, you can specify one of the following:
      • A string literal.
      • The name of a STRING column.
      • The output of an expression that evaluates to a string.
    • For image content embeddings, you can specify one of the following:
      • The name of an ObjectRef column.
      • An ObjectRef value generated by a combination of the OBJ.FETCH_METADATA and OBJ.MAKE_REF functions, as in the sketch after this list. For example, SELECT OBJ.FETCH_METADATA(OBJ.MAKE_REF("gs://mybucket/path/to/file.jpg", "us.connection1"));
      • An ObjectRefRuntime value generated by the OBJ.GET_ACCESS_URL function .

    ObjectRef values must have the details.gcs_metadata.content_type elements of the JSON value populated.

    ObjectRefRuntime values must have the access_urls.read_url and details.gcs_metadata.content_type elements of the JSON value populated.

  • connection_id : a STRING value specifying the connection to use to communicate with the model, in the format PROJECT_ID.LOCATION.CONNECTION_ID. For example, myproject.us.myconnection.

    Replace the following:

    • PROJECT_ID : the project ID of the project that contains the connection.
    • LOCATION : the location used by the connection. The connection must be in the same region in which the query is run.
    • CONNECTION_ID : the connection ID, for example myconnection.

      You can get this value by viewing the connection details in the Google Cloud console and copying the value in the last section of the fully qualified connection ID that is shown in Connection ID. For example, projects/myproject/locations/connection_location/connections/myconnection.

    You need to grant the Vertex AI User role to the connection's service account in the project where you run the function.

  • endpoint : a STRING value that specifies a supported Vertex AI multimodal embedding model endpoint to use for the multimodal embedding model. The endpoint value that you specify must include the model version, for example multimodalembedding@001 . If you specify the model name rather than a URL, BigQuery ML automatically identifies the model and uses the model's full endpoint.
  • model_params : a JSON literal that provides additional parameters to the model. Only the dimension field is supported. You can use the dimension field to specify the number of dimensions to use when generating embeddings. For example, if you specify 256 for the dimension field, then the model returns a 256-dimensional embedding for each input value.
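
The following query is a minimal sketch (not one of this page's examples) of the OBJ.MAKE_REF path described in the content argument; the bucket path and the us.example_connection connection are placeholders that you would replace with your own resources:

  # A minimal sketch with placeholder resources: build an ObjectRef for a single
  # Cloud Storage image, then embed it with the multimodal embedding model.
  SELECT
    AI.EMBED(
      OBJ.FETCH_METADATA(OBJ.MAKE_REF('gs://mybucket/path/to/file.jpg', 'us.example_connection')),
      connection_id => 'us.example_connection',
      endpoint => 'multimodalembedding@001',
      model_params => JSON '{"dimension": 256}'
    ).result AS embedding;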

Output

AI.EMBED returns a STRUCT value for each row in the table. The struct contains the following fields:

  • result : an ARRAY<FLOAT64> value containing the generated embeddings.
  • status : a STRING value that contains the API response status for the corresponding row. This value is empty if the operation was successful.
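
For example, the following query is a minimal sketch (the mydataset.reviews table and review_text column are assumptions, not part of this page) that separates the two struct fields and keeps only rows for which the call succeeded, that is, rows with an empty status value:

  # A minimal sketch with assumed table and column names: keep the embedding and
  # drop rows where the status field reports an error.
  SELECT
    review_text,
    output.result AS embedding
  FROM (
    SELECT
      review_text,
      AI.EMBED(review_text, endpoint => 'text-embedding-005') AS output
    FROM mydataset.reviews
  )
  WHERE output.status = '';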

Examples

Text embedding

The following example shows how to embed a string literal:

  SELECT AI.EMBED(
    'A piece of text to embed',
    endpoint => 'text-embedding-005'
  ).result AS embedding;

If you need to reuse embeddings of the same data across many queries, you should save the results to a table. The following example generates 768-dimensional embeddings for publicly available BBC news articles and writes the results to a table:

  CREATE OR REPLACE TABLE mydataset.bbc_news_embeddings AS
  SELECT
    title,
    body,
    AI.EMBED(
      body,
      endpoint => 'text-embedding-005',
      model_params => JSON '{"outputDimensionality": 768}'
    ).result AS embedding
  FROM `bigquery-public-data.bbc_news.fulltext`;

The following example queries the table that you just created for the five articles that are most related to the topic "latest news in tech". It calls the VECTOR_SEARCH function and uses AI.EMBED to create an embedding to pass to the function as the search query.

  SELECT
    base.title,
    base.body
  FROM VECTOR_SEARCH(
    TABLE mydataset.bbc_news_embeddings,
    # The name of the column that contains the embedding
    'embedding',
    # The embedding to search
    (SELECT AI.EMBED('latest news in tech', endpoint => 'text-embedding-005').result),
    top_k => 5);

Multimodal embedding

The following query creates an external table from images of pet products stored in a publicly available Cloud Storage bucket. Then, it generates embeddings for two of the images:

  # Create a dataset
  CREATE SCHEMA IF NOT EXISTS cymbal_pets;

  # Create an object table
  CREATE OR REPLACE EXTERNAL TABLE cymbal_pets.product_images
  WITH CONNECTION DEFAULT
  OPTIONS (
    object_metadata = 'SIMPLE',
    uris = ['gs://cloud-samples-data/bigquery/tutorials/cymbal-pets/images/*.png']
  );

  SELECT
    ref.uri,
    STRING(OBJ.GET_ACCESS_URL(ref, 'r').access_urls.read_url) AS signed_url,
    AI.EMBED(
      (OBJ.GET_ACCESS_URL(ref, 'r')),
      connection_id => 'us.example_connection',
      endpoint => 'multimodalembedding@001'
    ) AS embedding
  FROM `cymbal_pets.product_images`
  LIMIT 2;

Locations

You can run AI.EMBED in all of the locations that support Vertex AI embedding models, and also in the US and EU multi-regions.

Quotas

See Vertex AI and Cloud AI service functions quotas and limits.

What's next
