Gemini API in Vertex AI quickstart

This quickstart shows you how to install the Google Gen AI SDK for your language of choice and then make your first API request. The samples vary slightly based on whether you authenticate to Vertex AI using an API key or application default credentials (ADC).

Before you begin

Configure application default credentials if you haven't yet.
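
If you have the Google Cloud CLI installed, one common way to set up ADC on a local machine is sketched below; your environment may require a different flow, such as a service account:

# Open a browser-based login flow and store application default
# credentials on your local machine.
gcloud auth application-default login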

Required roles

To get the permissions that you need to use the Gemini API in Vertex AI, ask your administrator to grant you the Vertex AI User (roles/aiplatform.user) IAM role on your project. For more information about granting roles, see Manage access to projects, folders, and organizations.

You might also be able to get the required permissions through custom roles or other predefined roles.
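
If you administer the project yourself, one way to grant the role is with the gcloud CLI, sketched below; PROJECT_ID and USER_EMAIL are placeholders for your own project ID and user account:

# Grant the Vertex AI User role on the project.
# PROJECT_ID and USER_EMAIL are placeholders; replace them with your own values.
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="user:USER_EMAIL" \
  --role="roles/aiplatform.user"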

Install the SDK and set up your environment

On your local machine, follow the instructions for your programming language to install the SDK.

Gen AI SDK for Python

Install and update the Gen AI SDK for Python by running this command.

pip install --upgrade google-genai

Set environment variables:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

Gen AI SDK for Go

Install and update the Gen AI SDK for Go by running this command.

go get google.golang.org/genai

Set environment variables:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

Gen AI SDK for Node.js

Install and update the Gen AI SDK for Node.js by running this command.

npm install @google/genai

Set environment variables:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

Gen AI SDK for Java

Install and update the Gen AI SDK for Java by adding the dependency to your build configuration.

Maven

Add the following to your pom.xml:

<dependencies>
  <dependency>
    <groupId>com.google.genai</groupId>
    <artifactId>google-genai</artifactId>
    <version>0.7.0</version>
  </dependency>
</dependencies>

Set environment variables:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

REST

Set environment variables:

GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
GOOGLE_CLOUD_LOCATION=global
API_ENDPOINT=YOUR_API_ENDPOINT
MODEL_ID="gemini-2.5-flash"
GENERATE_CONTENT_API="generateContent"

Make your first request

Use the generateContent method to send a request to the Gemini API in Vertex AI:

Python

from google import genai
from google.genai.types import HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="How does AI work?",
)
print(response.text)
# Example response:
# Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
#
# Here's a simplified overview:
# ...

Go

import (
    "context"
    "fmt"
    "io"

    "google.golang.org/genai"
)

// generateWithText shows how to generate text using a text prompt.
func generateWithText(w io.Writer) error {
    ctx := context.Background()

    client, err := genai.NewClient(ctx, &genai.ClientConfig{
        HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    })
    if err != nil {
        return fmt.Errorf("failed to create genai client: %w", err)
    }

    resp, err := client.Models.GenerateContent(ctx,
        "gemini-2.5-flash",
        genai.Text("How does AI work?"),
        nil,
    )
    if err != nil {
        return fmt.Errorf("failed to generate content: %w", err)
    }

    respText := resp.Text()

    fmt.Fprintln(w, respText)
    // Example response:
    // That's a great question! Understanding how AI works can feel like ...
    // ...
    // **1. The Foundation: Data and Algorithms**
    // ...

    return nil
}

Node.js

const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const ai = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: 'How does AI work?',
  });

  console.log(response.text);

  return response.text;
}

Java

import com.google.genai.Client;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;

public class TextGenerationWithText {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text with text input
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {
      GenerateContentResponse response =
          client.models.generateContent(modelId, "How does AI work?", null);

      System.out.print(response.text());
      // Example response:
      // Okay, let's break down how AI works. It's a broad field, so I'll focus on the ...
      //
      // Here's a simplified overview:
      // ...
      return response.text();
    }
  }
}

REST

To send this prompt request, run the curl command from the command line or include the REST call in your application.

curl \
  -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${API_ENDPOINT}/v1/projects/${GOOGLE_CLOUD_PROJECT}/locations/${GOOGLE_CLOUD_LOCATION}/publishers/google/models/${MODEL_ID}:${GENERATE_CONTENT_API}" \
  -d $'{
    "contents": {
      "role": "user",
      "parts": {
        "text": "Explain how AI works in a few words"
      }
    }
  }'

The model returns a response. Note that the response is generated in sections, with each section evaluated separately for safety.
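
If you only need the generated text from the JSON response, a minimal sketch is to pipe the same request through jq, assuming jq is installed and the response contains a single candidate with a single text part:

# Same request as above, printing only the first text part of the first candidate.
curl -s -X POST \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://${API_ENDPOINT}/v1/projects/${GOOGLE_CLOUD_PROJECT}/locations/${GOOGLE_CLOUD_LOCATION}/publishers/google/models/${MODEL_ID}:${GENERATE_CONTENT_API}" \
  -d '{"contents": {"role": "user", "parts": {"text": "Explain how AI works in a few words"}}}' \
  | jq -r '.candidates[0].content.parts[0].text'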

Generate images

Gemini can generate and process images conversationally. You can prompt Gemini with text, images, or a combination of both to achieve various image-related tasks, such as image generation and editing. The following code demonstrates how to generate an image based on a descriptive prompt:

You must include responseModalities: ["TEXT", "IMAGE"] in your configuration. Image-only output is not supported with these models.

Python

from google import genai
from google.genai.types import GenerateContentConfig, Modality
from PIL import Image
from io import BytesIO

client = genai.Client()

response = client.models.generate_content(
    model="gemini-2.5-flash-image-preview",
    contents=(
        "Generate an image of the Eiffel tower with fireworks in the background."
    ),
    config=GenerateContentConfig(
        response_modalities=[Modality.TEXT, Modality.IMAGE],
        candidate_count=1,
        safety_settings=[
            {
                "method": "PROBABILITY",
                "category": "HARM_CATEGORY_DANGEROUS_CONTENT",
                "threshold": "BLOCK_MEDIUM_AND_ABOVE",
            },
        ],
    ),
)
for part in response.candidates[0].content.parts:
    if part.text:
        print(part.text)
    elif part.inline_data:
        image = Image.open(BytesIO(part.inline_data.data))
        image.save("output_folder/example-image-eiffel-tower.png")
# Example response:
#   I will generate an image of the Eiffel Tower at night, with a vibrant display of
#   colorful fireworks exploding in the dark sky behind it. The tower will be
#   illuminated, standing tall as the focal point of the scene, with the bursts of
#   light from the fireworks creating a festive atmosphere.

Node.js

const fs = require('fs');
const {GoogleGenAI, Modality} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'us-central1';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const ai = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await ai.models.generateContentStream({
    model: 'gemini-2.0-flash-exp',
    contents:
      'Generate an image of the Eiffel tower with fireworks in the background.',
    config: {
      responseModalities: [Modality.TEXT, Modality.IMAGE],
    },
  });

  const generatedFileNames = [];
  let imageIndex = 0;
  for await (const chunk of response) {
    const text = chunk.text;
    const data = chunk.data;
    if (text) {
      console.debug(text);
    } else if (data) {
      const fileName = `generate_content_streaming_image_${imageIndex++}.png`;
      console.debug(`Writing response image to file: ${fileName}.`);
      try {
        fs.writeFileSync(fileName, data);
        generatedFileNames.push(fileName);
      } catch (error) {
        console.error(`Failed to write image file ${fileName}:`, error);
      }
    }
  }

  return generatedFileNames;
}

Image understanding

Gemini can also understand images. The following code sends a text question along with an image stored in Cloud Storage and asks the model to describe what the image shows:

Python

from google import genai
from google.genai.types import HttpOptions, Part

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=[
        "What is shown in this image?",
        Part.from_uri(
            file_uri="gs://cloud-samples-data/generative-ai/image/scones.jpg",
            mime_type="image/jpeg",
        ),
    ],
)
print(response.text)
# Example response:
# The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...

Go

import (
    "context"
    "fmt"
    "io"

    genai "google.golang.org/genai"
)

// generateWithTextImage shows how to generate text using both text and image input
func generateWithTextImage(w io.Writer) error {
    ctx := context.Background()

    client, err := genai.NewClient(ctx, &genai.ClientConfig{
        HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    })
    if err != nil {
        return fmt.Errorf("failed to create genai client: %w", err)
    }

    modelName := "gemini-2.5-flash"
    contents := []*genai.Content{
        {
            Parts: []*genai.Part{
                {Text: "What is shown in this image?"},
                {FileData: &genai.FileData{
                    // Image source: https://storage.googleapis.com/cloud-samples-data/generative-ai/image/scones.jpg
                    FileURI:  "gs://cloud-samples-data/generative-ai/image/scones.jpg",
                    MIMEType: "image/jpeg",
                }},
            },
            Role: "user",
        },
    }

    resp, err := client.Models.GenerateContent(ctx, modelName, contents, nil)
    if err != nil {
        return fmt.Errorf("failed to generate content: %w", err)
    }

    respText := resp.Text()

    fmt.Fprintln(w, respText)
    // Example response:
    // The image shows an overhead shot of a rustic, artistic arrangement on a surface that ...

    return nil
}

Node.js

const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const ai = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const image = {
    fileData: {
      fileUri: 'gs://cloud-samples-data/generative-ai/image/scones.jpg',
      mimeType: 'image/jpeg',
    },
  };

  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: [image, 'What is shown in this image?'],
  });

  console.log(response.text);

  return response.text;
}

Java

import com.google.genai.Client;
import com.google.genai.types.Content;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;
import com.google.genai.types.Part;

public class TextGenerationWithTextAndImage {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String modelId = "gemini-2.5-flash";
    generateContent(modelId);
  }

  // Generates text with text and image input
  public static String generateContent(String modelId) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {
      GenerateContentResponse response =
          client.models.generateContent(
              modelId,
              Content.fromParts(
                  Part.fromText("What is shown in this image?"),
                  Part.fromUri(
                      "gs://cloud-samples-data/generative-ai/image/scones.jpg", "image/jpeg")),
              null);

      System.out.print(response.text());
      // Example response:
      // The image shows a flat lay of blueberry scones arranged on parchment paper. There are ...
      return response.text();
    }
  }
}

Code execution

The Gemini API in Vertex AI code execution feature enables the model to generate and run Python code and learn iteratively from the results until it arrives at a final output. Vertex AI provides code execution as a tool, similar to function calling. You can use this code execution capability to build applications that benefit from code-based reasoning and that produce text output. For example:

Python

from google import genai
from google.genai.types import (
    HttpOptions,
    Tool,
    ToolCodeExecution,
    GenerateContentConfig,
)

client = genai.Client(http_options=HttpOptions(api_version="v1"))
model_id = "gemini-2.5-flash"

code_execution_tool = Tool(code_execution=ToolCodeExecution())
response = client.models.generate_content(
    model=model_id,
    contents="Calculate 20th fibonacci number. Then find the nearest palindrome to it.",
    config=GenerateContentConfig(
        tools=[code_execution_tool],
        temperature=0,
    ),
)
print("# Code:")
print(response.executable_code)
print("# Outcome:")
print(response.code_execution_result)
# Example response:
# # Code:
# def fibonacci(n):
#     if n <= 0:
#         return 0
#     elif n == 1:
#         return 1
#     else:
#         a, b = 0, 1
#         for _ in range(2, n + 1):
#             a, b = b, a + b
#         return b
#
# fib_20 = fibonacci(20)
# print(f'{fib_20=}')
#
# # Outcome:
# fib_20=6765

Go

import (
    "context"
    "fmt"
    "io"

    genai "google.golang.org/genai"
)

// generateWithCodeExec shows how to generate text using the code execution tool.
func generateWithCodeExec(w io.Writer) error {
    ctx := context.Background()

    client, err := genai.NewClient(ctx, &genai.ClientConfig{
        HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
    })
    if err != nil {
        return fmt.Errorf("failed to create genai client: %w", err)
    }

    prompt := "Calculate 20th fibonacci number. Then find the nearest palindrome to it."
    contents := []*genai.Content{
        {
            Parts: []*genai.Part{
                {Text: prompt},
            },
            Role: "user",
        },
    }
    config := &genai.GenerateContentConfig{
        Tools: []*genai.Tool{
            {CodeExecution: &genai.ToolCodeExecution{}},
        },
        Temperature: genai.Ptr(float32(0.0)),
    }
    modelName := "gemini-2.5-flash"

    resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
    if err != nil {
        return fmt.Errorf("failed to generate content: %w", err)
    }

    for _, p := range resp.Candidates[0].Content.Parts {
        if p.Text != "" {
            fmt.Fprintf(w, "Gemini: %s", p.Text)
        }
        if p.ExecutableCode != nil {
            fmt.Fprintf(w, "Language: %s\n%s\n", p.ExecutableCode.Language, p.ExecutableCode.Code)
        }
        if p.CodeExecutionResult != nil {
            fmt.Fprintf(w, "Outcome: %s\n%s\n", p.CodeExecutionResult.Outcome, p.CodeExecutionResult.Output)
        }
    }
    // Example response:
    // Gemini: Okay, I can do that. First, I'll calculate the 20th Fibonacci number. Then, I need ...
    //
    // Language: PYTHON
    //
    // def fibonacci(n):
    //     ...
    //
    // fib_20 = fibonacci(20)
    // print(f'{fib_20=}')
    //
    // Outcome: OUTCOME_OK
    // fib_20=6765
    //
    // Now that I have the 20th Fibonacci number (6765), I need to find the nearest palindrome. ...
    // ...

    return nil
}

Node.js

const {GoogleGenAI} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const ai = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents:
      'What is the sum of the first 50 prime numbers? Generate and run code for the calculation, and make sure you get all 50.',
    config: {
      tools: [{codeExecution: {}}],
      temperature: 0,
    },
  });

  console.debug(response.executableCode);
  console.debug(response.codeExecutionResult);

  return response.codeExecutionResult;
}

For more examples of code execution, check out the code execution documentation.

What's next

Now that you've made your first API request, explore the guides that show how to set up more advanced Vertex AI features for production code.
