Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
Function calling, also known as tool use, provides the LLM with definitions of external tools (for example, a get_current_weather function). When processing a prompt, the model intelligently determines if a tool is needed and, if so, outputs structured data specifying the tool to call and its parameters (for example, get_current_weather(location='Boston')). Your application then executes this tool and feeds the result back to the model, allowing it to complete its response with dynamic, real-world information or the outcome of an action. This effectively bridges the LLM with your systems and extends its capabilities.
Function calling enables two primary use cases:
Fetching data: Retrieve up-to-date information for model responses, such as current weather, currency conversion, or specific data from knowledge bases and APIs (RAG).
Taking action: Perform external operations like submitting forms, updating application state, or orchestrating agentic workflows (e.g., conversation handoffs).
For more use cases and examples that are powered by function calling, see Use cases.
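Before the SDK-specific samples, the round trip described above can be sketched without any SDK at all. The following is a minimal, illustrative sketch using plain Python dicts whose shapes mirror the API's functionCall and functionResponse parts; the dispatch table and helper names are hypothetical, not part of any SDK.

```python
def get_current_weather(location: str) -> dict:
    # Mock implementation; a real application would call a weather API here.
    return {"location": location, "temperature": 38, "description": "Partly Cloudy"}


# Tools the application is willing to execute, keyed by their declared names.
LOCAL_TOOLS = {"get_current_weather": get_current_weather}


def execute_function_call(function_call: dict) -> dict:
    """Dispatch a model-proposed call to a local function and wrap the result.

    The returned dict has the shape of a functionResponse part, which the
    application sends back to the model in the next conversation turn.
    """
    handler = LOCAL_TOOLS[function_call["name"]]
    result = handler(**function_call["args"])
    return {"functionResponse": {"name": function_call["name"], "response": result}}


# Example: the model proposed get_current_weather(location='Boston, MA').
proposed = {"name": "get_current_weather", "args": {"location": "Boston, MA"}}
print(execute_function_call(proposed))
```

The SDK samples below implement this same loop with typed request and response objects instead of raw dicts.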
The following examples submit a prompt and function declaration to the Gemini models.
REST
PROJECT_ID=myproject
LOCATION=us-central1
MODEL_ID=gemini-2.0-flash-001
curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:generateContent \
  -d '{
    "contents": [{
      "role": "user",
      "parts": [{
        "text": "What is the weather in Boston?"
      }]
    }],
    "tools": [{
      "functionDeclarations": [{
        "name": "get_current_weather",
        "description": "Get the current weather in a given location",
        "parameters": {
          "type": "object",
          "properties": {
            "location": {
              "type": "string",
              "description": "The city name of the location for which to get the weather.",
              "default": {
                "string_value": "Boston, MA"
              }
            }
          },
          "required": ["location"]
        }
      }]
    }]
  }'
Python
You can specify the schema either manually using a Python dictionary or automatically with the from_func helper function. The following example demonstrates how to declare a function manually.
import vertexai
from vertexai.generative_models import (
    Content,
    FunctionDeclaration,
    GenerationConfig,
    GenerativeModel,
    Part,
    Tool,
    ToolConfig,
)

# Initialize Vertex AI
# TODO(developer): Update the project
vertexai.init(project="PROJECT_ID", location="us-central1")

# Initialize Gemini model
model = GenerativeModel(model_name="gemini-2.0-flash")

# Manual function declaration
get_current_weather_func = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    # Function parameters are specified in JSON schema format
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city name of the location for which to get the weather.",
                "default": {"string_value": "Boston, MA"},
            }
        },
    },
)

response = model.generate_content(
    contents=[
        Content(
            role="user",
            parts=[
                Part.from_text("What is the weather like in Boston?"),
            ],
        )
    ],
    generation_config=GenerationConfig(temperature=0),
    tools=[
        Tool(
            function_declarations=[get_current_weather_func],
        )
    ],
)
Alternatively, you can declare the function automatically with the from_func helper function as shown in the following example:
def get_current_weather(location: str = "Boston, MA"):
    """Get the current weather in a given location

    Args:
        location: The city name of the location for which to get the weather.
    """
    # This example uses a mock implementation.
    # You can define a local function or import the requests library to call an API
    return {
        "location": "Boston, MA",
        "temperature": 38,
        "description": "Partly Cloudy",
        "icon": "partly-cloudy",
        "humidity": 65,
        "wind": {"speed": 10, "direction": "NW"},
    }


get_current_weather_func = FunctionDeclaration.from_func(get_current_weather)
Node.js
This example demonstrates a text scenario with one function and one
prompt.
const {
  VertexAI,
  FunctionDeclarationSchemaType,
} = require('@google-cloud/vertexai');

const functionDeclarations = [
  {
    function_declarations: [
      {
        name: 'get_current_weather',
        description: 'get weather in a given location',
        parameters: {
          type: FunctionDeclarationSchemaType.OBJECT,
          properties: {
            location: {type: FunctionDeclarationSchemaType.STRING},
            unit: {
              type: FunctionDeclarationSchemaType.STRING,
              enum: ['celsius', 'fahrenheit'],
            },
          },
          required: ['location'],
        },
      },
    ],
  },
];

const functionResponseParts = [
  {
    functionResponse: {
      name: 'get_current_weather',
      response: {name: 'get_current_weather', content: {weather: 'super nice'}},
    },
  },
];

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function functionCallingStreamContent(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-2.0-flash-001'
) {
  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  // Instantiate the model
  const generativeModel = vertexAI.getGenerativeModel({
    model: model,
  });

  const request = {
    contents: [
      {role: 'user', parts: [{text: 'What is the weather in Boston?'}]},
      {
        role: 'ASSISTANT',
        parts: [
          {
            functionCall: {
              name: 'get_current_weather',
              args: {location: 'Boston'},
            },
          },
        ],
      },
      {role: 'USER', parts: functionResponseParts},
    ],
    tools: functionDeclarations,
  };
  const streamingResp = await generativeModel.generateContentStream(request);
  for await (const item of streamingResp.stream) {
    console.log(item.candidates[0].content.parts[0].text);
  }
}
Go
This example demonstrates a text scenario with one function and one prompt.
Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithFuncCall shows how to submit a prompt and a function declaration to the model,
// allowing it to suggest a call to the function to fetch external data. Returning this data
// enables the model to generate a text response that incorporates the data.
func generateWithFuncCall(w io.Writer) error {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	weatherFunc := &genai.FunctionDeclaration{
		Description: "Returns the current weather in a location.",
		Name:        "getCurrentWeather",
		Parameters: &genai.Schema{
			Type: "object",
			Properties: map[string]*genai.Schema{
				"location": {Type: "string"},
			},
			Required: []string{"location"},
		},
	}
	config := &genai.GenerateContentConfig{
		Tools: []*genai.Tool{
			{FunctionDeclarations: []*genai.FunctionDeclaration{weatherFunc}},
		},
		Temperature: genai.Ptr(float32(0.0)),
	}

	modelName := "gemini-2.5-flash"
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: "What is the weather like in Boston?"},
			},
			Role: "user",
		},
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	var funcCall *genai.FunctionCall
	for _, p := range resp.Candidates[0].Content.Parts {
		if p.FunctionCall != nil {
			funcCall = p.FunctionCall
			fmt.Fprint(w, "The model suggests to call the function ")
			fmt.Fprintf(w, "%q with args: %v\n", funcCall.Name, funcCall.Args)
			// Example response:
			// The model suggests to call the function "getCurrentWeather" with args: map[location:Boston]
		}
	}
	if funcCall == nil {
		return fmt.Errorf("model did not suggest a function call")
	}

	// Use synthetic data to simulate a response from the external API.
	// In a real application, this would come from an actual weather API.
	funcResp := &genai.FunctionResponse{
		Name: "getCurrentWeather",
		Response: map[string]any{
			"location":         "Boston",
			"temperature":      "38",
			"temperature_unit": "F",
			"description":      "Cold and cloudy",
			"humidity":         "65",
			"wind":             `{"speed": "10", "direction": "NW"}`,
		},
	}

	// Return conversation turns and API response to complete the model's response.
	contents = []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: "What is the weather like in Boston?"},
			},
			Role: "user",
		},
		{
			Parts: []*genai.Part{
				{FunctionCall: funcCall},
			},
		},
		{
			Parts: []*genai.Part{
				{FunctionResponse: funcResp},
			},
		},
	}

	resp, err = client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()
	fmt.Fprintln(w, respText)
	// Example response:
	// The weather in Boston is cold and cloudy with a temperature of 38 degrees Fahrenheit. The humidity is...
	return nil
}
C#
This example demonstrates a text scenario with one function and one prompt.
using Google.Cloud.AIPlatform.V1;
using System;
using System.Threading.Tasks;
using Type = Google.Cloud.AIPlatform.V1.Type;
using Value = Google.Protobuf.WellKnownTypes.Value;

public class FunctionCalling
{
    public async Task<string> GenerateFunctionCall(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-2.0-flash-001")
    {
        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();

        // Define the user's prompt in a Content object that we can reuse in
        // model calls
        var userPromptContent = new Content
        {
            Role = "USER",
            Parts =
            {
                new Part { Text = "What is the weather like in Boston?" }
            }
        };

        // Specify a function declaration and parameters for an API request
        var functionName = "get_current_weather";
        var getCurrentWeatherFunc = new FunctionDeclaration
        {
            Name = functionName,
            Description = "Get the current weather in a given location",
            Parameters = new OpenApiSchema
            {
                Type = Type.Object,
                Properties =
                {
                    ["location"] = new()
                    {
                        Type = Type.String,
                        Description = "Get the current weather in a given location"
                    },
                    ["unit"] = new()
                    {
                        Type = Type.String,
                        Description = "The unit of measurement for the temperature",
                        Enum = { "celsius", "fahrenheit" }
                    }
                },
                Required = { "location" }
            }
        };

        // Send the prompt and instruct the model to generate content using the tool that you just created
        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            GenerationConfig = new GenerationConfig
            {
                Temperature = 0f
            },
            Contents = { userPromptContent },
            Tools =
            {
                new Tool
                {
                    FunctionDeclarations = { getCurrentWeatherFunc }
                }
            }
        };

        GenerateContentResponse response = await predictionServiceClient.GenerateContentAsync(generateContentRequest);

        var functionCall = response.Candidates[0].Content.Parts[0].FunctionCall;
        Console.WriteLine(functionCall);

        string apiResponse = "";

        // Check the function name that the model responded with, and make an API call to an external system
        if (functionCall.Name == functionName)
        {
            // Extract the arguments to use in your API call
            string locationCity = functionCall.Args.Fields["location"].StringValue;

            // Here you can use your preferred method to make an API request to
            // fetch the current weather
            // In this example, we'll use synthetic data to simulate a response
            // payload from an external API
            apiResponse = @"{ ""location"": ""Boston, MA"",
                ""temperature"": 38, ""description"": ""Partly Cloudy""}";
        }

        // Return the API response to Gemini so it can generate a model response or request another function call
        generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Contents =
            {
                userPromptContent, // User prompt
                response.Candidates[0].Content, // Function call response
                new Content
                {
                    Parts =
                    {
                        new Part
                        {
                            FunctionResponse = new()
                            {
                                Name = functionName,
                                Response = new()
                                {
                                    Fields =
                                    {
                                        { "content", new Value { StringValue = apiResponse } }
                                    }
                                }
                            }
                        }
                    }
                }
            },
            Tools =
            {
                new Tool
                {
                    FunctionDeclarations = { getCurrentWeatherFunc }
                }
            }
        };

        response = await predictionServiceClient.GenerateContentAsync(generateContentRequest);

        string responseText = response.Candidates[0].Content.Parts[0].Text;
        Console.WriteLine(responseText);

        return responseText;
    }
}
Java
import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Content;
import com.google.cloud.vertexai.api.FunctionDeclaration;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.api.Schema;
import com.google.cloud.vertexai.api.Tool;
import com.google.cloud.vertexai.api.Type;
import com.google.cloud.vertexai.generativeai.ChatSession;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.IOException;
import java.util.Arrays;
import java.util.Collections;

public class FunctionCalling {
  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-2.0-flash-001";
    String promptText = "What's the weather like in Paris?";

    whatsTheWeatherLike(projectId, location, modelName, promptText);
  }

  // A request involving the interaction with an external tool
  public static String whatsTheWeatherLike(String projectId, String location,
                                           String modelName, String promptText)
      throws IOException {
    // Initialize client that will be used to send requests.
    // This client only needs to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      FunctionDeclaration functionDeclaration = FunctionDeclaration.newBuilder()
          .setName("getCurrentWeather")
          .setDescription("Get the current weather in a given location")
          .setParameters(
              Schema.newBuilder()
                  .setType(Type.OBJECT)
                  .putProperties("location", Schema.newBuilder()
                      .setType(Type.STRING)
                      .setDescription("location")
                      .build())
                  .addRequired("location")
                  .build())
          .build();

      System.out.println("Function declaration:");
      System.out.println(functionDeclaration);

      // Add the function to a "tool"
      Tool tool = Tool.newBuilder()
          .addFunctionDeclarations(functionDeclaration)
          .build();

      // Start a chat session from a model, with the use of the declared function.
      GenerativeModel model = new GenerativeModel(modelName, vertexAI)
          .withTools(Arrays.asList(tool));
      ChatSession chat = model.startChat();

      System.out.println(String.format("Ask the question: %s", promptText));
      GenerateContentResponse response = chat.sendMessage(promptText);

      // The model will most likely return a function call to the declared
      // function `getCurrentWeather` with "Paris" as the value for the
      // argument `location`.
      System.out.println("\nPrint response: ");
      System.out.println(ResponseHandler.getContent(response));

      // Provide an answer to the model so that it knows what the result
      // of a "function call" is.
      Content content =
          ContentMaker.fromMultiModalData(
              PartMaker.fromFunctionResponse(
                  "getCurrentWeather",
                  Collections.singletonMap("currentWeather", "sunny")));
      System.out.println("Provide the function response: ");
      System.out.println(content);
      response = chat.sendMessage(content);

      // See what the model replies now
      System.out.println("Print response: ");
      String finalAnswer = ResponseHandler.getText(response);
      System.out.println(finalAnswer);
      return finalAnswer;
    }
  }
}
If the model determines that it needs the output of a particular function, the
response that the application receives from the model contains the function name
and the parameter values that the function should be called with.
The following is an example of a model response to the user prompt "What is the weather like in Boston?". The model proposes calling
the get_current_weather function with the parameter Boston, MA.
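As an illustrative sketch of what such a model response looks like (abridged; fields such as finishReason and usage metadata are omitted), the candidate content carries a functionCall part rather than text:

```json
{
  "candidates": [
    {
      "content": {
        "role": "model",
        "parts": [
          {
            "functionCall": {
              "name": "get_current_weather",
              "args": {
                "location": "Boston, MA"
              }
            }
          }
        ]
      }
    }
  ]
}
```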
Invoke the external API and pass the API output back to the model.
The following example uses synthetic data to simulate a response payload from an
external API and submits the output back to the model.
REST
PROJECT_ID=myproject
MODEL_ID=gemini-2.0-flash
LOCATION="us-central1"

curl -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:generateContent \
  -d '{
    "contents": [
      {
        "role": "user",
        "parts": {
          "text": "What is the weather in Boston?"
        }
      },
      {
        "role": "model",
        "parts": [
          {
            "functionCall": {
              "name": "get_current_weather",
              "args": {
                "location": "Boston, MA"
              }
            }
          }
        ]
      },
      {
        "role": "user",
        "parts": [
          {
            "functionResponse": {
              "name": "get_current_weather",
              "response": {
                "temperature": 20,
                "unit": "C"
              }
            }
          }
        ]
      }
    ],
    "tools": [
      {
        "function_declarations": [
          {
            "name": "get_current_weather",
            "description": "Get the current weather in a specific location",
            "parameters": {
              "type": "object",
              "properties": {
                "location": {
                  "type": "string",
                  "description": "The city name of the location for which to get the weather."
                }
              },
              "required": ["location"]
            }
          }
        ]
      }
    ]
  }'
Python
function_response_contents = []
function_response_parts = []

# Iterates through the function calls in the response in case there are parallel function call requests
for function_call in response.candidates[0].function_calls:
    print(f"Function call: {function_call.name}")

    # In this example, we'll use synthetic data to simulate a response payload from an external API
    if function_call.args["location"] == "Boston, MA":
        api_response = {"location": "Boston, MA", "temperature": 38, "description": "Partly Cloudy"}
    if function_call.args["location"] == "San Francisco, CA":
        api_response = {"location": "San Francisco, CA", "temperature": 58, "description": "Sunny"}

    function_response_parts.append(
        Part.from_function_response(
            name=function_call.name,
            response={"contents": api_response},
        )
    )

# Add the function call response to the contents
function_response_contents = Content(role="user", parts=function_response_parts)

# Submit the User's prompt, model's response, and API output back to the model
response = model.generate_content(
    [
        Content(  # User prompt
            role="user",
            parts=[
                Part.from_text("What is the weather like in Boston?"),
            ],
        ),
        response.candidates[0].content,  # Function call response
        function_response_contents,  # API output
    ],
    tools=[
        Tool(
            function_declarations=[get_current_weather_func],
        )
    ],
)

# Get the model summary response
print(response.text)
If the model had proposed several parallel function calls, the application must
provide all of the responses back to the model. To learn more, see Parallel function calling example.
The model may determine that the
output of another function is necessary for responding to the prompt. In this case,
the response that the application receives from the model contains another
function name and another set of parameter values.
If the model determines that the API response is sufficient for responding to
the user's prompt, it creates a natural language response and returns it to the
application. In this case, the application must pass the response back to the
user. The following is an example of a natural language response:
It is currently 38 degrees Fahrenheit in Boston, MA with partly cloudy skies.
Function calling with thoughts
When calling functions with thinking enabled, you'll
need to get the thought_signature from
the model response object and return it when you send the result of the function
execution back to the model. For example:
Python
# Call the model with function declarations
# ...Generation config, Configure the client, and Define user prompt (No changes)

# Send request with declarations (using a thinking model)
response = client.models.generate_content(
    model="gemini-2.5-flash", config=config, contents=contents
)

# See thought signatures
for part in response.candidates[0].content.parts:
    if not part.text:
        continue
    if part.thought and part.thought_signature:
        print("Thought signature:")
        print(part.thought_signature)
Viewing thought signatures isn't required, but you will need to adjust Step
2 to return them along with the result of the function execution
so the model can incorporate the thoughts into its final response:
Python
# Create user friendly response with function result and call the model again
# ...Create a function response part (No change)

# Append thought signatures, function call and result of the function execution to contents
function_call_content = response.candidates[0].content
# Append the model's function call message, which includes thought signatures
contents.append(function_call_content)
# Append the function response
contents.append(types.Content(role="user", parts=[function_response_part]))

final_response = client.models.generate_content(
    model="gemini-2.5-flash",
    config=config,
    contents=contents,
)

print(final_response.text)
When returning thought signatures, follow these guidelines:
The model returns signatures within other parts in the response,
for example function call, text, or thought summary parts.
Return the entire response with all parts back to the model in
subsequent turns.
Don't merge a part with one signature with another part that also contains a
signature. Signatures can't be concatenated together.
Don't merge a part with a signature with another part without a signature.
This breaks the correct positioning of the thought represented by the
signature.
Learn more about limitations and usage of thought signatures, and about thinking
models in general, on the Thinking page.
Parallel function calling
For prompts such as "Get weather details in Boston and San Francisco?",
the model may propose several parallel function calls. For a list of models that
support parallel function calling, see Supported models.
REST
This example demonstrates a scenario with one get_current_weather function.
The user prompt is "Get weather details in Boston and San Francisco?". The
model proposes two parallel get_current_weather function calls: one with the
parameter Boston and the other with the parameter San Francisco.
To learn more about the request parameters, see Gemini API.
The following command demonstrates how you can provide the function output to
the model. Replace my-project with the name of your Google Cloud project.
Model request
PROJECT_ID=my-project
MODEL_ID=gemini-2.0-flash
LOCATION="us-central1"
curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:generateContent \
-d '{
"contents": [
{
"role": "user",
"parts": {
"text": "What is difference in temperature in Boston and San Francisco?"
}
},
{
"role": "model",
"parts": [
{
"functionCall": {
"name": "get_current_weather",
"args": {
"location": "Boston"
}
}
},
{
"functionCall": {
"name": "get_current_weather",
"args": {
"location": "San Francisco"
}
}
}
]
},
{
"role": "user",
"parts": [
{
"functionResponse": {
"name": "get_current_weather",
"response": {
"temperature": 30.5,
"unit": "C"
}
}
},
{
"functionResponse": {
"name": "get_current_weather",
"response": {
"temperature": 20,
"unit": "C"
}
}
}
]
}
],
"tools": [
{
"function_declarations": [
{
"name": "get_current_weather",
"description": "Get the current weather in a specific location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city name of the location for which to get the weather."
}
},
"required": [
"location"
]
}
}
]
}
]
}'
The natural language response created by the model is similar to the following:
Model response
[
{
"candidates": [
{
"content": {
"parts": [
{
"text": "The temperature in Boston is 30.5C and the temperature in San Francisco is 20C. The difference is 10.5C. \n"
}
]
},
"finishReason": "STOP",
...
}
]
...
}
]
Python
This example demonstrates a scenario with one get_current_weather function.
The user prompt is "What is the weather like in Boston and San Francisco?".
Replace my-project with the name of your Google Cloud project.
import vertexai
from vertexai.generative_models import (
    Content,
    FunctionDeclaration,
    GenerationConfig,
    GenerativeModel,
    Part,
    Tool,
    ToolConfig,
)

# Initialize Vertex AI
# TODO(developer): Update the project
vertexai.init(project="my-project", location="us-central1")

# Initialize Gemini model
model = GenerativeModel(model_name="gemini-2.0-flash")

# Manual function declaration
get_current_weather_func = FunctionDeclaration(
    name="get_current_weather",
    description="Get the current weather in a given location",
    # Function parameters are specified in JSON schema format
    parameters={
        "type": "object",
        "properties": {
            "location": {
                "type": "string",
                "description": "The city name of the location for which to get the weather.",
                "default": {"string_value": "Boston, MA"},
            }
        },
    },
)

response = model.generate_content(
    contents=[
        Content(
            role="user",
            parts=[
                Part.from_text("What is the weather like in Boston and San Francisco?"),
            ],
        )
    ],
    generation_config=GenerationConfig(temperature=0),
    tools=[
        Tool(
            function_declarations=[get_current_weather_func],
        )
    ],
)
The following command demonstrates how you can provide the function output to
the model.
function_response_contents = []
function_response_parts = []

# You can have parallel function call requests for the same function type.
# For example, 'location_to_lat_long("London")' and 'location_to_lat_long("Paris")'
# In that case, collect API responses in parts and send them back to the model
for function_call in response.candidates[0].function_calls:
    print(f"Function call: {function_call.name}")

    # In this example, we'll use synthetic data to simulate a response payload from an external API
    if function_call.args["location"] == "Boston, MA":
        api_response = {"location": "Boston, MA", "temperature": 38, "description": "Partly Cloudy"}
    if function_call.args["location"] == "San Francisco, CA":
        api_response = {"location": "San Francisco, CA", "temperature": 58, "description": "Sunny"}

    function_response_parts.append(
        Part.from_function_response(
            name=function_call.name,
            response={"contents": api_response},
        )
    )

# Add the function call response to the contents
function_response_contents = Content(role="user", parts=function_response_parts)

response = model.generate_content(
    contents=[
        Content(
            role="user",
            parts=[
                Part.from_text("What is the weather like in Boston and San Francisco?"),
            ],
        ),  # User prompt
        response.candidates[0].content,  # Function call response
        function_response_contents,  # Function response
    ],
    tools=[
        Tool(
            function_declarations=[get_current_weather_func],
        )
    ],
)

# Get the model summary response
print(response.text)
Go
import (
	"context"
	"encoding/json"
	"errors"
	"fmt"
	"io"

	"cloud.google.com/go/vertexai/genai"
)

// parallelFunctionCalling shows how to execute multiple function calls in parallel
// and return their results to the model for generating a complete response.
func parallelFunctionCalling(w io.Writer, projectID, location, modelName string) error {
	// location = "us-central1"
	// modelName = "gemini-2.0-flash-001"
	ctx := context.Background()
	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("failed to create GenAI client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)
	// Set temperature to 0.0 for maximum determinism in function calling.
	model.SetTemperature(0.0)

	funcName := "getCurrentWeather"
	funcDecl := &genai.FunctionDeclaration{
		Name:        funcName,
		Description: "Get the current weather in a given location",
		Parameters: &genai.Schema{
			Type: genai.TypeObject,
			Properties: map[string]*genai.Schema{
				"location": {
					Type: genai.TypeString,
					Description: "The location for which to get the weather. " +
						"It can be a city name, a city name and state, or a zip code. " +
						"Examples: 'San Francisco', 'San Francisco, CA', '95616', etc.",
				},
			},
			Required: []string{"location"},
		},
	}
	// Add the weather function to our model toolbox.
	model.Tools = []*genai.Tool{
		{
			FunctionDeclarations: []*genai.FunctionDeclaration{funcDecl},
		},
	}

	prompt := genai.Text("Get weather details in New Delhi and San Francisco?")
	resp, err := model.GenerateContent(ctx, prompt)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}
	if len(resp.Candidates) == 0 {
		return errors.New("got empty response from model")
	} else if len(resp.Candidates[0].FunctionCalls()) == 0 {
		return errors.New("got no function call suggestions from model")
	}

	// In a production environment, consider adding validations for function names and arguments.
	for _, fnCall := range resp.Candidates[0].FunctionCalls() {
		fmt.Fprintf(w, "The model suggests to call the function %q with args: %v\n", fnCall.Name, fnCall.Args)
		// Example response:
		// The model suggests to call the function "getCurrentWeather" with args: map[location:New Delhi]
		// The model suggests to call the function "getCurrentWeather" with args: map[location:San Francisco]
	}

	// Use synthetic data to simulate responses from the external API.
	// In a real application, this would come from an actual weather API.
	mockAPIResp1, err := json.Marshal(map[string]string{
		"location":         "New Delhi",
		"temperature":      "42",
		"temperature_unit": "C",
		"description":      "Hot and humid",
		"humidity":         "65",
	})
	if err != nil {
		return fmt.Errorf("failed to marshal function response to JSON: %w", err)
	}
	mockAPIResp2, err := json.Marshal(map[string]string{
		"location":         "San Francisco",
		"temperature":      "36",
		"temperature_unit": "F",
		"description":      "Cold and cloudy",
		"humidity":         "N/A",
	})
	if err != nil {
		return fmt.Errorf("failed to marshal function response to JSON: %w", err)
	}

	// Note that the function calls don't have to be chained. We can obtain both
	// responses in parallel and return them to Gemini at once.
	funcResp1 := &genai.FunctionResponse{
		Name: funcName,
		Response: map[string]any{
			"content": mockAPIResp1,
		},
	}
	funcResp2 := &genai.FunctionResponse{
		Name: funcName,
		Response: map[string]any{
			"content": mockAPIResp2,
		},
	}

	// Return both API responses to the model allowing it to complete its response.
	resp, err = model.GenerateContent(ctx, prompt, funcResp1, funcResp2)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}
	if len(resp.Candidates) == 0 || len(resp.Candidates[0].Content.Parts) == 0 {
		return errors.New("got empty response from model")
	}

	fmt.Fprintln(w, resp.Candidates[0].Content.Parts[0])
	// Example response:
	// The weather in New Delhi is hot and humid with a humidity of 65 and a temperature of 42°C. The weather in San Francisco ...
	return nil
}
Function calling modes
You can control how the model uses the provided tools (function declarations) by setting the mode within the function_calling_config.
AUTO: The default model behavior. The model decides whether to predict function calls or respond with natural language based on the context. This is the most flexible mode and recommended for most scenarios.
VALIDATED (Preview): The model is constrained to predict either function calls or natural language, and ensures function schema adherence. If allowed_function_names is not provided, the model picks from all of the available function declarations. If allowed_function_names is provided, the model picks from the set of allowed functions.
ANY: The model is constrained to always predict one or more function calls and ensures function schema adherence. If allowed_function_names is not provided, the model picks from all of the available function declarations. If allowed_function_names is provided, the model picks from the set of allowed functions. Use this mode when you require a function call response to every prompt (if applicable).
NONE: The model is prohibited from making function calls. This is equivalent to sending a request without any function declarations. Use this mode to temporarily disable function calling without removing your tool definitions.
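As an illustrative sketch of where this setting lives in a REST request body, a toolConfig block like the following restricts the model to calling only get_current_weather (field names shown in the REST API's camelCase convention; verify against the Gemini API reference for your API version):

```json
{
  "toolConfig": {
    "functionCallingConfig": {
      "mode": "ANY",
      "allowedFunctionNames": ["get_current_weather"]
    }
  }
}
```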
Forced function calling
Instead of allowing the model to choose between a natural language response and a function call, you can force it to only predict function calls. This is known as forced function calling. You can also choose to provide the model with a full set of function declarations, but restrict its responses to a subset of these functions.
The following example forces the model to predict only get_weather function calls.
Python
response = model.generate_content(
    contents=[
        Content(
            role="user",
            parts=[
                Part.from_text("What is the weather like in Boston?"),
            ],
        )
    ],
    generation_config=GenerationConfig(temperature=0),
    tools=[
        Tool(
            function_declarations=[get_weather_func, some_other_function],
        )
    ],
    tool_config=ToolConfig(
        function_calling_config=ToolConfig.FunctionCallingConfig(
            # ANY mode forces the model to predict only function calls
            mode=ToolConfig.FunctionCallingConfig.Mode.ANY,
            # Allowed function calls to predict when the mode is ANY. If empty, any of
            # the provided function calls will be predicted.
            allowed_function_names=["get_weather"],
        )
    ),
)
Function schema examples
Function declarations are compatible with the OpenAPI schema. We support the following attributes: type, nullable, required, format, description, properties, items, enum, anyOf, $ref, and $defs. Remaining attributes are not supported.
Function with object and array parameters
The following example uses a Python dictionary to declare a function that takes both object and array parameters:
extract_sale_records_func = FunctionDeclaration(
    name="extract_sale_records",
    description="Extract sale records from a document.",
    parameters={
        "type": "object",
        "properties": {
            "records": {
                "type": "array",
                "description": "A list of sale records",
                "items": {
                    "description": "Data for a sale record",
                    "type": "object",
                    "properties": {
                        "id": {
                            "type": "integer",
                            "description": "The unique id of the sale.",
                        },
                        "date": {
                            "type": "string",
                            "description": "Date of the sale, in the format of MMDDYY, e.g., 031023",
                        },
                        "total_amount": {
                            "type": "number",
                            "description": "The total amount of the sale.",
                        },
                        "customer_name": {
                            "type": "string",
                            "description": "The name of the customer, including first name and last name.",
                        },
                        "customer_contact": {
                            "type": "string",
                            "description": "The phone number of the customer, e.g., 650-123-4567.",
                        },
                    },
                    "required": ["id", "date", "total_amount"],
                },
            },
        },
        "required": ["records"],
    },
)
Function with enum parameter
The following example uses a Python dictionary to declare a function that takes an integer enum parameter:
set_status_func = FunctionDeclaration(
    name="set_status",
    description="set a ticket's status field",
    # Function parameters are specified in JSON schema format
    parameters={
        "type": "object",
        "properties": {
            "status": {
                "type": "integer",
                # Provide integer (or any other type) values as strings.
                "enum": ["10", "20", "30"],
            }
        },
    },
)
Function with ref and def
The following JSON function declaration uses the ref and defs attributes:
{
  "contents": ...,
  "tools": [
    {
      "function_declarations": [
        {
          "name": "get_customer",
          "description": "Search for a customer by name",
          "parameters": {
            "type": "object",
            "properties": {
              "first_name": { "ref": "#/defs/name" },
              "last_name": { "ref": "#/defs/name" }
            },
            "defs": {
              "name": { "type": "string" }
            }
          }
        }
      ]
    }
  ]
}
Usage notes:
Unlike the OpenAPI schema, specify ref and defs without the $ symbol.
ref must refer to a direct child of defs; no external references.
The maximum depth of nested schema is 32.
Recursion depth in defs (self-reference) is limited to two.
from_func with array parameter
The following code sample declares a function that multiplies an array of numbers and uses from_func to generate the FunctionDeclaration schema.
from typing import List

# Define a function. Could be a local function or you can import the requests library to call an API
def multiply_numbers(numbers: List[int] = [1, 1]) -> int:
    """Calculates the product of all numbers in an array.

    Args:
        numbers: An array of numbers to be multiplied.

    Returns:
        The product of all the numbers. If the array is empty, returns 1.
    """
    if not numbers:  # Handle empty array
        return 1
    product = 1
    for num in numbers:
        product *= num
    return product

multiply_number_func = FunctionDeclaration.from_func(multiply_numbers)

"""
multiply_number_func contains the following schema:

{'name': 'multiply_numbers',
 'description': 'Calculates the product of all numbers in an array.',
 'parameters': {'properties': {'numbers': {'items': {'type': 'INTEGER'},
                                           'description': 'list of numbers',
                                           'default': [1.0, 1.0],
                                           'title': 'Numbers',
                                           'type': 'ARRAY'}},
                'description': 'Calculates the product of all numbers in an array.',
                'title': 'multiply_numbers',
                'property_ordering': ['numbers'],
                'type': 'OBJECT'}}
"""
Best practices for function calling
Write clear and detailed function names, parameter descriptions, and instructions
Function names must start with a letter or an underscore and contain only the characters a-z, A-Z, 0-9, underscores, dots, or dashes, with a maximum length of 64 characters.
Be extremely clear and specific in your function and parameter descriptions.
The model relies on these to choose the correct function and provide
appropriate arguments. For example, a book_flight_ticket function could
have the description: "book flight tickets after confirming users' specific requirements, such as time, departure, destination, party size and preferred airline".
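The naming rule above can be checked mechanically before registering a tool. A minimal sketch; the helper name and regex are my own for illustration, not part of the SDK:

```python
import re

# Starts with a letter or underscore, followed by up to 63 more characters
# drawn from letters, digits, underscores, dots, or dashes (64 total max).
_FUNCTION_NAME_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_.\-]{0,63}$")

def is_valid_function_name(name: str) -> bool:
    """Return True if `name` satisfies the documented naming rule."""
    return _FUNCTION_NAME_RE.fullmatch(name) is not None
```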
Use strong typed parameters
If the parameter values are from a finite set, add an enum field instead of putting the set of values into the description. If the parameter value is always an integer, set the type to integer rather than number.
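To illustrate the difference, here are two ways to declare the same status parameter as a plain JSON-schema dict; the second, strongly typed form is preferred. The parameter values are illustrative assumptions:

```python
# Weakly typed: the legal values live only in free-form prose, which the
# model can misread or ignore.
status_weak = {
    "type": "string",
    "description": "Ticket status: one of 'open', 'pending', or 'closed'.",
}

# Strongly typed: the legal values are machine-readable via `enum`, so the
# model is constrained to emit exactly one of them.
status_strong = {
    "type": "string",
    "enum": ["open", "pending", "closed"],
    "description": "Ticket status.",
}
```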
Tool selection
While the model can use an arbitrary number of tools, providing too many can
increase the risk of selecting an incorrect or suboptimal tool. For best
results, aim to provide only the relevant tools for the context or task,
ideally keeping the active set to a maximum of 10-20. If you have a large
total number of tools, consider dynamic tool selection based on
conversation context.
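Dynamic tool selection can be as simple as a keyword filter over a tool registry, run before each request. A hedged sketch; the registry, tool names, and matching heuristic below are all illustrative assumptions, not SDK APIs:

```python
# Hypothetical registry mapping tool names to routing keywords.
TOOL_REGISTRY = {
    "get_weather": ("weather", "temperature", "forecast"),
    "book_flight_ticket": ("flight", "airline", "airport"),
    "get_stock_price": ("stock", "share", "ticker"),
}

def select_tools(user_message: str, max_tools: int = 20) -> list[str]:
    """Pick only the tools whose keywords appear in the user's message."""
    text = user_message.lower()
    selected = [
        name
        for name, keywords in TOOL_REGISTRY.items()
        if any(kw in text for kw in keywords)
    ]
    # Fall back to the full registry if nothing matched, still capped so
    # the active set stays small.
    return (selected or list(TOOL_REGISTRY))[:max_tools]
```

The selected names would then drive which function declarations (and, optionally, which allowed_function_names) are sent for that turn.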
If you provide generic, low-level tools (like bash), the model might use the tool
more often, but with less accuracy. If you provide a specific, high-level tool
(like get_weather), the model will be able to use the tool more accurately, but
the tool might not be used as often.
Use system instructions
When using functions with date, time, or location parameters, include the
current date, time, or relevant location information (for example, city and
country) in the system instruction. This provides the model with the necessary
context to process the request accurately, even if the user's prompt lacks
details.
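For example, a helper that stamps the current UTC date and the user's location into a system instruction might look like the following. The helper name and wording are illustrative, not an SDK API:

```python
from datetime import datetime, timezone

def build_system_instruction(user_location: str) -> str:
    """Prepend current-date and location context for date/time-aware tools."""
    today = datetime.now(timezone.utc).date().isoformat()
    return (
        f"Today's date is {today} (UTC). "
        f"The user is located in {user_location}. "
        "Resolve relative expressions like 'tomorrow' or 'nearby' "
        "against this date and location."
    )
```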
Prompt engineering
For best results, prepend the user prompt with the following details:
Additional context for the model; for example: "You are a flight API assistant to help with searching flights based on user preferences."
Details or instructions on how and when to use the functions; for example: "Don't make assumptions on the departure or destination airports. Always use a future date for the departure or destination time."
Instructions to ask clarifying questions if user queries are ambiguous; for example: "Ask clarifying questions if not enough information is available."
Use generation configuration
For the temperature parameter, use 0 or another low value. This instructs
the model to generate more confident results and reduces hallucinations.
Validate the API call
If the model proposes the invocation of a function that would send an order,
update a database, or otherwise have significant consequences, validate the
function call with the user before executing it.
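One way to enforce this is a small gate between the model's proposed call and its execution. A sketch under assumed names (SIDE_EFFECT_FUNCTIONS, the handler map, and the confirm callback are all illustrative):

```python
from typing import Any, Callable

# Functions with significant consequences that require user confirmation.
SIDE_EFFECT_FUNCTIONS = {"submit_order", "update_database"}

def execute_function_call(
    name: str,
    args: dict[str, Any],
    handlers: dict[str, Callable[..., Any]],
    confirm: Callable[[str, dict[str, Any]], bool],
) -> dict[str, Any]:
    """Run a model-proposed call, asking the user first if it has side effects."""
    if name in SIDE_EFFECT_FUNCTIONS and not confirm(name, args):
        return {"status": "cancelled", "reason": "user declined"}
    return {"status": "ok", "result": handlers[name](**args)}
```

Read-only calls (like get_current_weather) pass straight through; calls with side effects execute only after confirm returns True, for example via a console prompt or a UI dialog.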
Use thought signatures
Thought signatures should always be used
with function calling for best results.
Pricing
The pricing for function calling is based on the number of characters within the
text inputs and outputs. To learn more, see Vertex AI pricing.
Here, text input (prompt)
refers to the user prompt for the current conversation turn, the function
declarations for the current conversation turn, and the history of the
conversation. The history of the conversation includes the queries, the function
calls, and the function responses of previous conversation turns.
Vertex AI truncates the history of the conversation at 32,000 characters.
Text output (response) refers to the function calls and the text responses
for the current conversation turn.
Use cases of function calling
You can use function calling for the following tasks:
Integrate with external APIs: Get weather information using a meteorological API.
Interpret voice commands: Create functions that correspond with
in-vehicle tasks. For example, you can create functions that turn on the
radio or activate the air conditioning. Send audio files of the user's voice
commands to the model, and ask the model to convert the audio into text and
identify the function that the user wants to call.
Automate workflows based on environmental triggers: Create functions to
represent processes that can be automated. Provide the model with data from
environmental sensors and ask it to parse and process the data to determine
whether one or more of the workflows should be activated. For example, a
model could process temperature data in a warehouse and choose to activate a
sprinkler function.
Automate the assignment of support tickets: Provide the model with
support tickets, logs, and context-aware rules. Ask the model to process all
of this information to determine who the ticket should be assigned to. Call
a function to assign the ticket to the person suggested by the model.
Retrieve information from a knowledge base: Create functions that
retrieve academic articles on a given subject and summarize them. Enable the
model to answer questions about academic subjects and provide citations for
its answers.