Structured output

You can guarantee that a model's generated output always adheres to a specific schema so that you receive consistently formatted responses. For example, you might have an established data schema that you use for other tasks. If you have the model follow the same schema, you can directly extract data from the model's output without any post-processing.

To specify the structure of a model's output, define a response schema, which works like a blueprint for model responses. When you submit a prompt that includes the response schema, the model's response always follows your defined schema.

You can control generated output when using the following models:

For Open Models, follow this user guide.

Example use cases

One use case for applying a response schema is to ensure that a model's response produces valid JSON and conforms to your schema. Generative model outputs can have some degree of variability, so including a response schema ensures that you always receive valid JSON. Consequently, your downstream tasks can reliably expect valid JSON input from generated responses.
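For instance, because the returned text is guaranteed to be valid JSON, a downstream consumer can parse it directly. The following minimal sketch uses a hypothetical response string standing in for actual model output:

```python
import json

# Hypothetical response text from a model configured with a JSON response
# schema. Because the schema is enforced, the text is always valid JSON,
# so it parses directly with no cleanup or post-processing step.
response_text = '[{"recipe_name": "Snickerdoodles", "ingredients": ["flour", "sugar"]}]'

recipes = json.loads(response_text)
print(recipes[0]["recipe_name"])  # → Snickerdoodles
```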

Another example is to constrain how a model can respond. For example, you can have a model annotate text with user-defined labels rather than labels that the model produces. This constraint is useful when you expect a specific set of labels, such as positive or negative, and don't want a mixture of other labels that the model might generate, like good, positive, negative, or bad.

Considerations

Consider the following potential limitations if you plan to use a response schema:

  • You must use the API to define and use a response schema. There's no console support.
  • The size of your response schema counts towards the input token limit.
  • Only certain output formats are supported, such as application/json or text/x.enum. For more information, see the responseMimeType parameter in the Gemini API reference.
  • Structured output supports a subset of the Vertex AI schema reference. For more information, see Supported schema fields.
  • A complex schema can result in an InvalidArgument: 400 error. Complexity might come from long property names, long array length limits, enums with many values, objects with lots of optional properties, or a combination of these factors.

    If you get this error with a valid schema, make one or more of the following changes to resolve the error:

    • Shorten property names or enum names.
    • Flatten nested arrays.
    • Reduce the number of properties with constraints, such as numbers with minimum and maximum limits.
    • Reduce the number of properties with complex constraints, such as properties with complex formats like date-time .
    • Reduce the number of optional properties.
    • Reduce the number of valid values for enums.
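As a sketch of these fixes, the following hypothetical before-and-after shows the same data modeled twice; the simplified version shortens property names, drops numeric range constraints, and trims the enum, all of which reduce schema complexity:

```python
# Hypothetical schema fragment that could contribute to an InvalidArgument: 400
# error: long property names, numeric range constraints, and a large enum.
complex_schema = {
    "type": "OBJECT",
    "properties": {
        "customer_satisfaction_rating_value": {
            "type": "INTEGER",
            "minimum": 1,
            "maximum": 10,
        },
        "customer_feedback_category_label": {
            "type": "STRING",
            "enum": ["very_positive", "positive", "neutral",
                     "negative", "very_negative", "mixed"],
        },
    },
}

# The same data, simplified: shorter names, no range constraints, fewer enum values.
simplified_schema = {
    "type": "OBJECT",
    "properties": {
        "rating": {"type": "INTEGER"},
        "category": {"type": "STRING", "enum": ["positive", "neutral", "negative"]},
    },
}
```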

Supported schema fields

Structured output supports the following fields from the Vertex AI schema. If you use an unsupported field, Vertex AI can still handle your request but ignores the field.

  • anyOf
  • enum: only string enums are supported
  • format
  • items
  • maximum
  • maxItems
  • minimum
  • minItems
  • nullable
  • properties
  • propertyOrdering*
  • required

* propertyOrdering is specifically for structured output and not part of the Vertex AI schema. This field defines the order in which properties are generated. The listed properties must be unique and must be valid keys in the properties dictionary.
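For illustration, here is a hypothetical schema that uses propertyOrdering to ask the model to emit name before ingredients; the sanity checks mirror the two rules above:

```python
# Hypothetical schema: propertyOrdering requests that "name" is generated
# before "ingredients". Every listed property must be unique and must also
# appear as a key in the "properties" dictionary.
response_schema = {
    "type": "OBJECT",
    "properties": {
        "ingredients": {"type": "ARRAY", "items": {"type": "STRING"}},
        "name": {"type": "STRING"},
    },
    "propertyOrdering": ["name", "ingredients"],
}

ordering = response_schema["propertyOrdering"]
assert len(set(ordering)) == len(ordering)                        # entries are unique
assert all(k in response_schema["properties"] for k in ordering)  # valid property keys
```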

For the format field, Vertex AI supports the following values: date, date-time, duration, and time. The description and format of each value are described in the OpenAPI Initiative Registry.
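As a sketch, a hypothetical event schema might combine all four supported format values; the example values in the comments follow the OpenAPI Initiative Registry definitions:

```python
# Hypothetical schema using the four "format" values that Vertex AI supports
# for structured output.
event_schema = {
    "type": "OBJECT",
    "properties": {
        "starts_at": {"type": "STRING", "format": "date-time"},  # e.g. 2018-11-13T20:20:39+00:00
        "day": {"type": "STRING", "format": "date"},             # e.g. 2018-11-13
        "doors_open": {"type": "STRING", "format": "time"},      # e.g. 20:20:39+00:00
        "length": {"type": "STRING", "format": "duration"},      # e.g. P3D (ISO 8601)
    },
}

formats = {p["format"] for p in event_schema["properties"].values()}
print(sorted(formats))  # → ['date', 'date-time', 'duration', 'time']
```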

Before you begin

Define a response schema to specify the structure of a model's output, the field names, and the expected data type for each field. Use only the supported fields listed in the Supported schema fields section. All other fields are ignored.

Include your response schema as part of the responseSchema field only. Don't duplicate the schema in your input prompt. If you do, the generated output might be lower in quality.

For sample schemas, see the Example schemas for JSON output section.

Model behavior and response schema

When a model generates a response, it uses the field name and context from your prompt. As such, we recommend that you use a clear structure and unambiguous field names so that your intent is clear.

By default, fields are optional, meaning the model can populate the fields or skip them. You can set fields as required to force the model to provide a value. If there's insufficient context in the associated input prompt, the model generates responses mainly based on the data it was trained on.
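For example, a hypothetical schema might mark title as required while leaving subtitle optional, so the model must always populate title but may omit subtitle:

```python
# Hypothetical schema: "title" is required, so the model must always provide
# a value for it; "subtitle" is optional and may be skipped.
book_schema = {
    "type": "OBJECT",
    "properties": {
        "title": {"type": "STRING"},
        "subtitle": {"type": "STRING"},
    },
    "required": ["title"],
}

optional_fields = set(book_schema["properties"]) - set(book_schema["required"])
print(optional_fields)  # → {'subtitle'}
```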

If you aren't seeing the results you expect, add more context to your input prompts or revise your response schema. For example, review the model's response without structured output to see how the model responds. You can then update your response schema so that it better fits the model's output.

Send a prompt with a response schema

By default, all fields are optional, meaning the model might or might not generate a value for a field. To force the model to always generate a value for a field, set the field as required.

Python

Install

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

from google import genai
from google.genai.types import HttpOptions

response_schema = {
    "type": "ARRAY",
    "items": {
        "type": "OBJECT",
        "properties": {
            "recipe_name": {"type": "STRING"},
            "ingredients": {"type": "ARRAY", "items": {"type": "STRING"}},
        },
        "required": ["recipe_name", "ingredients"],
    },
}

prompt = """
List a few popular cookie recipes.
"""

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config={
        "response_mime_type": "application/json",
        "response_schema": response_schema,
    },
)
print(response.text)
# Example output:
# [
#     {
#         "ingredients": [
#             "2 1/4 cups all-purpose flour",
#             "1 teaspoon baking soda",
#             "1 teaspoon salt",
#             "1 cup (2 sticks) unsalted butter, softened",
#             "3/4 cup granulated sugar",
#             "3/4 cup packed brown sugar",
#             "1 teaspoon vanilla extract",
#             "2 large eggs",
#             "2 cups chocolate chips",
#         ],
#         "recipe_name": "Chocolate Chip Cookies",
#     }
# ]

Go

Learn how to install or update the Go SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithRespSchema shows how to use a response schema to generate output in a specific format.
func generateWithRespSchema(w io.Writer) error {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	config := &genai.GenerateContentConfig{
		ResponseMIMEType: "application/json",
		// See the OpenAPI specification for more details and examples:
		// https://spec.openapis.org/oas/v3.0.3.html#schema-object
		ResponseSchema: &genai.Schema{
			Type: "array",
			Items: &genai.Schema{
				Type: "object",
				Properties: map[string]*genai.Schema{
					"recipe_name": {Type: "string"},
					"ingredients": {
						Type:  "array",
						Items: &genai.Schema{Type: "string"},
					},
				},
				Required: []string{"recipe_name", "ingredients"},
			},
		},
	}
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: "List a few popular cookie recipes."},
			},
			Role: "user",
		},
	}
	modelName := "gemini-2.5-flash"

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()
	fmt.Fprintln(w, respText)

	// Example response:
	// [
	//   {
	//     "ingredients": [
	//       "2 1/4 cups all-purpose flour",
	//       "1 teaspoon baking soda",
	//       ...
	//     ],
	//     "recipe_name": "Chocolate Chip Cookies"
	//   },
	//   {
	//     ...
	//   },
	//   ...
	// ]

	return nil
}

REST

Before using any of the request data, make the following replacements:

  • GENERATE_RESPONSE_METHOD : The type of response that you want the model to generate. Choose a method based on how you want the model's response to be returned:
    • streamGenerateContent : The response is streamed as it's being generated to reduce the perception of latency to a human audience.
    • generateContent : The response is returned after it's fully generated.
  • LOCATION : The region to process the request.
  • PROJECT_ID : Your project ID .
  • MODEL_ID : The model ID of the multimodal model that you want to use.
  • ROLE : The role in a conversation associated with the content. Specifying a role is required even in single-turn use cases. Acceptable values include the following:
    • USER : Specifies content that's sent by you.
  • TEXT : The text instructions to include in the prompt.
  • RESPONSE_MIME_TYPE : The format type of the generated candidate text. For a list of supported values, see the responseMimeType parameter in the Gemini API.
  • RESPONSE_SCHEMA : Schema for the model to follow when generating responses. For more information, see the Schema reference.

HTTP method and URL:

POST https://LOCATION-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/publishers/google/models/MODEL_ID:GENERATE_RESPONSE_METHOD

Request JSON body:

{
  "contents": {
    "role": "ROLE",
    "parts": {
      "text": "TEXT"
    }
  },
  "generation_config": {
    "responseMimeType": "RESPONSE_MIME_TYPE",
    "responseSchema": RESPONSE_SCHEMA
  }
}

To send your request, choose one of these options:

curl

Save the request body in a file named request.json, and execute the following command:

curl -X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json; charset=utf-8" \
-d @request.json \
"https:// LOCATION -aiplatform.googleapis.com/v1/projects/ PROJECT_ID /locations/ LOCATION /publishers/google/models/ MODEL_ID : GENERATE_RESPONSE_METHOD "

PowerShell

Save the request body in a file named request.json, and execute the following command:

$cred = gcloud auth print-access-token
$headers = @{ "Authorization" = "Bearer $cred" }

Invoke-WebRequest `
-Method POST `
-Headers $headers `
-ContentType: "application/json; charset=utf-8" `
-InFile request.json `
-Uri "https:// LOCATION -aiplatform.googleapis.com/v1/projects/ PROJECT_ID /locations/ LOCATION /publishers/google/models/ MODEL_ID : GENERATE_RESPONSE_METHOD " | Select-Object -Expand Content

You should receive a JSON response similar to the following.

Example curl command

LOCATION="us-central1"
MODEL_ID="gemini-2.5-flash"
PROJECT_ID="test-project"
GENERATE_RESPONSE_METHOD="generateContent"

cat << EOF > request.json
{
  "contents": {
    "role": "user",
    "parts": {
      "text": "List a few popular cookie recipes."
    }
  },
  "generation_config": {
    "maxOutputTokens": 2048,
    "responseMimeType": "application/json",
    "responseSchema": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "recipe_name": {
            "type": "string"
          }
        },
        "required": ["recipe_name"]
      }
    }
  }
}
EOF

curl \
-X POST \
-H "Authorization: Bearer $(gcloud auth print-access-token)" \
-H "Content-Type: application/json" \
https://${LOCATION}-aiplatform.googleapis.com/v1/projects/${PROJECT_ID}/locations/${LOCATION}/publishers/google/models/${MODEL_ID}:${GENERATE_RESPONSE_METHOD} \
-d '@request.json'

Example schemas for JSON output

The following sections demonstrate a variety of sample prompts and response schemas. A sample model response is also included after each code sample.

Forecast the weather for each day of the week

The following example outputs a forecast object for each day of the week that includes an array of properties such as the expected temperature and humidity level for the day. Some properties are set to nullable so the model can return a null value when it doesn't have enough context to generate a meaningful response. This strategy helps reduce hallucinations.

Python

Install

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

response_schema = {
    "type": "OBJECT",
    "properties": {
        "forecast": {
            "type": "ARRAY",
            "items": {
                "type": "OBJECT",
                "properties": {
                    "Day": {"type": "STRING", "nullable": True},
                    "Forecast": {"type": "STRING", "nullable": True},
                    "Temperature": {"type": "INTEGER", "nullable": True},
                    "Humidity": {"type": "STRING", "nullable": True},
                    "Wind Speed": {"type": "INTEGER", "nullable": True},
                },
                "required": ["Day", "Temperature", "Forecast", "Wind Speed"],
            },
        }
    },
}

prompt = """
The week ahead brings a mix of weather conditions.
Sunday is expected to be sunny with a temperature of 77°F and a humidity level of 50%. Winds will be light at around 10 km/h.
Monday will see partly cloudy skies with a slightly cooler temperature of 72°F and the winds will pick up slightly to around 15 km/h.
Tuesday brings rain showers, with temperatures dropping to 64°F and humidity rising to 70%.
Wednesday may see thunderstorms, with a temperature of 68°F.
Thursday will be cloudy with a temperature of 66°F and moderate humidity at 60%.
Friday returns to partly cloudy conditions, with a temperature of 73°F and the Winds will be light at 12 km/h.
Finally, Saturday rounds off the week with sunny skies, a temperature of 80°F, and a humidity level of 40%. Winds will be gentle at 8 km/h.
"""

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents=prompt,
    config=GenerateContentConfig(
        response_mime_type="application/json",
        response_schema=response_schema,
    ),
)
print(response.text)
# Example output:
# {"forecast": [{"Day": "Sunday", "Forecast": "sunny", "Temperature": 77, "Wind Speed": 10, "Humidity": "50%"},
#   {"Day": "Monday", "Forecast": "partly cloudy", "Temperature": 72, "Wind Speed": 15},
#   {"Day": "Tuesday", "Forecast": "rain showers", "Temperature": 64, "Wind Speed": null, "Humidity": "70%"},
#   {"Day": "Wednesday", "Forecast": "thunderstorms", "Temperature": 68, "Wind Speed": null},
#   {"Day": "Thursday", "Forecast": "cloudy", "Temperature": 66, "Wind Speed": null, "Humidity": "60%"},
#   {"Day": "Friday", "Forecast": "partly cloudy", "Temperature": 73, "Wind Speed": 12},
#   {"Day": "Saturday", "Forecast": "sunny", "Temperature": 80, "Wind Speed": 8, "Humidity": "40%"}]}

Go

Learn how to install or update the Go SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithNullables shows how to use the response schema with nullable values.
func generateWithNullables(w io.Writer) error {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	prompt := `
The week ahead brings a mix of weather conditions.
Sunday is expected to be sunny with a temperature of 77°F and a humidity level of 50%. Winds will be light at around 10 km/h.
Monday will see partly cloudy skies with a slightly cooler temperature of 72°F and the winds will pick up slightly to around 15 km/h.
Tuesday brings rain showers, with temperatures dropping to 64°F and humidity rising to 70%.
Wednesday may see thunderstorms, with a temperature of 68°F.
Thursday will be cloudy with a temperature of 66°F and moderate humidity at 60%.
Friday returns to partly cloudy conditions, with a temperature of 73°F and the Winds will be light at 12 km/h.
Finally, Saturday rounds off the week with sunny skies, a temperature of 80°F, and a humidity level of 40%. Winds will be gentle at 8 km/h.
`
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: prompt},
			},
			Role: "user",
		},
	}
	config := &genai.GenerateContentConfig{
		ResponseMIMEType: "application/json",
		// See the OpenAPI specification for more details and examples:
		// https://spec.openapis.org/oas/v3.0.3.html#schema-object
		ResponseSchema: &genai.Schema{
			Type: "object",
			Properties: map[string]*genai.Schema{
				"forecast": {
					Type: "array",
					Items: &genai.Schema{
						Type: "object",
						Properties: map[string]*genai.Schema{
							"Day":         {Type: "string", Nullable: genai.Ptr(true)},
							"Forecast":    {Type: "string", Nullable: genai.Ptr(true)},
							"Temperature": {Type: "integer", Nullable: genai.Ptr(true)},
							"Humidity":    {Type: "string", Nullable: genai.Ptr(true)},
							"Wind Speed":  {Type: "integer", Nullable: genai.Ptr(true)},
						},
						Required: []string{"Day", "Temperature", "Forecast", "Wind Speed"},
					},
				},
			},
		},
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()
	fmt.Fprintln(w, respText)

	// Example response:
	// {
	//   "forecast": [
	//     {"Day": "Sunday", "Forecast": "Sunny", "Temperature": 77, "Wind Speed": 10, "Humidity": "50%"},
	//     {"Day": "Monday", "Forecast": "Partly Cloudy", "Temperature": 72, "Wind Speed": 15},
	//     {"Day": "Tuesday", "Forecast": "Rain Showers", "Temperature": 64, "Wind Speed": null, "Humidity": "70%"},
	//     {"Day": "Wednesday", "Forecast": "Thunderstorms", "Temperature": 68, "Wind Speed": null},
	//     {"Day": "Thursday", "Forecast": "Cloudy", "Temperature": 66, "Wind Speed": null, "Humidity": "60%"},
	//     {"Day": "Friday", "Forecast": "Partly Cloudy", "Temperature": 73, "Wind Speed": 12},
	//     {"Day": "Saturday", "Forecast": "Sunny", "Temperature": 80, "Wind Speed": 8, "Humidity": "40%"}
	//   ]
	// }

	return nil
}

Classify a product

The following example includes enums where the model must classify an object's type and condition from a list of given values.

Python

Install

pip install --upgrade google-genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

from google import genai
from google.genai.types import GenerateContentConfig, HttpOptions

client = genai.Client(http_options=HttpOptions(api_version="v1"))
response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="What type of instrument is an oboe?",
    config=GenerateContentConfig(
        response_mime_type="text/x.enum",
        response_schema={
            "type": "STRING",
            "enum": ["Percussion", "String", "Woodwind", "Brass", "Keyboard"],
        },
    ),
)
print(response.text)
# Example output:
# Woodwind

Go

Learn how to install or update the Go SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import (
	"context"
	"fmt"
	"io"

	genai "google.golang.org/genai"
)

// generateWithEnumSchema shows how to use an enum schema to generate output.
func generateWithEnumSchema(w io.Writer) error {
	ctx := context.Background()
	client, err := genai.NewClient(ctx, &genai.ClientConfig{
		HTTPOptions: genai.HTTPOptions{APIVersion: "v1"},
	})
	if err != nil {
		return fmt.Errorf("failed to create genai client: %w", err)
	}

	modelName := "gemini-2.5-flash"
	contents := []*genai.Content{
		{
			Parts: []*genai.Part{
				{Text: "What type of instrument is an oboe?"},
			},
			Role: "user",
		},
	}
	config := &genai.GenerateContentConfig{
		ResponseMIMEType: "text/x.enum",
		ResponseSchema: &genai.Schema{
			Type: "STRING",
			Enum: []string{"Percussion", "String", "Woodwind", "Brass", "Keyboard"},
		},
	}

	resp, err := client.Models.GenerateContent(ctx, modelName, contents, config)
	if err != nil {
		return fmt.Errorf("failed to generate content: %w", err)
	}

	respText := resp.Text()
	fmt.Fprintln(w, respText)

	// Example response:
	// Woodwind

	return nil
}

Node.js

Install

npm install @google/genai

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

const {GoogleGenAI, Type} = require('@google/genai');

const GOOGLE_CLOUD_PROJECT = process.env.GOOGLE_CLOUD_PROJECT;
const GOOGLE_CLOUD_LOCATION = process.env.GOOGLE_CLOUD_LOCATION || 'global';

async function generateContent(
  projectId = GOOGLE_CLOUD_PROJECT,
  location = GOOGLE_CLOUD_LOCATION
) {
  const ai = new GoogleGenAI({
    vertexai: true,
    project: projectId,
    location: location,
  });

  const responseSchema = {
    type: Type.STRING,
    enum: ['Percussion', 'String', 'Woodwind', 'Brass', 'Keyboard'],
  };

  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: 'What type of instrument is an oboe?',
    config: {
      responseMimeType: 'text/x.enum',
      responseSchema: responseSchema,
    },
  });

  console.log(response.text);

  return response.text;
}

Java

Learn how to install or update the Java SDK.

To learn more, see the SDK reference documentation.

Set environment variables to use the Gen AI SDK with Vertex AI:

# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True

import com.google.genai.Client;
import com.google.genai.types.GenerateContentConfig;
import com.google.genai.types.GenerateContentResponse;
import com.google.genai.types.HttpOptions;
import com.google.genai.types.Schema;
import com.google.genai.types.Type;
import java.util.List;

public class ControlledGenerationWithEnumSchema {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String contents = "What type of instrument is an oboe?";
    String modelId = "gemini-2.5-flash";
    generateContent(modelId, contents);
  }

  // Generates content with an enum response schema
  public static String generateContent(String modelId, String contents) {
    // Initialize client that will be used to send requests. This client only needs to be created
    // once, and can be reused for multiple requests.
    try (Client client =
        Client.builder()
            .location("global")
            .vertexAI(true)
            .httpOptions(HttpOptions.builder().apiVersion("v1").build())
            .build()) {

      // Define the response schema with an enum.
      Schema responseSchema =
          Schema.builder()
              .type(Type.Known.STRING)
              .enum_(List.of("Percussion", "String", "Woodwind", "Brass", "Keyboard"))
              .build();

      GenerateContentConfig config =
          GenerateContentConfig.builder()
              .responseMimeType("text/x.enum")
              .responseSchema(responseSchema)
              .build();

      GenerateContentResponse response = client.models.generateContent(modelId, contents, config);

      System.out.print(response.text());
      // Example response:
      // Woodwind
      return response.text();
    }
  }
}