In each call to a model, you can send along a model configuration to control how the model generates a response. Each model offers different configuration options.
You can also experiment with prompts and model configurations using Google AI Studio.
Configure Gemini models
This section shows you how to set up a configuration for use with Gemini models and provides a description of each parameter.
Set up a model configuration (Gemini)
Config for general use cases
The configuration is maintained for the lifetime of the instance. If you want to
use a different config, create a new GenerativeModel
instance with that
config.
Swift
Set the values of the parameters in a `GenerationConfig` as part of creating a `GenerativeModel` instance.

```swift
import FirebaseAI

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
let config = GenerationConfig(
  candidateCount: 1,
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
  maxOutputTokens: 200,
  stopSequences: ["red"]
)

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `GenerativeModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  generationConfig: config
)

// ...
```
Kotlin
Set the values of the parameters in a `GenerationConfig` as part of creating a `GenerativeModel` instance.

```kotlin
// ...

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
val config = generationConfig {
    candidateCount = 1
    maxOutputTokens = 200
    stopSequences = listOf("red")
    temperature = 0.9f
    topK = 16
    topP = 0.1f
}

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `GenerativeModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "GEMINI_MODEL_NAME",
    generationConfig = config
)

// ...
```
Java
Set the values of the parameters in a `GenerationConfig` as part of creating a `GenerativeModel` instance.

```java
// ...

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
GenerationConfig.Builder configBuilder = new GenerationConfig.Builder();
configBuilder.candidateCount = 1;
configBuilder.maxOutputTokens = 200;
configBuilder.stopSequences = List.of("red");
configBuilder.temperature = 0.9f;
configBuilder.topK = 16;
configBuilder.topP = 0.1f;

GenerationConfig config = configBuilder.build();

// Specify the config as part of creating the `GenerativeModel` instance
GenerativeModelFutures model = GenerativeModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .generativeModel("GEMINI_MODEL_NAME", config)
);

// ...
```
Web
Set the values of the parameters in a `GenerationConfig` as part of creating a `GenerativeModel` instance.

```javascript
// ...

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
const generationConfig = {
  candidateCount: 1,
  maxOutputTokens: 200,
  stopSequences: ["red"],
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
};

// Specify the config as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, {
  model: "GEMINI_MODEL_NAME",
  generationConfig
});

// ...
```
Dart
Set the values of the parameters in a `GenerationConfig` as part of creating a `GenerativeModel` instance.

```dart
// ...

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
final generationConfig = GenerationConfig(
  candidateCount: 1,
  maxOutputTokens: 200,
  stopSequences: ["red"],
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `GenerativeModel` instance
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  config: generationConfig,
);

// ...
```
Unity
Set the values of the parameters in a `GenerationConfig` as part of creating a `GenerativeModel` instance.

```csharp
// ...

// Set parameter values in a `GenerationConfig`.
// IMPORTANT: Example values shown here. Make sure to update for your use case.
var generationConfig = new GenerationConfig(
    candidateCount: 1,
    maxOutputTokens: 200,
    stopSequences: new string[] { "red" },
    temperature: 0.9f,
    topK: 16,
    topP: 0.1f
);

// Specify the config as part of creating the `GenerativeModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetGenerativeModel(
    modelName: "GEMINI_MODEL_NAME",
    generationConfig: generationConfig
);
```
You can find a description of each parameter in the next section of this page.
Config for the Gemini Live API
The configuration is maintained for the lifetime of the instance. If you want to
use a different config, create a new LiveModel
instance with that
config.
Swift
The Live API is not yet supported for Apple platform apps, but check back soon!
Kotlin
Set the values of parameters in a `LiveGenerationConfig` as part of creating a `LiveModel` instance.

```kotlin
// ...

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
val config = liveGenerationConfig {
    maxOutputTokens = 200
    responseModality = ResponseModality.AUDIO
    speechConfig = SpeechConfig(voice = Voices.FENRIR)
    temperature = 0.9f
    topK = 16
    topP = 0.1f
}

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `LiveModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).liveModel(
    modelName = "GEMINI_MODEL_NAME",
    generationConfig = config
)

// ...
```
Java
Set the values of parameters in a `LiveGenerationConfig` as part of creating a `LiveModel` instance.

```java
// ...

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
LiveGenerationConfig.Builder configBuilder = new LiveGenerationConfig.Builder();
configBuilder.setMaxOutputTokens(200);
configBuilder.setResponseModalities(ResponseModality.AUDIO);
configBuilder.setSpeechConfig(new SpeechConfig(Voices.FENRIR));
configBuilder.setTemperature(0.9f);
configBuilder.setTopK(16);
configBuilder.setTopP(0.1f);

LiveGenerationConfig config = configBuilder.build();

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `LiveModel` instance
LiveModelFutures model = LiveModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .liveModel("GEMINI_MODEL_NAME", config)
);

// ...
```
Web
Set the values of parameters in the `LiveGenerationConfig` during initialization of the `LiveGenerativeModel` instance:

```javascript
// ...

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
const generationConfig = {
  maxOutputTokens: 200,
  responseModalities: [ResponseModality.AUDIO],
  speechConfig: {
    voiceConfig: {
      prebuiltVoiceConfig: { voiceName: "Fenrir" },
    },
  },
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
};

// Specify the config as part of creating the `LiveGenerativeModel` instance
const model = getLiveGenerativeModel(ai, {
  model: "GEMINI_MODEL_NAME",
  generationConfig,
});

// ...
```
Dart
Set the values of parameters in a `LiveGenerationConfig` as part of creating a `LiveModel` instance.

```dart
// ...

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
final generationConfig = LiveGenerationConfig(
  maxOutputTokens: 200,
  responseModalities: [ResponseModality.audio],
  speechConfig: SpeechConfig(voiceName: 'Fenrir'),
  temperature: 0.9,
  topP: 0.1,
  topK: 16,
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `LiveModel` instance
final model = FirebaseAI.googleAI().liveModel(
  model: 'GEMINI_MODEL_NAME',
  config: generationConfig,
);

// ...
```
Unity
Set the values of parameters in a `LiveGenerationConfig` as part of creating a `LiveModel` instance.

```csharp
// ...

// Set parameter values in a `LiveGenerationConfig` (example values shown here)
var liveGenerationConfig = new LiveGenerationConfig(
    maxOutputTokens: 200,
    responseModalities: new[] { ResponseModality.Audio },
    speechConfig: SpeechConfig.UsePrebuiltVoice("Fenrir"),
    temperature: 0.9f,
    topK: 16,
    topP: 0.1f
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `LiveModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetLiveModel(
    modelName: "GEMINI_MODEL_NAME",
    liveGenerationConfig: liveGenerationConfig
);
```
You can find a description of each parameter in the next section of this page.
Description of parameters (Gemini)
Here is a high-level overview of the available parameters, as applicable. You can find a comprehensive list of parameters and their values in the Gemini Developer API documentation.
| Parameter | Description | Default value |
|---|---|---|
| Audio timestamp `audioTimestamp` | A boolean that enables timestamp understanding for audio-only input files. | `false` |
| Candidate count `candidateCount` | Specifies the number of response variations to return. For each request, you're charged for the output tokens of all candidates, but you're only charged once for the input tokens. | `1` |
| Frequency penalty `frequencyPenalty` | Controls the probability of including tokens that repeatedly appear in the generated response. Positive values penalize tokens that repeatedly appear in the generated content, decreasing the probability of repeating content. | --- |
| Max output tokens `maxOutputTokens` | Specifies the maximum number of tokens that can be generated in the response. | --- |
| Presence penalty `presencePenalty` | Controls the probability of including tokens that already appear in the generated response. Positive values penalize tokens that already appear in the generated content, increasing the probability of generating more diverse content. | --- |
| Stop sequences `stopSequences` | Specifies a list of strings that tells the model to stop generating content if one of the strings is encountered in the response. | --- |
| Temperature `temperature` | Controls the degree of randomness in the response. Lower temperatures result in more deterministic responses, and higher temperatures result in more diverse or creative responses. | Depends on the model |
| Top-K `topK` | Limits the number of highest-probability words used in the generated content. A top-K value of `1` means the next selected token should be the most probable among all tokens in the model's vocabulary, while a top-K value of `n` means that the next token should be selected from among the `n` most probable tokens (all based on the temperature that's set). | Depends on the model |
| Top-P `topP` | Controls diversity of generated content. Tokens are selected from the most probable (see top-K above) to least probable until the sum of their probabilities equals the top-P value. | Depends on the model |
| Response modality `responseModality` | Specifies the type of streamed output when using the Live API or native multimodal output by a Gemini model, for example text, audio, or images. Only applicable when using the Live API and a `LiveModel`. | --- |
| Speech (voice) `speechConfig` | Specifies the voice used for the streamed audio output when using the Live API. Only applicable when using the Live API and a `LiveModel`. | `Puck` |
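Temperature, top-K, and top-P all act on the same token distribution, in sequence: temperature reshapes the probabilities, top-K trims the candidate list by count, and top-P trims it by cumulative probability. As a rough, framework-free illustration (plain Python with a made-up four-token vocabulary; this sketches the general decoding idea, not the exact internals of any Gemini model):

```python
import math
import random

def sample_next_token(logits, temperature=0.9, top_k=16, top_p=0.1, rng=random):
    """Illustrative decoding step: temperature scaling, then top-K, then top-P."""
    # Temperature divides the logits before softmax: lower => more deterministic.
    scaled = {tok: logit / temperature for tok, logit in logits.items()}
    total = sum(math.exp(v) for v in scaled.values())
    probs = {tok: math.exp(v) / total for tok, v in scaled.items()}

    # Top-K: keep only the K most probable tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # Top-P: keep the smallest prefix whose cumulative probability reaches top_p.
    kept, cumulative = [], 0.0
    for tok, p in ranked:
        kept.append((tok, p))
        cumulative += p
        if cumulative >= top_p:
            break

    # Renormalize the survivors and sample among them.
    norm = sum(p for _, p in kept)
    r, acc = rng.random() * norm, 0.0
    for tok, p in kept:
        acc += p
        if acc >= r:
            return tok
    return kept[-1][0]

# Hypothetical logits for a tiny vocabulary.
logits = {"blue": 4.0, "red": 2.5, "green": 1.0, "violet": 0.2}
print(sample_next_token(logits, temperature=0.9, top_k=16, top_p=0.1))
```

With a very low top-P like `0.1`, the single most probable token usually already exceeds the cumulative threshold, so generation becomes effectively deterministic regardless of the top-K value.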
- Generating structured output (like JSON) is controlled by using the `responseMimeType` and `responseSchema` parameters.
- Setting the thinking budget is controlled by using the `thinkingBudget` parameter (only applicable for Gemini 2.5 models).
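Conceptually, a response schema constrains the model to emit JSON with a known shape; with `responseSchema` set, the backend enforces this for you. A minimal client-side sketch of the same idea (plain Python; the `schema` shape here is a hypothetical stand-in, not the SDK's schema type):

```python
import json

# Hypothetical schema: expected keys and their Python types.
schema = {"name": str, "rating": int}

def conforms(response_text, schema):
    """Parse model output as JSON and check it has the expected keys and types."""
    data = json.loads(response_text)
    return all(isinstance(data.get(key), typ) for key, typ in schema.items())

print(conforms('{"name": "Mochi", "rating": 5}', schema))  # True
```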
Configure Imagen models
This section shows you how to set up a configuration for use with Imagen models and provides a description of each parameter.
Set up a model configuration (Imagen)
The configuration is maintained for the lifetime of the instance. If you want to
use a different config, create a new ImagenModel
instance with that
config.
Swift
Set the values of the parameters in an `ImagenGenerationConfig` as part of creating an `ImagenModel` instance.

```swift
import FirebaseAI

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
let config = ImagenGenerationConfig(
  negativePrompt: "frogs",
  numberOfImages: 2,
  aspectRatio: .landscape16x9,
  imageFormat: .jpeg(compressionQuality: 100),
  addWatermark: false
)

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `ImagenModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).imagenModel(
  modelName: "IMAGEN_MODEL_NAME",
  generationConfig: config
)

// ...
```
Kotlin
Set the values of the parameters in an `ImagenGenerationConfig` as part of creating an `ImagenModel` instance.

```kotlin
// ...

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
val config = ImagenGenerationConfig(
    negativePrompt = "frogs",
    numberOfImages = 2,
    aspectRatio = ImagenAspectRatio.LANDSCAPE_16x9,
    imageFormat = ImagenImageFormat.jpeg(compressionQuality = 100),
    addWatermark = false
)

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `ImagenModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
    modelName = "IMAGEN_MODEL_NAME",
    generationConfig = config
)

// ...
```
Java
Set the values of the parameters in an `ImagenGenerationConfig` as part of creating an `ImagenModel` instance.

```java
// ...

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
ImagenGenerationConfig config = new ImagenGenerationConfig.Builder()
        .setNegativePrompt("frogs")
        .setNumberOfImages(2)
        .setAspectRatio(ImagenAspectRatio.LANDSCAPE_16x9)
        .setImageFormat(ImagenImageFormat.jpeg(100))
        .setAddWatermark(false)
        .build();

// Specify the config as part of creating the `ImagenModel` instance
ImagenModelFutures model = ImagenModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .imagenModel("IMAGEN_MODEL_NAME", config)
);

// ...
```
Web
Set the values of the parameters in an `ImagenGenerationConfig` as part of creating an `ImagenModel` instance.

```javascript
// ...

// Initialize the Gemini Developer API backend service
const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
const generationConfig = {
  negativePrompt: "frogs",
  numberOfImages: 2,
  aspectRatio: ImagenAspectRatio.LANDSCAPE_16x9,
  imageFormat: ImagenImageFormat.jpeg(100),
  addWatermark: false
};

// Specify the config as part of creating the `ImagenModel` instance
const model = getImagenModel(ai, {
  model: "IMAGEN_MODEL_NAME",
  generationConfig
});

// ...
```
Dart
Set the values of the parameters in an `ImagenGenerationConfig` as part of creating an `ImagenModel` instance.

```dart
// ...

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
final generationConfig = ImagenGenerationConfig(
  negativePrompt: 'frogs',
  numberOfImages: 2,
  aspectRatio: ImagenAspectRatio.landscape16x9,
  imageFormat: ImagenImageFormat.jpeg(compressionQuality: 100),
  addWatermark: false,
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `ImagenModel` instance
final model = FirebaseAI.googleAI().imagenModel(
  model: 'IMAGEN_MODEL_NAME',
  config: generationConfig,
);

// ...
```
Unity
Set the values of the parameters in an `ImagenGenerationConfig` as part of creating an `ImagenModel` instance.

```csharp
using Firebase.AI;

// Set parameter values in an `ImagenGenerationConfig` (example values shown here)
var config = new ImagenGenerationConfig(
    numberOfImages: 2,
    aspectRatio: ImagenAspectRatio.Landscape16x9,
    imageFormat: ImagenImageFormat.Jpeg(100)
);

// Initialize the Gemini Developer API backend service
// Specify the config as part of creating the `ImagenModel` instance
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetImagenModel(
    modelName: "imagen-4.0-generate-001",
    generationConfig: config
);

// ...
```
You can find a description of each parameter in the next section of this page.
Description of parameters (Imagen)
Here is a high-level overview of the available parameters, as applicable. You can find a comprehensive list of parameters and their values in the Google Cloud documentation.
| Parameter | Description | Default value |
|---|---|---|
| Negative prompt `negativePrompt` | A description of what you want to omit in generated images. | --- |
| Number of results `numberOfImages` | The number of generated images returned for each request. | Default is one image for Imagen 3 models. |
| Aspect ratio `aspectRatio` | The ratio of width to height of generated images. | Default is square (1:1). |
| Image format `imageFormat` | The output options, like the image format (MIME type) and level of compression of generated images. | Default MIME type is PNG; default compression is 75 (if MIME type is set to JPEG). |
| Watermark `addWatermark` | Whether to add a non-visible digital watermark (called a SynthID) to generated images. | Default is `true` for Imagen 3 models. |
| Generation of people `personGeneration` | Whether to allow generation of people by the model. | Default depends on the model. |
Other options to control content generation
- Learn more about prompt design so that you can influence the model to generate output specific to your needs.
- Use safety settings to adjust the likelihood of getting responses that may be considered harmful, including hate speech and sexually explicit content.
- Set system instructions to steer the behavior of the model. This feature is like a preamble that you add before the model gets exposed to any further instructions from the end user.
- Pass a response schema along with the prompt to specify the structure of the output. This feature is most commonly used when generating JSON output, but it can also be used for classification tasks (like when you want the model to use specific labels or tags).