Response message for [PredictionService.GenerateContent].
candidates[]
object ( Candidate
)
Output only. Generated candidates.
modelVersion
string
Output only. The model version used to generate the response.
createTime
string ( Timestamp
format)
Output only. The timestamp of when the request was made to the server.
Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
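As a concrete illustration, timestamps in this format can be parsed with Python's standard library. This is a sketch: `parse_create_time` is a hypothetical helper, and it truncates nanosecond precision to microseconds because `datetime` cannot represent more.

```python
import re
from datetime import datetime

def parse_create_time(ts: str) -> datetime:
    """Parse an RFC 3339 timestamp such as createTime (hypothetical helper).

    Older versions of datetime.fromisoformat() reject a trailing "Z",
    so normalize it to an explicit UTC offset first, and truncate any
    fractional digits beyond microseconds (e.g. 9-digit nanoseconds).
    """
    ts = ts.replace("Z", "+00:00")
    ts = re.sub(r"(\.\d{6})\d+", r"\1", ts)
    return datetime.fromisoformat(ts)

# All three documented example forms parse to timezone-aware datetimes.
t1 = parse_create_time("2014-10-02T15:01:23Z")
t2 = parse_create_time("2014-10-02T15:01:23.045123456Z")
t3 = parse_create_time("2014-10-02T15:01:23+05:30")
```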
responseId
string
Output only. An identifier for each response; it is an encoding of the eventId.
promptFeedback
object ( PromptFeedback
)
Output only. Content filter results for a prompt sent in the request. Note: This is sent only in the first stream chunk, and only when no candidates were generated due to content violations.
| JSON representation |
|---|
| { "candidates" : [ { object (Candidate) } ] , "modelVersion" : string , "createTime" : string , "responseId" : string , "promptFeedback" : { object (PromptFeedback) } } |
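To make the shape concrete, here is a minimal sketch of pulling text out of a parsed response, treating it as a plain JSON dictionary. The payload is hypothetical, and it assumes the usual Content shape (a `parts` list whose entries may carry a `text` field), which is defined outside this page.

```python
def first_candidate_text(response: dict) -> str:
    """Concatenate the text parts of the first candidate, if any."""
    candidates = response.get("candidates", [])
    if not candidates:
        return ""
    parts = candidates[0].get("content", {}).get("parts", [])
    return "".join(p.get("text", "") for p in parts)

# Hypothetical response payload shaped like the schema above.
response = {
    "candidates": [
        {
            "index": 0,
            "content": {"role": "model",
                        "parts": [{"text": "Hello, "}, {"text": "world."}]},
            "finishReason": "STOP",
        }
    ],
    "modelVersion": "example-model-001",
    "responseId": "abc123",
}
print(first_candidate_text(response))  # -> Hello, world.
```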
Candidate
A response candidate generated from the model.
index
integer
Output only. The 0-based index of this candidate in the list of generated responses. This is useful for distinguishing between multiple candidates when candidateCount > 1.
content
object ( Content
)
Output only. The content of the candidate.
avgLogprobs
number
Output only. The average log probability of the tokens in this candidate. This is a length-normalized score that can be used to compare the quality of candidates of different lengths. A higher average log probability suggests a more confident and coherent response.
logprobsResult
object ( LogprobsResult
)
Output only. The detailed log probability information for the tokens in this candidate. This is useful for debugging, understanding model uncertainty, and identifying potential "hallucinations".
finishReason
enum ( FinishReason
)
Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating.
| JSON representation |
|---|
| { "index" : integer , "content" : { object (Content) } , "avgLogprobs" : number , "logprobsResult" : { object (LogprobsResult) } , "finishReason" : enum (FinishReason) } |
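Because avgLogprobs is length-normalized, it can be used directly to rank candidates. A minimal sketch with hypothetical scores:

```python
def best_candidate(candidates: list) -> dict:
    """Pick the candidate with the highest average log probability.

    avgLogprobs is length-normalized, so it is comparable across
    candidates of different lengths; higher suggests more confidence.
    """
    return max(candidates, key=lambda c: c.get("avgLogprobs", float("-inf")))

# Hypothetical candidates with made-up scores.
candidates = [
    {"index": 0, "avgLogprobs": -0.82},
    {"index": 1, "avgLogprobs": -0.31},
]
print(best_candidate(candidates)["index"])  # -> 1
```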
LogprobsResult
The log probabilities of the tokens generated by the model.
This is useful for understanding the model's confidence in its predictions and for debugging. For example, you can use log probabilities to identify when the model is making a less confident prediction or to explore alternative responses that the model considered. A low log probability can also indicate that the model is "hallucinating" or generating factually incorrect information.
topCandidates[]
object ( TopCandidates
)
A list of the top candidate tokens at each decoding step. The length of this list is equal to the total number of decoding steps.
chosenCandidates[]
object ( Candidate
)
A list of the chosen candidate tokens at each decoding step. The length of this list is equal to the total number of decoding steps. Note that the chosen candidate might not be in topCandidates.
| JSON representation |
|---|
| { "topCandidates" : [ { object (TopCandidates) } ] , "chosenCandidates" : [ { object (Candidate) } ] } |
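One common use of chosenCandidates is flagging low-confidence tokens for review. A sketch with a hypothetical payload; the threshold is an arbitrary illustration value, not an API constant:

```python
def low_confidence_tokens(logprobs_result: dict, threshold: float = -2.0) -> list:
    """Return chosen tokens whose log probability falls below a threshold."""
    return [
        c["token"]
        for c in logprobs_result.get("chosenCandidates", [])
        if c.get("logProbability", 0.0) < threshold
    ]

# Hypothetical logprobsResult payload.
logprobs_result = {
    "chosenCandidates": [
        {"token": "Paris", "tokenId": 101, "logProbability": -0.05},
        {"token": "1889", "tokenId": 102, "logProbability": -3.7},
    ]
}
print(low_confidence_tokens(logprobs_result))  # -> ['1889']
```

Tokens flagged this way are candidates for fact-checking, since a low log probability can indicate the model was uncertain at that step.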
TopCandidates
A list of the top candidate tokens and their log probabilities at each decoding step. This can be used to see what other tokens the model considered.
candidates[]
object ( Candidate
)
The list of candidate tokens, sorted by log probability in descending order.
| JSON representation |
|---|
| { "candidates" : [ { object (Candidate) } ] } |
Candidate
A single token and its associated log probability.
token
string
The token's string representation.
tokenId
integer
The token's numerical ID. While the token field provides the string representation of the token, the tokenId is the numerical representation that the model uses internally. This can be useful for developers who want to build custom logic based on the model's vocabulary.
logProbability
number
The log probability of this token. A higher value indicates that the model was more confident in this token. The log probability can be used to assess the relative likelihood of different tokens and to identify when the model is uncertain.
| JSON representation |
|---|
{ "token" : string , "tokenId" : integer , "logProbability" : number } |
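Together, topCandidates and these per-token fields let you inspect the alternatives the model weighed at a given decoding step. A sketch over a hypothetical single-step payload:

```python
def alternatives_at_step(logprobs_result: dict, step: int) -> list:
    """List (token, logProbability) pairs considered at one decoding step.

    The API documents the candidates as already sorted by log
    probability in descending order, so no re-sorting is needed.
    """
    step_candidates = logprobs_result["topCandidates"][step]["candidates"]
    return [(c["token"], c["logProbability"]) for c in step_candidates]

# Hypothetical payload with a single decoding step.
logprobs_result = {
    "topCandidates": [
        {"candidates": [
            {"token": "blue", "tokenId": 7, "logProbability": -0.2},
            {"token": "grey", "tokenId": 9, "logProbability": -1.9},
        ]}
    ]
}
print(alternatives_at_step(logprobs_result, 0))
```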
FinishReason
The reason why the model stopped generating tokens. If this field is empty, the model has not stopped generating.
| Enums | |
|---|---|
| FINISH_REASON_UNSPECIFIED | The finish reason is unspecified. |
| STOP | The model reached a natural stopping point or a configured stop sequence. |
| MAX_TOKENS | The model generated the maximum number of tokens allowed by the maxOutputTokens parameter. |
| SAFETY | The model stopped generating because the content potentially violates safety policies. NOTE: When streaming, the content field is empty if content filters block the output. |
| RECITATION | The model stopped generating because the content may be a recitation from a source. |
| OTHER | The model stopped generating for a reason not otherwise specified. |
| BLOCKLIST | The model stopped generating because the content contains a term from a configured blocklist. |
| PROHIBITED_CONTENT | The model stopped generating because the content may be prohibited. |
| SPII | The model stopped generating because the content may contain sensitive personally identifiable information (SPII). |
| MALFORMED_FUNCTION_CALL | The model generated a function call that is syntactically invalid and can't be parsed. |
| MODEL_ARMOR | The model response was blocked by Model Armor. |
| IMAGE_SAFETY | The generated image potentially violates safety policies. |
| IMAGE_PROHIBITED_CONTENT | The generated image may contain prohibited content. |
| IMAGE_RECITATION | The generated image may be a recitation from a source. |
| IMAGE_OTHER | The image generation stopped for a reason not otherwise specified. |
| UNEXPECTED_TOOL_CALL | The model generated a function call that is semantically invalid. This can happen, for example, if function calling is not enabled or the generated function call does not match any of the provided function declarations. |
| NO_IMAGE | The model was expected to generate an image, but didn't. |
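A caller typically dispatches on finishReason to decide whether a candidate is usable. The grouping below is an illustrative sketch (the buckets are this example's choice, not an API-defined taxonomy), using the enum values documented above:

```python
# Illustrative grouping of the documented enum values; the buckets
# themselves are an assumption of this example, not part of the API.
TRUNCATED = {"MAX_TOKENS"}
BLOCKED = {"SAFETY", "RECITATION", "BLOCKLIST", "PROHIBITED_CONTENT",
           "SPII", "MODEL_ARMOR", "IMAGE_SAFETY",
           "IMAGE_PROHIBITED_CONTENT", "IMAGE_RECITATION"}

def classify_finish(candidate: dict) -> str:
    """Map a candidate's finishReason to a coarse outcome label."""
    reason = candidate.get("finishReason", "")
    if not reason or reason == "FINISH_REASON_UNSPECIFIED":
        return "unknown"   # empty means generation has not stopped
    if reason == "STOP":
        return "complete"
    if reason in TRUNCATED:
        return "truncated"
    if reason in BLOCKED:
        return "blocked"
    return "other"

print(classify_finish({"finishReason": "MAX_TOKENS"}))  # -> truncated
```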
SafetyRating
A safety rating for a piece of content.
The safety rating contains the harm category and the harm probability level.
category
enum ( HarmCategory
)
Output only. The harm category of this rating.
probability
enum ( HarmProbability
)
Output only. The probability of harm for this category.
probabilityScore
number
Output only. The probability score of harm for this category.
severity
enum ( HarmSeverity
)
Output only. The severity of harm for this category.
severityScore
number
Output only. The severity score of harm for this category.
blocked
boolean
Output only. Indicates whether the content was blocked because of this rating.
overwrittenThreshold
enum ( HarmBlockThreshold
)
Output only. The overwritten threshold for the safety category of Gemini 2.0 image output. If minors are detected in the output image, the threshold of each safety category is overwritten if the user set a lower threshold.
| JSON representation |
|---|
| { "category" : enum (HarmCategory) , "probability" : enum (HarmProbability) , "probabilityScore" : number , "severity" : enum (HarmSeverity) , "severityScore" : number , "blocked" : boolean , "overwrittenThreshold" : enum (HarmBlockThreshold) } |
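Since each rating carries a blocked flag, finding out why content was filtered is a simple scan. A sketch over a hypothetical ratings list (the HARM_CATEGORY_* names are example values of the HarmCategory enum, which is defined outside this page):

```python
def blocking_categories(ratings: list) -> list:
    """Return the harm categories whose rating caused the content to be blocked."""
    return [r["category"] for r in ratings if r.get("blocked")]

# Hypothetical safety ratings for one candidate.
ratings = [
    {"category": "HARM_CATEGORY_HARASSMENT",
     "probability": "NEGLIGIBLE", "blocked": False},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT",
     "probability": "HIGH", "blocked": True},
]
print(blocking_categories(ratings))  # -> ['HARM_CATEGORY_DANGEROUS_CONTENT']
```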
HarmProbability
The probability of harm for a given category.
| Enums | |
|---|---|
| HARM_PROBABILITY_UNSPECIFIED | The harm probability is unspecified. |
| NEGLIGIBLE | The harm probability is negligible. |
| LOW | The harm probability is low. |
| MEDIUM | The harm probability is medium. |
| HIGH | The harm probability is high. |
HarmSeverity
The severity of harm for a given category.
| Enums | |
|---|---|
| HARM_SEVERITY_UNSPECIFIED | The harm severity is unspecified. |
| HARM_SEVERITY_NEGLIGIBLE | The harm severity is negligible. |
| HARM_SEVERITY_LOW | The harm severity is low. |
| HARM_SEVERITY_MEDIUM | The harm severity is medium. |
| HARM_SEVERITY_HIGH | The harm severity is high. |
CitationMetadata
A collection of citations that apply to a piece of generated content.
citations[]
object ( Citation
)
Output only. A list of citations for the content.
| JSON representation |
|---|
| { "citations" : [ { object (Citation) } ] } |
Citation
A citation for a piece of generated content.
startIndex
integer
Output only. The start index of the citation in the content.
endIndex
integer
Output only. The end index of the citation in the content.
uri
string
Output only. The URI of the source of the citation.
title
string
Output only. The title of the source of the citation.
license
string
Output only. The license of the source of the citation.
publicationDate
object ( Date
)
Output only. The publication date of the source of the citation.
| JSON representation |
|---|
| { "startIndex" : integer , "endIndex" : integer , "uri" : string , "title" : string , "license" : string , "publicationDate" : { object (Date) } } |
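The start and end indices let you map each citation back onto the generated text. A sketch with a hypothetical text and citation; it assumes startIndex/endIndex are half-open character offsets into the content, which is an assumption of this example:

```python
def cited_spans(text: str, citations: list) -> list:
    """Pair each cited slice of the text with its source URI.

    Assumes startIndex (inclusive) and endIndex (exclusive) are
    character offsets into the generated text.
    """
    return [
        (text[c["startIndex"]:c["endIndex"]], c.get("uri", ""))
        for c in citations
    ]

# Hypothetical generated text and citation.
text = "The Eiffel Tower opened in 1889."
citations = [{"startIndex": 4, "endIndex": 16,
              "uri": "https://example.com/source"}]
print(cited_spans(text, citations))
```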
UrlContextMetadata
Metadata returned when the model uses the urlContext tool to get information from a user-provided URL.
| JSON representation |
|---|
| { "urlMetadata" : [ { object (UrlMetadata) } ] } |
UrlMetadata
The metadata for a single URL retrieval.
retrievedUrl
string
The URL retrieved by the tool.
urlRetrievalStatus
enum ( UrlRetrievalStatus
)
The status of the URL retrieval.
| JSON representation |
|---|
| { "retrievedUrl" : string , "urlRetrievalStatus" : enum (UrlRetrievalStatus) } |
UrlRetrievalStatus
The status of a URL retrieval.
| Enums | |
|---|---|
| URL_RETRIEVAL_STATUS_UNSPECIFIED | Default value. This value is unused. |
| URL_RETRIEVAL_STATUS_SUCCESS | The URL was retrieved successfully. |
| URL_RETRIEVAL_STATUS_ERROR | The URL retrieval failed. |
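A caller using the urlContext tool will usually want to surface URLs that could not be fetched. A sketch over a hypothetical UrlContextMetadata payload:

```python
def failed_retrievals(url_context_metadata: dict) -> list:
    """Return the URLs whose retrieval did not succeed."""
    return [
        m["retrievedUrl"]
        for m in url_context_metadata.get("urlMetadata", [])
        if m.get("urlRetrievalStatus") != "URL_RETRIEVAL_STATUS_SUCCESS"
    ]

# Hypothetical metadata for two retrievals.
metadata = {
    "urlMetadata": [
        {"retrievedUrl": "https://example.com/a",
         "urlRetrievalStatus": "URL_RETRIEVAL_STATUS_SUCCESS"},
        {"retrievedUrl": "https://example.com/b",
         "urlRetrievalStatus": "URL_RETRIEVAL_STATUS_ERROR"},
    ]
}
print(failed_retrievals(metadata))  # -> ['https://example.com/b']
```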
PromptFeedback
Content filter results for a prompt sent in the request. Note: This is sent only in the first stream chunk and only if no candidates were generated due to content violations.
blockReason
enum ( BlockedReason
)
Output only. The reason why the prompt was blocked.
| JSON representation |
|---|
| { "blockReason" : enum (BlockedReason) } |
BlockedReason
The reason why the prompt was blocked.
| Enums | |
|---|---|
| BLOCKED_REASON_UNSPECIFIED | The blocked reason is unspecified. |
| SAFETY | The prompt was blocked for safety reasons. |
| OTHER | The prompt was blocked for other reasons. For example, it may be due to the prompt's language, or because it contains other harmful content. |
| BLOCKLIST | The prompt was blocked because it contains a term from the terminology blocklist. |
| PROHIBITED_CONTENT | The prompt was blocked because it contains prohibited content. |
| MODEL_ARMOR | The prompt was blocked by Model Armor. |
| IMAGE_SAFETY | The prompt was blocked because it contains content that is unsafe for image generation. |
| JAILBREAK | The prompt was blocked as a jailbreak attempt. |
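Because promptFeedback only appears when no candidates were generated, its presence is itself a signal that the prompt was rejected. A sketch of that check over a hypothetical response (the message string is this example's formatting, not API output):

```python
def prompt_blocked_message(response: dict):
    """Return a human-readable note if the prompt was blocked, else None.

    promptFeedback is only sent when no candidates were generated due
    to content violations, so a blockReason means the prompt itself
    was rejected rather than the output being filtered.
    """
    feedback = response.get("promptFeedback")
    if not feedback:
        return None
    reason = feedback.get("blockReason", "BLOCKED_REASON_UNSPECIFIED")
    return f"Prompt blocked: {reason}"

# Hypothetical blocked-prompt response (no candidates present).
response = {"promptFeedback": {"blockReason": "SAFETY"}}
print(prompt_blocked_message(response))  # -> Prompt blocked: SAFETY
```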
UsageMetadata
Usage metadata about the content generation request and response. This message provides a detailed breakdown of token usage and other relevant metrics.
promptTokenCount
integer
The total number of tokens in the prompt. This includes any text, images, or other media provided in the request. When cachedContent is set, this also includes the number of tokens in the cached content.
candidatesTokenCount
integer
The total number of tokens in the generated candidates.
totalTokenCount
integer
The total number of tokens for the entire request. This is the sum of promptTokenCount, candidatesTokenCount, toolUsePromptTokenCount, and thoughtsTokenCount.
toolUsePromptTokenCount
integer
Output only. The number of tokens in the results from tool executions, which are provided back to the model as input, if applicable.
thoughtsTokenCount
integer
Output only. The number of tokens that were part of the model's generated "thoughts" output, if applicable.
cachedContentTokenCount
integer
Output only. The number of tokens in the cached content that was used for this request.
promptTokensDetails[]
object ( ModalityTokenCount
)
Output only. A detailed breakdown of the token count for each modality in the prompt.
cacheTokensDetails[]
object ( ModalityTokenCount
)
Output only. A detailed breakdown of the token count for each modality in the cached content.
candidatesTokensDetails[]
object ( ModalityTokenCount
)
Output only. A detailed breakdown of the token count for each modality in the generated candidates.
toolUsePromptTokensDetails[]
object ( ModalityTokenCount
)
Output only. A detailed breakdown by modality of the token counts from the results of tool executions, which are provided back to the model as input.
trafficType
enum ( TrafficType
)
Output only. The traffic type for this request.
| JSON representation |
|---|
| { "promptTokenCount" : integer , "candidatesTokenCount" : integer , "totalTokenCount" : integer , "toolUsePromptTokenCount" : integer , "thoughtsTokenCount" : integer , "cachedContentTokenCount" : integer , "promptTokensDetails" : [ { object (ModalityTokenCount) } ] , "cacheTokensDetails" : [ { object (ModalityTokenCount) } ] , "candidatesTokensDetails" : [ { object (ModalityTokenCount) } ] , "toolUsePromptTokensDetails" : [ { object (ModalityTokenCount) } ] , "trafficType" : enum (TrafficType) } |
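The documented relationship between totalTokenCount and its components can be checked directly, which is useful when reconciling billing or quota usage. A sketch with hypothetical counts:

```python
def tokens_consistent(usage: dict) -> bool:
    """Check that totalTokenCount equals the documented sum of its parts:
    promptTokenCount + candidatesTokenCount + toolUsePromptTokenCount
    + thoughtsTokenCount."""
    expected = (
        usage.get("promptTokenCount", 0)
        + usage.get("candidatesTokenCount", 0)
        + usage.get("toolUsePromptTokenCount", 0)
        + usage.get("thoughtsTokenCount", 0)
    )
    return usage.get("totalTokenCount", 0) == expected

# Hypothetical usage metadata.
usage = {
    "promptTokenCount": 120,
    "candidatesTokenCount": 80,
    "toolUsePromptTokenCount": 0,
    "thoughtsTokenCount": 40,
    "totalTokenCount": 240,
}
print(tokens_consistent(usage))  # -> True
```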
TrafficType
The type of traffic that this request was processed with, indicating which quota is consumed.
| Enums | |
|---|---|
| TRAFFIC_TYPE_UNSPECIFIED | Unspecified request traffic type. |
| ON_DEMAND | The request was processed using Pay-As-You-Go quota. |
| PROVISIONED_THROUGHPUT | The request was processed using Provisioned Throughput quota. |

