GenerateContentResponse

Response message for [PredictionService.GenerateContent].

Fields
candidates[] object ( Candidate )

Output only. Generated candidates.

modelVersion string

Output only. The model version used to generate the response.

createTime string ( Timestamp format)

Output only. Timestamp when the request was made to the server.

Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z" , "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30" .
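A minimal sketch of parsing a `createTime` value in Python. It assumes only the RFC 3339 shapes described above; the normalization handles the trailing "Z" and fractional digits beyond microsecond precision, which `datetime.fromisoformat` does not accept on older Python versions.

```python
import re
from datetime import datetime, timezone

def parse_create_time(ts: str) -> datetime:
    """Parse an RFC 3339 timestamp such as createTime."""
    # Normalize "Z" to an explicit UTC offset for pre-3.11 Pythons.
    ts = ts.replace("Z", "+00:00")
    # Truncate fractional seconds beyond microsecond precision
    # (createTime may carry up to 9 fractional digits).
    ts = re.sub(r"(\.\d{6})\d+", r"\1", ts)
    return datetime.fromisoformat(ts)

print(parse_create_time("2014-10-02T15:01:23Z"))
print(parse_create_time("2014-10-02T15:01:23.045123456Z"))
print(parse_create_time("2014-10-02T15:01:23+05:30").astimezone(timezone.utc))
```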

responseId string

Output only. responseId is used to identify each response. It is the encoding of the eventId.

promptFeedback object ( PromptFeedback )

Output only. Content filter results for a prompt sent in the request. Note: This is sent only in the first stream chunk, and only when no candidates were generated due to content violations.

usageMetadata object ( UsageMetadata )

Output only. Usage metadata about the response(s).

JSON representation
{
  "candidates": [
    {
      object (Candidate)
    }
  ],
  "modelVersion": string,
  "createTime": string,
  "responseId": string,
  "promptFeedback": {
    object (PromptFeedback)
  },
  "usageMetadata": {
    object (UsageMetadata)
  }
}
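A sketch of reading a `GenerateContentResponse` that has already been deserialized into a Python dict. Field names follow the representation above; the sample payload and its values (model version, response ID, text) are invented for illustration.

```python
# Illustrative payload only -- not an actual API response.
response = {
    "candidates": [
        {"index": 0,
         "content": {"role": "model",
                     "parts": [{"text": "Hello!"}]},
         "finishReason": "STOP"}
    ],
    "modelVersion": "gemini-2.0-flash",  # hypothetical value
    "responseId": "abc123",              # hypothetical value
}

def first_candidate_text(resp: dict) -> str:
    """Concatenate the text parts of the first candidate, if any."""
    candidates = resp.get("candidates", [])
    if not candidates:
        # With no candidates, promptFeedback explains why the prompt
        # was blocked (see PromptFeedback below).
        reason = resp.get("promptFeedback", {}).get("blockReason", "UNKNOWN")
        raise ValueError(f"No candidates (blockReason={reason})")
    parts = candidates[0].get("content", {}).get("parts", [])
    return "".join(p.get("text", "") for p in parts)

print(first_candidate_text(response))  # Hello!
```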

Candidate

A response candidate generated from the model.

Fields
index integer

Output only. Index of the candidate.

content object ( Content )

Output only. Content parts of the candidate.

avgLogprobs number

Output only. Average log probability score of the candidate.

logprobsResult object ( LogprobsResult )

Output only. Log-likelihood scores for the response tokens and top tokens.

finishReason enum ( FinishReason )

Output only. The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.

safetyRatings[] object ( SafetyRating )

Output only. List of ratings for the safety of a response candidate.

There is at most one rating per category.

citationMetadata object ( CitationMetadata )

Output only. Source attribution of the generated content.

groundingMetadata object ( GroundingMetadata )

Output only. Metadata specifying the sources used to ground the generated content.

urlContextMetadata object ( UrlContextMetadata )

Output only. Metadata related to the URL context retrieval tool.

finishMessage string

Output only. Describes the reason the model stopped generating tokens in more detail. This is only filled when finishReason is set.

JSON representation
{
  "index": integer,
  "content": {
    object (Content)
  },
  "avgLogprobs": number,
  "logprobsResult": {
    object (LogprobsResult)
  },
  "finishReason": enum (FinishReason),
  "safetyRatings": [
    {
      object (SafetyRating)
    }
  ],
  "citationMetadata": {
    object (CitationMetadata)
  },
  "groundingMetadata": {
    object (GroundingMetadata)
  },
  "urlContextMetadata": {
    object (UrlContextMetadata)
  },
  "finishMessage": string
}

LogprobsResult

Logprobs result

Fields
topCandidates[] object ( TopCandidates )

Length = total number of decoding steps.

chosenCandidates[] object ( Candidate )

Length = total number of decoding steps. The chosen candidates may or may not be in topCandidates.

JSON representation
{
  "topCandidates": [
    {
      object (TopCandidates)
    }
  ],
  "chosenCandidates": [
    {
      object (Candidate)
    }
  ]
}

TopCandidates

Candidates with top log probabilities at each decoding step.

Fields
candidates[] object ( Candidate )

Sorted by log probability in descending order.

JSON representation
{
  "candidates": [
    {
      object (Candidate)
    }
  ]
}

Candidate

Candidate for the logprobs token and score.

Fields
token string

The candidate's token string value.

tokenId integer

The candidate's token id value.

logProbability number

The candidate's log probability.

JSON representation
{
  "token": string,
  "tokenId": integer,
  "logProbability": number
}
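A sketch of recomputing a response candidate's `avgLogprobs` from `logprobsResult.chosenCandidates`, whose length equals the number of decoding steps. Field names follow the reference above; the tokens, token IDs, and scores below are invented.

```python
# Invented sample: two decoding steps with made-up scores.
logprobs_result = {
    "chosenCandidates": [
        {"token": "Hello", "tokenId": 9906, "logProbability": -0.1},
        {"token": "!", "tokenId": 0, "logProbability": -0.3},
    ],
}

def average_logprob(result: dict) -> float:
    """Mean log probability over the chosen tokens."""
    chosen = result["chosenCandidates"]
    return sum(c["logProbability"] for c in chosen) / len(chosen)

print(average_logprob(logprobs_result))  # ~ -0.2 for the sample above
```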

FinishReason

The reason why the model stopped generating tokens. If empty, the model has not stopped generating tokens.

Enums
FINISH_REASON_UNSPECIFIED The finish reason is unspecified.
STOP Token generation reached a natural stopping point or a configured stop sequence.
MAX_TOKENS Token generation reached the configured maximum output tokens.
SAFETY Token generation stopped because the content potentially contains safety violations. NOTE: When streaming, content is empty if content filters block the output.
RECITATION Token generation stopped because of potential recitation.
OTHER All other reasons that stopped the token generation.
BLOCKLIST Token generation stopped because the content contains forbidden terms.
PROHIBITED_CONTENT Token generation stopped for potentially containing prohibited content.
SPII Token generation stopped because the content potentially contains Sensitive Personally Identifiable Information (SPII).
MALFORMED_FUNCTION_CALL The function call generated by the model is syntactically invalid (e.g. the function call is not parsable).
MODEL_ARMOR The model response was blocked by Model Armor.
IMAGE_SAFETY Token generation stopped because the generated images have safety violations.
IMAGE_PROHIBITED_CONTENT Image generation stopped because the generated images contain other prohibited content.
IMAGE_RECITATION Image generation stopped due to recitation.
IMAGE_OTHER Image generation stopped because of other miscellaneous issues.
UNEXPECTED_TOOL_CALL The function call generated by the model is semantically invalid (e.g. a function call is generated when function calling is not enabled or the function is not in the function declaration).
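A sketch of reacting to `finishReason` in client code. The enum names come from the table above; grouping them into caller-side outcomes is my own assumption, not part of the API contract.

```python
# Reasons that indicate the output was filtered rather than completed.
# This grouping is an assumption for illustration.
FILTER_REASONS = {
    "SAFETY", "RECITATION", "BLOCKLIST", "PROHIBITED_CONTENT", "SPII",
    "MODEL_ARMOR", "IMAGE_SAFETY", "IMAGE_PROHIBITED_CONTENT",
    "IMAGE_RECITATION", "IMAGE_OTHER",
}

def classify_finish_reason(reason: str) -> str:
    """Map a FinishReason value to a coarse caller-side outcome."""
    if reason == "STOP":
        return "complete"        # natural stop or configured stop sequence
    if reason == "MAX_TOKENS":
        return "truncated"       # consider raising the output token limit
    if reason in FILTER_REASONS:
        return "filtered"        # inspect safetyRatings / finishMessage
    if reason in ("MALFORMED_FUNCTION_CALL", "UNEXPECTED_TOOL_CALL"):
        return "bad_tool_call"   # invalid function call from the model
    return "other"

print(classify_finish_reason("MAX_TOKENS"))  # truncated
```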

SafetyRating

Safety rating corresponding to the generated content.

Fields
category enum ( HarmCategory )

Output only. Harm category.

probability enum ( HarmProbability )

Output only. Harm probability levels in the content.

probabilityScore number

Output only. Harm probability score.

severity enum ( HarmSeverity )

Output only. Harm severity levels in the content.

severityScore number

Output only. Harm severity score.

blocked boolean

Output only. Indicates whether the content was filtered out because of this rating.

overwrittenThreshold enum ( HarmBlockThreshold )

Output only. The overwritten threshold for the safety category of Gemini 2.0 image output. If minors are detected in the output image, the threshold of each safety category will be overwritten if the user sets a lower threshold.

JSON representation
{
  "category": enum (HarmCategory),
  "probability": enum (HarmProbability),
  "probabilityScore": number,
  "severity": enum (HarmSeverity),
  "severityScore": number,
  "blocked": boolean,
  "overwrittenThreshold": enum (HarmBlockThreshold)
}
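A sketch of scanning a candidate's `safetyRatings` for ratings that caused filtering. Field names follow `SafetyRating` above; the sample categories and scores are invented for illustration.

```python
# Invented sample ratings -- at most one rating per category.
ratings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "probability": "NEGLIGIBLE",
     "probabilityScore": 0.02, "blocked": False},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "probability": "HIGH",
     "probabilityScore": 0.91, "blocked": True},
]

def blocked_categories(safety_ratings: list) -> list:
    """Return the harm categories whose rating filtered the content."""
    return [r["category"] for r in safety_ratings if r.get("blocked")]

print(blocked_categories(ratings))  # ['HARM_CATEGORY_HATE_SPEECH']
```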

HarmProbability

Harm probability levels in the content.

Enums
HARM_PROBABILITY_UNSPECIFIED Harm probability unspecified.
NEGLIGIBLE Negligible level of harm.
LOW Low level of harm.
MEDIUM Medium level of harm.
HIGH High level of harm.

HarmSeverity

Harm severity levels.

Enums
HARM_SEVERITY_UNSPECIFIED Harm severity unspecified.
HARM_SEVERITY_NEGLIGIBLE Negligible level of harm severity.
HARM_SEVERITY_LOW Low level of harm severity.
HARM_SEVERITY_MEDIUM Medium level of harm severity.
HARM_SEVERITY_HIGH High level of harm severity.

CitationMetadata

A collection of source attributions for a piece of content.

Fields
citations[] object ( Citation )

Output only. List of citations.

JSON representation
{
  "citations": [
    {
      object (Citation)
    }
  ]
}

Citation

Source attributions for content.

Fields
startIndex integer

Output only. Start index into the content.

endIndex integer

Output only. End index into the content.

uri string

Output only. URL reference of the attribution.

title string

Output only. Title of the attribution.

license string

Output only. License of the attribution.

publicationDate object ( Date )

Output only. Publication date of the attribution.

JSON representation
{
  "startIndex": integer,
  "endIndex": integer,
  "uri": string,
  "title": string,
  "license": string,
  "publicationDate": {
    object (Date)
  }
}
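A sketch of using a Citation's `startIndex`/`endIndex` to recover the attributed span from the candidate text. Treating the indices as character offsets is an assumption here; the sample text, URI, and citation are invented.

```python
# Invented candidate text and citation for illustration.
text = "The sky appears blue because of Rayleigh scattering."
citation = {
    "startIndex": 32,
    "endIndex": 52,
    "uri": "https://example.com/rayleigh",  # hypothetical source
    "title": "Example source",
}

def cited_span(candidate_text: str, cit: dict) -> str:
    """Slice the candidate text covered by the citation."""
    return candidate_text[cit["startIndex"]:cit["endIndex"]]

print(cited_span(text, citation))  # Rayleigh scattering.
```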

UrlContextMetadata

Metadata related to the URL context retrieval tool.

Fields
urlMetadata[] object ( UrlMetadata )

Output only. List of URL context entries.
JSON representation
{
  "urlMetadata": [
    {
      object (UrlMetadata)
    }
  ]
}

UrlMetadata

Context for a single URL retrieval.

Fields
retrievedUrl string

The URL retrieved by the tool.

urlRetrievalStatus enum ( UrlRetrievalStatus )

Status of the URL retrieval.

JSON representation
{
  "retrievedUrl": string,
  "urlRetrievalStatus": enum (UrlRetrievalStatus)
}

UrlRetrievalStatus

Status of the URL retrieval.

Enums
URL_RETRIEVAL_STATUS_UNSPECIFIED Default value. This value is unused.
URL_RETRIEVAL_STATUS_SUCCESS URL retrieval was successful.
URL_RETRIEVAL_STATUS_ERROR URL retrieval failed due to an error.
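A sketch of keeping only successfully retrieved URLs from a parsed `urlContextMetadata.urlMetadata` list, using the status values above. The sample URLs are invented.

```python
# Invented sample entries for illustration.
url_metadata = [
    {"retrievedUrl": "https://example.com/a",
     "urlRetrievalStatus": "URL_RETRIEVAL_STATUS_SUCCESS"},
    {"retrievedUrl": "https://example.com/b",
     "urlRetrievalStatus": "URL_RETRIEVAL_STATUS_ERROR"},
]

def successful_urls(entries: list) -> list:
    """URLs whose retrieval status is SUCCESS."""
    return [e["retrievedUrl"] for e in entries
            if e["urlRetrievalStatus"] == "URL_RETRIEVAL_STATUS_SUCCESS"]

print(successful_urls(url_metadata))  # ['https://example.com/a']
```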

PromptFeedback

Content filter results for a prompt sent in the request. Note: This is sent only in the first stream chunk and only if no candidates were generated due to content violations.

Fields
blockReason enum ( BlockedReason )

Output only. The reason why the prompt was blocked.

safetyRatings[] object ( SafetyRating )

Output only. A list of safety ratings for the prompt. There is one rating per category.

blockReasonMessage string

Output only. A readable message that explains the reason why the prompt was blocked.

JSON representation
{
  "blockReason": enum (BlockedReason),
  "safetyRatings": [
    {
      object (SafetyRating)
    }
  ],
  "blockReasonMessage": string
}

BlockedReason

The reason why the prompt was blocked.

Enums
BLOCKED_REASON_UNSPECIFIED The blocked reason is unspecified.
SAFETY The prompt was blocked for safety reasons.
OTHER The prompt was blocked for other reasons. For example, it may be due to the prompt's language, or because it contains other harmful content.
BLOCKLIST The prompt was blocked because it contains a term from the terminology blocklist.
PROHIBITED_CONTENT The prompt was blocked because it contains prohibited content.
MODEL_ARMOR The prompt was blocked by Model Armor.
IMAGE_SAFETY The prompt was blocked because it contains content that is unsafe for image generation.

UsageMetadata

Usage metadata about the content generation request and response. This message provides a detailed breakdown of token usage and other relevant metrics.

Fields
promptTokenCount integer

The total number of tokens in the prompt. This includes any text, images, or other media provided in the request. When cachedContent is set, this also includes the number of tokens in the cached content.

candidatesTokenCount integer

The total number of tokens in the generated candidates.

totalTokenCount integer

The total number of tokens for the entire request. This is the sum of promptTokenCount , candidatesTokenCount , toolUsePromptTokenCount , and thoughtsTokenCount .

toolUsePromptTokenCount integer

Output only. The number of tokens in the results from tool executions, which are provided back to the model as input, if applicable.

thoughtsTokenCount integer

Output only. The number of tokens that were part of the model's generated "thoughts" output, if applicable.

cachedContentTokenCount integer

Output only. The number of tokens in the cached content that was used for this request.

promptTokensDetails[] object ( ModalityTokenCount )

Output only. A detailed breakdown of the token count for each modality in the prompt.

cacheTokensDetails[] object ( ModalityTokenCount )

Output only. A detailed breakdown of the token count for each modality in the cached content.

candidatesTokensDetails[] object ( ModalityTokenCount )

Output only. A detailed breakdown of the token count for each modality in the generated candidates.

toolUsePromptTokensDetails[] object ( ModalityTokenCount )

Output only. A detailed breakdown by modality of the token counts from the results of tool executions, which are provided back to the model as input.

trafficType enum ( TrafficType )

Output only. The traffic type for this request.

JSON representation
{
  "promptTokenCount": integer,
  "candidatesTokenCount": integer,
  "totalTokenCount": integer,
  "toolUsePromptTokenCount": integer,
  "thoughtsTokenCount": integer,
  "cachedContentTokenCount": integer,
  "promptTokensDetails": [
    {
      object (ModalityTokenCount)
    }
  ],
  "cacheTokensDetails": [
    {
      object (ModalityTokenCount)
    }
  ],
  "candidatesTokensDetails": [
    {
      object (ModalityTokenCount)
    }
  ],
  "toolUsePromptTokensDetails": [
    {
      object (ModalityTokenCount)
    }
  ],
  "trafficType": enum (TrafficType)
}
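A sketch verifying the documented identity: `totalTokenCount` is the sum of `promptTokenCount`, `candidatesTokenCount`, `toolUsePromptTokenCount`, and `thoughtsTokenCount`. The counts below are invented; fields absent from a response are treated as 0.

```python
# Invented sample usage metadata: 12 + 40 + 0 + 8 = 60.
usage = {
    "promptTokenCount": 12,
    "candidatesTokenCount": 40,
    "toolUsePromptTokenCount": 0,
    "thoughtsTokenCount": 8,
    "totalTokenCount": 60,
}

def total_is_consistent(u: dict) -> bool:
    """Check totalTokenCount against the documented sum of its parts."""
    parts = ("promptTokenCount", "candidatesTokenCount",
             "toolUsePromptTokenCount", "thoughtsTokenCount")
    return u.get("totalTokenCount", 0) == sum(u.get(k, 0) for k in parts)

print(total_is_consistent(usage))  # True
```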

TrafficType

The type of traffic that this request was processed with, indicating which quota is consumed.

Enums
TRAFFIC_TYPE_UNSPECIFIED Unspecified request traffic type.
ON_DEMAND The request was processed using Pay-As-You-Go quota.
PROVISIONED_THROUGHPUT The request was processed using Provisioned Throughput quota.