Reference documentation and code samples for the Google Cloud AI Platform V1 Client class UsageMetadata.
Usage metadata about the content generation request and response.
This message provides a detailed breakdown of token usage and other relevant metrics.
Generated from protobuf message google.cloud.aiplatform.v1.UsageMetadata
Namespace
Google \ Cloud \ AIPlatform \ V1

Methods
__construct
Constructor.

Parameters

data (array)
Optional. Data for populating the Message object.

↳ prompt_token_count (int)
The total number of tokens in the prompt. This includes any text, images, or other media provided in the request. When cached_content is set, this also includes the number of tokens in the cached content.

↳ candidates_token_count (int)
The total number of tokens in the generated candidates.

↳ total_token_count (int)
The total number of tokens for the entire request. This is the sum of prompt_token_count, candidates_token_count, tool_use_prompt_token_count, and thoughts_token_count.

↳ tool_use_prompt_token_count (int)
Output only. The number of tokens in the results from tool executions, which are provided back to the model as input, if applicable.

↳ thoughts_token_count (int)
Output only. The number of tokens that were part of the model's generated "thoughts" output, if applicable.

↳ cached_content_token_count (int)
Output only. The number of tokens in the cached content that was used for this request.

↳ prompt_tokens_details (array<ModalityTokenCount>)
Output only. A detailed breakdown of the token count for each modality in the prompt.

↳ cache_tokens_details (array<ModalityTokenCount>)
Output only. A detailed breakdown of the token count for each modality in the cached content.

↳ candidates_tokens_details (array<ModalityTokenCount>)
Output only. A detailed breakdown of the token count for each modality in the generated candidates.

↳ tool_use_prompt_tokens_details (array<ModalityTokenCount>)
Output only. A detailed breakdown by modality of the token counts from the results of tool executions, which are provided back to the model as input.

↳ traffic_type (int)
Output only. The traffic type for this request.
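A minimal construction sketch: the data array keys mirror the proto field names listed above, and the token values here are illustrative only, not taken from the source.

```php
<?php

use Google\Cloud\AIPlatform\V1\UsageMetadata;

// Populate the message from an associative array; keys mirror the
// proto field names documented above. Values are illustrative.
$usage = new UsageMetadata([
    'prompt_token_count'     => 120,
    'candidates_token_count' => 480,
    'thoughts_token_count'   => 64,
    'total_token_count'      => 664, // 120 + 480 + 64, with no tool-use tokens
]);

echo $usage->getTotalTokenCount(); // 664
```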
getPromptTokenCount
The total number of tokens in the prompt. This includes any text, images, or other media provided in the request. When cached_content is set, this also includes the number of tokens in the cached content.
Returns: int

setPromptTokenCount
The total number of tokens in the prompt. This includes any text, images, or other media provided in the request. When cached_content is set, this also includes the number of tokens in the cached content.
Parameter: var (int)
Returns: $this
getCandidatesTokenCount
The total number of tokens in the generated candidates.
Returns: int

setCandidatesTokenCount
The total number of tokens in the generated candidates.
Parameter: var (int)
Returns: $this
getTotalTokenCount
The total number of tokens for the entire request. This is the sum of prompt_token_count, candidates_token_count, tool_use_prompt_token_count, and thoughts_token_count.
Returns: int

setTotalTokenCount
The total number of tokens for the entire request. This is the sum of prompt_token_count, candidates_token_count, tool_use_prompt_token_count, and thoughts_token_count.
Parameter: var (int)
Returns: $this
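To illustrate the sum documented above, a short sketch that recomputes the total from its parts, assuming the populated $usage instance from the earlier construction example:

```php
// Recompute the documented sum. In proto3, unset int fields read as 0,
// so each getter always returns an integer.
$computedTotal = $usage->getPromptTokenCount()
    + $usage->getCandidatesTokenCount()
    + $usage->getToolUsePromptTokenCount()
    + $usage->getThoughtsTokenCount();

assert($computedTotal === $usage->getTotalTokenCount());
```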
getToolUsePromptTokenCount
Output only. The number of tokens in the results from tool executions, which are provided back to the model as input, if applicable.
Returns: int

setToolUsePromptTokenCount
Output only. The number of tokens in the results from tool executions, which are provided back to the model as input, if applicable.
Parameter: var (int)
Returns: $this
getThoughtsTokenCount
Output only. The number of tokens that were part of the model's generated "thoughts" output, if applicable.
Returns: int

setThoughtsTokenCount
Output only. The number of tokens that were part of the model's generated "thoughts" output, if applicable.
Parameter: var (int)
Returns: $this
getCachedContentTokenCount
Output only. The number of tokens in the cached content that was used for this request.
Returns: int

setCachedContentTokenCount
Output only. The number of tokens in the cached content that was used for this request.
Parameter: var (int)
Returns: $this
getPromptTokensDetails
Output only. A detailed breakdown of the token count for each modality in the prompt.
Returns: array<ModalityTokenCount>

setPromptTokensDetails
Output only. A detailed breakdown of the token count for each modality in the prompt.
Parameter: var (array<ModalityTokenCount>)
Returns: $this
getCacheTokensDetails
Output only. A detailed breakdown of the token count for each modality in the cached content.
Returns: array<ModalityTokenCount>

setCacheTokensDetails
Output only. A detailed breakdown of the token count for each modality in the cached content.
Parameter: var (array<ModalityTokenCount>)
Returns: $this
getCandidatesTokensDetails
Output only. A detailed breakdown of the token count for each modality in the generated candidates.
Returns: array<ModalityTokenCount>

setCandidatesTokensDetails
Output only. A detailed breakdown of the token count for each modality in the generated candidates.
Parameter: var (array<ModalityTokenCount>)
Returns: $this
getToolUsePromptTokensDetails
Output only. A detailed breakdown by modality of the token counts from the results of tool executions, which are provided back to the model as input.
Returns: array<ModalityTokenCount>

setToolUsePromptTokensDetails
Output only. A detailed breakdown by modality of the token counts from the results of tool executions, which are provided back to the model as input.
Parameter: var (array<ModalityTokenCount>)
Returns: $this
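A sketch of reading one of these per-modality breakdowns, assuming the generated ModalityTokenCount accessors getModality() and getTokenCount() (named after the proto fields modality and token_count) and the Modality enum's static name() helper; treat those names as assumptions if your client version differs:

```php
use Google\Cloud\AIPlatform\V1\Modality;

// Walk the per-modality breakdown of the prompt. Each entry is a
// ModalityTokenCount message with an enum modality and an int count.
foreach ($usage->getPromptTokensDetails() as $detail) {
    printf(
        "%s: %d tokens\n",
        Modality::name($detail->getModality()),
        $detail->getTokenCount()
    );
}
```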
getTrafficType
Output only. The traffic type for this request.
Returns: int

setTrafficType
Output only. The traffic type for this request.
Parameter: var (int)
Returns: $this
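The source only documents this field as an int; assuming that integer maps to the Google\Cloud\AIPlatform\V1\TrafficType enum, as with other generated enum fields, a small logging sketch:

```php
use Google\Cloud\AIPlatform\V1\TrafficType;

// getTrafficType() returns the enum's integer value; the generated
// enum's static name() maps it back to its symbolic constant.
echo TrafficType::name($usage->getTrafficType()); // e.g. "ON_DEMAND"
```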

