Reference documentation and code samples for the Google Cloud AI Platform V1 Client class CountTokensRequest.
Request message for PredictionService.CountTokens.
Generated from protobuf message google.cloud.aiplatform.v1.CountTokensRequest
Namespace
Google \ Cloud \ AIPlatform \ V1

Methods
__construct
Constructor.

Parameters:

data (array): Optional. Data for populating the Message object.
  ↳ endpoint (string): Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  ↳ model (string): Optional. The name of the publisher model requested to serve the prediction. Format: projects/{project}/locations/{location}/publishers/*/models/*
  ↳ instances (array<Google\Protobuf\Value>): Optional. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
  ↳ contents (array<Content>): Optional. Input content.
  ↳ system_instruction (Content): Optional. The user-provided system instructions for the model. Note: only text should be used in parts, and the content in each part will be in a separate paragraph.
  ↳ tools (array<Tool>): Optional. A list of Tools the model may use to generate the next response. A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.
  ↳ generation_config (GenerationConfig): Optional. Generation config that the model will use to generate the response.
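The constructor's data array maps the field names above to values. A minimal sketch of populating a request this way, assuming the google/cloud-aiplatform package is installed via Composer; the project, location, endpoint, and model identifiers are placeholders:

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AIPlatform\V1\Content;
use Google\Cloud\AIPlatform\V1\CountTokensRequest;
use Google\Cloud\AIPlatform\V1\Part;

// Populate the request through the constructor's data array.
$request = new CountTokensRequest([
    'endpoint' => 'projects/my-project/locations/us-central1/endpoints/my-endpoint',
    'model'    => 'projects/my-project/locations/us-central1/publishers/google/models/gemini-1.0-pro',
    'contents' => [
        new Content([
            'role'  => 'user',
            'parts' => [new Part(['text' => 'How many tokens is this sentence?'])],
        ]),
    ],
]);
```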
getEndpoint
Required. The name of the Endpoint requested to perform token counting.
Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Returns: string

setEndpoint
Required. The name of the Endpoint requested to perform token counting.
Format: projects/{project}/locations/{location}/endpoints/{endpoint}
Parameter: var (string)
Returns: $this

getModel
Optional. The name of the publisher model requested to serve the prediction.
Format: projects/{project}/locations/{location}/publishers/*/models/*
Returns: string

setModel
Optional. The name of the publisher model requested to serve the prediction.
Format: projects/{project}/locations/{location}/publishers/*/models/*
Parameter: var (string)
Returns: $this

getInstances
Optional. The instances that are the input to the token counting call.
The schema is identical to the prediction schema of the underlying model.

setInstances
Optional. The instances that are the input to the token counting call.
The schema is identical to the prediction schema of the underlying model.
Returns: $this
getContents
Optional. Input content.

setContents
Optional. Input content.
Returns: $this

getSystemInstruction
Optional. The user-provided system instructions for the model.
Note: only text should be used in parts, and the content in each part will be in a separate paragraph.

hasSystemInstruction

clearSystemInstruction

setSystemInstruction
Optional. The user-provided system instructions for the model.
Note: only text should be used in parts, and the content in each part will be in a separate paragraph.
Returns: $this
getTools
Optional. A list of Tools the model may use to generate the next response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.

setTools
Optional. A list of Tools the model may use to generate the next response.
A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model.
Returns: $this

getGenerationConfig
Optional. Generation config that the model will use to generate the response.

hasGenerationConfig

clearGenerationConfig

setGenerationConfig
Optional. Generation config that the model will use to generate the response.
Returns: $this
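Because each setter returns $this, a request can also be populated fluently. A brief sketch under the same assumptions (google/cloud-aiplatform installed, placeholder resource names):

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AIPlatform\V1\CountTokensRequest;
use Google\Cloud\AIPlatform\V1\GenerationConfig;

// Chain setters; each one returns the request instance.
$request = (new CountTokensRequest())
    ->setEndpoint('projects/my-project/locations/us-central1/endpoints/my-endpoint')
    ->setModel('projects/my-project/locations/us-central1/publishers/google/models/gemini-1.0-pro')
    ->setGenerationConfig(new GenerationConfig(['temperature' => 0.2]));
```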
static::build

Parameters:

endpoint (string): Required. The name of the Endpoint requested to perform token counting. Format: projects/{project}/locations/{location}/endpoints/{endpoint} Please see LlmUtilityServiceClient::endpointName() for help formatting this field.
instances (array<Google\Protobuf\Value>): Optional. The instances that are the input to the token counting call. The schema is identical to the prediction schema of the underlying model.
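A sketch of the static build() helper, again with placeholder identifiers; LlmUtilityServiceClient::endpointName() formats the endpoint resource name, as the parameter description suggests:

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\AIPlatform\V1\Client\LlmUtilityServiceClient;
use Google\Cloud\AIPlatform\V1\CountTokensRequest;
use Google\Protobuf\Value;

// Format the endpoint resource name from its components.
$endpoint = LlmUtilityServiceClient::endpointName('my-project', 'us-central1', 'my-endpoint');

// Instances are Google\Protobuf\Value objects matching the model's prediction schema.
$instances = [(new Value())->setStringValue('How many tokens is this sentence?')];

$request = CountTokensRequest::build($endpoint, $instances);
```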