Reference documentation and code samples for the Google Cloud AI Platform V1 Client class PredictRequest.
Request message for PredictionService.Predict.
Generated from protobuf message google.cloud.aiplatform.v1.PredictRequest.
Methods

__construct

Constructor.

Parameter
- data (array): Optional. Data for populating the Message object.
  ↳ endpoint (string): Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint}
  ↳ instances (array<Google\Protobuf\Value>): Required. The instances that are the input to the prediction call. A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the prediction call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.
  ↳ parameters (Google\Protobuf\Value): The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.
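A minimal construction sketch follows. The project, location, endpoint ID, and field names ('feature_a', 'feature_b', 'confidenceThreshold') are illustrative placeholders, not values from this reference; real instance and parameter shapes are defined by the deployed Model's instance_schema_uri and parameters_schema_uri.

```php
use Google\Cloud\AIPlatform\V1\PredictRequest;
use Google\Protobuf\Struct;
use Google\Protobuf\Value;

// Build a single instance as a protobuf Value wrapping a Struct.
// Field names here are placeholders for whatever the Model's
// instance_schema_uri actually requires.
$fields = [
    'feature_a' => (new Value())->setStringValue('example'),
    'feature_b' => (new Value())->setNumberValue(42.0),
];
$instance = (new Value())->setStructValue((new Struct())->setFields($fields));

// Optional prediction parameters, also expressed as a protobuf Value.
$parameters = (new Value())->setStructValue(
    (new Struct())->setFields([
        'confidenceThreshold' => (new Value())->setNumberValue(0.5),
    ])
);

// The constructor's data array maps directly to the fields listed above.
$request = new PredictRequest([
    'endpoint'   => 'projects/my-project/locations/us-central1/endpoints/1234567890',
    'instances'  => [$instance],
    'parameters' => $parameters,
]);
```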
getEndpoint

Required. The name of the Endpoint requested to serve the prediction.
Format: projects/{project}/locations/{location}/endpoints/{endpoint}

Returns
- string

setEndpoint

Required. The name of the Endpoint requested to serve the prediction.
Format: projects/{project}/locations/{location}/endpoints/{endpoint}

Parameter
- var (string)

Returns
- $this
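As a brief usage sketch (the endpoint resource name below is a placeholder), the setter returns $this and the getter reads the value back:

```php
use Google\Cloud\AIPlatform\V1\PredictRequest;

$request = new PredictRequest();

// Set the fully qualified Endpoint resource name.
$request->setEndpoint('projects/my-project/locations/us-central1/endpoints/1234567890');

// Reads back the resource name set above.
echo $request->getEndpoint();
```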
getInstances

Required. The instances that are the input to the prediction call.
A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the prediction call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.

setInstances

Required. The instances that are the input to the prediction call.
A DeployedModel may have an upper limit on the number of instances it supports per request; when that limit is exceeded, the prediction call errors for AutoML Models, while for customer-created Models the behaviour is as documented by that Model. The schema of any single instance may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's instance_schema_uri.

Parameter
- var (array<Google\Protobuf\Value>)

Returns
- $this
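A sketch of populating and reading back instances, assuming each instance is a Value wrapping a Struct; the field name 'text' is a placeholder for whatever the Model's instance_schema_uri defines.

```php
use Google\Cloud\AIPlatform\V1\PredictRequest;
use Google\Protobuf\Struct;
use Google\Protobuf\Value;

$request = new PredictRequest();

// Each instance is a Google\Protobuf\Value; here each one wraps a Struct
// with a single placeholder field.
$instances = [];
foreach (['first input', 'second input'] as $text) {
    $instances[] = (new Value())->setStructValue(
        (new Struct())->setFields(['text' => (new Value())->setStringValue($text)])
    );
}

$request->setInstances($instances);

// getInstances() yields the same Value objects back for iteration.
foreach ($request->getInstances() as $instance) {
    // ... inspect or log each instance
}
```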
getParameters

The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.

Returns
- Google\Protobuf\Value|null

hasParameters

clearParameters

setParameters

The parameters that govern the prediction. The schema of the parameters may be specified via the Endpoint's DeployedModels' Model's PredictSchemata's parameters_schema_uri.

Parameter
- var (Google\Protobuf\Value)

Returns
- $this
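A sketch of the optional parameters field and its has/clear accessors; the 'confidenceThreshold' key is a placeholder, since valid parameters are defined by the Model's parameters_schema_uri.

```php
use Google\Cloud\AIPlatform\V1\PredictRequest;
use Google\Protobuf\Struct;
use Google\Protobuf\Value;

$request = new PredictRequest();

var_dump($request->hasParameters()); // false: nothing set yet

// Build the parameters Value; the key below is a placeholder.
$parameters = (new Value())->setStructValue(
    (new Struct())->setFields([
        'confidenceThreshold' => (new Value())->setNumberValue(0.5),
    ])
);
$request->setParameters($parameters);

if ($request->hasParameters()) {
    $value = $request->getParameters(); // the Value set above
}

// clearParameters() removes the field again, so hasParameters() is false.
$request->clearParameters();
```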