Full name: projects.locations.endpoints.rawPredict
Perform an online prediction with an arbitrary HTTP payload.
The response includes the following HTTP headers:
- X-Vertex-AI-Endpoint-Id: ID of the Endpoint that served this prediction.
- X-Vertex-AI-Deployed-Model-Id: ID of the Endpoint's DeployedModel that served this prediction.
Endpoint
POST https://{service-endpoint}/v1/{endpoint}:rawPredict

Where {service-endpoint} is one of the supported service endpoints.
Path parameters
endpoint (string)
Required. The name of the Endpoint requested to serve the prediction. Format: projects/{project}/locations/{location}/endpoints/{endpoint} 
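As a sketch, the full request URL can be assembled from the path parameter above, assuming the standard v1 URL form https://{service-endpoint}/v1/{endpoint}:rawPredict; the project, location, and endpoint IDs below are placeholder values.

```python
# Sketch: assembling the rawPredict request URL from the endpoint path
# parameter. All identifiers here are placeholders, not real resources.

def raw_predict_url(service_endpoint: str, project: str,
                    location: str, endpoint_id: str) -> str:
    """Build the rawPredict URL for an Endpoint resource name of the form
    projects/{project}/locations/{location}/endpoints/{endpoint}."""
    name = f"projects/{project}/locations/{location}/endpoints/{endpoint_id}"
    return f"https://{service_endpoint}/v1/{name}:rawPredict"

url = raw_predict_url("us-central1-aiplatform.googleapis.com",
                      "my-project", "us-central1", "1234567890")
print(url)
```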
Request body
The request body contains data with the following structure:
httpBody (object (HttpBody))
The prediction input. Supports HTTP headers and arbitrary data payload.
A DeployedModel may have an upper limit on the number of instances it supports per request. When this limit is exceeded for an AutoML model, the endpoints.rawPredict method returns an error. When this limit is exceeded for a custom-trained model, the behavior varies depending on the model.
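A minimal sketch of building the request body, assuming the usual REST mapping of HttpBody (a contentType string plus base64-encoded data); the instance values are placeholders.

```python
import base64
import json

# Sketch: constructing a rawPredict request body. Assumes HttpBody's REST
# representation: "contentType" plus "data" as a base64 string (HttpBody.data
# is raw bytes, which JSON carries base64-encoded).
payload = json.dumps({"instances": [[1.0, 2.0, 3.0]]}).encode("utf-8")

request_body = {
    "httpBody": {
        "contentType": "application/json",
        "data": base64.b64encode(payload).decode("ascii"),
    }
}
print(json.dumps(request_body, indent=2))
```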
You can specify the schema for each instance in the predictSchemata.instance_schema_uri field when you create a Model. This schema applies when you deploy the Model as a DeployedModel to an Endpoint and use the endpoints.rawPredict method.
Response body
If successful, the response is a generic HTTP response whose format is defined by the method.
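Alongside the response body, the two headers listed at the top of this page identify what served the prediction. A small sketch of reading them, where response_headers stands in for the headers of an actual HTTP response (the ID values are placeholders):

```python
# Sketch: extracting the serving-identity headers from a rawPredict
# response. The header values below are placeholders.

def serving_info(headers: dict) -> tuple:
    """Return (endpoint_id, deployed_model_id) from the response headers."""
    return (headers.get("X-Vertex-AI-Endpoint-Id"),
            headers.get("X-Vertex-AI-Deployed-Model-Id"))

response_headers = {
    "X-Vertex-AI-Endpoint-Id": "1234567890",
    "X-Vertex-AI-Deployed-Model-Id": "9876543210",
}
print(serving_info(response_headers))
```

Note that real HTTP client libraries typically expose headers through a case-insensitive mapping, so exact casing of the header names is not significant there.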

