Reference documentation and code samples for the Google Cloud AI Platform V1 Client class DeployedModel.
A deployment of a Model. Endpoints contain one or more DeployedModels.
Generated from protobuf message google.cloud.aiplatform.v1.DeployedModel
Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ dedicated_resources
Google\Cloud\AIPlatform\V1\DedicatedResources
A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
↳ automatic_resources
Google\Cloud\AIPlatform\V1\AutomaticResources
A description of resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration.
↳ id
string
Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID. This value should be 1-10 characters, and valid characters are /[0-9]/.
↳ model
string
Required. The resource name of the Model that this is the deployment of. Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version ID or version alias to specify the version; if no version is specified, the default version is deployed.
↳ model_version_id
string
Output only. The version ID of the model that is deployed.
↳ display_name
string
The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
↳ create_time
Google\Protobuf\Timestamp
Output only. Timestamp when the DeployedModel was created.
↳ explanation_spec
Google\Cloud\AIPlatform\V1\ExplanationSpec
Explanation configuration for this DeployedModel. When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
↳ service_account
string
The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project. Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
↳ disable_container_logging
bool
For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Stackdriver Logging by default. Note that these logs incur costs, which are subject to Cloud Logging pricing. You can disable container logging by setting this flag to true.
↳ enable_access_logging
bool
If true, online prediction access logs are sent to Stackdriver Logging. These logs are like standard server access logs, containing information such as the timestamp and latency of each prediction request. Note that Stackdriver logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
↳ private_endpoints
Google\Cloud\AIPlatform\V1\PrivateEndpoints
Output only. Provides paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if the network is configured.
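As a minimal sketch of constructing this message, the following populates a DeployedModel via the constructor's data array. The project, location, and model IDs are placeholders, and the replica counts are illustrative values, not recommendations.

```php
use Google\Cloud\AIPlatform\V1\AutomaticResources;
use Google\Cloud\AIPlatform\V1\DeployedModel;

// Build a DeployedModel that lets Vertex AI manage scaling via
// AutomaticResources. The model resource name below is a placeholder.
$deployedModel = new DeployedModel([
    'model' => 'projects/my-project/locations/us-central1/models/1234567890',
    'display_name' => 'my-deployed-model',
    'automatic_resources' => new AutomaticResources([
        'min_replica_count' => 1,
        'max_replica_count' => 2,
    ]),
]);
```

The same fields can also be set after construction with the corresponding set* methods documented below.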
getDedicatedResources
A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
hasDedicatedResources
setDedicatedResources
A description of resources that are dedicated to the DeployedModel, and that need a higher degree of manual configuration.
$this
getAutomaticResources
A description of resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration.
hasAutomaticResources
setAutomaticResources
A description of resources that are, to a large degree, decided by Vertex AI and require only modest additional configuration.
$this
getId
Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.
This value should be 1-10 characters, and valid characters are /[0-9]/.
string
setId
Immutable. The ID of the DeployedModel. If not provided upon deployment, Vertex AI will generate a value for this ID.
This value should be 1-10 characters, and valid characters are /[0-9]/.
var
string
$this
getModel
Required. The resource name of the Model that this is the deployment of.
Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version ID or version alias to specify the version; if no version is specified, the default version is deployed.
string
setModel
Required. The resource name of the Model that this is the deployment of.
Note that the Model may be in a different location than the DeployedModel's Endpoint. The resource name may contain a version ID or version alias to specify the version; if no version is specified, the default version is deployed.
var
string
$this
getModelVersionId
Output only. The version ID of the model that is deployed.
string
setModelVersionId
Output only. The version ID of the model that is deployed.
var
string
$this
getDisplayName
The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
string
setDisplayName
The display name of the DeployedModel. If not provided upon creation, the Model's display_name is used.
var
string
$this
getCreateTime
Output only. Timestamp when the DeployedModel was created.
hasCreateTime
clearCreateTime
setCreateTime
Output only. Timestamp when the DeployedModel was created.
$this
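Message-typed fields such as create_time follow the has/clear accessor pattern shown above. The sketch below assumes the generated getCreateTime accessor returns a Google\Protobuf\Timestamp:

```php
use Google\Cloud\AIPlatform\V1\DeployedModel;

$deployedModel = new DeployedModel();

// hasCreateTime() reports whether the field is populated (it is output
// only, so it is set by the service, not by callers); clearCreateTime()
// unsets it.
if ($deployedModel->hasCreateTime()) {
    // getCreateTime() returns a Google\Protobuf\Timestamp.
    echo $deployedModel->getCreateTime()->toDateTime()->format(DATE_ATOM);
}
```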
getExplanationSpec
Explanation configuration for this DeployedModel.
When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
hasExplanationSpec
clearExplanationSpec
setExplanationSpec
Explanation configuration for this DeployedModel.
When deploying a Model using EndpointService.DeployModel, this value overrides the value of Model.explanation_spec. All fields of explanation_spec are optional in the request. If a field of explanation_spec is not populated, the value of the same field of Model.explanation_spec is inherited. If the corresponding Model.explanation_spec is not populated, all fields of the explanation_spec will be used for the explanation configuration.
$this
getServiceAccount
The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.
Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
string
setServiceAccount
The service account that the DeployedModel's container runs as. Specify the email address of the service account. If this service account is not specified, the container runs as a service account that doesn't have access to the resource project.
Users deploying the Model must have the iam.serviceAccounts.actAs permission on this service account.
var
string
$this
getDisableContainerLogging
For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Stackdriver Logging by default. Note that these logs incur costs, which are subject to Cloud Logging pricing. You can disable container logging by setting this flag to true.
bool
setDisableContainerLogging
For custom-trained Models and AutoML Tabular Models, the container of the DeployedModel instances sends stderr and stdout streams to Stackdriver Logging by default. Note that these logs incur costs, which are subject to Cloud Logging pricing. You can disable container logging by setting this flag to true.
var
bool
$this
getEnableAccessLogging
If true, online prediction access logs are sent to Stackdriver Logging.
These logs are like standard server access logs, containing information such as the timestamp and latency of each prediction request. Note that Stackdriver logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
bool
setEnableAccessLogging
If true, online prediction access logs are sent to Stackdriver Logging.
These logs are like standard server access logs, containing information such as the timestamp and latency of each prediction request. Note that Stackdriver logs may incur a cost, especially if your project receives prediction requests at a high queries-per-second (QPS) rate. Estimate your costs before enabling this option.
var
bool
$this
getPrivateEndpoints
Output only. Provides paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if the network is configured.
hasPrivateEndpoints
clearPrivateEndpoints
setPrivateEndpoints
Output only. Provides paths for users to send predict/explain/health requests directly to the deployed model services running on Cloud via private services access. This field is populated if the network is configured.
$this
getPredictionResources
Returns the name of the prediction_resources oneof field that is currently set (dedicated_resources or automatic_resources), or an empty string if neither is set.
string
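To illustrate the oneof: dedicated_resources and automatic_resources are alternatives, so setting one clears the other, and generated oneof accessors like getPredictionResources conventionally return the name of the field that is set. The machine type and replica count below are placeholder values.

```php
use Google\Cloud\AIPlatform\V1\DedicatedResources;
use Google\Cloud\AIPlatform\V1\DeployedModel;
use Google\Cloud\AIPlatform\V1\MachineSpec;

// dedicated_resources and automatic_resources are members of the
// prediction_resources oneof; populating one unsets the other.
$deployedModel = new DeployedModel([
    'dedicated_resources' => new DedicatedResources([
        'machine_spec' => new MachineSpec(['machine_type' => 'n1-standard-4']),
        'min_replica_count' => 1,
    ]),
]);

// Expected to report which oneof member is set, e.g. "dedicated_resources".
echo $deployedModel->getPredictionResources();
```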