Vertex AI is a machine learning (ML) platform that lets you train and deploy ML models and AI applications. Vertex AI combines data engineering, data science, and ML engineering workflows, enabling team collaboration using a common toolset. For more information, see Introduction to Vertex AI.
This document describes the connections and parameters you can configure when using App Design Center to enable Vertex AI APIs. The configuration parameters are based on the terraform-google-project-factory Terraform module.
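For orientation, the following is a minimal sketch of how enabling the Vertex AI API might look when expressed through that module's project_services submodule. The project ID and module usage shown here are illustrative assumptions, not the exact code that App Design Center generates.

```hcl
# Minimal sketch (assumed values): enabling the Vertex AI API through the
# project_services submodule of terraform-google-project-factory.
module "vertex_ai_apis" {
  source = "terraform-google-modules/project-factory/google//modules/project_services"

  # Required: the project where the Vertex AI API is enabled
  # (corresponds to the Project ID parameter).
  project_id = "my-project-id"

  # aiplatform.googleapis.com is the Vertex AI API.
  activate_apis = [
    "aiplatform.googleapis.com",
  ]
}
```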
Component connections
The following table lists the components that you can connect to a Vertex AI component and the resulting updates to your application and its generated Terraform code. A Terraform sketch of the role grant follows the table.

| Connected component | Application updates | Background information |
|---|---|---|
| Compute Engine instance template | The Compute Engine instances can interact with Vertex AI services. The roles/aiplatform.user role is added to the Compute Engine instance template service account. | |
| Service account | The service account can interact with Vertex AI services. The roles/aiplatform.user role is added to the service account. | |
| Cloud Run service | The Cloud Run service can interact with Vertex AI services. The roles/aiplatform.user role is added to the Cloud Run service account. | |
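To make the role grant concrete, the following is a hedged sketch of the kind of IAM binding these connections imply. The project ID and service account email are placeholders, and the generated code may grant the role through a different resource or at a different scope.

```hcl
# Illustrative only: grant the Vertex AI User role to the service account
# used by a connected component (for example, a Compute Engine instance
# template or a Cloud Run service). Placeholder names throughout.
resource "google_project_iam_member" "vertex_ai_user" {
  project = "my-project-id"
  role    = "roles/aiplatform.user"
  member  = "serviceAccount:my-app-sa@my-project-id.iam.gserviceaccount.com"
}
```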
Required configuration parameters
If your template includes a Vertex AI component, you must configure the following parameters before you deploy.
| Parameter name | Description and constraints | Background information | 
|---|---|---|
| Project ID | The project where you want to enable Vertex AI APIs. | Configure components | 
Optional configuration parameters
The following parameters are optional. To display advanced parameters, in the Configuration area, select Show advanced fields. A Terraform sketch of how these parameters map to module inputs follows the table.

| Feature | Parameter name | Description and constraint information | Background information |
|---|---|---|---|
| Enable APIs | Activate APIs | | |
| | Activate API Identities | | |
| | API | | |
| | Disable Service on Destroy | | |
| | Disable Service Dependent Services | | |
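These parameters correspond to inputs of the project_services submodule in terraform-google-project-factory. The following sketch shows one plausible mapping; the values are placeholders and the defaults in your generated code may differ.

```hcl
# Sketch (assumed values): optional parameters expressed as
# project_services submodule inputs.
module "vertex_ai_apis" {
  source = "terraform-google-modules/project-factory/google//modules/project_services"

  project_id = "my-project-id"

  # Activate APIs: the APIs to enable in the project.
  activate_apis = ["aiplatform.googleapis.com"]

  # Activate API Identities / API: create the per-API service identity and,
  # optionally, grant it roles.
  activate_api_identities = [
    {
      api   = "aiplatform.googleapis.com"
      roles = []
    },
  ]

  # Disable Service on Destroy: whether to disable the API when this
  # configuration is destroyed.
  disable_services_on_destroy = false

  # Disable Service Dependent Services: whether services that depend on a
  # disabled API are also disabled.
  disable_dependent_services = false
}
```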

