Connect from Google Kubernetes Engine (GKE) to AlloyDB for PostgreSQL

This tutorial describes how to set up a connection from an application running in a Google Kubernetes Engine (GKE) Autopilot cluster to an AlloyDB instance.

AlloyDB is a fully-managed, PostgreSQL-compatible database service in Google Cloud.

Google Kubernetes Engine helps you automatically deploy, scale, and manage Kubernetes.

Objectives

  • Build a Docker image for the sample application.
  • Run the application in Google Kubernetes Engine.
  • Connect to an AlloyDB instance using the AlloyDB Auth Proxy over an internal IP address.

Costs

This tutorial uses billable components of Google Cloud, including:

  • AlloyDB
  • Google Kubernetes Engine
  • Artifact Registry

Use the pricing calculator to generate a cost estimate based on your projected usage.

Before you begin

Console

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Enable the Cloud APIs necessary to create and connect to AlloyDB for PostgreSQL.

    Enable the APIs

    1. In the Confirm project step, click Next to confirm the name of the project you are going to make changes to.

    2. In the Enable APIs step, click Enable to enable the following:

      • AlloyDB API
      • Artifact Registry API
      • Compute Engine API
      • Cloud Resource Manager API
      • Cloud Build API
      • Container Registry API
      • Kubernetes Engine API
      • Service Networking API

For the purpose of this tutorial, use the sample vote-collecting web application named gke-alloydb-app .

Launch Cloud Shell

Cloud Shell is a shell environment for managing resources hosted on Google Cloud.

Cloud Shell comes preinstalled with the Google Cloud CLI and kubectl command-line tools. The gcloud CLI provides the primary command-line interface for Google Cloud. kubectl provides the primary command-line interface for running commands against Kubernetes clusters.
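
For example, to verify that both tools are available in your Cloud Shell session, you can check their versions:

 gcloud version
 kubectl version --client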

Console

To launch Cloud Shell, complete the following steps.

  1. Go to the Google Cloud console.

    Google Cloud console

  2. Click Activate Cloud Shell at the top of the Google Cloud console.

  3. In the Authorize Cloud Shell dialog, click Authorize.

    A Cloud Shell session opens inside a frame lower on the console. Use this shell to run gcloud and kubectl commands.

    1. Before you run commands, set your default project in the Google Cloud CLI using the following command:

       gcloud config set project PROJECT_ID

      Replace PROJECT_ID with your project ID.
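
      To confirm that the default project is now set, you can print the active configuration value:

       gcloud config get-value project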

Create an AlloyDB cluster and its primary instance

Your AlloyDB cluster comprises a number of nodes within a Google Virtual Private Cloud (VPC). When you create a cluster, you also configure private services access between one of your VPCs and the Google-managed VPC that contains your new cluster. We recommend that you use internal IP access to avoid exposing the database to the public internet.

To connect to an AlloyDB for PostgreSQL cluster from outside its configured VPC, you configure private services access in the VPC for AlloyDB and use the default VPC network to run queries from an application deployed on a GKE cluster.

gcloud

  1. In the Cloud Shell, check if the unused IP addresses (IPv4) range is already assigned to service peering:

     gcloud services vpc-peerings list --network=default

    Skip the next step if your output looks similar to the following:

     network: projects/493573376485/global/networks/default
    peering: servicenetworking-googleapis-com
    reservedPeeringRanges:
    - default-ip-range
    service: services/servicenetworking.googleapis.com 
    

    In this output, the value of reservedPeeringRanges is default-ip-range, which you can use as IP_RANGE_NAME to create a private connection in step 3.

  2. (Skip when using the default value of reservedPeeringRanges ) To allocate unused IP addresses in the VPC, use the following command:

     gcloud compute addresses create IP_RANGE_NAME \
     --global \
     --purpose=VPC_PEERING \
     --prefix-length=16 \
     --description="VPC private service access" \
     --network=default

    Replace IP_RANGE_NAME with your name for available internal IP addresses within an AlloyDB subnet, such as alloydb-gke-psa-01 .

  3. To configure service access using the allocated IP range, run the following command:

     gcloud services vpc-peerings connect \
     --service=servicenetworking.googleapis.com \
     --ranges=IP_RANGE_NAME \
     --network=default

  4. To deploy the AlloyDB cluster, run the following command:

     gcloud alloydb clusters create CLUSTER_ID \
     --database-version=POSTGRES_VERSION \
     --password=CLUSTER_PASSWORD \
     --network=default \
     --region=REGION \
     --project=PROJECT_ID

    Replace the following:

    • CLUSTER_ID : the ID of the cluster that you are creating. It must begin with a lowercase letter and can contain lowercase letters, numbers, and hyphens, such as alloydb-cluster .
    • VERSION : the major version of PostgreSQL that you want the cluster's database servers to be compatible with. Choose one of the following:

      • 14 : for compatibility with PostgreSQL 14

      • 15 : for compatibility with PostgreSQL 15

      • 16 : for compatibility with PostgreSQL 16, which is the default PostgreSQL version supported

        For more information about restrictions that apply to using PostgreSQL 16 in Preview, see Preview PostgreSQL 16 compatibility .

    • CLUSTER_PASSWORD : the password to use for the default postgres user.

    • PROJECT_ID : the ID of your Google Cloud project where you want to place the cluster.

    • REGION : the name of the region where the AlloyDB cluster is created, such as us-central1 .

  5. To deploy the AlloyDB primary instance, run the following:

     gcloud alloydb instances create INSTANCE_ID \
     --instance-type=PRIMARY \
     --cpu-count=NUM_CPU \
     --region=REGION \
     --cluster=CLUSTER_ID \
     --project=PROJECT_ID

    Replace the following:

    • INSTANCE_ID with the name of the AlloyDB instance of your choice, such as alloydb-primary .
    • CLUSTER_ID with the name of the AlloyDB cluster, such as alloydb-cluster .
    • NUM_CPU with the number of virtual processing units, such as 2 .
    • PROJECT_ID with the ID of your Google Cloud project.
    • REGION with the name of the region where the AlloyDB cluster is created, such as us-central1 .

    Wait for the AlloyDB instance to get created. This can take several minutes.
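
    To check on progress while you wait, you can list the instances in the cluster. The instance is ready when its state is reported as READY:

     gcloud alloydb instances list \
     --cluster=CLUSTER_ID \
     --region=REGION \
     --project=PROJECT_ID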

Connect to your primary instance and create an AlloyDB database and user

Console

  1. If you're not on the Overview page of your newly created cluster, then in the Google Cloud console, go to the Clusters page.

    Go to Clusters

  2. To display the cluster Overview page, click the CLUSTER_ID cluster name.

  3. In the navigation menu, click AlloyDB Studio.

  4. On the Sign in to AlloyDB Studio page, do the following:

    1. In the Database list, select postgres.

    2. In the User list, select postgres.

    3. In the Password field, enter the CLUSTER_PASSWORD that you created in Create an AlloyDB cluster and its primary instance.

    4. Click Authenticate. The Explorer pane displays a list of the objects in your database.

  5. In the Editor 1 tab, complete the following:

    1. Create an AlloyDB database:

        CREATE DATABASE DATABASE_NAME;

      Replace DATABASE_NAME with the name of your choice, such as tutorial_db.

    2. Click Run. Wait for the Statement executed successfully message to display in the Results pane.

    3. Click Clear.

    4. Create an AlloyDB database user and password:

        CREATE USER USERNAME WITH PASSWORD 'DATABASE_PASSWORD';

      Replace the following:

      • USERNAME : the name of the AlloyDB user, such as tutorial_user.

      • DATABASE_PASSWORD : the password for your AlloyDB database, such as tutorial.

    5. Click Run. Wait for the Statement executed successfully message to display in the Results pane.

  6. In the Explorer pane of AlloyDB Studio, click Switch user/database.

  7. On the Sign in to AlloyDB Studio page, do the following:

    1. In the Database list, select DATABASE_NAME, such as tutorial_db.

    2. In the User list, select postgres.

    3. In the Password field, enter the CLUSTER_PASSWORD that you created in Create an AlloyDB cluster and its primary instance.

    4. Click Authenticate. The Explorer pane displays a list of the objects in your database.

  8. In the Editor 1 tab, complete the following:

    1. Grant all privileges on the database to the AlloyDB database user:

        GRANT ALL PRIVILEGES ON DATABASE "DATABASE_NAME" TO "USERNAME";

    2. Click Run. Wait for the Statement executed successfully message to display in the Results pane.

    3. Click Clear.

    4. Grant the CREATE permission on the public schema to the AlloyDB database user:

        GRANT CREATE ON SCHEMA public TO "USERNAME";

    5. Click Run. Wait for the Statement executed successfully message to display in the Results pane.

  9. Take note of the database name, username, and password. You use this information in Create a Kubernetes secret .

Create a GKE Autopilot cluster

A cluster contains at least one cluster control plane machine and multiple worker machines called nodes . Nodes are Compute Engine virtual machine (VM) instances that run the Kubernetes processes required to make them part of the cluster. You deploy applications to clusters, and the applications run on the nodes.

Console

  1. In the Google Cloud console, go to the Kubernetes Clusters page.

    Go to Kubernetes Clusters

  2. Click Create.

  3. In the Name field of the Cluster Basics page, specify GKE_CLUSTER_ID for your Autopilot cluster, such as ap-cluster.

  4. In the Region field, select REGION, such as us-central1.

  5. Click Create.

    Wait for the GKE cluster to get created. This can take several minutes.

gcloud

Create an Autopilot cluster:

 gcloud container clusters create-auto GKE_CLUSTER_ID \
 --location=REGION

Replace the following:

  • GKE_CLUSTER_ID : the name of the Autopilot cluster, such as ap-cluster .
  • REGION : the name of the region where the GKE cluster is deployed, such as us-central1 .

Wait for the GKE cluster to get created. This can take several minutes.
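
To check the provisioning status from the command line, you can describe the cluster; it is ready when the reported status is RUNNING:

 gcloud container clusters describe GKE_CLUSTER_ID \
 --region=REGION \
 --format="value(status)"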

Connect to AlloyDB using the AlloyDB Auth Proxy

We recommend that you use AlloyDB Auth Proxy to connect to AlloyDB. The AlloyDB Auth Proxy provides strong encryption and authentication using Identity and Access Management (IAM), which can help keep your database secure.

When you connect using the AlloyDB Auth Proxy, it is added to your Pod using the sidecar container pattern. The AlloyDB Auth Proxy container is in the same Pod as your application, which enables the application to connect to the AlloyDB Auth Proxy using localhost , increasing security and performance.

Create and grant roles to Google service accounts

In Google Cloud, applications use service accounts to make authorized API calls by authenticating as the service account itself. When an application authenticates as a service account, it has access to all resources that the service account has permission to access.

To run the AlloyDB Auth Proxy in Google Kubernetes Engine, you create a Google service account to represent your application. We recommend that you create a service account that is unique to each application, instead of using the same service account everywhere. This model is more secure because it lets you limit permissions on a per-application basis.

Console

  1. In the Google Cloud console, go to the IAM page.

    Go to IAM

  2. On the Permissions for project "PROJECT_ID" page, find the row containing the default Compute Engine service account PROJECT_NUMBER-compute@developer.gserviceaccount.com, and then click Edit principal in that row.

    To get the PROJECT_NUMBER, which is an automatically generated unique identifier for your project, do the following:

    1. Go to the Dashboard page in the Google Cloud console.

      Go to Dashboard

    2. Click the Select from drop-down list at the top of the page. In the Select from window that appears, select your project.

    The PROJECT_NUMBER is displayed on the Project info card of the project Dashboard.

  3. Click Add another role.

  4. To grant the roles/artifactregistry.reader role, click Select a role, choose Artifact Registry from By product or service, and then choose Artifact Registry Reader from Roles.

  5. Click Save. The principal is granted the role.

  6. To create a service account for the GKE sample application, go to the Service accounts page.

    Go to Service accounts

  7. Select your project.

  8. On the Service accounts for project "PROJECT_ID" page, click Create service account.

  9. In the Service account details section of the Create service account page, enter GSA_NAME in the Service account name field, such as gke-alloydb-gsa.

  10. Click Create and continue.

    The Grant this service account access to project (optional) section of the Create service account page appears.

  11. To grant the roles/alloydb.client role, do the following:

    1. Click Select a role.
    2. Choose Cloud AlloyDB from By product or service.
    3. Choose Cloud AlloyDB Client from Roles.
  12. Click Add another role.

  13. To grant the roles/serviceusage.serviceUsageConsumer role, click Select a role, choose Service Usage from By product or service, and then choose Service Usage Consumer from Roles.

  14. Click Done. The Google service account is granted roles.

gcloud

  1. To grant the required permissions to the default Compute Engine service account so that it can read from Artifact Registry, run the following:

     PROJECT_NUM=$(gcloud projects describe PROJECT_ID --format="value(projectNumber)")
     gcloud projects add-iam-policy-binding PROJECT_ID \
     --member="serviceAccount:$PROJECT_NUM-compute@developer.gserviceaccount.com" \
     --role="roles/artifactregistry.reader"
  2. To create a Google service account for your application, create an IAM service account:

     gcloud iam service-accounts create GSA_NAME \
     --display-name="gke-tutorial-service-account"

    Replace GSA_NAME with the name of your new IAM service account, such as gke-alloydb-gsa .

  3. To grant alloydb.client and serviceusage.serviceUsageConsumer roles to your application GSA, use the following commands:

     gcloud projects add-iam-policy-binding PROJECT_ID \
     --member=serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
     --role="roles/alloydb.client"

     gcloud projects add-iam-policy-binding PROJECT_ID \
     --member=serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com \
     --role="roles/serviceusage.serviceUsageConsumer"
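
    To verify that the roles are bound to the application service account, you can query the project IAM policy; the filter and format flags shown here are one way to narrow the output:

     gcloud projects get-iam-policy PROJECT_ID \
     --flatten="bindings[].members" \
     --filter="bindings.members:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
     --format="table(bindings.role)"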

Configure Workload Identity Federation for GKE for the sample application

You need to configure GKE to provide the service account to the AlloyDB Auth Proxy using the Workload Identity Federation for GKE feature. This method lets you bind a Kubernetes service account to a Google service account. The Google service account then becomes accessible to applications using the matching Kubernetes service account.

A Google service account is an IAM identity that represents your application in Google Cloud. A Kubernetes service account is an identity that represents your application in a Google Kubernetes Engine cluster.

Workload Identity Federation for GKE binds a Kubernetes service account to a Google service account. This binding causes any deployments with that Kubernetes service account to authenticate as the Google service account in their interactions with Google Cloud.

gcloud

  1. In the Google Cloud console, open Cloud Shell.

    Open Cloud Shell

  2. In the Cloud Shell, get credentials for your cluster:

     gcloud container clusters get-credentials GKE_CLUSTER_ID \
     --region REGION \
     --project PROJECT_ID

    This command configures kubectl to use the GKE cluster that you created.

  3. In the editor of your choice, complete the following steps:

    1. Open service-account.yaml using nano, for example:

       nano service-account.yaml

    2. In the editor, paste the following content:

       apiVersion: v1
       kind: ServiceAccount
       metadata:
         name: KSA_NAME

      Replace KSA_NAME with the service account name, such as ksa-alloydb.

    3. Press Control+O, then Enter to save the changes, and press Control+X to exit the editor.

  4. Create a Kubernetes service account for your sample application:

     kubectl apply -f service-account.yaml
    
  5. Grant permissions for your Kubernetes service account to impersonate the Google service account by creating an IAM policy binding between the two service accounts:

     gcloud iam service-accounts add-iam-policy-binding \
     --role="roles/iam.workloadIdentityUser" \
     --member="serviceAccount:PROJECT_ID.svc.id.goog[default/KSA_NAME]" \
     GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
  6. Add the iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com annotation to the Kubernetes service account, using the email address of the Google service account:

     kubectl annotate serviceaccount \
     KSA_NAME \
     iam.gke.io/gcp-service-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
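
    To confirm that the annotation was applied, you can describe the Kubernetes service account and look for iam.gke.io/gcp-service-account in the Annotations output:

     kubectl describe serviceaccount KSA_NAME --namespace default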
    

Populate the Artifact Registry with an image of the sample application

gcloud

  1. In Cloud Shell, use the following command to clone the repository with the sample gke-alloydb-app application code from GitHub:

       
     git clone https://github.com/GoogleCloudPlatform/alloydb-auth-proxy && cd alloydb-auth-proxy/examples/go
    
  2. Create a repository in the Artifact Registry for Docker images:

     gcloud artifacts repositories create REPOSITORY_ID \
     --location REGION \
     --repository-format=docker \
     --project PROJECT_ID

    Replace the following:

    • PROJECT_ID : the ID of your project.
    • REPOSITORY_ID : the name of your repository, such as gke-alloydb-sample-app .
  3. In the Authorize Cloud Shell dialog, click Authorize. This prompt doesn't appear if you have done this step previously.

  4. To build a Docker container and publish it to the Artifact Registry, use the following command:

       
     gcloud builds submit \
     --tag REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY_ID/SAMPLE_APPLICATION \
     --project PROJECT_ID

    Replace the following:

    • PROJECT_ID : the ID of your project.
    • REPOSITORY_ID : the name of your repository, such as gke-alloydb-sample-app .
    • SAMPLE_APPLICATION : the name of your sample web application, such as gke-alloydb-app .
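
    To confirm that the image was pushed, you can list the images in your repository:

     gcloud artifacts docker images list REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY_ID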

Create a Kubernetes secret

You create Kubernetes secrets for the database, user, and user password to be used by the sample application. The values of each secret are based on the values specified in the Connect to your primary instance and create an AlloyDB database and user step of this tutorial. For more information, see Secrets .

gcloud

Use a Kubernetes secret named SECRET, such as gke-alloydb-secret, to store the connection information:

 kubectl create secret generic SECRET \
 --from-literal=database=DATABASE_NAME \
 --from-literal=username=USERNAME \
 --from-literal=password=DATABASE_PASSWORD
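
To confirm that the secret was created with the expected keys, you can describe it; the values themselves are not printed:

 kubectl describe secret SECRET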
 

Deploy and run the AlloyDB Proxy in a sidecar pattern

We recommend that you run the AlloyDB Proxy in a sidecar pattern as an additional container sharing a Pod with your application for the following reasons:

  • Prevents your SQL traffic from being exposed locally. The AlloyDB Proxy provides encryption on outgoing connections, but you need to limit exposure for incoming connections.
  • Prevents a single point of failure. Each application's access to your database is independent from the others, making it more resilient.
  • Limits access to the AlloyDB Proxy, allowing you to use IAM permissions per application rather than exposing the database to the entire cluster.
  • Lets you scope resource requests more accurately. Because the AlloyDB Proxy consumes resources linearly with usage, this pattern lets you more accurately scope and request resources to match your application as it scales.
  • Lets you configure your application to connect using 127.0.0.1 on the DB_PORT you specified in the command section.
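
As an illustration only, an application container in the same Pod could reach the database through the proxy with a standard PostgreSQL connection string such as the following. The DATABASE_URL variable name is a hypothetical example; the sample application in this tutorial instead reads the separate DB_* environment variables shown later:

  # Hypothetical example: a libpq-style URL that targets the proxy sidecar on 127.0.0.1:5432.
  export DATABASE_URL="postgres://USERNAME:DATABASE_PASSWORD@127.0.0.1:5432/DATABASE_NAME"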

After you create a GKE cluster and build a container image for your application, deploy your containerized application to the GKE cluster.

gcloud

In this tutorial, you deploy the sample vote-collecting web application, gke-alloydb-app , that uses AlloyDB as the datastore.

  1. Get the connection URI (INSTANCE_URI) of the AlloyDB primary instance that you want the AlloyDB Auth Proxy to connect to:

     gcloud alloydb instances describe INSTANCE_ID \
     --cluster=CLUSTER_ID \
     --region=REGION \
     --format="value(name)"

    Replace the following:

    • INSTANCE_ID : name for the instance, such as alloydb-primary .
    • CLUSTER_ID : name for the cluster, such as alloydb-cluster .

    The output contains the INSTANCE_URI you specify in the proxy_sidecar_deployment.yaml definition file in step 2.b of this section.

  2. In the editor of your choice, complete the following steps:

    1. Open proxy_sidecar_deployment.yaml using nano, for example:

       nano proxy_sidecar_deployment.yaml
      
    2. In the editor, paste the following content:

       apiVersion: apps/v1
       kind: Deployment
       metadata:
         name: gke-alloydb
       spec:
         selector:
           matchLabels:
             app: SAMPLE_APPLICATION
         template:
           metadata:
             labels:
               app: SAMPLE_APPLICATION
           spec:
             serviceAccountName: KSA_NAME
             containers:
             - name: SAMPLE_APPLICATION
               # Replace <PROJECT_ID> and <REGION> with your project ID and region.
               image: REGION-docker.pkg.dev/PROJECT_ID/REPOSITORY_ID/SAMPLE_APPLICATION:latest
               imagePullPolicy: Always
               # This app listens on port 8080 for web traffic by default.
               ports:
               - containerPort: 8080
               env:
               - name: PORT
                 value: "8080"
               # This project uses environment variables to determine
               # how you would like to run your application.
               # To use the Go connector (recommended) - use INSTANCE NAME
               # To use TCP - setting INSTANCE_HOST will use TCP (e.g., 127.0.0.1)
               - name: INSTANCE_HOST
                 value: "127.0.0.1"
               - name: DB_PORT
                 value: "5432"
               # To use Automatic IAM Authentication (recommended),
               # use DB_IAM_USER instead of DB_USER;
               # you may also remove the DB_PASS environment variable.
               - name: DB_USER
                 valueFrom:
                   secretKeyRef:
                     name: SECRET
                     key: username
               - name: DB_PASS
                 valueFrom:
                   secretKeyRef:
                     name: SECRET
                     key: password
               - name: DB_NAME
                 valueFrom:
                   secretKeyRef:
                     name: SECRET
                     key: database
             # If you are using the Go connector (recommended), you can
             # remove alloydb-proxy (everything below this line).
             - name: alloydb-proxy
               # This uses the latest version of the AlloyDB Auth Proxy.
               # It is recommended to use a specific version for production environments.
               # See: https://github.com/GoogleCloudPlatform/alloydb-auth-proxy
               image: gcr.io/alloydb-connectors/alloydb-auth-proxy:1.10.1
               command:
                 - "/alloydb-auth-proxy"
                 # AlloyDB instance name as a parameter for the AlloyDB proxy.
                 # Use <INSTANCE_URI>
                 - "INSTANCE_URI"
               securityContext:
                 # The default AlloyDB Auth Proxy image runs as the
                 # "nonroot" user and group (uid: 65532) by default.
                 runAsNonRoot: true
               resources:
                 requests:
                   # The proxy's memory use scales linearly with the number of active
                   # connections. Fewer open connections will use less memory. Adjust
                   # this value based on your application's requirements.
                   memory: "2Gi"
                   # The proxy's CPU use scales linearly with the amount of IO between
                   # the database and the application. Adjust this value based on your
                   # application's requirements.
                   cpu: "1"

      Replace INSTANCE_URI with the path to your AlloyDB primary instance from step 1, such as projects/PROJECT_ID/locations/REGION/clusters/CLUSTER_ID/instances/INSTANCE_ID.

    3. Press Control+O, then Enter to save the changes, and press Control+X to exit the editor.

  3. To deploy the gke-alloydb-app application, apply the proxy_sidecar_deployment.yaml definition file that you created in the previous step:

     kubectl apply -f proxy_sidecar_deployment.yaml
    
  4. Verify that the status of both containers in the Pod is Running:

     kubectl get pods
    

    Sample output:

     NAME                          READY   STATUS    RESTARTS   AGE
     gke-alloydb-8d59bb4cc-62xgh   2/2     Running   0          2m53s 
    
  5. To connect to the sample gke-alloydb-app application, use a service — for example, an external HTTP load balancer. In the editor of your choice, follow these steps:

    1. Open service.yaml using nano, for example:

       nano service.yaml
    2. In the nano editor, paste the following content:

       apiVersion: v1
       kind: Service
       metadata:
         name: SAMPLE_APPLICATION
       spec:
         type: LoadBalancer
         selector:
           app: SAMPLE_APPLICATION
         ports:
         - port: 80
           targetPort: 8080

      Replace SAMPLE_APPLICATION with the name of your sample web application, such as gke-alloydb-app .

    3. Press Control+O, then Enter to save the changes, and press Control+X to exit the editor.

  6. To deploy the service for the gke-alloydb-app application, apply the service.yaml file:

     kubectl apply -f service.yaml
    
  7. To get the service details including the external IP address of the service, use the following command:

     kubectl get service
    

    Sample output:

     NAME              TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)        AGE
    gke-alloydb-app   LoadBalancer   34.118.229.246   35.188.16.172   80:32712/TCP   45s
    kubernetes        ClusterIP      34.118.224.1     <none>          443/TCP        85m 
    
  8. Use the value of the external IP address from the previous step to access the sample application at the following URL:

     http://EXTERNAL-IP
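
    If the application doesn't respond at that URL, two quick checks are to request the page from the command line and to inspect the logs of the Auth Proxy sidecar; the deployment and container names below match the proxy_sidecar_deployment.yaml definition from step 2:

     curl http://EXTERNAL-IP

     kubectl logs deployment/gke-alloydb -c alloydb-proxy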
     
    

Sample configuration files

proxy_sidecar_deployment.yaml

  # Copyright 2024 Google LLC
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #      http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: <YOUR-DEPLOYMENT-NAME>
  spec:
    selector:
      matchLabels:
        app: <YOUR-APPLICATION-NAME>
    template:
      metadata:
        labels:
          app: <YOUR-APPLICATION-NAME>
      spec:
        serviceAccountName: <YOUR-KSA-NAME>
        containers:
        # Your application container goes here.
        - name: <YOUR-APPLICATION-NAME>
          image: <YOUR-APPLICATION-IMAGE-URL>
          env:
          - name: DB_HOST
            # The port value here (5432) should match the --port flag below.
            value: "localhost:5432"
          - name: DB_USER
            valueFrom:
              secretKeyRef:
                name: <YOUR-DB-SECRET>
                key: username
          - name: DB_PASS
            valueFrom:
              secretKeyRef:
                name: <YOUR-DB-SECRET>
                key: password
          - name: DB_NAME
            valueFrom:
              secretKeyRef:
                name: <YOUR-DB-SECRET>
                key: database
        # The Auth Proxy sidecar goes here.
        - name: alloydb-auth-proxy
          # Make sure you have automation that upgrades this version regularly.
          # A new version of the Proxy is released monthly with bug fixes,
          # security updates, and new features.
          image: gcr.io/alloydb-connectors/alloydb-auth-proxy:1.10.1
          args:
            # If you're connecting over public IP, enable this flag.
            # - "--public-ip"
            # If you're connecting with PSC, enable this flag:
            # - "--psc"
            # If you're using auto IAM authentication, enable this flag:
            # - "--auto-iam-authn"
            # Enable structured logging with Google's LogEntry format:
            - "--structured-logs"
            # Listen on localhost:5432 by default.
            - "--port=5432"
            # Specify your instance URI, e.g.,
            # "projects/myproject/locations/us-central1/clusters/mycluster/instances/myinstance"
            - "<INSTANCE-URI>"
          securityContext:
            # The default AlloyDB Auth Proxy image runs as the "nonroot" user and
            # group (uid: 65532) by default.
            runAsNonRoot: true
          # You should use resource requests/limits as a best practice to prevent
          # pods from consuming too many resources and affecting the execution of
          # other pods. You should adjust the following values based on what your
          # application needs. For details, see
          # https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/
          resources:
            requests:
              # The proxy's memory use scales linearly with the number of active
              # connections. Fewer open connections will use less memory. Adjust
              # this value based on your application's requirements.
              memory: "2Gi"
              # The proxy's CPU use scales linearly with the amount of IO between
              # the database and the application. Adjust this value based on your
              # application's requirements.
              cpu: "1"

service.yaml

  # Copyright 2024 Google LLC
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #      http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  apiVersion: v1
  kind: Service
  metadata:
    name: <YOUR-SERVICE-NAME>
  spec:
    type: LoadBalancer
    selector:
      app: <YOUR-APPLICATION-NAME>
    ports:
    - port: 80
      targetPort: 8080

service-account.yaml

  # Copyright 2024 Google LLC
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #      http://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: <YOUR-KSA-NAME> # TODO(developer): replace this value

Clean up

To avoid incurring charges to your Google Cloud account for the resources used in this tutorial, either delete the project that contains the resources, or keep the project and delete the individual resources.

Delete the project

The easiest way to eliminate billing is to delete the project that you created for the tutorial.

To delete the project:

  1. In the Google Cloud console, go to the Manage resources page.

    Manage resources

  2. In the project list, select the project that you want to delete, and then click Delete.

  3. In the dialog, type your PROJECT_ID, and then click Shut down to delete the project.

What's next
