Install AlloyDB Omni on Kubernetes

This page provides an overview of the AlloyDB Omni Kubernetes operator, with instructions for using it to deploy AlloyDB Omni onto a Kubernetes cluster. This page assumes basic familiarity with Kubernetes operation.

For instructions on installing AlloyDB Omni onto a standard Linux environment, see Install AlloyDB Omni.

Overview

To deploy AlloyDB Omni onto a Kubernetes cluster, install the AlloyDB Omni operator, an extension to the Kubernetes API provided by Google.

You configure and control a Kubernetes-based AlloyDB Omni database cluster by pairing declarative manifest files with the kubectl utility, just like any other Kubernetes-based deployment. You don't use the AlloyDB Omni CLI, which is intended for deployments onto individual Linux machines and not Kubernetes clusters.

AlloyDB Omni operator 1.1.0 compatibility

The AlloyDB Omni operator version 1.1.0 is not compatible with versions 15.5.3 and 15.5.4 of AlloyDB Omni. If you use one of these versions of AlloyDB Omni, you might receive an error similar to the following:

 Error from server (Forbidden): error when creating "[...]/dbcluster.yaml": admission webhook "vdbcluster.alloydbomni.dbadmin.goog" denied the request: unsupported database version 15.5.3 

Before you begin

You need access to a Kubernetes cluster, along with the kubectl, gcloud, and helm command-line tools that the installation steps on this page use.

Each node in the Kubernetes cluster must have the following:

  • A minimum of two x86 or AMD64 CPUs.
  • At least 8 GB of RAM.
  • Linux kernel version 4.18 or later.
  • Control group v2 (cgroup v2) enabled.

Install the AlloyDB Omni operator

To install the AlloyDB Omni operator, follow these steps:

  1. Define several environment variables:

         export GCS_BUCKET=alloydb-omni-operator
         export HELM_PATH=$(gcloud storage cat gs://$GCS_BUCKET/latest)
         export OPERATOR_VERSION="${HELM_PATH%%/*}"
  2. Download the AlloyDB Omni operator:

         gcloud storage cp gs://$GCS_BUCKET/$HELM_PATH ./ --recursive
  3. Install the AlloyDB Omni operator:

         helm install alloydbomni-operator alloydbomni-operator-${OPERATOR_VERSION}.tgz \
           --create-namespace \
           --namespace alloydb-omni-system \
           --atomic \
           --timeout 5m

    Successful installation displays the following output:

     NAME: alloydbomni-operator
    LAST DEPLOYED: CURRENT_TIMESTAMP 
    NAMESPACE: alloydb-omni-system
    STATUS: deployed
    REVISION: 1
    TEST SUITE: None 
    
  4. Clean up by deleting the downloaded AlloyDB Omni operator installation file. The file is named alloydbomni-operator-VERSION_NUMBER.tgz, and is located in your current working directory.
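After the Helm release is deployed, you can confirm that the operator is running. A quick check, assuming the alloydb-omni-system namespace used in the steps above:

```shell
# List the operator Pods; they should reach the Running state.
kubectl get pods --namespace alloydb-omni-system

# Confirm the Helm release status.
helm status alloydbomni-operator --namespace alloydb-omni-system
```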

Configure GDC connected storage

To install the AlloyDB Omni operator on GDC connected, you need to follow additional steps to configure storage because GDC connected clusters don't set a default storage class. You must set a default storage class before you create an AlloyDB Omni database cluster.

To learn how to set Symcloud Storage as the default storage class, see Set Symcloud Storage as the default storage class.

For more information about changing the default for all other storage classes, see Change the default StorageClass.
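As a sketch, any existing storage class can be marked as the cluster default by setting the standard Kubernetes annotation. STORAGE_CLASS_NAME is a placeholder for the class you want to promote:

```shell
# List the storage classes available in the cluster; the default (if any)
# is marked with "(default)".
kubectl get storageclass

# Mark a storage class as the cluster default by setting the standard
# is-default-class annotation. STORAGE_CLASS_NAME is a placeholder.
kubectl patch storageclass STORAGE_CLASS_NAME \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```

If another class is already annotated as the default, remove or set its annotation to "false" first so that only one default remains.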

Red Hat OpenShift reconciliation steps

If you use Red Hat OpenShift 4.12 or later, you must complete the following steps after you install the AlloyDB Omni operator and before you create an AlloyDB Omni database cluster on the Kubernetes cluster. Otherwise, you can skip these steps.

  1. Add permissions to update AlloyDB Omni instance finalizers by editing the system:controller:statefulset-controller cluster role as follows:

     kubectl edit clusterrole system:controller:statefulset-controller
  2. In the text editor, append the following to the end of the cluster role:

      - apiGroups:
        - alloydbomni.internal.dbadmin.goog
        resources:
        - instances/finalizers
        verbs:
        - update
      - apiGroups:
        - alloydbomni.internal.dbadmin.goog
        resources:
        - backuprepositories/finalizers
        verbs:
        - update

    Because Red Hat OpenShift has OwnerReferencesPermissionEnforcement enabled, the StatefulSet controller needs these additional permissions to update instance finalizers. Without the permission to update instance finalizers, the StatefulSet controller fails to create the database Persistent Volume Claim (PVC), and the following error message appears in the database StatefulSet events:

     Warning  FailedCreate  [...] cannot set blockOwnerDeletion if an ownerReference refers to a resource you can't set finalizers on 
    
  3. Add permissions to update AlloyDB Omni DBInstance finalizers by editing the fleet-manager-role cluster role:

     kubectl edit clusterrole fleet-manager-role
  4. In the text editor, append the following to the end of the cluster role:

      - apiGroups:
        - alloydbomni.dbadmin.goog
        resources:
        - dbinstances/finalizers
        verbs:
        - update
  5. Add the anyuid security context constraint to the default service account in your Red Hat OpenShift project as follows:

     oc adm policy add-scc-to-user anyuid system:serviceaccount:OPENSHIFT_PROJECT:default

    You must allow the default service account to use the anyuid security context constraint because, within the database Pod, the init container runs as root and the other containers run with specific user IDs. Without permission to use anyuid, the StatefulSet controller fails to create the database PVC with the following error message found in the database StatefulSet events:

     Warning  FailedCreate  [...]    unable to validate against any security context constraint 
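After editing both cluster roles, you can spot-check that the appended rules are present. A sketch, assuming the role names used in the steps above:

```shell
# Confirm the finalizer permissions were appended to each cluster role.
kubectl get clusterrole system:controller:statefulset-controller -o yaml | grep -A 2 'finalizers'
kubectl get clusterrole fleet-manager-role -o yaml | grep -A 2 'finalizers'
```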
    

Create a database cluster

An AlloyDB Omni database cluster contains all the storage and compute resources needed to run an AlloyDB Omni server, including the primary server, any replicas, and all of your data.

After you install the AlloyDB Omni operator on your Kubernetes cluster, you can create an AlloyDB Omni database cluster on the Kubernetes cluster by applying a manifest similar to the following:

 apiVersion: v1
 kind: Secret
 metadata:
   name: db-pw-DB_CLUSTER_NAME
 type: Opaque
 data:
   DB_CLUSTER_NAME: "ENCODED_PASSWORD"
 ---
 apiVersion: alloydbomni.dbadmin.goog/v1
 kind: DBCluster
 metadata:
   name: DB_CLUSTER_NAME
 spec:
   databaseVersion: "15.7.0"
   primarySpec:
     adminUser:
       passwordRef:
         name: db-pw-DB_CLUSTER_NAME
     resources:
       cpu: CPU_COUNT
       memory: MEMORY_SIZE
       disks:
       - name: DataDisk
         size: DISK_SIZE

Replace the following:

  • DB_CLUSTER_NAME: the name of this database cluster, for example my-db-cluster.

  • ENCODED_PASSWORD: the database login password for the default postgres user role, encoded as a base64 string, for example Q2hhbmdlTWUxMjM= for ChangeMe123.

  • CPU_COUNT: the number of CPUs available to each database instance in this database cluster.

  • MEMORY_SIZE: the amount of memory per database instance of this database cluster. We recommend setting this to 8 gigabytes per CPU. For example, if you set cpu to 2 earlier in this manifest, then we recommend setting memory to 16Gi.

  • DISK_SIZE: the disk size per database instance, for example 10Gi.
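To produce the base64-encoded password for the Secret, you can use the base64 utility. The -n flag matters: without it, a trailing newline is encoded into the value:

```shell
# Encode the example password from this page as base64.
# Substitute your own password in place of ChangeMe123.
echo -n 'ChangeMe123' | base64
# → Q2hhbmdlTWUxMjM=
```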

After you apply this manifest, your Kubernetes cluster contains an AlloyDB Omni database cluster with the specified memory, CPU, and storage configuration. To establish a test connection with the new database cluster, see Connect using the preinstalled psql.
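Putting the steps together, a minimal sketch, assuming the manifest above is saved as dbcluster.yaml (a hypothetical filename):

```shell
# Apply the Secret and DBCluster manifests.
kubectl apply -f dbcluster.yaml

# Watch the database cluster resource until it reports a ready state.
kubectl get dbclusters.alloydbomni.dbadmin.goog -w
```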

For more information about Kubernetes manifests and how to apply them, see Managing resources.

What's next
