This page provides various YAML configuration examples for deploying and managing AlloyDB Omni on Kubernetes.
DBCluster examples
Minimal DBCluster
A basic configuration to get a DBCluster running.
```yaml
# This is a minimal DBCluster spec. See v1_dbcluster_full.yaml for more configurations.
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  databaseVersion: "17.7.0"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    resources:
      memory: 5Gi
      cpu: 1
      disks:
      - name: DataDisk
        size: 10Gi
```
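The value in the Secret's `data` field must be base64-encoded. Any base64 tool works; as a quick sketch in Python:

```python
import base64

# Encode the admin password for the Secret's data field.
password = "ChangeMe123"
encoded = base64.b64encode(password.encode("utf-8")).decode("ascii")
print(encoded)  # Q2hhbmdlTWUxMjM=

# Decoding recovers the original password.
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # ChangeMe123
```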
Full DBCluster
A comprehensive DBCluster configuration showing more options.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  allowExternalIncomingTraffic: true
  availability:
    healthcheckPeriodSeconds: 30  # Default is 30 seconds; new in 1.2.0. Minimum value is 1, maximum is 86400.
    autoFailoverTriggerThreshold: 3  # Number of failed health checks after which failover is triggered.
    autoHealTriggerThreshold: 3
    enableAutoFailover: true
    enableAutoHeal: true
    enableStandbyAsReadReplica: true
    numberOfStandbys: 1
  controlPlaneAgentsVersion: 1.6.0
  databaseVersion: "17.7.0"
  databaseImageOSType: UBI9
  isDeleted: false
  mode: ""
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    allowExternalIncomingTrafficToInstance: false
    auditLogTarget: {}
    dbLoadBalancerOptions:
      annotations:
        networking.gke.io/load-balancer-type: "internal"
        lb.company.com/enabled: "true"
      gcp: {}
    features:
      columnarSpillToDisk:
        cacheSize: 50Gi
      ultraFastCache:
        cacheSize: 100Gi
        # Use either a generic volume or a local volume.
        genericVolume:
          storageClass: "local-storage"
        # localVolume:
        #   path: "/mnt/disks/raid/0"
        #   nodeAffinity:
        #     required:
        #       nodeSelectorTerms:
        #       - matchExpressions:
        #         - key: "cloud.google.com/gke-local-nvme-ssd"
        #           operator: "In"
        #           values:
        #           - "true"
      googleMLExtension:
        config:
          vertexAIKeyRef: vertex-ai-key-alloydb  # Secret used to enable AlloyDB Omni to access AlloyDB AI features.
          vertexAIRegion: us-central1  # Default.
    resources:
      cpu: "12"
      disks:
      - name: DataDisk
        size: 1000Gi
        storageClass: px-ceph
      - name: LogDisk
        size: 10Gi
        storageClass: px-ceph
      - name: ObsDisk
        size: 4Gi
        storageClass: px-ceph
      - name: BackupDisk
        size: 10Gi
        storageClass: px-ceph
      memory: 100Gi
    walArchiveSetting:
      location: wal/log  # Enable WAL archiving and archive logs to /archive/wal/log.
    sidecarRef:
      name: cv-sidecar-config  # Sidecar configuration that is referenced here.
    parameters:
      google_columnar_engine.enabled: "on"
      google_columnar_engine.memory_size_in_mb: "256"
      google_storage.parallel_log_replay_enabled: 'off'
      google_pg_auth.enable_auth: 'false'
      shared_preload_libraries: "pg_cron,pg_bigm"
      archive_mode: 'on'
      archive_timeout: '300'
      work_mem: '4MB'
      # Operator default value:
      # shared_preload_libraries='g_stats,google_columnar_engine,google_db_advisor,google_job_scheduler,pg_stat_statements,pglogical,pgaudit'
      log_rotation_age: "2"  # Rotate every two minutes. Set to "0" to disable age-based rotation. If unset, no age-based rotation.
      log_rotation_size: "400000"  # Rotate every 400,000 KB. Set to "0" to disable size-based rotation. If unset, rotate every 200,000 KB.
    schedulingconfig:
      tolerations:
      - effect: NoSchedule
        key: alloydb-node-type
        operator: Exists
      nodeaffinity:
        # requiredDuringSchedulingIgnoredDuringExecution is a hard requirement: pods
        # that cannot meet it are not scheduled.
        preferredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: alloydb-node-type
              operator: In
              values:
              - database
      podAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - store
            topologyKey: "kubernetes.io/hostname"
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 1
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: security
                operator: In
                values:
                - S1
            topologyKey: "topology.kubernetes.io/zone"
    services:
      Logging: true
      Monitoring: true
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "example-local-pv"
spec:
  capacity:
    storage: 375Gi
  accessModes:
  - "ReadWriteOnce"
  persistentVolumeReclaimPolicy: "Retain"
  storageClassName: "local-storage"
  local:
    path: "/mnt/disks/raid/0"
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        # The following example key applies to an operator that is deployed on
        # Google Cloud and uses the local SSD option.
        - key: "cloud.google.com/gke-local-nvme-ssd"
          operator: "In"
          values:
          - "true"
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBInstance
metadata:
  name: dbcluster-sample-rp-1
spec:
  instanceType: ReadPool
  dbcParent:
    name: dbcluster-sample
  nodeCount: 2
  resources:
    memory: 6Gi
    cpu: 2
    disks:
    - name: DataDisk
      size: 15Gi
  schedulingconfig:
    tolerations:
    - key: "node-role.kubernetes.io/control-plane"
      operator: "Exists"
      effect: "NoSchedule"
    nodeaffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        preference:
          matchExpressions:
          - key: another-node-label-key
            operator: In
            values:
            - another-node-label-value
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - store
          topologyKey: "kubernetes.io/hostname"
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 1
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: security
              operator: In
              values:
              - S1
          topologyKey: "topology.kubernetes.io/zone"
```
DBCluster with ML agent
Example of configuring the ML agent within a DBCluster.
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: v1
kind: Secret
metadata:
  name: vertex-ai-key-alloydb
type: Opaque
data:
  private-key.json: ""
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  databaseVersion: "17.7.0"
  primarySpec:
    features:
      googleMLExtension:
        enabled: true
        config:
          vertexAIKeyRef: vertex-ai-key-alloydb
          vertexAIRegion: us-central1
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    resources:
      memory: 5Gi
      cpu: 1
      disks:
      - name: DataDisk
        size: 10Gi
```
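The `private-key.json` value in the `vertex-ai-key-alloydb` Secret must hold the base64-encoded contents of your Vertex AI service-account key file. As an illustrative sketch in Python (the key content below is hypothetical; in practice, read your downloaded JSON key file):

```python
import base64
import json

# Hypothetical service-account key content; replace with the contents of
# your actual private-key.json file.
key = {"type": "service_account", "project_id": "my-project"}
raw = json.dumps(key).encode("utf-8")
encoded = base64.b64encode(raw).decode("ascii")

# Round-trip check: decoding the Secret value yields the original JSON.
assert json.loads(base64.b64decode(encoded)) == key
```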
DBCluster with load balancer
Example of exposing a DBCluster through an internal load balancer.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  databaseVersion: "17.7.0"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    resources:
      memory: 5Gi
      cpu: 1
      disks:
      - name: DataDisk
        size: 10Gi
    dbLoadBalancerOptions:
      annotations:
        # Creates an internal LoadBalancer in GKE.
        networking.gke.io/load-balancer-type: "internal"
  allowExternalIncomingTraffic: true
```
DBCluster with Commvault sidecar
Example of enabling WAL archiving and attaching a Commvault backup sidecar through `sidecarRef`.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-pw-dbcluster-sample
type: Opaque
data:
  dbcluster-sample: "Q2hhbmdlTWUxMjM="  # Password is ChangeMe123
---
apiVersion: alloydbomni.dbadmin.goog/v1
kind: DBCluster
metadata:
  name: dbcluster-sample
spec:
  databaseVersion: "17.7.0"
  primarySpec:
    adminUser:
      passwordRef:
        name: db-pw-dbcluster-sample
    resources:
      memory: 5Gi
      cpu: 1
      disks:
      - name: DataDisk
        size: 10Gi
      - name: LogDisk
        size: 10Gi
    walArchiveSetting:
      location: wal/log  # Enable WAL archiving and archive logs to /archive/wal/log.
    sidecarRef:
      name: cv-sidecar-config
```
Backup and Restore
Backup plan
Example of scheduling full and incremental backups.
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: BackupPlan
metadata:
  name: backupplan1
spec:
  dbclusterRef: dbcluster-sample
  backupRetainDays: 14
  paused: false
  backupSchedules:
    # Full backup at 00:00 every Sunday.
    full: "0 0 * * 0"
    # Incremental backup at 21:00 every day.
    incremental: "0 21 * * *"
```
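The `backupSchedules` values use standard five-field cron syntax: minute, hour, day of month, month, and day of week (0 = Sunday). A small Python sketch makes the field layout explicit (the `describe` helper is purely illustrative, not part of any API):

```python
# Map a five-field cron expression onto its field names so schedules
# like "0 0 * * 0" are easier to read.
FIELDS = ["minute", "hour", "day_of_month", "month", "day_of_week"]

def describe(cron: str) -> dict:
    values = cron.split()
    assert len(values) == 5, "expected five cron fields"
    return dict(zip(FIELDS, values))

# Full backup: minute 0, hour 0, any day/month, weekday 0 (Sunday).
print(describe("0 0 * * 0"))
# Incremental backup: 21:00 every day.
print(describe("0 21 * * *"))
```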
Restore from backup
Example of restoring a DBCluster from an existing backup.

```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Restore
metadata:
  name: restore1
spec:
  sourceDBCluster: dbcluster-sample
  backup: backup1
```
Clone
Example of cloning a DBCluster to a new cluster at a specific point in time.

```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Restore
metadata:
  name: clone1
spec:
  sourceDBCluster: dbcluster-sample
  pointInTime: "2024-02-23T19:59:43Z"
  clonedDBClusterConfig:
    dbclusterName: new-dbcluster-sample
```
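The `pointInTime` value is a UTC timestamp in RFC 3339 format (the trailing `Z` denotes UTC). A quick format sanity check, sketched in Python:

```python
from datetime import datetime, timezone

# Parse the UTC timestamp format used by pointInTime.
raw = "2024-02-23T19:59:43Z"
ts = datetime.strptime(raw, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)
print(ts.isoformat())  # 2024-02-23T19:59:43+00:00
```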
High Availability and Data Resilience
Failover
Example of performing an unplanned failover to a standby instance.
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Failover
metadata:
  name: failover-sample
spec:
  dbclusterRef: dbcluster-sample
```
Switchover
Example of performing a controlled switchover to a standby instance.
```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Switchover
metadata:
  name: switchover-sample
spec:
  dbclusterRef: dbcluster-sample
```
Monitoring and connection pooling
PgBouncer configuration
Example of deploying PgBouncer for connection pooling in front of a DBCluster.

```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: PgBouncer
metadata:
  name: mypgbouncer
spec:
  allowSuperUserAccess: true
  dbclusterRef: dbcluster-sample
  replicaCount: 1
  parameters:
    pool_mode: transaction
    ignore_startup_parameters: extra_float_digits
    default_pool_size: "15"
    max_client_conn: "800"
    max_db_connections: "160"
  podSpec:
    resources:
      memory: 1Gi
      cpu: 1
    image: "gcr.io/alloydb-omni-staging/g-pgbouncer:1.4.0"
    serviceOptions:
      type: "ClusterIP"
```
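With `pool_mode: transaction`, many client connections share a smaller pool of server connections, since a server connection is held only for the duration of a transaction. Using the values from the example above, the worst-case multiplexing ratio can be sketched as follows (rough arithmetic only; actual sharing depends on transaction durations):

```python
# Rough connection-multiplexing arithmetic for the PgBouncer example above.
max_client_conn = 800     # client connections PgBouncer will accept
max_db_connections = 160  # total server connections PgBouncer opens to the database
default_pool_size = 15    # server connections per user/database pair

# Upper bound on how many clients may share each server connection.
ratio = max_client_conn / max_db_connections
print(ratio)  # 5.0
```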
Sidecar example
Example of defining a sidecar container that a DBCluster can reference through `sidecarRef`.

```yaml
apiVersion: alloydbomni.dbadmin.goog/v1
kind: Sidecar
metadata:
  name: sidecar-sample
spec:
  sidecars:
  - image: busybox
    name: sidecar-sample
    volumeMounts:
    - name: obsdisk
      mountPath: /logs
    command: ["/bin/sh"]
    args:
    - -c
    - |
      while true
      do
        date
        set -x
        ls -lh /logs/diagnostic
        set +x
      done
```

