Expand Cassandra persistent volumes

This process allows you to expand the persistent volumes used by the Apigee hybrid Cassandra database to accommodate greater storage needs without needing to create new nodes just to provide more storage.

Overview

The Apigee hybrid Cassandra component uses persistent volumes to store data. The size of each persistent volume is defined during installation and initial configuration, and that initial size is immutable. As a result, any new node added to the cluster uses the same persistent volume size.
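The initial size comes from the cassandra.storage.capacity property in the overrides file used at installation time. A minimal fragment (the 50Gi value here is illustrative, not a recommendation) might look like:

```yaml
# overrides.yaml fragment (illustrative value)
cassandra:
  storage:
    capacity: 50Gi   # per-node persistent volume size, fixed at creation
```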

It is possible to increase the size of an existing persistent volume by editing its PersistentVolumeClaim (PVC) directly, but new nodes will still be created with the smaller initial persistent volume size.
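For example, a single claim can be resized in place with kubectl patch. This is a sketch only: it assumes the claim name of a default Apigee hybrid install and requires that the underlying StorageClass has allowVolumeExpansion: true.

```shell
# Sketch: grow one existing claim in place (default-install claim name).
# Requires a StorageClass with allowVolumeExpansion: true.
kubectl -n apigee patch pvc cassandra-data-apigee-cassandra-default-0 \
  --type merge \
  --patch '{"spec": {"resources": {"requests": {"storage": "100Gi"}}}}'
```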

If your hybrid Cassandra database is nearing its storage capacity, you can use this procedure to expand the existing persistent volumes and allow new nodes to expand their persistent volumes as well.
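To judge whether a node is nearing capacity, you can read the usage percentage straight out of `df` output from each pod. A minimal sketch follows; the 80% threshold is an assumption, not an Apigee requirement, and the sample line mirrors the `df -h` output shown at the end of this procedure.

```shell
#!/bin/sh
# Print the "Use%" column from a single df report line.
usage_pct() {
  awk '{ gsub("%", "", $5); print $5 }'
}

# In a live cluster you would feed this from:
#   kubectl exec -n apigee apigee-cassandra-default-0 -- \
#     df -h /opt/apigee/data | tail -n 1
line="/dev/sdb 99G 69M 99G 1% /opt/apigee/data"
pct=$(printf '%s\n' "$line" | usage_pct)

if [ "$pct" -ge 80 ]; then
  echo "data volume at ${pct}% - consider expanding"
else
  echo "data volume at ${pct}% - ok"
fi
```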

Expand Cassandra persistent volumes

  1. Update the storage request on each PVC to the desired size:
    kubectl -n apigee edit pvc
  2. Check the updated volume capacity:
    kubectl get pvc -n apigee
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    cassandra-data-apigee-cassandra-default-0   Bound    pvc-92234ba7-941b-4dab-82c6-8a5288a2c8d4   100Gi      RWO            standard       21m
    cassandra-data-apigee-cassandra-default-1   Bound    pvc-6be911fc-91f7-465d-a02e-933428ee10b2   100Gi      RWO            standard       20m
    cassandra-data-apigee-cassandra-default-2   Bound    pvc-14ba34e4-fd5c-4d59-8413-a331dcad3404   100Gi      RWO            standard       19m
  3. Back up, delete, and recreate the statefulset with the new storage size. The following commands create a configuration file, apigee-cassandra-default.yaml, that captures the current Cassandra configuration. You then modify and apply this configuration:
    1. kubectl -n apigee get sts apigee-cassandra-default -o yaml > apigee-cassandra-default.yaml
    2. kubectl -n apigee delete sts --cascade=orphan apigee-cassandra-default
    3. Check that the delete operation is complete:
      kubectl get sts -n apigee

      Your output should look like:

      No resources found in apigee namespace.
    4. Update the storage size in the apigee-cassandra-default.yaml file with the new storage size. This must match the size you intend to apply in your overrides.yaml. For example:
      resources:
        requests:
          storage: 100Gi
    5. Re-apply the statefulset configuration with the updated storage size:
      kubectl apply -f apigee-cassandra-default.yaml
    6. Verify that the statefulset was re-created correctly:
      kubectl get sts -n apigee

      Your output should look something like:

      NAME                       READY   AGE
      apigee-cassandra-default   3/3     6m56s
  4. Update your overrides file with the new volume size that you specified when you edited the PVCs:
    cassandra:
      storage:
        capacity: 100Gi
  5. Apply the updated configuration to the cluster:
    ../apigeectl apply --datastore -f overrides/overrides.yaml
    Parsing file: config/values.yaml
    Parsing file: overrides/overrides.yaml
    cleansing older AD's (v1alpha1) istio resources...
    
    Invoking "kubectl apply" with YAML config...
    
    apigeedatastore.apigee.cloud.google.com/apigee-cassandra unchanged
  6. Check that the newly created statefulset has the updated storage size:
    kubectl get sts -n apigee apigee-cassandra-default -o yaml | grep storage
    storage: 100Gi
  7. Check that the Cassandra pods' data volumes have been updated with the new size:

    kubectl exec -n apigee -it apigee-cassandra-default-0 -- df -h|grep "/opt/apigee/data"
    /dev/sdb         99G   69M   99G   1% /opt/apigee/data
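The backup-and-recreate portion of the procedure (steps 3.1 through 3.5) can be sketched as a single script. The statefulset name matches a default install, and the sed substitution is a convenience that assumes `storage:` appears only in the volume claim template, so review the saved file before applying it:

```shell
#!/bin/sh
# Sketch of steps 3.1-3.5 (default-install names; adjust NEW_SIZE).
set -e
NEW_SIZE=100Gi

# 3.1: save the current statefulset definition.
kubectl -n apigee get sts apigee-cassandra-default -o yaml \
  > apigee-cassandra-default.yaml

# 3.2: delete the statefulset without deleting its pods.
kubectl -n apigee delete sts --cascade=orphan apigee-cassandra-default

# 3.4: rewrite the storage request in the saved definition.
sed -i "s/storage: .*/storage: ${NEW_SIZE}/" apigee-cassandra-default.yaml

# 3.5: recreate the statefulset with the new size.
kubectl apply -f apigee-cassandra-default.yaml
```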