Scaling Cassandra

This topic discusses how to scale up Cassandra horizontally and vertically, and how to scale down Cassandra.

Scaling Cassandra horizontally

To scale up Cassandra horizontally:

  1. Make sure that your apigee-data node pool has additional capacity, as needed, before scaling Cassandra. See also Configuring dedicated node pools.
  2. Set the value of the cassandra.replicaCount configuration property in your overrides file. The value of replicaCount must be a multiple of 3. To determine your desired replicaCount value, consider the following:
    • Estimate the traffic demands for your proxies.
    • Load test and make reasonable predictions of your CPU utilization.
    • You can specify different replicaCount values in different regions.
    • You can expand the replicaCount in the future in your overrides file.

    For information about this property, see the Configuration property reference. See also Manage runtime plane components.
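
    For example, a minimal overrides snippet that sets a nine-node ring (the value 9 is illustrative; any multiple of 3 works):

     cassandra:
       replicaCount: 9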

  3. Apply the changes. For example:
    $APIGEECTL_HOME/apigeectl apply --datastore -f overrides/overrides.yaml
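
    After the apply completes, you can confirm that the additional Cassandra pods have been created and are running, using the same label selector shown later in this topic:

    kubectl get pods -n yourNamespace -l app=apigee-cassandra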

Scaling Cassandra vertically

This section explains how to scale the Cassandra pods vertically to accommodate higher CPU and memory requirements.

Overview

For an Apigee hybrid production deployment, we recommend that you create at least two separate node pools: one for stateful services (Cassandra) and one for stateless (runtime) services. For example, see GKE production cluster requirements.

For the stateful Cassandra node pool, we recommend starting with 8 CPU cores and 30 GB of memory. Once the node pool is provisioned, these settings cannot be changed. See also Configure Cassandra for production.
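
For example, on GKE a node pool of n1-standard-8 machines provides 8 vCPUs and 30 GB of memory per node. The following is a sketch only; the cluster name, zone, and node pool name are placeholders to replace with your own values:

    # Create a dedicated node pool for Cassandra on GKE (illustrative values).
    gcloud container node-pools create apigee-data \
      --cluster CLUSTER_NAME \
      --zone CLUSTER_ZONE \
      --machine-type n1-standard-8 \
      --num-nodes 3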

If you need to scale up the Cassandra pods to accommodate higher CPU and memory requirements, follow the steps described in this topic.

Scaling up the Cassandra pods

Follow these steps to increase the CPU and memory for the stateful node pool used for Cassandra:

  1. Follow your Kubernetes platform's instructions to add a new node pool to the cluster. Supported platforms are listed in the installation instructions.
  2. Verify that the new node pool is ready:
    kubectl get nodes -l NODE_POOL_LABEL_NAME=NODE_POOL_LABEL_VALUE

    Example command:

    kubectl get nodes -l cloud.google.com/gke-nodepool=apigee-data-new

    Example output:

    NAME                                                STATUS   ROLES    AGE     VERSION
    gke-apigee-data-new-441387c2-2h5n   Ready    <none>   4m28s   v1.14.10-gke.17
    gke-apigee-data-new-441387c2-6941   Ready    <none>   4m28s   v1.14.10-gke.17
    gke-apigee-data-new-441387c2-nhgc   Ready    <none>   4m29s   v1.14.10-gke.17
  3. Update your overrides file to use the new node pool for Cassandra and update the pod resources to the increased CPU count and memory size that you wish to use. For example, for a GKE cluster, use a configuration similar to the following. If you are on another Kubernetes platform, you need to adjust the apigeeData.key value accordingly:
     nodeSelector:
       requiredForScheduling: true
       apigeeData:
         key: "NODE_POOL_LABEL_NAME"
         value: "NODE_POOL_LABEL_VALUE"
     cassandra:
       resources:
         requests:
           cpu: NODE_POOL_CPU_NUMBER
           memory: NODE_POOL_MEMORY_SIZE

    For example:

     nodeSelector:
       requiredForScheduling: true
       apigeeData:
         key: "cloud.google.com/gke-nodepool"
         value: "apigee-data-new"
     cassandra:
       resources:
         requests:
           cpu: 14
           memory: 16Gi
  4. Apply the overrides file to the cluster:
    $APIGEECTL_HOME/apigeectl apply -f ./overrides/overrides.yaml --datastore

When you complete these steps, the Cassandra pods will begin rolling over to the new node pool.
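
You can watch the rollover progress with a pod watch, using the same label selector used elsewhere in this topic:

    kubectl get pods -n yourNamespace -l app=apigee-cassandra -w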

Scaling down Cassandra

Apigee hybrid employs a ring of Cassandra nodes as a StatefulSet. Cassandra provides persistent storage for certain Apigee entities on the runtime plane. For more information about Cassandra, see About the runtime plane.

Cassandra is a resource-intensive service and should not be deployed on a pod with any other hybrid services. Depending on the load, you might want to scale down the number of Cassandra nodes in the ring in your cluster.

The general process for scaling down a Cassandra ring is:

  1. Make sure the Cassandra cluster is healthy and has enough storage to support scaling down.
  2. Update the cassandra.replicaCount property in overrides.yaml.
  3. Apply the configuration update.
  4. Delete the persistent volume claim or volume, depending on your cluster configuration.

What you need to know

  • If any node other than the nodes to be decommissioned is unhealthy, do not proceed. Kubernetes will not be able to downscale the pods from the cluster.
  • Always scale down or up in multiples of three nodes.

Downscale Cassandra

  1. Validate that the cluster is healthy and all the nodes are up and running, as the following example shows:
    kubectl get pods -n yourNamespace -l app=apigee-cassandra
    NAME                 READY   STATUS    RESTARTS   AGE
    apigee-cassandra-default-0   1/1     Running   0          2h
    apigee-cassandra-default-1   1/1     Running   0          2h
    apigee-cassandra-default-2   1/1     Running   0          2h
    apigee-cassandra-default-3   1/1     Running   0          16m
    apigee-cassandra-default-4   1/1     Running   0          14m
    apigee-cassandra-default-5   1/1     Running   0          13m
    apigee-cassandra-default-6   1/1     Running   0          9m
    apigee-cassandra-default-7   1/1     Running   0          9m
    apigee-cassandra-default-8   1/1     Running   0          8m
    kubectl -n yourNamespace exec -it apigee-cassandra-default-0 -- nodetool status
    Datacenter: us-east1
    ====================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load       Tokens       Owns (effective)  Host ID                               Rack
    UN  10.16.2.6    690.17 KiB  256          48.8%             b02089d1-0521-42e1-bbed-900656a58b68  ra-1
    UN  10.16.4.6    705.55 KiB  256          51.6%             dc6b7faf-6866-4044-9ac9-1269ebd85dab  ra-1
    UN  10.16.11.11  674.36 KiB  256          48.3%             c7906366-6c98-4ff6-a4fd-17c596c33cf7  ra-1
    UN  10.16.1.11   697.03 KiB  256          49.8%             ddf221aa-80aa-497d-b73f-67e576ff1a23  ra-1
    UN  10.16.5.13   703.64 KiB  256          50.9%             2f01ac42-4b6a-4f9e-a4eb-4734c24def95  ra-1
    UN  10.16.8.15   700.42 KiB  256          50.6%             a27f93af-f8a0-4c88-839f-2d653596efc2  ra-1
    UN  10.16.11.3   697.03 KiB  256          49.8%             dad221ff-dad1-de33-2cd3-f1.672367e6f  ra-1
    UN  10.16.14.16  704.04 KiB  256          50.9%             1feed042-a4b6-24ab-49a1-24d4cef95473  ra-1
    UN  10.16.16.1   699.82 KiB  256          50.6%             beef93af-fee0-8e9d-8bbf-efc22d653596  ra-1
  2. Determine if the Cassandra cluster has enough storage to support scaling down. After scaling down, the Cassandra nodes should have no more than 75% of their storage full.

    For example, if your cluster has 6 Cassandra nodes and they are all approximately 50% full, downscaling to three nodes would leave all three at 100%, which would not leave any room for continued operation.

    If, however, you have 9 Cassandra nodes, all approximately 50% full, downscaling to 6 nodes would leave each remaining node approximately 75% full. You can downscale.
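
    One way to check current disk utilization is to run df inside each Cassandra pod against its data volume. This is a sketch only: it assumes the data volume is mounted at /opt/apigee/data, so verify the actual mount path in your pod spec before relying on it:

    # Check data-volume usage on one Cassandra pod (repeat for each pod in the ring).
    # The mount path /opt/apigee/data is an assumption; confirm it for your installation.
    kubectl exec apigee-cassandra-default-0 -n yourNamespace -- df -h /opt/apigee/data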

  3. Update or add the cassandra.replicaCount property in your overrides.yaml file. For example, if the current node count is 9, change it to 6:
     cassandra:
       replicaCount: 6
  4. Apply the configuration change to your cluster:
    ./apigeectl apply --datastore -f overrides/overrides.yaml
    namespace/apigee unchanged
    secret/ssl-cassandra unchanged
    storageclass.storage.k8s.io/apigee-gcepd unchanged
    service/apigee-cassandra unchanged
    statefulset.apps/apigee-cassandra configured
  5. Verify that all of the remaining Cassandra nodes are running:
    kubectl get pods -n yourNamespace -l app=apigee-cassandra
    NAME                 READY   STATUS    RESTARTS   AGE
    apigee-cassandra-default-0   1/1     Running   0          3h
    apigee-cassandra-default-1   1/1     Running   0          3h
    apigee-cassandra-default-2   1/1     Running   0          2h
    apigee-cassandra-default-3   1/1     Running   0          25m
    apigee-cassandra-default-4   1/1     Running   0          24m
    apigee-cassandra-default-5   1/1     Running   0          23m
  6. Verify that the cassandra.replicaCount value equals the number of nodes returned by the nodetool status command.

    For example, if you scaled Cassandra down to six nodes:

    kubectl exec apigee-cassandra-default-0 -n apigee -- nodetool -u JMX_user -pw JMX_password status
    Datacenter: us-east1
    ====================
    Status=Up/Down
    |/ State=Normal/Leaving/Joining/Moving
    --  Address      Load         Tokens       Owns (effective)  Host ID                               Rack
    UN  10.16.2.6    1009.17 KiB  256          73.8%             b02089d1-0521-42e1-bbed-900656a58b68  ra-1
    UN  10.16.4.6    1065.55 KiB  256          75.6%             dc6b7faf-6866-4044-9ac9-1269ebd85dab  ra-1
    UN  10.16.11.11  999.36 KiB   256          72.8%             c7906366-6c98-4ff6-a4fd-17c596c33cf7  ra-1
    UN  10.16.1.11   1017.03 KiB  256          74.2%             ddf221aa-80aa-497d-b73f-67e576ff1a23  ra-1
    UN  10.16.5.13   1061.64 KiB  256          75.9%             2f01ac42-4b6a-4f9e-a4eb-4734c24def95  ra-1
    UN  10.16.8.15   1049.42 KiB  256          74.9%             a27f93af-f8a0-4c88-839f-2d653596efc2  ra-1
  7. After the Cassandra cluster is scaled down, verify that the persistent volume claims (PVCs) correspond to the remaining Cassandra nodes.

    Get the names of the PVCs:

    kubectl get pvc -n yourNamespace 
    NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    cassandra-data-apigee-cassandra-default-0   Bound    pvc-f9c2a5b9-818c-11e9-8862-42010a8e014a   100Gi      RWO            apigee-gcepd   7h
    cassandra-data-apigee-cassandra-default-1   Bound    pvc-2956cb78-818d-11e9-8862-42010a8e014a   100Gi      RWO            apigee-gcepd   7h
    cassandra-data-apigee-cassandra-default-2   Bound    pvc-79de5407-8190-11e9-8862-42010a8e014a   100Gi      RWO            apigee-gcepd   7h
    cassandra-data-apigee-cassandra-default-3   Bound    pvc-d29ba265-81a2-11e9-8862-42010a8e014a   100Gi      RWO            apigee-gcepd   5h
    cassandra-data-apigee-cassandra-default-4   Bound    pvc-0675a0ff-81a3-11e9-8862-42010a8e014a   100Gi      RWO            apigee-gcepd   5h
    cassandra-data-apigee-cassandra-default-5   Bound    pvc-354afa95-81a3-11e9-8862-42010a8e014a   100Gi      RWO            apigee-gcepd   5h

    In this example, you should not see PVCs corresponding to the three down-scaled nodes:

    • cassandra-data-apigee-cassandra-default-8
    • cassandra-data-apigee-cassandra-default-7
    • cassandra-data-apigee-cassandra-default-6
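
    If your cluster configuration requires you to delete the leftover persistent volume claims manually (step 4 of the general process above), you can remove each one with kubectl; for example, for one of the down-scaled nodes:

    # Delete a leftover PVC for a removed Cassandra node (repeat for each down-scaled node).
    kubectl delete pvc cassandra-data-apigee-cassandra-default-6 -n yourNamespace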