Cassandra troubleshooting guide

This topic discusses steps you can take to troubleshoot and fix problems with the Cassandra datastore. Cassandra is a persistent datastore that runs in the cassandra component of the hybrid runtime architecture. See also Runtime service configuration overview.

Cassandra pods are stuck in the Pending state

Symptom

When starting up, the Cassandra pods remain in the Pending state.

Error message

When you use kubectl to view the pod states, you see that one or more Cassandra pods are stuck in the Pending state. The Pending state indicates that Kubernetes is unable to schedule the pod on a node: the pod cannot be created. For example:

 kubectl get pods -n namespace 
 
NAME                                     READY   STATUS      RESTARTS   AGE
adah-resources-install-4762w             0/4     Completed   0          10m
apigee-cassandra-default-0               0/1     Pending     0          10m
...

Possible causes

A pod stuck in the Pending state can have multiple causes. For example:

Cause Description
Insufficient resources There is not enough CPU or memory available to create the pod.
Volume not created The pod is waiting for the persistent volume to be created.
Missing Amazon EBS CSI driver The required Amazon EBS CSI driver is not installed.

Diagnosis

Use kubectl to describe the pod to determine the source of the error:

kubectl -n namespace describe pods pod_name

For example:

kubectl describe pods apigee-cassandra-default-0 -n apigee

The output may show one of these possible problems:

  • If the problem is insufficient resources, you will see a Warning message that indicates insufficient CPU or memory.
  • If the error message indicates that the pod has unbound immediate PersistentVolumeClaims (PVC), it means the pod is not able to create its persistent volume.
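These two event signatures are distinct enough to check for mechanically. The following is a minimal sketch assuming the standard Kubernetes scheduler event wording; the classify_pending helper is hypothetical and not part of Apigee hybrid:

```shell
# Classify a Pending pod from the Events text of `kubectl describe`.
# On a live cluster, pass in the output of:
#   kubectl describe pods apigee-cassandra-default-0 -n apigee
classify_pending() {
  case "$1" in
    *"Insufficient cpu"*|*"Insufficient memory"*) echo "insufficient-resources" ;;
    *"unbound immediate PersistentVolumeClaims"*) echo "volume-not-created" ;;
    *) echo "unknown" ;;
  esac
}

# Example with a typical FailedScheduling event message:
classify_pending "Warning  FailedScheduling  pod has unbound immediate PersistentVolumeClaims"
# prints: volume-not-created
```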

Resolution

Insufficient resources

Modify the Cassandra node pool so that it has sufficient CPU and memory resources. See Resizing a node pool for details.

Persistent volume not created

If you find a persistent volume issue, describe the PersistentVolumeClaim (PVC) to learn why it is not being created:

  1. List the PVCs in the cluster:
    kubectl -n namespace get pvc
    
    NAME                                        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    cassandra-data-apigee-cassandra-default-0   Bound    pvc-b247faae-0a2b-11ea-867b-42010a80006e   10Gi       RWO            standard       15m
    ...
  2. Describe the PVC for the pod that is failing. For example, the following command describes the PVC bound to the pod apigee-cassandra-default-0:
    kubectl -n apigee describe pvc cassandra-data-apigee-cassandra-default-0
    
    Events:
      Type     Reason              Age                From                         Message
      ----     ------              ----               ----                         -------
      Warning  ProvisioningFailed  3m (x143 over 5h)  persistentvolume-controller  storageclass.storage.k8s.io "apigee-sc" not found

    Note that in this example, the StorageClass named apigee-sc does not exist. To resolve this problem, create the missing StorageClass in the cluster, as explained in Change the default StorageClass.
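As an illustration, a minimal StorageClass supplying the missing name could look like the sketch below. The provisioner shown is the GKE persistent disk CSI driver and is an assumption; use the provisioner and parameters appropriate to your platform:

```yaml
# Hypothetical StorageClass matching the name the PVC in the example expects.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: apigee-sc
provisioner: pd.csi.storage.gke.io  # assumption: GKE; use your platform's CSI driver
parameters:
  type: pd-ssd
```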

See also Debugging Pods.

Missing Amazon EBS CSI driver

If the hybrid instance is running on an EKS cluster, make sure the EKS cluster is using the Amazon EBS container storage interface (CSI) driver. See Amazon EBS CSI migration frequently asked questions for details.
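One quick way to confirm is to check whether the driver's CSIDriver object is registered; ebs.csi.aws.com is the name the EBS CSI driver registers under. The has_ebs_csi helper below is illustrative only; on a live cluster you would pass it the output of kubectl get csidriver:

```shell
# Report whether the EBS CSI driver appears in `kubectl get csidriver` output.
has_ebs_csi() {
  printf '%s\n' "$1" | grep -q 'ebs\.csi\.aws\.com' && echo installed || echo missing
}

# Sample output, as it would appear on a cluster with the driver installed:
has_ebs_csi 'NAME              ATTACHREQUIRED   PODINFOONMOUNT
ebs.csi.aws.com   true             false'
# prints: installed
```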

Cassandra pods are stuck in the CrashLoopBackOff state

Symptom

When starting up, the Cassandra pods remain in the CrashLoopBackOff state.

Error message

When you use kubectl to view the pod states, you see that one or more Cassandra pods are in the CrashLoopBackOff state. This state indicates that the pod is crashing repeatedly: Kubernetes starts the container, the container exits with an error, and Kubernetes waits an increasing back-off interval before restarting it. For example:

 kubectl get pods -n namespace 
 
NAME                                     READY   STATUS            RESTARTS   AGE
adah-resources-install-4762w             0/4     Completed         0          10m
apigee-cassandra-default-0               0/1     CrashLoopBackOff  0          10m
...

Possible causes

A pod stuck in the CrashLoopBackOff state can have multiple causes. For example:

Cause Description
Data center differs from previous data center This error indicates that the Cassandra pod has a persistent volume containing data from a previous cluster, so the new pods are not able to join the old cluster. This usually happens when stale persistent volumes from a previous Cassandra cluster remain on the same Kubernetes node, for example if you delete and recreate Cassandra in the cluster.
Truststore directory not found This error indicates that the Cassandra pod is not able to create a TLS connection. This usually happens when the provided keys and certificates are invalid, missing, or have other issues.

Diagnosis

Check the Cassandra error log to determine the cause of the problem.

  1. List the pods to get the ID of the Cassandra pod that is failing:
    kubectl get pods -n namespace 
    
  2. Check the failing pod's log:
    kubectl logs pod_id -n namespace
    
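Both failure signatures covered in the next section can be searched for directly in the saved log. A minimal sketch, using a two-line sample in place of real kubectl logs output and an illustrative /tmp/cassandra.log path:

```shell
# A short sample standing in for a real Cassandra pod log; on a live cluster:
#   kubectl logs pod_id -n namespace > /tmp/cassandra.log
cat <<'EOF' > /tmp/cassandra.log
INFO  [main] Startup checks passed
Cannot start node if snitch's data center (us-east1) differs from previous data center
EOF

# Search for the two failure signatures discussed in this guide.
grep -E "differs from previous data center|truststore\.p12" /tmp/cassandra.log
# prints the snitch data-center line
```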

Resolution

Look for the following clues in the pod's log:

Data center differs from previous data center

If you see this log message:

Cannot start node if snitch's data center (us-east1) differs from previous data center
  • Check if there are any stale or old PVCs in the cluster and delete them.
  • If this is a fresh install, delete all the PVCs and retry the setup. For example:
     kubectl -n namespace get pvc
     kubectl -n namespace delete pvc cassandra-data-apigee-cassandra-default-0
    
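When several stale PVCs are present, their names can be selected by prefix before deletion. A sketch against sample names; on a live cluster the list would come from kubectl -n namespace get pvc --no-headers, and each selected name would then be passed to kubectl delete pvc:

```shell
# Select Cassandra data PVCs by their well-known name prefix; the sample names
# below stand in for real `kubectl get pvc` output.
printf '%s\n' \
  'cassandra-data-apigee-cassandra-default-0' \
  'cassandra-data-apigee-cassandra-default-1' \
  'unrelated-pvc' |
  grep '^cassandra-data-apigee-cassandra-'
# prints the two cassandra-data-... names
```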

Truststore directory not found

If you see this log message:

Caused by: java.io.FileNotFoundException: /apigee/cassandra/ssl/truststore.p12
(No such file or directory)

Verify that the keys and certificates provided in your overrides file are correct and valid. For example:

cassandra:
  sslRootCAPath: path_to_root_ca-file
  sslCertPath: path-to-tls-cert-file
  sslKeyPath: path-to-tls-key-file

Create a client container for debugging

This section explains how to create a client container from which you can access Cassandra debugging utilities such as cqlsh. These utilities allow you to query Cassandra tables and can be useful for debugging purposes.

Create the client container

To create the client container, follow these steps:

  1. The container uses the TLS certificate from the apigee-cassandra-user-setup pod. The first step is to fetch the name of this certificate:
    kubectl get secrets -n apigee --field-selector type=kubernetes.io/tls | grep apigee-cassandra-user-setup | awk '{print $1}'

    This command returns the certificate name. For example: apigee-cassandra-user-setup-rg-hybrid-b7d3b9c-tls.

  2. Open a new file and paste the following pod spec into it:
    apiVersion: v1
    kind: Pod
    metadata:
      labels:
      name: cassandra-client-name # For example: my-cassandra-client
      namespace: apigee
    spec:
      containers:
      - name: cassandra-client-name
        image: "gcr.io/apigee-release/hybrid/apigee-hybrid-cassandra-client:1.8.8"
        imagePullPolicy: Always
        command:
        - sleep
        - "3600"
        env:
        - name: CASSANDRA_SEEDS
          value: apigee-cassandra-default.apigee.svc.cluster.local
        - name: APIGEE_DML_USER
          valueFrom:
            secretKeyRef:
              key: dml.user
              name: apigee-datastore-default-creds
        - name: APIGEE_DML_PASSWORD
          valueFrom:
            secretKeyRef:
              key: dml.password
              name: apigee-datastore-default-creds
        volumeMounts:
        - mountPath: /opt/apigee/ssl
          name: tls-volume
          readOnly: true
      volumes:
      - name: tls-volume
        secret:
          defaultMode: 420
          secretName: your-secret-name # For example: apigee-cassandra-user-setup-rg-hybrid-b7d3b9c-tls
      restartPolicy: Never
  3. Save the file with a .yaml extension. For example: my-spec.yaml.
  4. Apply the spec to your cluster:
    kubectl apply -f your-spec-file.yaml -n apigee
  5. Log in to the container:
    kubectl exec -n apigee cassandra-client -it -- bash
  6. Connect to the Cassandra cqlsh interface with the following command. Enter the command exactly as shown:
    cqlsh ${CASSANDRA_SEEDS} -u ${APIGEE_DML_USER} -p ${APIGEE_DML_PASSWORD} --ssl

Deleting the client pod

Use this command to delete the Cassandra client pod:

kubectl delete pods -n apigee cassandra-client

Additional resources

See Introduction to Apigee and Apigee hybrid playbooks.
