This document describes known issues for version 1.6 of Google Distributed Cloud.
ClientConfig custom resource
gkectl update reverts any manual changes that you have made to the ClientConfig custom resource. We strongly recommend that you back up the ClientConfig resource after every manual change.
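One way to take that backup, sketched here on the assumption that the ClientConfig is the default resource in the kube-public namespace (the same resource patched by the OIDC workaround later in this document):
# Save the current ClientConfig to a local file after each manual change.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get clientconfig default \
    -n kube-public -o yaml > clientconfig-backup.yaml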
kubectl describe CSINode and gkectl diagnose snapshot
kubectl describe CSINode and gkectl diagnose snapshot sometimes fail due to the OSS Kubernetes issue on dereferencing nil pointer fields.
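For example, a describe call of this form can trigger the failure on affected versions (NODE_NAME is a placeholder for one of your node names):
# May fail with a nil pointer dereference on versions carrying the OSS bug.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG describe csinode NODE_NAME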
OIDC and the CA certificate
The OIDC provider doesn't use the common CA by default. You must explicitly supply the CA certificate.
Upgrading the admin cluster from 1.5 to 1.6.0 breaks 1.5 user clusters that use an OIDC provider and have no value for authentication.oidc.capath in the user cluster configuration file.
To work around this issue, run the following script:
USER_CLUSTER_KUBECONFIG=YOUR_USER_CLUSTER_KUBECONFIG
IDENTITY_PROVIDER=YOUR_OIDC_PROVIDER_ADDRESS
openssl s_client -showcerts -verify 5 -connect $IDENTITY_PROVIDER:443 < /dev/null | awk '/BEGIN CERTIFICATE/,/END CERTIFICATE/{ if(/BEGIN CERTIFICATE/){i++}; out="tmpcert"i".pem"; print >out}'
ROOT_CA_ISSUED_CERT=$(ls tmpcert*.pem | tail -1)
ROOT_CA_CERT="/etc/ssl/certs/$(openssl x509 -in $ROOT_CA_ISSUED_CERT -noout -issuer_hash).0"
cat tmpcert*.pem $ROOT_CA_CERT > certchain.pem
CERT=$(echo $(base64 certchain.pem) | sed 's\ \\g')
rm tmpcert1.pem tmpcert2.pem
kubectl --kubeconfig $USER_CLUSTER_KUBECONFIG patch clientconfig default -n kube-public --type json -p "[{ \"op\": \"replace\", \"path\": \"/spec/authentication/0/oidc/certificateAuthorityData\", \"value\":\"${CERT}\"}]"
Replace the following:
-
YOUR_OIDC_PROVIDER_ADDRESS: The address of your OIDC provider.
-
YOUR_USER_CLUSTER_KUBECONFIG: The path of your user cluster kubeconfig file.
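To verify the patch, a minimal check, assuming the same shell variables as the script above; it decodes the patched chain and prints the subject of its first certificate:
kubectl --kubeconfig $USER_CLUSTER_KUBECONFIG get clientconfig default -n kube-public \
    -o jsonpath='{.spec.authentication[0].oidc.certificateAuthorityData}' \
    | base64 -d | openssl x509 -noout -subject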
gkectl check-config validation fails: can't find F5 BIG-IP partitions
- Symptoms
-
Validation fails because F5 BIG-IP partitions can't be found, even though they exist.
- Potential causes
-
An issue with the F5 BIG-IP API can cause validation to fail.
- Resolution
-
Try running gkectl check-config again.
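For reference, a hedged sketch of the rerun; CLUSTER_CONFIG_FILE stands in for your cluster configuration file:
gkectl check-config --config CLUSTER_CONFIG_FILE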
Disruption for workloads with PodDisruptionBudgets
Upgrading clusters can cause disruption or downtime for workloads that use PodDisruptionBudgets (PDBs).
Nodes fail to complete their upgrade process
If you have PodDisruptionBudget objects configured that are unable to allow any additional disruptions, nodes might fail to upgrade to the control plane version after repeated attempts. To prevent this failure, we recommend that you scale up the Deployment or HorizontalPodAutoscaler to allow the node to drain while still respecting the PodDisruptionBudget configuration; a hypothetical example follows the command below.
To see all PodDisruptionBudget objects that do not allow any disruptions:
kubectl get poddisruptionbudget --all-namespaces -o jsonpath='{range .items[?(@.status.disruptionsAllowed==0)]}{.metadata.name}/{.metadata.namespace}{"\n"}{end}'
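As a hypothetical illustration (the deployment name, namespace, and replica count are invented for the example), scaling up a Deployment behind a blocking PodDisruptionBudget gives the node drain an eviction to work with:
# Raise replicas above the PDB minimum so at least one pod can be evicted.
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG scale deployment my-app \
    --namespace my-namespace --replicas=3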
Renewal of certificates might be required before an admin cluster upgrade
Before you begin the admin cluster upgrade process, you should make sure that your admin cluster certificates are currently valid, and renew these certificates if they are not.
Admin cluster certificate renewal process
-
Make sure that OpenSSL is installed on the admin workstation before you begin.
-
Set the KUBECONFIG variable:
KUBECONFIG=ABSOLUTE_PATH_ADMIN_CLUSTER_KUBECONFIG
Replace ABSOLUTE_PATH_ADMIN_CLUSTER_KUBECONFIG with the absolute path to the admin cluster kubeconfig file.
-
Get the IP address and SSH keys for the admin master node:
kubectl --kubeconfig "${KUBECONFIG}" get secrets -n kube-system sshkeys \
    -o jsonpath='{.data.vsphere_tmp}' | base64 -d > \
    ~/.ssh/admin-cluster.key && chmod 600 ~/.ssh/admin-cluster.key
export MASTER_NODE_IP=$(kubectl --kubeconfig "${KUBECONFIG}" get nodes -o \
    jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}' \
    --selector='node-role.kubernetes.io/master')
-
Check if the certificates are expired:
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}" \
    "sudo kubeadm alpha certs check-expiration"
If the certificates are expired, you must renew them before upgrading the admin cluster (see the renewal sketch at the end of these steps).
-
Because the admin cluster kubeconfig file also expires if the admin certificates expire, you should back up this file before expiration.
-
Back up the admin cluster kubeconfig file:
ssh -i ~/.ssh/admin-cluster.key ubuntu@"${MASTER_NODE_IP}" \
    "sudo cat /etc/kubernetes/admin.conf" > new_admin.conf
vi "${KUBECONFIG}"