This page describes how to delete a Google Distributed Cloud user cluster.
Overview
Google Distributed Cloud supports deletion of user clusters via gkectl.
If the cluster is unhealthy (for example, if its control plane is unreachable or the cluster failed to bootstrap), refer to Deleting an unhealthy user cluster.
Deleting a user cluster
To delete a user cluster, run the following command:
gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME]
where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.
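For example, with an admin kubeconfig named `kubeconfig` in the current directory and a user cluster named `my-user-cluster` (both hypothetical values), the invocation could be wrapped as follows. Because deletion is destructive, this sketch only prints the command unless you explicitly clear the `DRY_RUN` guard:

```shell
# Hypothetical values -- substitute your own admin kubeconfig path
# and user cluster name.
ADMIN_KUBECONFIG="kubeconfig"
CLUSTER_NAME="my-user-cluster"

# With DRY_RUN=1 (the default here), print the command instead of running it.
DRY_RUN="${DRY_RUN:-1}"
cmd="gkectl delete cluster --kubeconfig $ADMIN_KUBECONFIG --cluster $CLUSTER_NAME"
if [ "$DRY_RUN" = "1" ]; then
  echo "$cmd"
else
  $cmd
fi
```

Set `DRY_RUN=0` only once you have confirmed the cluster name, since the deletion cannot be undone.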
If you are using the Seesaw bundled load balancer, delete the load balancer.
Known issue
In version 1.1.2, there is a known issue that results in this error if you are using a vSAN datastore:
Error deleting machine object xxx; Failed to delete machine xxx: failed to ensure disks detached: failed to convert disk path "" to UUID path: failed to convert full path "ds:///vmfs/volumes/vsan:52ed29ed1c0ccdf6-0be2c78e210559c7/": ServerFaultCode: A general system error occurred: Invalid fault
See the workaround in the release notes.
Deleting an unhealthy user cluster
You can pass --force to delete a user cluster if the cluster is unhealthy. A user cluster might be unhealthy if its control plane is unreachable, if the cluster fails to bootstrap, or if gkectl delete cluster fails to delete the cluster.
To force delete a cluster:
gkectl delete cluster \
    --kubeconfig [ADMIN_CLUSTER_KUBECONFIG] \
    --cluster [CLUSTER_NAME] \
    --force
where [ADMIN_CLUSTER_KUBECONFIG] is the admin cluster's kubeconfig file, and [CLUSTER_NAME] is the name of the user cluster you want to delete.
Cleaning up external resources
After a forced deletion, some resources might be left over in F5 or vSphere. The following sections explain how to clean up these leftover resources.
Cleaning up a user cluster's VMs in vSphere
To verify that the user cluster's VMs are deleted, perform the following steps:
- From the vSphere Web Client's left-hand Navigator menu, click Hosts and Clusters.
- Find the resource pool for your admin cluster. This is the value of vCenter.resourcePool in your admin cluster configuration file.
- Under that resource pool, locate VMs prefixed with the name of your user cluster. These are the control-plane nodes for your user cluster. There will be one or three of these, depending on whether your user cluster has a high-availability control plane.
- Find the resource pool for your user cluster. This is the value of vCenter.resourcePool in your user cluster configuration file. If your user cluster configuration file does not specify a resource pool, it is inherited from the admin cluster.
- Under that resource pool, locate VMs prefixed with the name of a node pool in your user cluster. These are the worker nodes in your user cluster.
- For each control-plane node and each worker node:
  - From the vSphere Web Client, right-click the VM and select Power > Power Off.
  - After the VM is powered off, right-click the VM and select Delete from Disk.
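The same cleanup can be done from the command line with the open-source govc CLI for vSphere. The sketch below is a dry run under assumed values: the datacenter path, resource pool paths, and cluster name are all placeholders you must replace, and it only prints the commands rather than executing them, since powering off and destroying VMs is irreversible:

```shell
# Dry-run sketch of the vSphere cleanup using govc (assumed to be installed
# and configured via GOVC_URL / GOVC_USERNAME / GOVC_PASSWORD).
# All paths and names below are illustrative assumptions.
CLUSTER_NAME="my-user-cluster"
POOLS="/dc/host/cluster/Resources/admin-pool /dc/host/cluster/Resources/user-pool"

plan=""
for pool in $POOLS; do
  # govc find lists VMs under a resource pool whose names match a pattern.
  plan="$plan
govc find '$pool' -type m -name '${CLUSTER_NAME}*'"
done
# For each VM that the find commands report, you would then run:
plan="$plan
govc vm.power -off -force VM_NAME
govc vm.destroy VM_NAME"
echo "$plan"
```

Review the list printed by `govc find` carefully before running the power-off and destroy commands against any VM.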
Cleaning up a user cluster's F5 partition
If there are any entries remaining in the user cluster's partition, perform the following steps:
- From the F5 BIG-IP console, in the top-right corner of the console, switch to the user cluster partition you want to clean up.
- Select Local Traffic > Virtual Servers > Virtual Server List.
- In the Virtual Servers menu, remove all the virtual IPs.
- Select Pools, then delete all the pools.
- Select Nodes, then delete all the nodes.
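If you have shell access to the BIG-IP, the same steps can be expressed with its tmsh utility. This is a dry-run sketch: the partition name is an assumption, and the command is printed rather than executed so you can review it before running it on the BIG-IP itself:

```shell
# Assumed partition name -- substitute your user cluster's partition.
PARTITION="my-user-cluster"

# cd into the partition, then delete all virtual servers, pools, and nodes.
cmd="tmsh -c 'cd /$PARTITION; delete ltm virtual all; delete ltm pool all; delete ltm node all'"
echo "$cmd"   # print instead of executing; run the command on the BIG-IP
```

Double-check that you are targeting the correct partition before executing, as `delete ... all` removes every object of that type in the partition.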
After you have finished
After gkectl finishes deleting the user cluster, you can delete the user cluster's kubeconfig.
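Deleting the kubeconfig is just removing the file; the filename below is an assumption, so substitute the path where your user cluster's kubeconfig was written:

```shell
# Hypothetical kubeconfig filename -- replace with your actual path.
USER_KUBECONFIG="${USER_KUBECONFIG:-my-user-cluster-kubeconfig}"

# Remove the now-stale kubeconfig; -f avoids an error if it is already gone.
rm -f -- "$USER_KUBECONFIG"
```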