Update clusters

After you create a cluster with bmctl, you can update the custom resources of that cluster. The configuration file is stored as bmctl-workspace/CLUSTER-NAME/CLUSTER-NAME.yaml unless you specified a different location.

Add or remove nodes in a cluster

In Google Distributed Cloud, you add or remove nodes in a cluster by editing the cluster's node pool definitions. You can use the bmctl command to change these definitions.

There are three different kinds of node pools in Google Distributed Cloud: control plane, load balancer, and worker node pools.

View node status

You can view the status of nodes and their respective node pools with the kubectl get command.

For example, the following command shows the status of the node pools in the cluster namespace my-cluster:

kubectl -n my-cluster get nodepools.baremetal.cluster.gke.io

The system returns results similar to the following:

 NAME                    READY   RECONCILING   STALLED   UNDERMAINTENANCE   UNKNOWN
my-cluster              3       0             0         0                  0
my-cluster-lb           2       0             0         0                  0
np1                     3       0             0         0                  0 

If you need more information on diagnosing your clusters, see Create snapshots for diagnosing clusters.

Change nodes

Most node changes are specified in the cluster config file, which is then applied to the cluster. We recommend you use the cluster config file as the primary source for updating your cluster. It is a best practice to store your config file in a version control system to track changes for troubleshooting purposes. For all cluster types, use the bmctl update command to update your cluster with your node changes.

The Google Distributed Cloud cluster config file includes a header section with credential information. The credential entries and the rest of the config file are valid YAML, but the credential entries are not valid for the cluster resource. Use bmctl update credentials for credential updates.
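As a sketch, the credentials header at the top of a bmctl-generated cluster config file looks like the following. The field names are the standard credential keys; the paths shown are placeholder assumptions, not values from this document:

```yaml
# Credentials header of a bmctl-generated cluster config file (sketch).
# These entries are consumed by bmctl update credentials, not by the
# cluster resource itself. All paths below are placeholders.
gcrKeyPath: bmctl-workspace/.sa-keys/gcr.json
sshPrivateKeyPath: /home/admin/.ssh/id_rsa
gkeConnectAgentServiceAccountKeyPath: bmctl-workspace/.sa-keys/connect-agent.json
gkeConnectRegisterServiceAccountKeyPath: bmctl-workspace/.sa-keys/connect-register.json
cloudOperationsServiceAccountKeyPath: bmctl-workspace/.sa-keys/cloud-ops.json
```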

When you remove nodes from a cluster, they are first drained of any pods. Nodes will not be removed from the cluster if pods can't be rescheduled on other nodes. The bmctl update command will parse the cluster configuration file and apply custom resources based on the parsed result.

Here's a sample configuration with two nodes:

---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: nodepool1
  namespace: cluster-cluster1
spec:
  clusterName: cluster1
  nodes:
  - address: 172.18.0.5
  - address: 172.18.0.6

You can remove a node from the node pool by deleting its entry:

---
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: nodepool1
  namespace: cluster-cluster1
spec:
  clusterName: cluster1
  nodes:
  - address: 172.18.0.5

To update the cluster, run the following command for self-managing clusters, such as admin and standalone clusters:

bmctl update cluster -c CLUSTER_NAME \
    --kubeconfig=KUBECONFIG

After the bmctl update command completes successfully, it takes some time for the machine-init or machine-reset pods to finish.

The following sections describe some important differences for updating specific node types.

Control plane and load balancer nodes

The control plane and load balancer node pool specifications for Google Distributed Cloud are special. These specifications declare and control critical cluster resources. The canonical source for these resources is their respective sections in the cluster config file:

  • spec.controlPlane.nodePoolSpec
  • spec.loadBalancer.nodePoolSpec

You add or remove control plane or load balancer nodes by editing the array of addresses under nodes in the corresponding section of the cluster config file.

In a high availability (HA) configuration, an odd number of control plane nodes (three or more) is required to establish a quorum, so that if one control plane node fails, the others can take over. If you temporarily have an even number of nodes while adding or removing nodes for maintenance or replacement, your deployment maintains HA as long as you have quorum.
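For example, an HA control plane with three nodes is declared in the cluster config file as follows (a sketch; the addresses are illustrative, not values from this document):

```yaml
# Control plane node pool section of the cluster config file (sketch).
spec:
  controlPlane:
    nodePoolSpec:
      nodes:
      # An odd number of addresses (three or more) preserves quorum.
      - address: 10.200.0.4
      - address: 10.200.0.5
      - address: 10.200.0.6
```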

Worker nodes

You can add or remove worker nodes directly with the bmctl command. Worker node pools must have at least one node. However, if you'd like to remove the whole node pool, use the kubectl command. In the following example, the command deletes a node pool named np1, where the cluster namespace is my-cluster:

kubectl -n my-cluster delete nodepool np1

Other mutable fields

Besides adding and removing nodes, you can also use the bmctl update command to modify certain elements of your cluster configuration. Typically, to update your cluster resource, you edit your local version of the cluster configuration file and use bmctl update to apply your changes. The bmctl update command is similar to the kubectl apply command.

The following sections outline some common examples for updating an existing cluster by either changing a field value or modifying a related custom resource.

loadBalancer.addressPools

The addressPools section contains fields for specifying load-balancing pools for bundled load balancers. You can add more load-balancing address pools at any time, but you can't remove or modify any existing address pools.

addressPools:
- name: pool1
  addresses:
  - 192.168.1.0-192.168.1.4
  - 192.168.1.240/28
- name: pool2
  addresses:
  - 192.168.1.224/28

bypassPreflightCheck

The default value of the bypassPreflightCheck field is false . If you set this field to true in the cluster configuration file, the internal preflight checks are ignored when you apply resources to existing clusters.

apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: cluster1
  namespace: cluster-cluster1
  annotations:
    baremetal.cluster.gke.io/private-mode: "true"
spec:
  bypassPreflightCheck: true

loginUser

You can set the loginUser field under the node access configuration. This field supports passwordless sudo capability for machine login.

apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: cluster1
  namespace: cluster-cluster1
  annotations:
    baremetal.cluster.gke.io/private-mode: "true"
spec:
  nodeAccess:
    loginUser: abm

NetworkGatewayGroup

The NetworkGatewayGroup custom resource is used to provide floating IP addresses for advanced networking features, such as the egress NAT gateway or the bundled load-balancing feature with BGP. To use the NetworkGatewayGroup custom resource and related networking features, you must set clusterNetwork.advancedNetworking to true when you create your clusters.
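As a sketch, that creation-time setting looks like this in the cluster config file:

```yaml
# Cluster spec fragment (sketch). Per the text above, this must be
# set when you create the cluster in order to use NetworkGatewayGroup
# and related advanced networking features.
spec:
  clusterNetwork:
    advancedNetworking: true
```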

apiVersion: networking.gke.io/v1
kind: NetworkGatewayGroup
metadata:
  name: default
  namespace: cluster-bm
spec:
  floatingIPs:
  - 10.0.1.100
  - 10.0.2.100

BGPLoadBalancer

When you configure bundled load balancers with BGP, data plane load balancing uses, by default, the same external peers that were specified for control plane peering. Alternatively, you can configure data plane load balancing separately, using the BGPLoadBalancer custom resource (and the BGPPeer custom resource). For more information, see Configure bundled load balancers with BGP.

apiVersion: networking.gke.io/v1
kind: BGPLoadBalancer
metadata:
  name: default
  namespace: cluster-bm
spec:
  peerSelector:
    cluster.baremetal.gke.io/default-peer: "true"

BGPPeer

When you configure bundled load balancers with BGP, data plane load balancing uses, by default, the same external peers that were specified for control plane peering. Alternatively, you can configure data plane load balancing separately, using the BGPPeer custom resource (and the BGPLoadBalancer custom resource). For more information, see Configure bundled load balancers with BGP.

apiVersion: networking.gke.io/v1
kind: BGPPeer
metadata:
  name: bgppeer1
  namespace: cluster-bm
  labels:
    cluster.baremetal.gke.io/default-peer: "true"
spec:
  localASN: 65001
  peerASN: 65002
  peerIP: 10.0.3.254
  sessions: 2

NetworkAttachmentDefinition

You can use the bmctl update command to modify NetworkAttachmentDefinition custom resources that correspond to the network.

apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: gke-network-1
  namespace: cluster-my-cluster
spec:
  config: '{
    "type": "ipvlan",
    "master": "enp2342",
    "mode": "l2",
    "ipam": {
      "type": "whereabouts",
      "range": "172.120.0.0/24"
    }
  }'

After you modify the config file, you can update the cluster by running the bmctl update command. It will parse the cluster config file and apply custom resources based on the parsed result.

For self-managing clusters, such as admin and standalone clusters, run:

bmctl update cluster -c CLUSTER_NAME \
    --kubeconfig=KUBECONFIG

For user clusters, run:

bmctl update cluster -c CLUSTER_NAME \
    --admin-kubeconfig=ADMIN_KUBECONFIG