Network requirements

External network requirements

Google Distributed Cloud requires an internet connection for operational purposes: it retrieves cluster components from Container Registry, and the cluster is registered with Connect.

You can connect to Google using the public internet (with HTTPS), through a Virtual Private Network (VPN), or through a Dedicated Interconnect.

Internal network requirements

Google Distributed Cloud can work with either Layer 2 or Layer 3 connectivity between cluster nodes, but requires the load balancer nodes to be in the same Layer 2 domain. The load balancer nodes can be the control plane nodes or a dedicated set of nodes. See Choosing and configuring load balancers for configuration information.

The Layer 2 network requirement applies whether you run the load balancer on the control plane node pool or in a dedicated set of nodes.

The requirements for load balancer machines are:

  • All load balancer machines for a given cluster must be in the same Layer 2 domain.
  • All VIPs must be in the load balancer machine subnet and routable to the gateway of the subnet (see the sketch after this list).
  • Users are responsible for allowing ingress load balancer traffic.
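
The following is a minimal, illustrative sketch of how a bundled load balancer address pool might be expressed in the cluster configuration file. The subnet and address range are hypothetical placeholders, and the exact field layout can vary by version, so refer to Choosing and configuring load balancers for the authoritative format.

loadBalancer:
  mode: bundled
  addressPools:              # pool that Service VIPs are drawn from
  - name: pool1              # hypothetical pool name
    addresses:
    - 10.0.0.10-10.0.0.20    # hypothetical range; must sit in the load balancer machine subnet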

Pod networking

Google Distributed Cloud 1.7.0 and later versions allow you to configure up to 250 pods per node. Kubernetes assigns a CIDR block to each node so that each pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of pods per node. The following table lists the size of the CIDR block that Kubernetes assigns to each node based on the configured maximum pods per node:

Maximum pods per node | CIDR block per node | Number of IP addresses
32                    | /26                 | 64
33 - 64               | /25                 | 128
65 - 128              | /24                 | 256
129 - 250             | /23                 | 512

Running 250 pods per node requires Kubernetes to reserve a /23 CIDR block for each node. Assuming that the cluster's clusterNetwork.pods.cidrBlocks is configured to the default /16 range, this limits the cluster to 2^(23-16) = 128 nodes. If you intend to grow the cluster beyond this limit, you can either increase the value of clusterNetwork.pods.cidrBlocks or decrease the value of nodeConfig.podDensity.maxPodsPerNode.
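
For illustration, the two fields discussed above appear in the cluster configuration roughly as follows; the surrounding structure is abbreviated, and the values shown are simply the defaults and limits described in this section.

clusterNetwork:
  pods:
    cidrBlocks:
    - 192.168.0.0/16         # default /16 pod CIDR for the whole cluster
nodeConfig:
  podDensity:
    maxPodsPerNode: 250      # forces a /23 per node, so at most 2^(23-16) = 128 nodes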

Single user cluster deployment with high availability

The following diagram illustrates a number of key networking concepts for Google Distributed Cloud in one possible network configuration.

(Diagram: Google Distributed Cloud typical network configuration)

  • The control plane nodes run the load balancers, and they are all on the same Layer 2 network, while other connections, including worker nodes, require only Layer 3 connectivity.
  • Configuration files define IP addresses for worker node pools. They also define VIPs for the following purposes (see the sketch after this list):
    • Services
    • Ingress
    • Control plane access through the Kubernetes API
  • A connection to Google Cloud is also required.
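
As an illustrative sketch only, these VIPs are typically set in the loadBalancer section of the cluster configuration; the addresses below are hypothetical placeholders, and Service VIPs are drawn from an address pool like the one sketched earlier.

loadBalancer:
  vips:
    controlPlaneVIP: 10.0.0.8   # hypothetical VIP for access to the Kubernetes API
    ingressVIP: 10.0.0.2        # hypothetical VIP for ingress traffic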

Port usage

This section shows how UDP and TCP ports are used on cluster and load balancer nodes.

Master nodes

Protocol | Direction | Port range  | Purpose                                          | Used by
UDP      | Inbound   | 6081        | GENEVE Encapsulation                             | Self
TCP      | Inbound   | 22          | Provisioning and updates of admin cluster nodes  | Admin workstation
TCP      | Inbound   | 6444        | Kubernetes API server                            | All
TCP      | Inbound   | 2379 - 2380 | etcd server client API                           | kube-apiserver, etcd
TCP      | Inbound   | 10250       | kubelet API                                      | Self, Control plane
TCP      | Inbound   | 10251       | kube-scheduler                                   | Self
TCP      | Inbound   | 10252       | kube-controller-manager                          | Self
TCP      | Both      | 4240        | CNI health check                                 | All

Worker nodes

Protocol | Direction | Port range    | Purpose                                         | Used by
TCP      | Inbound   | 22            | Provisioning and updates of user cluster nodes  | Admin cluster nodes
UDP      | Inbound   | 6081          | GENEVE Encapsulation                            | Self
TCP      | Inbound   | 10250         | kubelet API                                     | Self, Control plane
TCP      | Inbound   | 30000 - 32767 | NodePort Services                               | Self
TCP      | Both      | 4240          | CNI health check                                | All

Load balancer nodes

Protocol | Direction | Port range | Purpose                                         | Used by
TCP      | Inbound   | 22         | Provisioning and updates of user cluster nodes  | Admin cluster nodes
UDP      | Inbound   | 6081       | GENEVE Encapsulation                            | Self
TCP      | Inbound   | 443 *      | Cluster management                              | All
TCP      | Both      | 4240       | CNI health check                                | All
TCP      | Inbound   | 7946       | MetalLB health check                            | LB nodes
UDP      | Inbound   | 7946       | MetalLB health check                            | LB nodes

* This port can be configured in the cluster config, using the controlPlaneLBPort field.
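
As an illustration, here is a hedged sketch of how this override might look in the cluster config; the nesting of the field under loadBalancer.ports is an assumption based on the field name, and 443 is simply the default shown in the table above.

loadBalancer:
  ports:
    controlPlaneLBPort: 443   # assumed location of the field; change to move cluster management off 443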

Multi-cluster port requirements

In a multi-cluster configuration, added clusters must have the following ports open to communicate with the admin cluster.

Protocol | Direction | Port range | Purpose                                    | Used by
TCP      | Inbound   | 22         | Provisioning and updates of cluster nodes  | All nodes
TCP      | Inbound   | 443 *      | Kubernetes API server for added cluster    | Control plane, LB nodes

* This port can be configured in the cluster config, using the controlPlaneLBPort field.

Configuring firewalld ports

Starting with Google Distributed Cloud 1.7.0, you are not required to disable firewalld to run Google Distributed Cloud on Red Hat Enterprise Linux (RHEL) or CentOS. To use firewalld, you must open the UDP and TCP ports used by master, worker, and load balancer nodes as described in Port usage on this page. The following example configurations show how you can open ports with firewall-cmd, the firewalld command-line client.

Master node example configuration

The following example block of commands shows how you can open the needed ports on servers running master nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250-10252/tcp
firewall-cmd --permanent --zone=public --add-port=2379-2380/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR blocks reserved for your pods, as configured in clusterNetwork.pods.cidrBlocks. The default CIDR block for pods is 192.168.0.0/16.

Worker node example configuration

The following example block of commands shows how you can open the needed ports on servers running worker nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR blocks reserved for your pods, as configured in clusterNetwork.pods.cidrBlocks. The default CIDR block for pods is 192.168.0.0/16.

Load balancer node example configuration

The following example block of commands shows how you can open the needed ports on servers running load balancer nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=7946/tcp
firewall-cmd --permanent --zone=public --add-port=7946/udp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR blocks reserved for your pods, as configured in clusterNetwork.pods.cidrBlocks. The default CIDR block for pods is 192.168.0.0/16.
