Network requirements

External network requirements

Google Distributed Cloud requires an internet connection for operational purposes. Google Distributed Cloud retrieves cluster components from Container Registry, and the cluster is registered with Connect.

You can connect to Google by using the public internet through HTTPS, a virtual private network (VPN), or a Dedicated Interconnect connection.

If the machines you are using for your admin workstation and cluster nodes use a proxy server to access the internet, your proxy server must allow some specific connections. For details, see the prerequisites section of Install behind a proxy.
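
When a proxy is in use, the admin workstation typically needs the standard proxy environment variables set before running cluster tooling. A minimal sketch; the proxy host, port, and exclusion list below are placeholders, not values from this guide:

```shell
# Hypothetical proxy endpoint -- replace with your own proxy server and port.
export HTTPS_PROXY="http://proxy.example.com:3128"
# Keep local and cluster-internal traffic off the proxy.
export NO_PROXY="localhost,127.0.0.1,192.168.0.0/16"
```

The NO_PROXY list should include your pod and node CIDR ranges so that intra-cluster traffic is not sent through the proxy.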

Internal network requirements

Google Distributed Cloud can work with Layer 2 or Layer 3 connectivity between cluster nodes, but requires load balancer nodes to have Layer 2 connectivity. The load balancer nodes can be the control plane nodes or a dedicated set of nodes. For more information, see Choosing and configuring load balancers.

The Layer 2 connectivity requirement applies whether you run the load balancer on the control plane node pool or in a dedicated set of nodes.

The requirements for load balancer machines are as follows:

  • All load balancers for a given cluster must be in the same Layer 2 domain.
  • All virtual IP addresses (VIPs) must be in the load balancer machine subnet and routable to the gateway of the subnet.
  • You are responsible for allowing ingress load balancer traffic.

Pod networking

Google Distributed Cloud 1.7.0 and later versions allow you to configure up to 250 pods per node. Kubernetes assigns a Classless Inter-Domain Routing (CIDR) block to each node so that each pod can have a unique IP address. The size of the CIDR block corresponds to the maximum number of pods per node. The following table lists the size of the CIDR block that Kubernetes assigns to each node based on the configured maximum pods per node:

Maximum pods per node | CIDR block per node | Number of IP addresses
32                    | /26                 | 64
33 – 64               | /25                 | 128
65 – 128              | /24                 | 256
129 – 250             | /23                 | 512
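
The table's mapping follows from a simple rule: Kubernetes reserves a block with at least twice as many addresses as the configured pod maximum, rounded up to a power of two. A minimal sketch of that calculation; the helper name cidr_for_max_pods is ours, not part of any tool:

```shell
# Smallest per-node CIDR block whose address count is at least
# twice the configured maximum pods per node.
cidr_for_max_pods() {
  pods=$1
  bits=0
  size=1
  while [ "$size" -lt $(( pods * 2 )) ]; do
    bits=$(( bits + 1 ))
    size=$(( size * 2 ))
  done
  echo "/$(( 32 - bits ))"
}

cidr_for_max_pods 32    # prints /26
cidr_for_max_pods 250   # prints /23
```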

Running 250 pods per node requires Kubernetes to reserve a /23 CIDR block for each node. Assuming that your cluster uses the default value of /16 for the clusterNetwork.pods.cidrBlocks field, your cluster has a limit of 2^(23-16) = 128 nodes. If you intend to grow the cluster beyond this limit, you can either increase the value of clusterNetwork.pods.cidrBlocks or decrease the value of nodeConfig.podDensity.maxPodsPerNode.
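
The node-limit arithmetic above can be checked directly in the shell; the prefixes used here are the defaults described in the text:

```shell
# Node limit = 2^(per-node prefix - cluster-wide pod CIDR prefix).
cluster_prefix=16   # clusterNetwork.pods.cidrBlocks default: /16
node_prefix=23      # per-node block when running 250 pods per node
echo $(( 1 << (node_prefix - cluster_prefix) ))   # prints 128
```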

Single user cluster deployment with high availability

The following diagram illustrates a number of key networking concepts for Google Distributed Cloud in one possible network configuration.

[Diagram: Google Distributed Cloud typical network configuration]

Consider the following information to meet the network requirements:

  • The control plane nodes run the load balancers, and all load balancer nodes have Layer 2 connectivity with one another. Other connections, including to worker nodes, require only Layer 3 connectivity.
  • Configuration files define IP addresses for worker node pools. Configuration files also define VIPs for the following purposes:
    • Services
    • Ingress
    • Control plane access through the Kubernetes API
  • You require a connection to Google Cloud.

Port usage

This section shows how UDP and TCP ports are used on cluster and load balancer nodes.

Control plane nodes

Protocol | Direction | Port range   | Purpose                                         | Used by
UDP      | Inbound   | 6081         | GENEVE encapsulation                            | Self
TCP      | Inbound   | 22           | Provisioning and updates of admin cluster nodes | Admin workstation
TCP      | Inbound   | 6444         | Kubernetes API server                           | All
TCP      | Inbound   | 2379 – 2380  | etcd server client API                          | kube-apiserver and etcd
TCP      | Inbound   | 10250        | kubelet API                                     | Self and control plane
TCP      | Inbound   | 10251        | kube-scheduler                                  | Self
TCP      | Inbound   | 10252        | kube-controller-manager                         | Self
TCP      | Inbound   | 10256        | Node health check                               | All
TCP      | Both      | 4240         | CNI health check                                | All

Worker nodes

Protocol | Direction | Port range    | Purpose                                        | Used by
TCP      | Inbound   | 22            | Provisioning and updates of user cluster nodes | Admin cluster nodes
UDP      | Inbound   | 6081          | GENEVE encapsulation                           | Self
TCP      | Inbound   | 10250         | kubelet API                                    | Self and control plane
TCP      | Inbound   | 10256         | Node health check                              | All
TCP      | Inbound   | 30000 – 32767 | NodePort services                              | Self
TCP      | Both      | 4240          | CNI health check                               | All

Load balancer nodes

Protocol | Direction | Port range | Purpose                                        | Used by
TCP      | Inbound   | 22         | Provisioning and updates of user cluster nodes | Admin cluster nodes
UDP      | Inbound   | 6081       | GENEVE encapsulation                           | Self
TCP      | Inbound   | 443*       | Cluster management                             | All
TCP      | Both      | 4240       | CNI health check                               | All
TCP      | Inbound   | 7946       | MetalLB health check                           | Load balancer nodes
UDP      | Inbound   | 7946       | MetalLB health check                           | Load balancer nodes
TCP      | Inbound   | 10256      | Node health check                              | All

* This port can be configured in the cluster config, using the controlPlaneLBPort field.
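
As a sketch only, one plausible placement of that field in the cluster configuration file is shown below; confirm the exact schema against the reference for your cluster version:

```yaml
# Hedged example: field placement may differ across versions.
loadBalancer:
  ports:
    controlPlaneLBPort: 443
```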

Multi-cluster port requirements

In a multi-cluster configuration, clusters that you add must have the following ports open to communicate with the admin cluster.

Protocol | Direction | Port range | Purpose                                  | Used by
TCP      | Inbound   | 22         | Provisioning and updates of cluster nodes | All nodes
TCP      | Inbound   | 443*       | Kubernetes API server for added cluster   | Control plane and load balancer nodes

* This port can be configured in the cluster config, using the controlPlaneLBPort field.

Configure firewalld ports

You are not required to disable firewalld to run Google Distributed Cloud on Red Hat Enterprise Linux (RHEL) or CentOS. To use firewalld, you must open the UDP and TCP ports used by control plane, worker, and load balancer nodes as described in Port usage on this page. The following example configurations show how you can open ports with firewall-cmd, the firewalld command-line utility.
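
Because --permanent changes only take effect after a reload, it can help to generate the full command list first and review it before running anything. A minimal sketch; the helper name open_port_cmds is ours, not part of firewalld:

```shell
# Emit, rather than execute, the firewall-cmd calls for a list of
# PORT/PROTOCOL specs, ending with the reload that applies them.
open_port_cmds() {
  for spec in "$@"; do
    echo "firewall-cmd --permanent --zone=public --add-port=${spec}"
  done
  echo "firewall-cmd --reload"
}

open_port_cmds 22/tcp 6081/udp
```

Piping the output through a review step (or into `sh` once approved) keeps the permanent firewall changes auditable.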

Control plane node example configuration

The following block of commands shows an example of how you can open the needed ports on servers running control plane nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250-10252/tcp
firewall-cmd --permanent --zone=public --add-port=10256/tcp
firewall-cmd --permanent --zone=public --add-port=2379-2380/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source=PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR block reserved for your pods, as configured in the clusterNetwork.pods.cidrBlocks field. The default CIDR block for pods is 192.168.0.0/16.

Worker node example configuration

The following block of commands shows an example of how you can open the needed ports on servers running worker nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --permanent --zone=public --add-port=10256/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source=PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR block reserved for your pods, as configured in the clusterNetwork.pods.cidrBlocks field. The default CIDR block for pods is 192.168.0.0/16.

Load balancer node example configuration

The following block of commands shows an example of how you can open the needed ports on servers running load balancer nodes:

firewall-cmd --permanent --zone=public --add-port=22/tcp
firewall-cmd --permanent --zone=public --add-port=4240/tcp
firewall-cmd --permanent --zone=public --add-port=6444/tcp
firewall-cmd --permanent --zone=public --add-port=7946/tcp
firewall-cmd --permanent --zone=public --add-port=7946/udp
firewall-cmd --permanent --zone=public --add-port=6081/udp
firewall-cmd --permanent --zone=public --add-port=10250/tcp
firewall-cmd --permanent --zone=public --add-port=10256/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
firewall-cmd --permanent --new-zone=k8s-pods
firewall-cmd --permanent --zone=k8s-pods --add-source=PODS_CIDR
firewall-cmd --permanent --zone=k8s-pods --set-target=ACCEPT
firewall-cmd --reload

Replace PODS_CIDR with the CIDR block reserved for your pods, as configured in the clusterNetwork.pods.cidrBlocks field. The default CIDR block for pods is 192.168.0.0/16.
