Setting up intranode visibility


This guide shows you how to set up intranode visibility on a Google Kubernetes Engine (GKE) cluster.

Intranode visibility configures networking on each node in the cluster so that traffic sent from one Pod to another Pod is processed by the cluster's Virtual Private Cloud (VPC) network, even if the Pods are on the same node.

Intranode visibility is disabled by default on Standard clusters and enabled by default in Autopilot clusters.

Architecture

Intranode visibility ensures that packets sent between Pods are always processed by the VPC network, which ensures that firewall rules, routes, flow logs, and packet mirroring configurations apply to the packets.

When a Pod sends a packet to another Pod on the same node, the packet leaves the node and is processed by the Google Cloud network. Then the packet is immediately sent back to the same node and forwarded to the destination Pod.

Intranode visibility deploys the netd DaemonSet.
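To confirm that the netd DaemonSet is running after you enable the feature, you can list it in the kube-system namespace. This is a quick check, assuming you have kubectl access to the cluster:

 kubectl get daemonset netd --namespace kube-system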

Benefits

Intranode visibility provides the following benefits:

  • See flow logs for all traffic between Pods, including traffic between Pods on the same node.
  • Create firewall rules that apply to all traffic among Pods, including traffic between Pods on the same node.
  • Use Packet Mirroring to clone traffic, including traffic between Pods on the same node, and forward it for examination.

Requirements and limitations

Intranode visibility has the following requirements and limitations:

  • Your cluster must be on GKE version 1.15 or later.
  • Intranode visibility is not supported with Windows Server node pools.
  • If you use the ip-masq-agent with intranode visibility, your custom nonMasqueradeCIDRs list must include the cluster's node and Pod IP address ranges to prevent connectivity issues. A sketch of such a configuration follows this list.
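The ip-masq-agent reads its configuration from a ConfigMap named ip-masq-agent in the kube-system namespace. The following is a minimal sketch; the CIDR ranges are illustrative placeholders, not values from this guide:

 apiVersion: v1
 kind: ConfigMap
 metadata:
   name: ip-masq-agent
   namespace: kube-system
 data:
   config: |
     # Illustrative ranges: substitute your cluster's actual node and Pod ranges.
     nonMasqueradeCIDRs:
     - 10.128.0.0/20
     - 10.52.0.0/14
     resyncInterval: 60s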

Firewall rules

When you enable intranode visibility, the VPC network processes all packets sent between Pods, including packets sent between Pods on the same node. This means VPC firewall rules and hierarchical firewall policies consistently apply to Pod-to-Pod communication, regardless of Pod location.

If you configure custom firewall rules for communication within the cluster, carefully evaluate your cluster's networking needs to determine the set of egress and ingress allow rules. You can use connectivity tests to ensure that legitimate traffic is not obstructed. For example, Pod-to-Pod communication is required for network policy to function.
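For example, the following sketch creates an ingress allow rule for Pod-to-Pod traffic; the rule name, NETWORK_NAME, and POD_CIDR placeholders are illustrative and not defined elsewhere in this guide:

 gcloud compute firewall-rules create allow-pod-to-pod \
     --network=NETWORK_NAME \
     --direction=INGRESS \
     --action=ALLOW \
     --rules=all \
     --source-ranges=POD_CIDR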

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

Enable intranode visibility on a new cluster

You can create a cluster with intranode visibility enabled using the gcloud CLI or the Google Cloud console.

gcloud

To create a cluster that has intranode visibility enabled, use the --enable-intra-node-visibility flag:

 gcloud container clusters create CLUSTER_NAME \
     --location=CONTROL_PLANE_LOCATION \
     --enable-intra-node-visibility

Replace the following:

  • CLUSTER_NAME : the name of your new cluster.
  • CONTROL_PLANE_LOCATION : the Compute Engine location of the control plane of your cluster. Provide a region for regional clusters, or a zone for zonal clusters.

Console

To create a cluster that has intranode visibility enabled, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. Click Create.

  3. Enter the Name for your cluster.

  4. In the Configure cluster dialog, next to GKE Standard, click Configure.

  5. Configure your cluster as needed.

  6. From the navigation pane, under Cluster, click Networking.

  7. Select the Enable intranode visibility checkbox.

  8. Click Create.

Enable intranode visibility on an existing cluster

You can enable intranode visibility on an existing cluster using the gcloud CLI or the Google Cloud console.

When you enable intranode visibility for an existing cluster, GKE restarts components in both the control plane and the worker nodes.

gcloud

To enable intranode visibility on an existing cluster, use the --enable-intra-node-visibility flag:

 gcloud container clusters update CLUSTER_NAME \
     --enable-intra-node-visibility

Replace CLUSTER_NAME with the name of your cluster.
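To verify that the update took effect, you can describe the cluster and check the intranode visibility setting; this sketch assumes the field is exposed as networkConfig.enableIntraNodeVisibility in the describe output:

 gcloud container clusters describe CLUSTER_NAME \
     --location=CONTROL_PLANE_LOCATION \
     --format="value(networkConfig.enableIntraNodeVisibility)"

The command prints True when intranode visibility is enabled.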

Console

To enable intranode visibility on an existing cluster, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Networking, click Edit intranode visibility.

  4. Select the Enable intranode visibility checkbox.

  5. Click Save Changes.

This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy and respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.

Disable intranode visibility

You can disable intranode visibility on a cluster using the gcloud CLI or the Google Cloud console.

When you disable intranode visibility for an existing cluster, GKE restarts components in both the control plane and the worker nodes.

gcloud

To disable intranode visibility, use the --no-enable-intra-node-visibility flag:

 gcloud container clusters update CLUSTER_NAME \
     --no-enable-intra-node-visibility

Replace CLUSTER_NAME with the name of your cluster.

Console

To disable intranode visibility, perform the following steps:

  1. Go to the Google Kubernetes Engine page in the Google Cloud console.

    Go to Google Kubernetes Engine

  2. In the cluster list, click the name of the cluster you want to modify.

  3. Under Networking, click Edit intranode visibility.

  4. Clear the Enable intranode visibility checkbox.

  5. Click Save Changes.

This change requires recreating the nodes, which can cause disruption to your running workloads. For details about this specific change, find the corresponding row in the manual changes that recreate the nodes using a node upgrade strategy and respecting maintenance policies table. To learn more about node updates, see Planning for node update disruptions.

Exercise: Verify intranode visibility

This exercise shows you the steps required to enable intranode visibility and confirm that it is working for your cluster.

In this exercise, you perform the following steps:

  1. Enable flow logs for the default subnet in the us-central1 region.
  2. Create a single-node cluster with intranode visibility enabled in the us-central1-a zone.
  3. Create two Pods in your cluster.
  4. Send an HTTP request from one Pod to another Pod.
  5. View the flow log entry for the Pod-to-Pod request.

Enable flow logs

  1. Enable flow logs for the default subnet:

     gcloud compute networks subnets update default \
         --region=us-central1 \
         --enable-flow-logs
    
  2. Verify that the default subnet has flow logs enabled:

     gcloud compute networks subnets describe default \
         --region=us-central1
    

    The output shows that flow logs are enabled, similar to the following:

     ...
    enableFlowLogs: true
    ... 
    

Create a cluster

  1. Create a single-node cluster with intranode visibility enabled:

     gcloud container clusters create flow-log-test \
         --location=us-central1-a \
         --num-nodes=1 \
         --enable-intra-node-visibility
    
  2. Get the credentials for your cluster:

     gcloud container clusters get-credentials flow-log-test \
         --location=us-central1-a
    

Create two Pods

  1. Create a Pod.

    Save the following manifest to a file named pod-1.yaml:

     apiVersion: v1
     kind: Pod
     metadata:
       name: pod-1
     spec:
       containers:
       - name: container-1
         image: google/cloud-sdk:slim
         command:
         - sh
         - -c
         - while true; do sleep 30; done
     
    
  2. Apply the manifest to your cluster:

     kubectl apply -f pod-1.yaml
    
  3. Create a second Pod.

    Save the following manifest to a file named pod-2.yaml:

     apiVersion: v1
     kind: Pod
     metadata:
       name: pod-2
     spec:
       containers:
       - name: container-2
         image: us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0
     
    
  4. Apply the manifest to your cluster:

     kubectl apply -f pod-2.yaml
    
  5. View the Pods:

     kubectl get pod pod-1 pod-2 --output wide
    

    The output shows the IP addresses of your Pods, similar to the following:

     NAME      READY     STATUS    RESTARTS   AGE       IP           ...
    pod-1     1/1       Running   0          1d        10.52.0.13   ...
    pod-2     1/1       Running   0          1d        10.52.0.14   ... 
    

    Note the IP addresses of pod-1 and pod-2.
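    Optionally, capture the two IP addresses in shell variables so you can reference them in later steps; this sketch assumes a POSIX-compatible shell:

     POD_1_IP=$(kubectl get pod pod-1 --output jsonpath='{.status.podIP}')
     POD_2_IP=$(kubectl get pod pod-2 --output jsonpath='{.status.podIP}')
     echo "${POD_1_IP} ${POD_2_IP}"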

Send a request

  1. Get a shell to the container in pod-1 :

     kubectl exec -it pod-1 -- sh
    
  2. In your shell, send a request to pod-2 :

     curl -s POD_2_IP_ADDRESS:8080
    

    Replace POD_2_IP_ADDRESS with the IP address of pod-2.

    The output shows the response from the container running in pod-2:

     Hello, world!
    Version: 2.0.0
    Hostname: pod-2 
    
  3. Type exit to leave the shell and return to your main command-line environment.
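As an alternative to opening an interactive shell, you can run the same request as a one-shot command; curl is available because pod-1 runs the google/cloud-sdk:slim image used earlier in this exercise:

 kubectl exec pod-1 -- curl -s POD_2_IP_ADDRESS:8080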

View flow log entries

To view a flow log entry, use the following command:

 gcloud logging read \
     'logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows" AND jsonPayload.connection.src_ip="POD_1_IP_ADDRESS" AND jsonPayload.connection.dest_ip="POD_2_IP_ADDRESS"'
 

Replace the following:

  • PROJECT_ID : your project ID.
  • POD_1_IP_ADDRESS : the IP address of pod-1.
  • POD_2_IP_ADDRESS : the IP address of pod-2.

The output shows a flow log entry for a request from pod-1 to pod-2. In this example, pod-1 has IP address 10.56.0.13, and pod-2 has IP address 10.56.0.14.

 ...
jsonPayload:
  bytes_sent: '0'
  connection:
    dest_ip: 10.56.0.14
    dest_port: 8080
    protocol: 6
    src_ip: 10.56.0.13
    src_port: 35414
... 
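Flow log entries can take a few minutes to appear after the request. If the query returns nothing at first, or returns too many entries, you can bound it with the --limit and --freshness flags of gcloud logging read, for example:

 gcloud logging read \
     'logName="projects/PROJECT_ID/logs/compute.googleapis.com%2Fvpc_flows"' \
     --limit=5 \
     --freshness=1h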

Clean up

To avoid incurring unwanted charges on your account, perform the following steps to remove the resources you created:

  1. Delete the cluster:

     gcloud container clusters delete -q flow-log-test \
         --location=us-central1-a
    
  2. Disable flow logs for the default subnet:

     gcloud compute networks subnets update default \
         --region=us-central1 \
         --no-enable-flow-logs
    
