Access Managed Lustre instances on GKE with the Managed Lustre CSI driver

This guide describes how to create a Kubernetes volume backed by the Managed Lustre CSI driver in GKE by using dynamic provisioning. The Managed Lustre CSI driver lets you create storage backed by Managed Lustre instances on demand and access it as volumes for your stateful workloads.

Before you begin

Before you start, make sure that you have performed the following tasks:

  • Enable the Google Cloud Managed Lustre API and the Google Kubernetes Engine API.
  • If you want to use the Google Cloud CLI for this task, install and then initialize the gcloud CLI. If you previously installed the gcloud CLI, get the latest version by running gcloud components update.

Set up environment variables

Set up the following environment variables:

  export CLUSTER_NAME=CLUSTER_NAME
  export PROJECT_ID=PROJECT_ID
  export NETWORK_NAME=LUSTRE_NETWORK
  export IP_RANGE_NAME=LUSTRE_IP_RANGE
  export FIREWALL_RULE_NAME=LUSTRE_FIREWALL_RULE
  export LOCATION=ZONE
  export CLUSTER_VERSION=CLUSTER_VERSION

Replace the following:

  • CLUSTER_NAME: the name of the cluster.
  • PROJECT_ID: your Google Cloud project ID.
  • LUSTRE_NETWORK: the shared Virtual Private Cloud (VPC) network where both the GKE cluster and Managed Lustre instance reside.
  • LUSTRE_IP_RANGE: the name for the IP address range created for VPC Network Peering with Managed Lustre.
  • LUSTRE_FIREWALL_RULE: the name for the firewall rule to allow TCP traffic from the IP address range.
  • ZONE: the geographical zone of your GKE cluster; for example, us-central1-a.
  • CLUSTER_VERSION: the GKE cluster version.
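
If you need to choose a value for CLUSTER_VERSION, one way is to list the GKE versions available in your zone. The following is a minimal sketch; the --format expression is only a suggestion to trim the output:

  gcloud container get-server-config \
      --location=${LOCATION} \
      --project=${PROJECT_ID} \
      --format="yaml(channels,validMasterVersions)"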

Set up a VPC network

You must specify the same VPC network when creating the Managed Lustre instance and your GKE clusters.

  1. To enable service networking, run the following command:

     gcloud services enable servicenetworking.googleapis.com \
         --project=${PROJECT_ID}
  2. Create a VPC network. Setting the --mtu flag to 8896 results in a 10% performance gain.

     gcloud compute networks create ${NETWORK_NAME} \
         --subnet-mode=auto \
         --project=${PROJECT_ID} \
         --mtu=8896
  3. Create an IP address range.

     gcloud compute addresses create ${IP_RANGE_NAME} \
         --global \
         --purpose=VPC_PEERING \
         --prefix-length=20 \
         --description="Managed Lustre VPC Peering" \
         --network=${NETWORK_NAME} \
         --project=${PROJECT_ID}
  4. Get the CIDR range associated with the range you created in the preceding step.

     CIDR_RANGE=$(
       gcloud compute addresses describe ${IP_RANGE_NAME} \
           --global \
           --format="value[separator=/](address, prefixLength)" \
           --project=${PROJECT_ID}
     )
  5. Create a firewall rule to allow TCP traffic from the IP address range you created.

     gcloud compute firewall-rules create ${FIREWALL_RULE_NAME} \
         --allow=tcp:988,tcp:6988 \
         --network=${NETWORK_NAME} \
         --source-ranges=${CIDR_RANGE} \
         --project=${PROJECT_ID}
  6. To set up network peering for your project, verify that you have the necessary IAM permissions, specifically the compute.networkAdmin or servicenetworking.networksAdmin role.

    1. Go to Google Cloud console > IAM & Admin, then search for your project owner principal.
    2. Click the pencil icon, then click + ADD ANOTHER ROLE.
    3. Select Compute Network Admin or Service Networking Admin.
    4. Click Save.
  7. Connect the peering.

     gcloud services vpc-peerings connect \
         --network=${NETWORK_NAME} \
         --project=${PROJECT_ID} \
         --ranges=${IP_RANGE_NAME} \
         --service=servicenetworking.googleapis.com
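
     Optionally, you can verify that the peering connection was created. A quick check; the output should list a peering for servicenetworking.googleapis.com:

     gcloud services vpc-peerings list \
         --network=${NETWORK_NAME} \
         --project=${PROJECT_ID}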
    

Configure the Managed Lustre CSI driver

This section describes how to enable and, if needed, disable the Managed Lustre CSI driver.

Lustre communication ports

The GKE Managed Lustre CSI driver uses different ports for communication with Managed Lustre instances, depending on your GKE cluster version and existing Managed Lustre configurations.

  • Default port (recommended): for new GKE clusters that run version 1.33.2-gke.4780000 or later, the driver uses port 988 for Lustre communication by default.

  • Legacy port: use port 6988 by appending the --enable-legacy-lustre-port flag to your gcloud commands in the following scenarios:

    • Earlier GKE versions: if your GKE cluster runs a version earlier than 1.33.2-gke.4780000, the --enable-legacy-lustre-port flag works around a port conflict with the gke-metadata-server on GKE nodes.
    • Existing Lustre instances: if you are connecting to an existing Managed Lustre instance that was created with the gke-support-enabled flag, you must include --enable-legacy-lustre-port in your gcloud commands, regardless of your cluster version. Without this flag, your GKE cluster fails to mount the existing Lustre instance. For information about the gke-support-enabled flag, see the optional flags description in Create an instance.

You can configure new and existing clusters to use either the default port 988 or the legacy port 6988.
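
If you're unsure which port applies to an existing cluster, you can check its version first. A minimal sketch:

  gcloud container clusters describe ${CLUSTER_NAME} \
      --location=${LOCATION} \
      --format="value(currentMasterVersion)"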

Enable the Managed Lustre CSI driver on a new GKE cluster

The following sections describe how to enable the Managed Lustre CSI driver on a new GKE cluster.

Use the default port 988

To enable the Managed Lustre CSI driver when creating a new GKE cluster that runs version 1.33.2-gke.4780000 or later, run the following command:

Autopilot

  gcloud container clusters create-auto "${CLUSTER_NAME}" \
      --location=${LOCATION} \
      --network="${NETWORK_NAME}" \
      --cluster-version=${CLUSTER_VERSION} \
      --enable-lustre-csi-driver

Standard

  gcloud container clusters create "${CLUSTER_NAME}" \
      --location=${LOCATION} \
      --network="${NETWORK_NAME}" \
      --cluster-version=${CLUSTER_VERSION} \
      --addons=LustreCsiDriver

Use the legacy port 6988

To enable the Managed Lustre CSI driver when creating a new GKE cluster that runs a version earlier than 1.33.2-gke.4780000 , run the following command:

Autopilot

  gcloud container clusters create-auto "${CLUSTER_NAME}" \
      --location=${LOCATION} \
      --network="${NETWORK_NAME}" \
      --cluster-version=${CLUSTER_VERSION} \
      --enable-lustre-csi-driver \
      --enable-legacy-lustre-port

Standard

  gcloud container clusters create "${CLUSTER_NAME}" \
      --location=${LOCATION} \
      --network="${NETWORK_NAME}" \
      --cluster-version=${CLUSTER_VERSION} \
      --addons=LustreCsiDriver \
      --enable-legacy-lustre-port

Enable the Managed Lustre CSI driver on existing GKE clusters

The following sections describe how to enable the Managed Lustre CSI driver on existing GKE clusters.

Use the default port 988

To enable the Managed Lustre CSI driver on an existing GKE cluster that runs version 1.33.2-gke.4780000 or later, run the following command:

   
  gcloud container clusters update ${CLUSTER_NAME} \
      --location=${LOCATION} \
      --update-addons=LustreCsiDriver=ENABLED

Use the legacy port 6988

To enable the Managed Lustre CSI driver on an existing GKE cluster, you might need to use the legacy port 6988 by adding the --enable-legacy-lustre-port flag. This flag is required in the following scenarios:

  • If your GKE cluster runs a version earlier than 1.33.2-gke.4780000.
  • If you intend to connect this cluster to an existing Managed Lustre instance that was created with the gke-support-enabled flag.

     gcloud container clusters update ${CLUSTER_NAME} \
         --location=${LOCATION} \
         --enable-legacy-lustre-port

Node upgrade required on existing clusters

Enabling the Managed Lustre CSI driver on existing clusters can trigger node re-creation to install the kernel modules required by the Managed Lustre client. For immediate availability, we recommend manually upgrading your node pools.

GKE clusters on a release channel upgrade according to their scheduled rollout, which can take several weeks depending on your maintenance window. If you're on a static GKE version, you need to manually upgrade your node pools.
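
If you choose to upgrade a node pool manually, a command along the following lines upgrades it to the cluster's current control plane version. This is a minimal sketch; NODE_POOL_NAME is a placeholder for your node pool's name:

  gcloud container clusters upgrade ${CLUSTER_NAME} \
      --node-pool=NODE_POOL_NAME \
      --location=${LOCATION}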

After the node pool upgrade, CPU nodes might appear to be using a GPU image in the Google Cloud console or CLI output. For example:

 config:
  imageType: COS_CONTAINERD
  nodeImageConfig:
    image: gke-1330-gke1552000-cos-121-18867-90-4-c-nvda 

This behavior is expected. The GPU image is being reused on CPU nodes to securely install the Managed Lustre kernel modules. You won't be charged for GPU usage.

Disable the Managed Lustre CSI driver

You can disable the Managed Lustre CSI driver on an existing GKE cluster by using the Google Cloud CLI.

  gcloud container clusters update ${CLUSTER_NAME} \
      --location=${LOCATION} \
      --update-addons=LustreCsiDriver=DISABLED

After the CSI driver is disabled, GKE automatically recreates your nodes and uninstalls the Managed Lustre kernel modules.
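
To confirm the add-on state after enabling or disabling the driver, you can inspect the cluster's add-on configuration. A quick check; the Lustre CSI driver entry should be absent or shown as disabled after the update completes:

  gcloud container clusters describe ${CLUSTER_NAME} \
      --location=${LOCATION} \
      --format="yaml(addonsConfig)"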

Create a new volume using the Managed Lustre CSI driver

The following sections describe the typical process for creating a Kubernetes volume backed by a Managed Lustre instance in GKE:

  1. Create a StorageClass.
  2. Use a PersistentVolumeClaim to access the volume.
  3. Create a workload that consumes the volume.

Create a StorageClass

When the Managed Lustre CSI driver is enabled, GKE automatically creates a StorageClass for provisioning Managed Lustre instances. The StorageClass depends on the Managed Lustre performance tier, and is one of the following:

  • lustre-rwx-125mbps-per-tib
  • lustre-rwx-250mbps-per-tib
  • lustre-rwx-500mbps-per-tib
  • lustre-rwx-1000mbps-per-tib

GKE provides a default StorageClass for each supported Managed Lustre performance tier. This simplifies the dynamic provisioning of Managed Lustre instances, as you can use the built-in StorageClasses without having to define your own.
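
For example, a PersistentVolumeClaim can reference a built-in class directly. The following is a minimal sketch; the claim name is illustrative, and the requested capacity mirrors the example later in this guide:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: lustre-pvc-builtin
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 18000Gi
    storageClassName: lustre-rwx-1000mbps-per-tib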

For zonal clusters, the CSI driver provisions Managed Lustre instances in the same zone as the cluster. For regional clusters, it provisions the instance in one of the zones within the region.

The following example shows you how to create a custom StorageClass with specific topology requirements:

  1. Save the following manifest in a file named lustre-class.yaml:

     apiVersion: storage.k8s.io/v1
     kind: StorageClass
     metadata:
       name: lustre-class
     provisioner: lustre.csi.storage.gke.io
     volumeBindingMode: Immediate
     reclaimPolicy: Delete
     parameters:
       perUnitStorageThroughput: "1000"
       network: LUSTRE_NETWORK
     allowedTopologies:
     - matchLabelExpressions:
       - key: topology.gke.io/zone
         values:
         - us-central1-a

     For the full list of fields that are supported in the StorageClass, see the Managed Lustre CSI driver reference documentation.

  2. Create the StorageClass by running this command:

     kubectl apply -f lustre-class.yaml
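
     To confirm that the StorageClass exists, you can run:

     kubectl get storageclass lustre-class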
    

Use a PersistentVolumeClaim to access the volume

This section shows you how to create a PersistentVolumeClaim resource that references the Managed Lustre CSI driver's StorageClass.

  1. Save the following manifest in a file named lustre-pvc.yaml:

     apiVersion: v1
     kind: PersistentVolumeClaim
     metadata:
       name: lustre-pvc
     spec:
       accessModes:
       - ReadWriteMany
       resources:
         requests:
           storage: 18000Gi
       storageClassName: lustre-class

     For the full list of fields that are supported in the PersistentVolumeClaim, see the Managed Lustre CSI driver reference documentation.

  2. Create the PersistentVolumeClaim by running this command:

     kubectl apply -f lustre-pvc.yaml
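
     Provisioning the backing Managed Lustre instance can take several minutes. You can check the claim's status; it shows Bound once provisioning completes:

     kubectl get pvc lustre-pvc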
    

Create a workload to consume the volume

This section shows an example of how to create a Pod that consumes the PersistentVolumeClaim resource you created earlier.

Multiple Pods can share the same PersistentVolumeClaim resource.

  1. Save the following manifest in a file named my-pod.yaml:

     apiVersion: v1
     kind: Pod
     metadata:
       name: my-pod
     spec:
       containers:
       - name: nginx
         image: nginx
         volumeMounts:
         - name: lustre-volume
           mountPath: /data
       volumes:
       - name: lustre-volume
         persistentVolumeClaim:
           claimName: lustre-pvc
  2. Apply the manifest to the cluster.

     kubectl apply -f my-pod.yaml
  3. Verify that the Pod is running. The Pod runs after the PersistentVolumeClaim is provisioned. This operation might take a few minutes to complete.

     kubectl get pods

    The output is similar to the following:

     NAME     READY   STATUS    RESTARTS   AGE
     my-pod   1/1     Running   0          11s
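
     Optionally, you can confirm that the Lustre file system is mounted at /data inside the container. A quick check, assuming the container image includes standard shell utilities:

     kubectl exec my-pod -- df -h /data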
    

Use fsGroup with Managed Lustre volumes

You can change the group ownership of the root-level directory of the mounted file system to match a user-requested fsGroup specified in the Pod's securityContext. fsGroup doesn't recursively change the ownership of the entire mounted Managed Lustre file system; only the root directory of the mount point is affected.
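
For example, the following Pod sketch sets fsGroup so that the root directory of the mounted volume is group-owned by GID 1000; the Pod name and GID are illustrative:

  apiVersion: v1
  kind: Pod
  metadata:
    name: my-pod-fsgroup
  spec:
    securityContext:
      fsGroup: 1000
    containers:
    - name: nginx
      image: nginx
      volumeMounts:
      - name: lustre-volume
        mountPath: /data
    volumes:
    - name: lustre-volume
      persistentVolumeClaim:
        claimName: lustre-pvc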

Troubleshooting

For troubleshooting guidance, refer to the Troubleshooting page in the Managed Lustre documentation.

Clean up

To avoid incurring charges to your Google Cloud account, delete the storage resources you created in this guide.

  1. Delete the Pod and PersistentVolumeClaim.

     kubectl delete pod my-pod
     kubectl delete pvc lustre-pvc
  2. Check the PersistentVolume status.

     kubectl get pv

    The output is similar to the following:

     No resources found 
    

    It might take a few minutes for the underlying Managed Lustre instance to be fully deleted.

What's next
