Set up the Cloud Storage FUSE CSI driver for GKE


This page describes how you can set up and prepare to use the Cloud Storage FUSE CSI driver for GKE.

To use the Cloud Storage FUSE CSI driver, complete the steps in the following sections:

Create the Cloud Storage bucket

If you have not already done so, create the Cloud Storage buckets that you want to mount as volumes in your GKE cluster. To improve performance, set the Location type to Region and select a region that matches your GKE cluster's region.
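
For example, you can create a regional bucket with the gcloud storage buckets create command. In this sketch, BUCKET_NAME and REGION are placeholders for your bucket name and your cluster's region:

  # Create a regional bucket in the same region as your GKE cluster.
  gcloud storage buckets create gs://BUCKET_NAME \
      --location=REGION \
      --uniform-bucket-level-access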

Enable the Cloud Storage FUSE CSI driver

Follow these steps, depending on whether you are using GKE Autopilot or Standard clusters. We recommend that you use an Autopilot cluster for a fully managed Kubernetes experience. To choose the mode that's the best fit for your workloads, see Choose a GKE mode of operation.

Autopilot

The Cloud Storage FUSE CSI driver is enabled by default for Autopilot clusters. You can skip to Configure access to Cloud Storage buckets.

Standard

If your Standard cluster already has the Cloud Storage FUSE CSI driver enabled, skip to Configure access to Cloud Storage buckets.

The Cloud Storage FUSE CSI driver is not enabled by default in Standard clusters. To create a Standard cluster with the Cloud Storage FUSE CSI driver enabled, you can use the gcloud container clusters create command:

  gcloud container clusters create CLUSTER_NAME \
      --addons GcsFuseCsiDriver \
      --cluster-version=VERSION \
      --location=LOCATION \
      --workload-pool=PROJECT_ID.svc.id.goog

Replace the following:

  • CLUSTER_NAME: the name of your cluster.
  • VERSION: the GKE version number. You must select 1.24 or later.
  • LOCATION: the Compute Engine region or zone for the cluster.
  • PROJECT_ID: your project ID.

To enable the driver on an existing Standard cluster, use the gcloud container clusters update command:

  gcloud container clusters update CLUSTER_NAME \
      --update-addons GcsFuseCsiDriver=ENABLED \
      --location=LOCATION

To verify that the Cloud Storage FUSE CSI driver is enabled on your cluster, run the following command:

  gcloud container clusters describe CLUSTER_NAME \
      --location=LOCATION \
      --project=PROJECT_ID \
      --format="value(addonsConfig.gcsFuseCsiDriverConfig.enabled)"
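
If the Cloud Storage FUSE CSI driver is enabled, the output of this command is True.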

Configure access to Cloud Storage buckets

The Cloud Storage FUSE CSI driver uses Workload Identity Federation for GKE so that you can set fine-grained permissions that control how your GKE Pods access data stored in Cloud Storage.

To make your Cloud Storage buckets accessible to your GKE cluster, use Workload Identity Federation for GKE to authenticate to the bucket that you want to mount in your Pod specification:

  1. If you don't have Workload Identity Federation for GKE enabled, follow these steps to enable it. If you want to use an existing node pool, manually enable Workload Identity Federation for GKE on the node pool after you enable it on the cluster.
  2. Get credentials for your cluster:

     gcloud container clusters get-credentials CLUSTER_NAME \
         --location=LOCATION

    Replace the following:

    • CLUSTER_NAME: the name of your cluster that has Workload Identity Federation for GKE enabled.
    • LOCATION: the Compute Engine region or zone for the cluster.
  3. Create a namespace to use for the Kubernetes ServiceAccount. You can also use the default namespace or any existing namespace.

     kubectl create namespace NAMESPACE

    Replace NAMESPACE with the name of the Kubernetes namespace for the Kubernetes ServiceAccount.

  4. Create a Kubernetes ServiceAccount for your application to use. You can also use any existing Kubernetes ServiceAccount in any namespace, including the default Kubernetes ServiceAccount.

     kubectl create serviceaccount KSA_NAME \
         --namespace NAMESPACE

    Replace KSA_NAME with the name of your Kubernetes ServiceAccount.

  5. Grant one of the IAM roles for Cloud Storage to the Kubernetes ServiceAccount. Follow these steps, depending on whether you are granting the Kubernetes ServiceAccount access to a specific Cloud Storage bucket only, or global access to all buckets in the project.

    Specific bucket access

     gcloud storage buckets add-iam-policy-binding gs://BUCKET_NAME \
         --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NAMESPACE/sa/KSA_NAME" \
         --role "ROLE_NAME"

    Replace the following:

    • BUCKET_NAME: your Cloud Storage bucket name.
    • PROJECT_NUMBER: the numerical project number of your GKE cluster. To find your project number, see Identifying projects.
    • PROJECT_ID: the project ID of your GKE cluster.
    • NAMESPACE: the name of the Kubernetes namespace for the Kubernetes ServiceAccount.
    • KSA_NAME: the name of your new Kubernetes ServiceAccount.
    • ROLE_NAME: the IAM role to assign to your Kubernetes ServiceAccount.
      • For read-only workloads, use the Storage Object Viewer role (roles/storage.objectViewer).
      • For read-write workloads, use the Storage Object User role (roles/storage.objectUser).

    Global bucket access

     gcloud projects add-iam-policy-binding GCS_PROJECT \
         --member "principal://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/PROJECT_ID.svc.id.goog/subject/ns/NAMESPACE/sa/KSA_NAME" \
         --role "ROLE_NAME"

    Replace the following:

    • GCS_PROJECT: the project ID of your Cloud Storage buckets.
    • PROJECT_NUMBER: the numerical project number of your GKE cluster. To find your project number, see Identifying projects.
    • PROJECT_ID: the project ID of your GKE cluster.
    • NAMESPACE: the name of the Kubernetes namespace for the Kubernetes ServiceAccount.
    • KSA_NAME: the name of your new Kubernetes ServiceAccount.
    • ROLE_NAME: the IAM role to assign to your Kubernetes ServiceAccount.
      • For read-only workloads, use the Storage Object Viewer role (roles/storage.objectViewer).
      • For read-write workloads, use the Storage Object User role (roles/storage.objectUser).
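
After you grant the role, you can optionally confirm the binding. For example, for specific bucket access, the following command prints the bucket's IAM policy; the principal:// member that you added should appear in one of the bindings:

  gcloud storage buckets get-iam-policy gs://BUCKET_NAME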

Configure access for Pods with host network

For GKE cluster versions earlier than 1.33.3-gke.1226000, the Cloud Storage FUSE CSI driver does not support Pods running on the host network (hostNetwork: true) due to restrictions of Workload Identity Federation for GKE. For later GKE versions, you can configure secure authentication for hostNetwork-enabled Pods when you use the Cloud Storage FUSE CSI driver to mount Cloud Storage buckets. Host network support is available only on Standard GKE clusters.

Make sure that your GKE cluster meets the following requirements:

  • Both the control plane and node pools in your Standard GKE cluster must run version 1.33.3-gke.1226000 or later.
  • Enable Workload Identity Federation for GKE on your cluster.
  • Grant the necessary IAM permissions to the Kubernetes ServiceAccount that your hostNetwork-enabled Pod uses to access your Cloud Storage bucket. For more information, see Authenticate to Cloud Storage FUSE.

To enable your hostNetwork Pods to access Cloud Storage volumes, specify the volume attribute hostNetworkPodKSA: "true" in your Pod or PersistentVolume definition. The exact configuration differs based on how you manage the Cloud Storage FUSE sidecar container.

Managed sidecars

This section applies if GKE automatically injects the Cloud Storage FUSE sidecar container into your Pods and manages it for you. This option is the default and recommended setup for the Cloud Storage FUSE CSI driver.

Ephemeral volume

The following Pod manifest configures an ephemeral volume for a HostNetwork Pod to access a Cloud Storage bucket.

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod
    namespace: ns1
    annotations:
      gke-gcsfuse/volumes: "true"
  spec:
    serviceAccountName: test-ksa-ns1
    hostNetwork: true
    containers:
    - image: busybox
      name: busybox
      command:
      - sleep
      - "3600"
      volumeMounts:
      - name: gcs-fuse-csi-ephemeral
        mountPath: /data
    volumes:
    - name: gcs-fuse-csi-ephemeral
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeAttributes:
          bucketName: test-bucket
          hostNetworkPodKSA: "true"
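
To try out this configuration, you can apply the manifest and list the mounted bucket from inside the Pod. This is a minimal sketch that assumes the manifest is saved as gcsfuse-pod.yaml and that the test-ksa-ns1 ServiceAccount in the ns1 namespace has been granted access to test-bucket:

  # Deploy the Pod, then list the contents of the mounted bucket.
  kubectl apply -f gcsfuse-pod.yaml
  kubectl exec -n ns1 test-pod -c busybox -- ls /data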

Persistent volume

The following manifest configures a PersistentVolume (PV) for a HostNetwork Pod to access a Cloud Storage bucket.

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: gcp-cloud-storage-csi-pv
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 5Gi
    persistentVolumeReclaimPolicy: Retain
    # storageClassName does not need to refer to an existing StorageClass object.
    storageClassName: test-storage-class
    mountOptions:
    - uid=1001
    - gid=3003
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeHandle: test-wi-host-network-2
      volumeAttributes:
        hostNetworkPodKSA: "true"
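
A PersistentVolume is not mounted by a Pod directly; you bind it through a PersistentVolumeClaim and reference that claim in the Pod specification. The following is a minimal claim sketch that assumes the PV above; the claim name gcp-cloud-storage-csi-pvc and the ns1 namespace are example values:

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: gcp-cloud-storage-csi-pvc
    namespace: ns1
  spec:
    accessModes:
    - ReadWriteMany
    resources:
      requests:
        storage: 5Gi
    # Match the PV's storageClassName and bind to the PV explicitly.
    storageClassName: test-storage-class
    volumeName: gcp-cloud-storage-csi-pv

The Pod that mounts this claim still needs the gke-gcsfuse/volumes: "true" annotation and a ServiceAccount that has access to the bucket.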

Private sidecars

This section applies if you manually manage the Cloud Storage FUSE sidecar container within your Pods or use a custom sidecar image.

Make sure that your sidecar image is based on Cloud Storage FUSE CSI driver version v1.17.2 or later.

Ephemeral volume

The following Pod manifest configures an ephemeral volume for a HostNetwork Pod to access a Cloud Storage bucket.

  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod
    namespace: ns1
    annotations:
      gke-gcsfuse/volumes: "true"
  spec:
    serviceAccountName: test-ksa-ns1
    hostNetwork: true
    containers:
    - image: busybox
      name: busybox
      command:
      - sleep
      - "3600"
      volumeMounts:
      - name: gcs-fuse-csi-ephemeral
        mountPath: /data
    volumes:
    - name: gcs-fuse-csi-ephemeral
      csi:
        driver: gcsfuse.csi.storage.gke.io
        volumeAttributes:
          bucketName: test-bucket
          hostNetworkPodKSA: "true"
          identityProvider: "https://container.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/clusters/CLUSTER_NAME"

In the identityProvider field, replace the following:

  • PROJECT_ID: your Google Cloud project ID.
  • LOCATION: the location of your cluster.
  • CLUSTER_NAME: the name of your Standard GKE cluster.

Persistent volume

The following manifest configures a PersistentVolume (PV) for a HostNetwork Pod to access a Cloud Storage bucket.

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: gcp-cloud-storage-csi-pv
  spec:
    accessModes:
    - ReadWriteMany
    capacity:
      storage: 5Gi
    persistentVolumeReclaimPolicy: Retain
    # storageClassName does not need to refer to an existing StorageClass object.
    storageClassName: test-storage-class
    mountOptions:
    - uid=1001
    - gid=3003
    csi:
      driver: gcsfuse.csi.storage.gke.io
      volumeHandle: test-wi-host-network-2
      volumeAttributes:
        hostNetworkPodKSA: "true"
        identityProvider: "https://container.googleapis.com/v1/projects/PROJECT_ID/locations/LOCATION/clusters/CLUSTER_NAME"

In the identityProvider field, replace the following:

  • PROJECT_ID: your Google Cloud project ID.
  • LOCATION: the location of your cluster.
  • CLUSTER_NAME: the name of your Standard GKE cluster.
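
As with the managed sidecar configuration, you bind this PersistentVolume to your workload through a PersistentVolumeClaim; the claim sketch shown earlier on this page applies here unchanged.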

What's next
