Configure an ephemeral disk for Cloud Run jobs

Cloud Run provides an ephemeral disk volume that persists only for the duration of your instance. This feature lets you specify the amount of disk you need and the location for mounting it. Cloud Run will then allocate that amount of disk to your resource.

Disks are automatically provisioned, pre-formatted as ext4, and encrypted with instance-specific keys at startup. Cloud Run creates the volume so that any user can read and write to it. Because the storage is ephemeral, all data is permanently deleted when the instance shuts down. This includes shutdowns caused by:

  • Instance crashes
  • Job task completion (success or failure)

Disks are dedicated to a specific instance and are not shared across other instances. You have control over the file system structure with a configurable mount point for each volume.

Before shutting down an instance, Cloud Run sends a SIGTERM signal to all containers in the instance, marking the start of a 10-second grace period before Cloud Run sends a SIGKILL signal. You can use this 10-second window to perform cleanup operations, such as a final copy of disk contents to persistent storage.
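As a sketch of how a job might use this cleanup window, the Python function below registers a SIGTERM handler that copies the disk contents before shutdown. The function name is illustrative, and the destination directory stands in for durable storage; a real handler must finish well within the 10-second grace period:

```python
import shutil
import signal


def register_final_copy(src: str, dst: str) -> None:
    """Copy src to dst when SIGTERM arrives (start of the grace period)."""

    def handle_sigterm(signum, frame):
        # Best-effort final copy; must complete before SIGKILL follows.
        shutil.copytree(src, dst, dirs_exist_ok=True)

    signal.signal(signal.SIGTERM, handle_sigterm)
```

For anything larger than a few small files, copy incrementally during the job instead of relying on this window alone.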

Use cases

You can use ephemeral disk for the following:

  • Data processing workloads: When processing large data files in Cloud Run, you typically store the entire file in memory or orchestrate splitting it up into smaller pieces. With ephemeral storage, you won't need to pay for large amounts of memory to make a temporary local copy of your data. You will also be able to process larger data sets.
  • Caching: In web serving use cases, caching data on disk rather than fetching from remote storage can optimize your application's latency.
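The caching use case can be sketched as a read-through cache on the mounted volume. This is a minimal illustration, not a production cache: the function name and cache layout are hypothetical, and a real implementation would also bound disk usage against the volume size:

```python
import os


def read_through_cache(key: str, fetch, cache_dir: str) -> bytes:
    """Return the cached bytes for key, calling fetch() only on a cache miss."""
    os.makedirs(cache_dir, exist_ok=True)
    path = os.path.join(cache_dir, key)
    if os.path.exists(path):        # cache hit: serve from the ephemeral disk
        with open(path, "rb") as f:
            return f.read()
    data = fetch()                  # cache miss: fetch from remote storage
    with open(path, "wb") as f:     # populate the cache for later requests
        f.write(data)
    return data
```

Because the disk is per-instance and deleted at shutdown, treat it strictly as a cache: every entry must be reproducible from the remote source.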

Storage and instance limits

The following limits apply:

  • Instance storage limit: each instance is limited to 10 GB of total space by default. If necessary, request a quota increase.
  • Instance volume limit: each instance is limited to a maximum of 10 volumes.
  • Project limit: each project is limited to 100 GB per region by default. If necessary, request a quota increase.

Request a quota increase

Projects using a Cloud Run ephemeral disk in a region for the first time are automatically granted a limit of 10 GB per instance, per region, and 100 GB per project, per region.

If you need additional capacity, you must request a quota increase for your Cloud Run job. Use the following links to request the quota you need.

Current quota        Quota link
10 GB per instance   Request greater quota per instance
100 GB per project   Request greater quota per project

For more information on requesting quota increases, see How to increase quota.

Limitations

The following limitations apply:
  • Ephemeral disk is only available in the second generation execution environment. By default, Cloud Run jobs use the second generation execution environment.
  • Live migration is not supported. This means that Cloud Run jobs will be less reliable, especially long-running jobs.

Disallowed paths

Cloud Run does not allow you to mount a volume at /dev, /proc, or /sys, or on their subdirectories.

Supported regions

The ephemeral disk feature is available in the following regions:

  • For non-GPU workloads, ephemeral disk is available in:
    • asia-northeast1 (Tokyo)
    • europe-west1 (Belgium) Low CO2
    • northamerica-northeast1 (Montreal) Low CO2
    • northamerica-northeast2 (Toronto) Low CO2
    • us-central1 (Iowa) Low CO2
    • us-east1 (South Carolina)
    • us-east4 (Northern Virginia)
    • us-west1 (Oregon) Low CO2
  • If you use GPUs, ephemeral disk is available in all regions that support GPUs .

Before you begin

  1. Sign in to your Google Cloud account. If you're new to Google Cloud, create an account to evaluate how our products perform in real-world scenarios. New customers also get $300 in free credits to run, test, and deploy workloads.
  2. In the Google Cloud console, on the project selector page, select or create a Google Cloud project.

    Roles required to select or create a project

    • Select a project : Selecting a project doesn't require a specific IAM role—you can select any project that you've been granted a role on.
    • Create a project : To create a project, you need the Project Creator role ( roles/resourcemanager.projectCreator ), which contains the resourcemanager.projects.create permission. Learn how to grant roles .

    Go to project selector

  3. Verify that billing is enabled for your Google Cloud project.

  4. Install and initialize the gcloud CLI.
  5. Update components:

     gcloud components update
  6. Review the Cloud Run pricing page for CPU, memory, and network egress. The entire size of the provisioned disk and the lifetime of the instance that is using it contribute to your cost.

Required roles

To get the permissions that you need to configure an ephemeral disk, ask your administrator to grant you the required IAM roles on your project.

For a list of IAM roles and permissions that are associated with Cloud Run, see Cloud Run IAM roles and Cloud Run IAM permissions . If your Cloud Run job interfaces with Google Cloud APIs, such as Cloud Client Libraries, see the service identity configuration guide . For more information about granting roles, see deployment permissions and manage access .

Create and mount an ephemeral disk

You can create and mount an ephemeral disk using the Google Cloud console or Google Cloud CLI:

Console

  1. In the Google Cloud console, go to the Cloud Run Jobs page:

    Go to Cloud Run

  2. Click Deploy container to fill out the initial job settings page. If you are configuring an existing job, select the job, then click View and edit job configuration.

  3. Click Containers, Connections, Security to expand the job properties page.

  4. Click the Container tab.

    • Under Resources:
      • Select Ephemeral Disk.
      • Specify the Ephemeral Disk size from the menu.
      • Enter the mount path.
  5. Click Create or Update.

gcloud

To add a volume and mount it:

gcloud beta run jobs update JOB \
    --add-volume=name=VOLUME_NAME,type=ephemeral-disk,size=SIZE \
    --add-volume-mount=volume=VOLUME_NAME,mount-path=MOUNT_PATH

Replace the following:

  • JOB : the name of your job.
  • VOLUME_NAME : the name you want to give your volume.
  • SIZE : the disk size—for example, 100Gi . The size must be at least 10Gi for ephemeral-disk volumes.
  • MOUNT_PATH : the path where you mount the volume in the container, for example, /mnt/my-volume .

Reading and writing to a volume

If you use the Cloud Run volume mount feature, you access a mounted volume using the same libraries in your programming language that you use to read and write files on your local file system.

This is especially useful if you're using an existing container that expects data to be stored on the local file system and uses regular file system operations to access it.

The following snippets assume a volume mount with a mountPath set to /mnt/my-volume .

Nodejs

Use the File System module to create a new file or append to an existing file in the volume, /mnt/my-volume :

const fs = require('fs');
fs.appendFileSync('/mnt/my-volume/sample-logfile.txt', 'Hello logs!', { flag: 'a+' });

Python

Write to a file kept in the volume, /mnt/my-volume :

with open("/mnt/my-volume/sample-logfile.txt", "a") as f:
    f.write("Hello logs!")

Go

Use the os package to create a new file kept in the volume, /mnt/my-volume :

f, err := os.Create("/mnt/my-volume/sample-logfile.txt")

Java

Use the java.io.File class to create a log file in the volume, /mnt/my-volume :

import java.io.File;
File f = new File("/mnt/my-volume/sample-logfile.txt");

Clear and remove volumes and volume mounts

You can clear all volumes and mounts or you can remove individual volumes and volume mounts.

Clear all volumes and volume mounts

To clear all volumes and volume mounts from your single-container job, run the following command:

gcloud run jobs update JOB \
    --clear-volumes \
    --clear-volume-mounts

If you have multiple containers, follow the sidecars CLI conventions to clear volumes and volume mounts:

gcloud run jobs update JOB \
    --clear-volumes \
    --clear-volume-mounts \
    --container=container1 \
    --clear-volumes \
    --clear-volume-mounts \
    --container=container2 \
    --clear-volumes \
    --clear-volume-mounts

Remove individual volumes and volume mounts

In order to remove a volume, you must also remove all volume mounts using that volume.

To remove individual volumes or volume mounts, use the --remove-volume and --remove-volume-mount flags:

gcloud run jobs update JOB \
    --remove-volume VOLUME_NAME \
    --container=container1 \
    --remove-volume-mount MOUNT_PATH \
    --container=container2 \
    --remove-volume-mount MOUNT_PATH

Best practices

Adhere to the following best practices to effectively manage ephemeral data and optimize storage performance.

Copy to persistent storage

If you intend to copy the ephemeral disk contents to persistent storage, such as a Cloud Storage bucket, we recommend incrementally copying, rather than relying on the 10-second SIGTERM to SIGKILL grace period. See Container runtime contract for more information on forced shutdowns.
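One way to sketch this incremental approach in Python is a modification-time sync: on each pass, copy only files that are new or changed since the last pass. The function name is hypothetical, and the destination directory stands in for durable storage so the sketch is self-contained; a real job would instead upload to a Cloud Storage bucket (for example, through the google-cloud-storage client or a Cloud Storage FUSE mount):

```python
import os
import shutil


def incremental_copy(src: str, dst: str) -> list:
    """Copy files under src that are new or modified since the last sync."""
    copied = []
    for root, _dirs, files in os.walk(src):
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(dst, os.path.relpath(s, src))
            os.makedirs(os.path.dirname(d), exist_ok=True)
            # Copy only if the destination is missing or older than the source.
            if not os.path.exists(d) or os.path.getmtime(d) < os.path.getmtime(s):
                shutil.copy2(s, d)  # copy2 preserves mtime, so reruns skip it
                copied.append(d)
    return copied
```

Running this periodically keeps the final SIGTERM-window copy small, since only files changed since the last pass remain to be copied.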

Cloud Run can read and write from Cloud Storage without any additional networking setup. To achieve optimal performance, we recommend routing traffic to and from Cloud Storage through a VPC network using Direct VPC.

This method works if you don't need the Cloud Run resource to access the internet. If you do need internet access, either set up Cloud NAT, or see Internal traffic to a Google API .

To configure Direct VPC egress with a job, complete the following steps:

  1. In the Google Cloud console, go to the Cloud Run page:

    Go to Cloud Run

  2. If you are configuring a new job, click the Jobs tab and select Deploy container. Fill out the initial job settings page as needed. If you are configuring an existing job, click the job, then click View and edit job configuration.

  3. Click Containers, Connections, Security to expand the job properties page.

  4. Click the Connections tab.

  5. Click Connect to a VPC for outbound traffic.

  6. Click Send traffic directly to a VPC.

  7. In the Network field, select the VPC network that you want to send traffic to.

  8. In the Subnet field, select the subnet that your job receives IP addresses from. You can execute multiple jobs on the same subnet.

  9. For Traffic routing, select Route all traffic to the VPC to send all outbound traffic through the VPC network.

  10. Click Create or Update.

  11. To verify that your job is on your VPC network, click the job, then click the Configuration tab. The network and subnet are listed in the VPC card.

  12. Enable Private Google Access on the subnet you connected to.

Troubleshoot

If you see slow network speeds when downloading a large amount of data to your ephemeral disk, follow the steps to turn on Direct VPC; without it, network transfer speeds are slower.