# Scale A3 Mega cluster across multiple reservations
This document provides information about how to use multiple reservations for
an A3 Mega Slurm cluster.

As the jobs running on your A3 Mega cluster grow, you might need to span your
jobs across more than one reservation. To do this, you need to make a few minor
changes to the following files:

- The cluster blueprint: `a3mega-slurm-blueprint.yaml`
- The cluster deployment file: `a3mega-slurm-deployment.yaml`

Overview
--------

To update your cluster, we recommend creating a single Slurm partition with
multiple `nodesets` so that a single job can span across multiple reservations.

To do this, complete the following steps:

1. In the deployment file, create a nodeset for each additional reservation
2. In the cluster blueprint, add all nodesets to the A3 Mega partition
3. Deploy or redeploy the A3 Mega cluster

Switch to the Cluster Toolkit directory
---------------------------------------

Ensure that you are in the Cluster Toolkit directory.
To go to the Cluster Toolkit working directory, run the following command:

```
cd cluster-toolkit
```

Create one nodeset for each reservation
---------------------------------------

To create a nodeset for each reservation, you need to update your
`a3mega-slurm-deployment.yaml` deployment file to add nodeset
variables for each reservation. This deployment file is located in the A3 Mega
directory: `cluster-toolkit/examples/machine-learning/a3-megagpu-8g/`.

The following example shows how to add three nodeset variables to the
`a3mega-slurm-deployment.yaml` deployment file. Replace `NUMBER_OF_VMS_*` with
the number of VMs in each reservation.
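
```
vars:
  project_id: customer-project
  region: customer-region
  zone: customer-zone
  ...
  a3mega_nodeset_a_size: NUMBER_OF_VMS_A
  a3mega_nodeset_b_size: NUMBER_OF_VMS_B
  a3mega_nodeset_c_size: NUMBER_OF_VMS_C
  ...
```

Add all nodesets to the A3 Mega partition
-----------------------------------------
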
To add the nodesets to the A3 Mega partition, you need to update the
`a3mega-slurm-blueprint.yaml` cluster blueprint. This blueprint file is located
in the A3 Mega directory: `cluster-toolkit/examples/machine-learning/a3-megagpu-8g/`.

To add the nodesets, complete the following steps in the
`a3mega-slurm-blueprint.yaml` blueprint:

1. Locate the `id: a3mega_nodeset` section. It should resemble the following:
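
   ```yaml
   - id: a3mega_nodeset
     source: community/modules/compute/schedmd-slurm-gcp-v6-nodeset
     use:
     - sysnet
     - gpunets
     settings:
       node_count_static: $(vars.a3mega_cluster_size)
       node_count_dynamic_max: 0
     ...
   ```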

2. Make a copy of the entire `id: a3mega_nodeset` section for each of the new
   reservations. In each section, change the `node_count_static` setting to
   specify the nodeset variable created in the preceding step.

   For example, if you had created three nodesets, you would update as follows:
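
   ```yaml
   - id: a3mega_nodes_a
     source: community/modules/compute/schedmd-slurm-gcp-v6-nodeset
     use:
     - sysnet
     - gpunets
     settings:
       node_count_static: $(vars.a3mega_nodeset_a_size)
       node_count_dynamic_max: 0
     ...

   - id: a3mega_nodes_b
     source: community/modules/compute/schedmd-slurm-gcp-v6-nodeset
     use:
     - sysnet
     - gpunets
     settings:
       node_count_static: $(vars.a3mega_nodeset_b_size)
       node_count_dynamic_max: 0
     ...

   - id: a3mega_nodes_c
     source: community/modules/compute/schedmd-slurm-gcp-v6-nodeset
     use:
     - sysnet
     - gpunets
     settings:
       node_count_static: $(vars.a3mega_nodeset_c_size)
       node_count_dynamic_max: 0
     ...
   ```

3. Locate the `id: a3mega_partition` section.

   ```yaml
   - id: a3mega_partition
     source: community/modules/compute/schedmd-slurm-gcp-v6-partition
     use:
     - a3mega_nodeset
     settings:
       ...
   ```

4. Add the new nodesets.

   ```yaml
   - id: a3mega_partition
     source: community/modules/compute/schedmd-slurm-gcp-v6-partition
     use:
     - a3mega_nodes_a
     - a3mega_nodes_b
     - a3mega_nodes_c
     settings:
       ...
   ```

Deploy the A3 Mega cluster
--------------------------

- If you are deploying the cluster for the first time, continue with the
  deployment. To deploy the cluster, see
  [Deploy an A3 Mega Slurm cluster for ML training](/cluster-toolkit/docs/deploy/deploy-a3-mega-cluster#provision-cluster).
- If you are updating an existing cluster, run the following command from the
  Cluster Toolkit directory. The `-w` flag specifies that you want to overwrite
  the previously deployed infrastructure.

  **Caution:** If you are using a version of Terraform newer than 1.5, the
  following command might destroy the controller, login, and compute nodes, and
  might re-create several other assets such as VPC firewall rules. Any local
  modifications to the system won't be preserved. For more information, see the
  [known issue](https://github.com/GoogleCloudPlatform/cluster-toolkit/issues/2774).

  ```
  ./gcluster deploy -w \
      -d examples/machine-learning/a3-megagpu-8g/a3mega-slurm-deployment.yaml \
      -b examples/machine-learning/a3-megagpu-8g/a3mega-slurm-blueprint.yaml
  ```

  This process might take approximately 10-30 minutes to delete any existing
  nodes and create all of the new nodes.

Connect to the A3 Mega Slurm cluster
------------------------------------

To log in, you can use either the Google Cloud console or the Google Cloud CLI.

### Console

1. Go to the **Compute Engine** > **VM instances** page.

   [Go to VM instances](https://console.cloud.google.com/compute/instances)

2. Locate the login node. It should have a name similar to `a3mega-login-001`.

3. From the **Connect** column of the login node, click **SSH**.

### gcloud

To connect to the login node, use the
[`gcloud compute ssh` command](/sdk/gcloud/reference/compute/ssh):

```
gcloud compute ssh $(gcloud compute instances list --filter "name ~ login" --format "value(name)") \
    --tunnel-through-iap \
    --zone ZONE
```

Test your multi-nodeset partition
---------------------------------
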
When you connect to the login or controller node, you might see the following:

```
*** Slurm is currently being configured in the background. ***
```

If you see this message, wait a few minutes until Slurm has finished configuring
and then reconnect to the cluster. Then you can run `sinfo` and `scontrol`
to examine your new partition.

- For the `sinfo` command, the output should resemble the following:

  ```
  PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
  a3mega* up infinite 216 idle a3mega-a3meganodesa-[0-79],a3mega-a3meganodesb-[0-63],a3mega-a3meganodesc-[0-71]
  debug up infinite 4 idle~ a3mega-debugnodeset-[0-3]
  ```

- For the `scontrol show partition a3mega` command, the output should resemble
  the following:

  ```
  PartitionName=a3mega
     AllowGroups=ALL AllowAccounts=ALL AllowQos=ALL
     AllocNodes=ALL Default=YES QoS=N/A
     DefaultTime=NONE DisableRootJobs=NO ExclusiveUser=NO GraceTime=0 Hidden=NO
     MaxNodes=UNLIMITED MaxTime=UNLIMITED MinNodes=0 LLN=NO MaxCPUsPerNode=UNLIMITED MaxCPUsPerSocket=UNLIMITED
     NodeSets=a3meganodesa,a3meganodesb,a3meganodesc
     Nodes=a3mega-a3meganodesa-[0-79],a3mega-a3meganodesb-[0-63],a3mega-a3meganodesc-[0-71]
     PriorityJobFactor=1 PriorityTier=1 RootOnly=NO ReqResv=NO OverSubscribe=EXCLUSIVE
     OverTimeLimit=NONE PreemptMode=OFF
     State=UP TotalCPUs=44928 TotalNodes=216 SelectTypeParameters=NONE
     JobDefaults=(null)
     DefMemPerCPU=8944 MaxMemPerNode=UNLIMITED
     TRES=cpu=44928,mem=392421G,node=216,billing=44928
     ResumeTimeout=900 SuspendTimeout=600 SuspendTime=300 PowerDownOnIdle=NO
  ```
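
You can also confirm that a single job spans nodesets by requesting an
allocation from the login node. The following check is a minimal sketch using
standard Slurm commands; the `--nodes` value is illustrative and assumes the
example node counts shown above, so choose a value larger than your largest
single reservation.

```
# Illustrative check: request more nodes than the largest nodeset provides
# (for example, 100 when the largest nodeset has 80 nodes), so the allocation
# must span at least two nodesets.
srun --partition=a3mega --nodes=100 hostname | sort
```

If the sorted output includes hostnames from more than one nodeset prefix, such
as both `a3mega-a3meganodesa-*` and `a3mega-a3meganodesb-*`, the job spanned
multiple reservations.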