Cluster autoscaler

Note: This feature is in Preview. Pre-GA features are subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms, are available "as is", and might have limited support.

This page shows you how to autoscale your clusters. To learn about how the cluster autoscaler works, refer to Cluster autoscaler.
Cluster autoscaling resizes the number of nodes in a given node pool based on the demands of your workloads. You specify `minReplicas` and `maxReplicas` values for each node pool in your cluster.
For an individual node pool, `minReplicas` must be ≥ 1. However, the sum of the untainted user cluster nodes at any given time must be at least 3. This means the sum of the `minReplicas` values for all autoscaled node pools, plus the sum of the `replicas` values for all non-autoscaled node pools, must be at least 3.
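This arithmetic can be sketched as a small standalone check. The following is an illustrative sketch only (it is not part of `gkectl` or any shipped tooling); the function name and the example pool names are hypothetical:

```python
# Illustrative sketch (not a real tool): verify that a list of node pools
# satisfies the "at least 3 untainted user cluster nodes" rule.
def min_untainted_nodes(pools):
    """Sum minReplicas for autoscaled pools and replicas for static pools."""
    total = 0
    for pool in pools:
        if "autoscaling" in pool:
            # Autoscaled pool: the guaranteed floor is minReplicas.
            total += pool["autoscaling"]["minReplicas"]
        else:
            # Non-autoscaled pool: the node count is fixed at replicas.
            total += pool["replicas"]
    return total

pools = [
    {"name": "autoscaled-pool", "autoscaling": {"minReplicas": 1, "maxReplicas": 5}},
    {"name": "static-pool", "replicas": 2},
]
print(min_untainted_nodes(pools) >= 3)  # True: 1 + 2 >= 3
```

Here one autoscaled pool with `minReplicas: 1` plus one static pool with `replicas: 2` meets the minimum of 3; shrinking either value below that would make the configuration invalid.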
Create a user cluster with autoscaling
To create a user cluster with autoscaling, add the `autoscaling` field to the `nodePools` section in the user cluster configuration file.
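For example (the pool name `pool-1` is a placeholder, and the `...` lines elide other node pool fields):

```
nodePools:
- name: pool-1
  ...
  replicas: 3
  ...
  autoscaling:
    minReplicas: 1
    maxReplicas: 5
```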
This configuration creates a node pool with 3 replicas, and enables autoscaling with a minimum node pool size of 1 and a maximum node pool size of 5.

The `minReplicas` value must be ≥ 1.
Add a node pool with autoscaling
To add a node pool with autoscaling to an existing cluster:
Edit the user cluster configuration file to add a new node pool, and include the `autoscaling` field. Adapt the values of `minReplicas` and `maxReplicas` as needed.
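For example (the pool name `my-new-node-pool` is a placeholder, and the `...` lines elide other node pool fields):

```
nodePools:
- name: my-new-node-pool
  ...
  replicas: 3
  ...
  autoscaling:
    minReplicas: 1
    maxReplicas: 5
```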
Then run the `gkectl update cluster --config USER_CLUSTER_CONFIG --kubeconfig ADMIN_CLUSTER_KUBECONFIG` command, replacing USER_CLUSTER_CONFIG with the path to your user cluster configuration file and ADMIN_CLUSTER_KUBECONFIG with the path to your admin cluster kubeconfig file.

Enable an existing node pool for autoscaling

To enable autoscaling for a node pool in an existing cluster:

1. Edit a specific `nodePool` in the user cluster configuration file, and include the `autoscaling` field. Adapt the values of `minReplicas` and `maxReplicas` as needed.

   ```
   nodePools:
   - name: my-existing-node-pool
     ...
     replicas: 3
     ...
     autoscaling:
       minReplicas: 1
       maxReplicas: 5
   ```

2. Run the `gkectl update cluster --config USER_CLUSTER_CONFIG --kubeconfig ADMIN_CLUSTER_KUBECONFIG` command.

Note: You cannot modify replicas for an existing node pool while adding autoscaling.

Disable autoscaling for an existing node pool

To disable autoscaling for a specific node pool:

1. Edit the user cluster configuration file and remove the `autoscaling` field for that node pool.

2. Run the `gkectl update cluster` command.

Check cluster autoscaler behavior

You can determine what the cluster autoscaler is doing in several ways.

Check cluster autoscaler logs

First, find the name of the cluster autoscaler Pod. Run this command, replacing USER_CLUSTER_NAME with the user cluster name and ADMIN_CLUSTER_KUBECONFIG with the path to your admin cluster kubeconfig file:

```
kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG get pods -n USER_CLUSTER_NAME | grep cluster-autoscaler
```

Then check the logs on the cluster autoscaler Pod, replacing POD_NAME with the name of the Pod from the previous command:

```
kubectl --kubeconfig ADMIN_CLUSTER_KUBECONFIG logs POD_NAME --container cluster-autoscaler -n USER_CLUSTER_NAME
```

Check the configuration map

The cluster autoscaler publishes the `kube-system/cluster-autoscaler-status` configuration map. To see this map, run this command, replacing USER_CLUSTER_KUBECONFIG with the path to your user cluster kubeconfig file:

```
kubectl --kubeconfig USER_CLUSTER_KUBECONFIG get configmap cluster-autoscaler-status -n kube-system -o yaml
```

Check cluster autoscale events

You can check [cluster autoscale events](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-events-are-emitted-by-ca):

- On Pods (particularly those that cannot be scheduled, or on underutilized nodes)
- On nodes
- On the `kube-system/cluster-autoscaler-status` config map

Troubleshooting

See the following troubleshooting information for cluster autoscaler:

- You might be experiencing one of the limitations for cluster autoscaler.
- If you are having problems with downscaling your cluster, see Pod scheduling and disruption. You might have to add a `PodDisruptionBudget` for the `kube-system` Pods. For more information about manually adding a `PodDisruptionBudget` for the `kube-system` Pods, see the [Kubernetes cluster autoscaler FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-to-set-pdbs-to-enable-ca-to-move-kube-system-pods).
- When scaling down, cluster autoscaler respects scheduling and eviction rules set on Pods. These restrictions can prevent a node from being deleted by the autoscaler. A node's deletion could be prevented if it contains a Pod with any of these conditions:
  - The Pod's affinity or anti-affinity rules prevent rescheduling.
  - The Pod has local storage.
  - The Pod is not managed by a controller such as a Deployment, StatefulSet, Job, or ReplicaSet.

For more information about cluster autoscaler and preventing disruptions, see the following questions in the [Kubernetes cluster autoscaler FAQ](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md):

- [How does scale-down work?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#how-does-scale-down-work)
- [Does Cluster autoscaler work with PodDisruptionBudget in scale-down?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#does-ca-work-with-poddisruptionbudget-in-scale-down)
- [What types of Pods can prevent Cluster autoscaler from removing a node?](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-types-of-pods-can-prevent-ca-from-removing-a-node)

Last updated 2025-09-04 UTC.