Reduce latency by using compact placement policies

This document describes how to reduce network latency among your Compute Engine instances by creating compact placement policies and applying them to those instances. To learn more about placement policies, including their supported machine series, restrictions, and pricing, see Placement policies overview.

A compact placement policy specifies that your compute instances should be physically placed closer to each other. This can help improve performance and reduce network latency among your compute instances when, for example, you run high performance computing (HPC), machine learning (ML), or database server workloads.

Before you begin

  • If you haven't already, set up authentication . Authentication verifies your identity for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

    Select the tab for how you plan to use the samples on this page:

    Console

    When you use the Google Cloud console to access Google Cloud services and APIs, you don't need to set up authentication.

    gcloud

    1. Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity .

    2. Set a default region and zone .

    REST

    To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

      Install the Google Cloud CLI. After installation, initialize the Google Cloud CLI by running the following command:

      gcloud init

      If you're using an external identity provider (IdP), you must first sign in to the gcloud CLI with your federated identity .

    For more information, see Authenticate for using REST in the Google Cloud authentication documentation.

Required roles

To get the permissions that you need to create and apply a compact placement policy to compute instances, ask your administrator to grant you the following IAM roles on your project:

For more information about granting roles, see Manage access to projects, folders, and organizations .

These predefined roles contain the permissions required to create and apply a compact placement policy to compute instances. To see the exact permissions that are required, expand the Required permissions section:

Required permissions

The following permissions are required to create and apply a compact placement policy to compute instances:

  • To create placement policies: compute.resourcePolicies.create on the project
  • To apply a placement policy to existing compute instances: compute.instances.addResourcePolicies on the project
  • To create compute instances:
    • compute.instances.create on the project
    • To use a custom image to create the VM: compute.images.useReadOnly on the image
    • To use a snapshot to create the VM: compute.snapshots.useReadOnly on the snapshot
    • To use an instance template to create the VM: compute.instanceTemplates.useReadOnly on the instance template
    • To assign a legacy network to the VM: compute.networks.use on the project
    • To specify a static IP address for the VM: compute.addresses.use on the project
    • To assign an external IP address to the VM when using a legacy network: compute.networks.useExternalIp on the project
    • To specify a subnet for the VM: compute.subnetworks.use on the project or on the chosen subnet
    • To assign an external IP address to the VM when using a VPC network: compute.subnetworks.useExternalIp on the project or on the chosen subnet
    • To set VM instance metadata for the VM: compute.instances.setMetadata on the project
    • To set tags for the VM: compute.instances.setTags on the VM
    • To set labels for the VM: compute.instances.setLabels on the VM
    • To set a service account for the VM to use: compute.instances.setServiceAccount on the VM
    • To create a new disk for the VM: compute.disks.create on the project
    • To attach an existing disk in read-only or read-write mode: compute.disks.use on the disk
    • To attach an existing disk in read-only mode: compute.disks.useReadOnly on the disk
  • To create a reservation: compute.reservations.create on the project
  • To create an instance template: compute.instanceTemplates.create on the project
  • To create a managed instance group (MIG): compute.instanceGroupManagers.create on the project
  • To view the details of a compute instance: compute.instances.get on the project

You might also be able to get these permissions with custom roles or other predefined roles .

Create a compact placement policy

Before you create a compact placement policy, consider the following:

  • If you want to apply a compact placement policy to a compute instance other than M3, M2, M1, N2D, or N2, then we recommend that you specify a maximum distance value .

  • If you want to apply a compact placement policy to an A3 Mega, A3 High, or A3 Edge instance that was created before October 1, 2025, then contact your account team.

To create a compact placement policy, select one of the following options:

gcloud

  • To apply the compact placement policy to M3, M2, M1, N2D, or N2 instances, create the policy by using the gcloud compute resource-policies create group-placement command with the --collocation=collocated flag.

     gcloud compute resource-policies create group-placement POLICY_NAME \
        --collocation=collocated \
        --region=REGION

    Replace the following:

    • POLICY_NAME : the name of the compact placement policy.

    • REGION : the region in which to create the placement policy.

  • To apply the compact placement policy to any other supported compute instances, create the policy by using the gcloud beta compute resource-policies create group-placement command with the --collocation=collocated and --max-distance flags.

    Preview — The --max-distance flag

    This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms . Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions .

     gcloud beta compute resource-policies create group-placement POLICY_NAME \
        --collocation=collocated \
        --max-distance=MAX_DISTANCE \
        --region=REGION

    Replace the following:

    • POLICY_NAME : the name of the compact placement policy.

    • MAX_DISTANCE : the maximum distance configuration for your compute instances. The value must be between 1 , which specifies to place your compute instances in the same rack for the lowest network latency possible, and 3 , which specifies to place your compute instances in adjacent clusters.

    • REGION : the region in which to create the placement policy.

REST

  • To apply the compact placement policy to M3, M2, M1, N2D, or N2 instances, create the policy by making a POST request to the resourcePolicies.insert method . In the request body, include the collocation field and set it to COLLOCATED .

     POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/resourcePolicies

    {
      "name": "POLICY_NAME",
      "groupPlacementPolicy": {
        "collocation": "COLLOCATED"
      }
    }

    Replace the following:

    • PROJECT_ID : the ID of the project where you want to create the placement policy.

    • REGION : the region in which to create the placement policy.

    • POLICY_NAME : the name of the compact placement policy.

  • To apply the compact placement policy to any other supported compute instances, create the policy by making a POST request to the beta.resourcePolicies.insert method . In the request body, include the following:

    • The collocation field set to COLLOCATED .

    • The maxDistance field.

    Preview — The maxDistance field

    This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms . Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions .

     POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/regions/REGION/resourcePolicies

    {
      "name": "POLICY_NAME",
      "groupPlacementPolicy": {
        "collocation": "COLLOCATED",
        "maxDistance": MAX_DISTANCE
      }
    }

    Replace the following:

    • PROJECT_ID : the ID of the project where you want to create the placement policy.

    • REGION : the region in which to create the placement policy.

    • POLICY_NAME : the name of the compact placement policy.

    • MAX_DISTANCE : the maximum distance configuration for your compute instances. The value must be between 1 , which specifies to place your compute instances in the same rack for the lowest network latency possible, and 3 , which specifies to place your compute instances in adjacent clusters.
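The REST request bodies on this page are plain JSON, so you can assemble and validate them client-side before sending them with any HTTP client. The following Python sketch (the helper name is hypothetical) builds the beta request body shown above and enforces the 1 to 3 range for maxDistance:

```python
import json

def compact_policy_body(policy_name, max_distance=None):
    """Build a resourcePolicies.insert request body for a compact placement
    policy. max_distance is optional; when set, it must be between 1 and 3,
    matching the MAX_DISTANCE range described above."""
    placement = {"collocation": "COLLOCATED"}
    if max_distance is not None:
        if not 1 <= max_distance <= 3:
            raise ValueError("maxDistance must be between 1 and 3")
        placement["maxDistance"] = max_distance
    return {"name": policy_name, "groupPlacementPolicy": placement}

print(json.dumps(compact_policy_body("my-compact-policy", max_distance=2), indent=2))
```

Omitting `max_distance` produces the v1-style body without the maxDistance field.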

Apply a compact placement policy

You can apply a compact placement policy to an existing compute instance or managed instance group (MIG), or when creating compute instances, instance templates, MIGs, or reservations of compute instances.

To apply a compact placement policy to a Compute Engine resource, select one of the following methods:

After you apply a compact placement policy to an instance, you can verify the physical location of the instance in relation to other instances that specify the same placement policy.

Apply the policy to an existing instance

Before applying a compact placement policy to an existing compute instance, make sure of the following:

Otherwise, applying the compact placement policy to the compute instance fails. If the compute instance already specifies a placement policy and you want to replace it, then see Replace a placement policy in a compute instance instead.

To apply a compact placement policy to an existing compute instance, select one of the following options:

gcloud

  1. Stop the compute instance .

  2. To apply a compact placement policy to an existing compute instance, use the gcloud compute instances add-resource-policies command .

     gcloud compute instances add-resource-policies INSTANCE_NAME \
        --resource-policies=POLICY_NAME \
        --zone=ZONE

    Replace the following:

    • INSTANCE_NAME : the name of an existing compute instance.

    • POLICY_NAME : the name of an existing compact placement policy.

    • ZONE : the zone where the compute instance exists.

  3. Restart the instance .

REST

  1. Stop the compute instance .

  2. To apply a compact placement policy to an existing compute instance, make a POST request to the instances.addResourcePolicies method .

     POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME/addResourcePolicies

    {
      "resourcePolicies": [
        "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
      ]
    }

    Replace the following:

    • PROJECT_ID : the ID of the project where the compact placement policy and the compute instance exist.

    • ZONE : the zone where the compute instance exists.

    • INSTANCE_NAME : the name of an existing compute instance.

    • REGION : the region where the compact placement policy is located.

    • POLICY_NAME : the name of an existing compact placement policy.

  3. Restart the compute instance .
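In the request body, the policy must be referenced by its full regional resource path. As a minimal sketch (the function name is hypothetical), the body can be assembled like this:

```python
def add_resource_policies_body(project_id, region, policy_name):
    """Build the instances.addResourcePolicies request body shown above,
    referencing the policy by its full regional resource path."""
    policy_path = (f"projects/{project_id}/regions/{region}"
                   f"/resourcePolicies/{policy_name}")
    return {"resourcePolicies": [policy_path]}

print(add_resource_policies_body("my-project", "us-central1", "my-compact-policy"))
```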

Apply the policy while creating an instance

You can create a compute instance that specifies a compact placement policy only in the same region as the placement policy.

To create a compute instance that specifies a compact placement policy, select one of the following options:

gcloud

To create a compute instance that specifies a compact placement policy, use the gcloud compute instances create command with the --maintenance-policy and --resource-policies flags.

 gcloud compute instances create INSTANCE_NAME \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --resource-policies=POLICY_NAME \
    --zone=ZONE

Replace the following:

  • INSTANCE_NAME : the name of the compute instance to create.

  • MACHINE_TYPE : a supported machine type for compact placement policies.

  • MAINTENANCE_POLICY : the host maintenance policy of the compute instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2 , or your chosen machine type doesn't support live migration, then you can only specify TERMINATE . Otherwise, you can specify MIGRATE or TERMINATE .

  • POLICY_NAME : the name of an existing compact placement policy.

  • ZONE : the zone in which to create the compute instance.
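The MAINTENANCE_POLICY constraint above reduces to a simple rule. This Python sketch expresses it (the function and its inputs are illustrative; you supply the policy's maximum distance value and whether the machine type supports live migration):

```python
def allowed_maintenance_policies(max_distance=None, supports_live_migration=True):
    """Return the host maintenance policies permitted by the rule above:
    a maximum distance of 1 or 2, or a machine type without live
    migration support, restricts the choice to TERMINATE."""
    if max_distance in (1, 2) or not supports_live_migration:
        return ["TERMINATE"]
    return ["MIGRATE", "TERMINATE"]

print(allowed_maintenance_policies(max_distance=1))  # → ['TERMINATE']
```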

REST

To create a compute instance that specifies a compact placement policy, make a POST request to the instances.insert method . In the request body, include the onHostMaintenance and resourcePolicies fields.

 POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "INSTANCE_NAME",
  "machineType": "zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "boot": true,
      "initializeParams": {
        "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
      }
    }
  ],
  "networkInterfaces": [
    {
      "network": "global/networks/default"
    }
  ],
  "resourcePolicies": [
    "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
  ],
  "scheduling": {
    "onHostMaintenance": "MAINTENANCE_POLICY"
  }
}

Replace the following:

  • PROJECT_ID : the ID of the project where the compact placement policy is located.

  • ZONE : the zone in which to create the compute instance and where the machine type is available. You can only specify a zone in the region of the compact placement policy.

  • INSTANCE_NAME : the name of the compute instance to create.

  • MACHINE_TYPE : a supported machine type for compact placement policies.

  • IMAGE_PROJECT : the image project that contains the image—for example, debian-cloud . For more information about the supported image projects, see Public images .

  • IMAGE : specify one of the following:

    • A specific version of the OS image—for example, debian-12-bookworm-v20240617 .

    • An image family , which must be formatted as family/ IMAGE_FAMILY . This specifies the most recent, non-deprecated OS image. For example, if you specify family/debian-12 , the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices .

  • REGION : the region where the compact placement policy is located.

  • POLICY_NAME : the name of an existing compact placement policy.

  • MAINTENANCE_POLICY : the host maintenance policy of the compute instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2 , or your chosen machine type doesn't support live migration, then you can only specify TERMINATE . Otherwise, you can specify MIGRATE or TERMINATE .

For more information about the configuration options to create a compute instance, see Create and start a compute instance .
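Because the zone must belong to the placement policy's region, a quick client-side check can catch mismatches before the API call. This sketch (the function name is hypothetical) relies on the convention that a zone name is its region name plus a -<letter> suffix, for example us-central1-a is in us-central1:

```python
def zone_in_region(zone, region):
    """Check that a zone belongs to a region, relying on the naming
    convention that a zone name is the region name plus a '-<letter>'
    suffix (for example, 'us-central1-a' is in 'us-central1')."""
    return zone.rsplit("-", 1)[0] == region

print(zone_in_region("us-central1-a", "us-central1"))  # → True
```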

Apply the policy while creating instances in bulk

You can create compute instances in bulk with a compact placement policy only in the same region as the placement policy.

To create compute instances in bulk that specify a compact placement policy, select one of the following options:

gcloud

To create compute instances in bulk that specify a compact placement policy, use the gcloud compute instances bulk create command with the --maintenance-policy and --resource-policies flags.

For example, to create compute instances in bulk in a single zone and specify a name pattern for the compute instances, run the following command:

 gcloud compute instances bulk create \
    --count=COUNT \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --name-pattern=NAME_PATTERN \
    --resource-policies=POLICY_NAME \
    --zone=ZONE

Replace the following:

  • COUNT : the number of compute instances to create, which can't be higher than the supported maximum number of compute instances of the specified compact placement policy.

  • MACHINE_TYPE : a supported machine type for compact placement policies.

  • MAINTENANCE_POLICY : the host maintenance policy of the compute instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2 , or your chosen machine type doesn't support live migration, then you can only specify TERMINATE . Otherwise, you can specify MIGRATE or TERMINATE .

  • NAME_PATTERN : the name pattern for the compute instances. To replace a sequence of numbers in a compute instance name, use a sequence of hash ( # ) characters. For example, using vm-# for the name pattern generates instances with names starting with vm-1 , vm-2 , and continuing up to the number of compute instances specified by COUNT .

  • POLICY_NAME : the name of an existing compact placement policy.

  • ZONE : the zone in which to create the compute instances in bulk.
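The NAME_PATTERN expansion described above can be sketched as follows. Replacing the run of # characters with the sequence number is documented behavior; zero-padding the number to the width of the run for multi-# patterns is an assumption of this sketch:

```python
import re

def expand_name_pattern(pattern, count):
    """Expand a bulk-create name pattern: the run of '#' characters is
    replaced with the sequence number 1..count, padded (an assumption
    for multi-# patterns) to the width of the '#' run."""
    match = re.search(r"#+", pattern)
    if not match:
        raise ValueError("pattern must contain at least one '#'")
    width = len(match.group())
    return [pattern[:match.start()] + str(i).zfill(width) + pattern[match.end():]
            for i in range(1, count + 1)]

print(expand_name_pattern("vm-#", 3))  # → ['vm-1', 'vm-2', 'vm-3']
```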

REST

To create compute instances in bulk that specify a compact placement policy, make a POST request to the instances.bulkInsert method . In the request body, include the onHostMaintenance and resourcePolicies fields.

For example, to create compute instances in bulk in a single zone and specify a name pattern for the compute instances, make a POST request as follows:

 POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/bulkInsert

{
  "count": "COUNT",
  "namePattern": "NAME_PATTERN",
  "instanceProperties": {
    "machineType": "MACHINE_TYPE",
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
        }
      }
    ],
    "networkInterfaces": [
      {
        "network": "global/networks/default"
      }
    ],
    "resourcePolicies": [
      "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
    ],
    "scheduling": {
      "onHostMaintenance": "MAINTENANCE_POLICY"
    }
  }
}

Replace the following:

  • PROJECT_ID : the ID of the project where the compact placement policy is located.

  • ZONE : the zone in which to create the compute instances in bulk.

  • COUNT : the number of compute instances to create, which can't be higher than the supported maximum number of instances of the specified compact placement policy.

  • NAME_PATTERN : the name pattern for the compute instances. To replace a sequence of numbers in an instance name, use a sequence of hash ( # ) characters. For example, using vm-# for the name pattern generates compute instances with names starting with vm-1 , vm-2 , and continuing up to the number of compute instances specified by COUNT .

  • MACHINE_TYPE : a supported machine type for compact placement policies.

  • IMAGE_PROJECT : the image project that contains the image—for example, debian-cloud . For more information about the supported image projects, see Public images .

  • IMAGE : specify one of the following:

    • A specific version of the OS image—for example, debian-12-bookworm-v20240617 .

    • An image family , which must be formatted as family/ IMAGE_FAMILY . This specifies the most recent, non-deprecated OS image. For example, if you specify family/debian-12 , the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices .

  • REGION : the region where the compact placement policy is located.

  • POLICY_NAME : the name of an existing compact placement policy.

  • MAINTENANCE_POLICY : the host maintenance policy of the instance. If the compact placement policy you specify uses a maximum distance value of 1 or 2 , or your chosen machine type doesn't support live migration, then you can only specify TERMINATE . Otherwise, you can specify MIGRATE or TERMINATE .

For more information about the configuration options to create compute instances in bulk, see Create compute instances in bulk .

Apply the policy while creating a reservation

If you want to create an on-demand, single-project reservation that specifies a compact placement policy, then you must create a specifically targeted reservation . When you create compute instances to consume the reservation, make sure of the following:

To create a single-project reservation with a compact placement policy, select one of the following methods:

To create a single-project reservation with a compact placement policy by specifying properties directly, select one of the following options:

Console

  1. In the Google Cloud console, go to the Reservations page.

    Go to Reservations

  2. Click Create reservation. The Create a reservation page appears.

  3. In the Name field, enter a name for your reservation.

  4. In the Region and Zone lists, select the region and zone where you want to reserve resources.

  5. In the Use with VM instance section, select Select specific reservation.

  6. In the VM instances and GPUs section, do the following:

    1. In the Number of VM instances field, enter the number of VMs to reserve.

    2. Specify a machine series and type that supports compact placement policies.

  7. In the Group placement policy section, click the Select or create a group placement policy list, and then do one of the following:

    • To create a compact placement policy, complete the following steps:

      1. Click Create group placement policy. The Create a group placement policy pane appears.

      2. In the Policy name field, enter a name for your policy.

      3. Click Create.

    • To select an existing compact placement policy, select a policy that exists in the same region where you want to reserve instances.

  8. To create the reservation, click Create.

gcloud

To create a single-project reservation with a compact placement policy by specifying properties directly, use the gcloud compute reservations create command with the --require-specific-reservation and --resource-policies=policy flags.

 gcloud compute reservations create RESERVATION_NAME \
    --machine-type=MACHINE_TYPE \
    --require-specific-reservation \
    --resource-policies=policy=POLICY_NAME \
    --vm-count=NUMBER_OF_INSTANCES \
    --zone=ZONE

Replace the following:

  • RESERVATION_NAME : the name of the reservation.

  • MACHINE_TYPE : a supported machine type for compact placement policies.

  • POLICY_NAME : the name of an existing compact placement policy.

  • NUMBER_OF_INSTANCES : the number of compute instances to reserve, which can't be higher than the supported maximum number of instances of the specified compact placement policy.

  • ZONE : the zone in which to reserve compute instances. You can only reserve compute instances in a zone in the region of the specified compact placement policy.

REST

To create a single-project reservation with a compact placement policy by specifying properties directly, make a POST request to the reservations.insert method . In the request body, include the resourcePolicies field, and the specificReservationRequired field set to true .

 POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/reservations

{
  "name": "RESERVATION_NAME",
  "resourcePolicies": {
    "policy": "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
  },
  "specificReservation": {
    "count": "NUMBER_OF_INSTANCES",
    "instanceProperties": {
      "machineType": "MACHINE_TYPE"
    }
  },
  "specificReservationRequired": true
}

Replace the following:

  • PROJECT_ID : the ID of the project where the compact placement policy is located.

  • ZONE : the zone in which to reserve compute instances. You can only reserve compute instances in a zone in the region of the specified compact placement policy.

  • RESERVATION_NAME : the name of the reservation.

  • REGION : the region where the compact placement policy is located.

  • POLICY_NAME : the name of an existing compact placement policy.

  • NUMBER_OF_INSTANCES : the number of compute instances to reserve, which can't be higher than the supported maximum number of compute instances of the specified compact placement policy.

  • MACHINE_TYPE : a supported machine type for compact placement policies.

For more information about the configuration options to create single-project reservations, see Create a reservation for a single project .
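As with the earlier request bodies, the reservation payload is plain JSON that can be assembled client-side. A sketch (the helper name is hypothetical; field names are taken from the example above):

```python
def reservation_body(project_id, region, reservation_name, policy_name,
                     machine_type, count):
    """Build the reservations.insert request body shown above: a
    specifically targeted reservation that references the compact
    placement policy by its full regional resource path."""
    return {
        "name": reservation_name,
        "resourcePolicies": {
            "policy": f"projects/{project_id}/regions/{region}"
                      f"/resourcePolicies/{policy_name}",
        },
        "specificReservation": {
            "count": str(count),
            "instanceProperties": {"machineType": machine_type},
        },
        "specificReservationRequired": True,
    }

body = reservation_body("my-project", "us-central1", "my-reservation",
                        "my-compact-policy", "c2-standard-60", 4)
```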

Apply the policy while creating an instance template

If you want to create a regional instance template, then the template and the compact placement policy must be in the same region. Otherwise, creating the instance template fails.

After you create an instance template that specifies a compact placement policy, you can use the template to do the following:

To create an instance template that specifies a compact placement policy, use one of the following methods:

  • To create a regional instance template with a new compact placement policy, use the Google Cloud console. The new policy doesn't specify a maximum distance value, and you can apply it to a maximum of 22 instances.

  • To create a global or regional instance template that specifies an existing compact placement policy, use the gcloud CLI or REST API.

To create the instance template, select one of the following options:

Console

  1. In the Google Cloud console, go to the Instance templates page.

    Go to Instance templates

  2. Click Create instance template. The Create an instance template page appears.

  3. In the Region field, specify the region where you want to create the compact placement policy and the instance template.

  4. In the Machine configuration section, specify a supported machine series and type for compact placement policies.

  5. In the Placement policy section, in the Placement policy list, select Compact. When you select this option, the Google Cloud console automatically generates a compact placement policy without a maximum distance value and that supports up to 22 instances.

  6. Click Create.

gcloud

To create an instance template that specifies a compact placement policy, use the gcloud compute instance-templates create command with the --maintenance-policy and --resource-policies flags.

For example, to create a global instance template that specifies a compact placement policy, run the following command:

 gcloud compute instance-templates create INSTANCE_TEMPLATE_NAME \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --resource-policies=POLICY_NAME

Replace the following:

  • INSTANCE_TEMPLATE_NAME : the name of the instance template.

  • MACHINE_TYPE : a supported machine type for compact placement policies.

  • MAINTENANCE_POLICY : the host maintenance policy . If the compact placement policy you specify uses a maximum distance value of 1 or 2 , or your chosen machine type doesn't support live migration, then you can only specify TERMINATE . Otherwise, you can specify MIGRATE or TERMINATE .

  • POLICY_NAME : the name of an existing compact placement policy.

REST

To create an instance template that specifies a compact placement policy, make a POST request to one of the following methods:

In the request body, include the onHostMaintenance and resourcePolicies fields.

For example, to create a global instance template that specifies a compact placement policy, make a POST request as follows:

 POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "INSTANCE_TEMPLATE_NAME",
  "properties": {
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
        }
      }
    ],
    "machineType": "MACHINE_TYPE",
    "networkInterfaces": [
      {
        "network": "global/networks/default"
      }
    ],
    "resourcePolicies": [
      "POLICY_NAME"
    ],
    "scheduling": {
      "onHostMaintenance": "MAINTENANCE_POLICY"
    }
  }
}

Replace the following:

  • PROJECT_ID : the ID of the project where the compact placement policy is located.

  • INSTANCE_TEMPLATE_NAME : the name of the instance template.

  • IMAGE_PROJECT : the image project that contains the image—for example, debian-cloud . For more information about the supported image projects, see Public images .

  • IMAGE : specify one of the following:

    • A specific version of the OS image—for example, debian-12-bookworm-v20240617 .

    • An image family , which must be formatted as family/ IMAGE_FAMILY . This specifies the most recent, non-deprecated OS image. For example, if you specify family/debian-12 , the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices .

  • MACHINE_TYPE : a supported machine type for compact placement policies.

  • POLICY_NAME : the name of an existing compact placement policy.

  • MAINTENANCE_POLICY : the host maintenance policy . If the compact placement policy you specify uses a maximum distance value of 1 or 2 , or your chosen machine type doesn't support live migration, then you can only specify TERMINATE . Otherwise, you can specify MIGRATE or TERMINATE .

For more information about the configuration options to create an instance template, see Create instance templates .

Apply the policy to instances in a MIG

After you create an instance template that specifies a compact placement policy, you can use the template to do the following:

Apply the policy while creating a MIG

You can create compute instances that specify a compact placement policy only in the same region as the placement policy.

To create a MIG by using an instance template that specifies a compact placement policy, select one of the following options:

gcloud

To create a MIG by using an instance template that specifies a compact placement policy, use the gcloud compute instance-groups managed create command .

For example, to create a zonal MIG by using a global instance template that specifies a compact placement policy, run the following command:

 gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
    --size=SIZE \
    --template=INSTANCE_TEMPLATE_NAME \
    --zone=ZONE

Replace the following:

  • INSTANCE_GROUP_NAME : the name of the MIG to create.

  • SIZE : the size of the MIG.

  • INSTANCE_TEMPLATE_NAME : the name of an existing global instance template that specifies a compact placement policy.

  • ZONE : the zone in which to create the MIG, which must be in the region where the compact placement policy is located.

REST

To create a MIG by using an instance template that specifies a compact placement policy, make a POST request to one of the following methods:

For example, to create a zonal MIG by using a global instance template that specifies a compact placement policy, make a POST request as follows:

 POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers

{
  "name": "INSTANCE_GROUP_NAME",
  "targetSize": SIZE,
  "versions": [
    {
      "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE_NAME"
    }
  ]
}

Replace the following:

  • PROJECT_ID : the ID of the project where the compact placement policy and the instance template that specifies the placement policy are located.

  • ZONE : the zone in which to create the MIG, which must be in the region where the compact placement policy is located.

  • INSTANCE_GROUP_NAME : the name of the MIG to create.

  • INSTANCE_TEMPLATE_NAME : the name of an existing global instance template that specifies a compact placement policy.

  • SIZE : the size of the MIG.

For more information about the configuration options to create MIGs, see Basic scenarios for creating MIGs .
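The REST request body above can also be assembled programmatically before sending it. The following Python sketch builds the same URL and JSON payload; the project ID, zone, and resource names are hypothetical placeholders, not values from this document:

```python
import json

# Hypothetical values -- replace with your own project, zone, and resources.
project_id = "example-project"
zone = "us-central1-a"

# Request body for the zonal instanceGroupManagers.insert method,
# mirroring the REST example above.
body = {
    "name": "example-mig",
    "targetSize": 3,
    "versions": [
        {"instanceTemplate": "global/instanceTemplates/example-template"}
    ],
}

url = (
    f"https://compute.googleapis.com/compute/v1/projects/{project_id}"
    f"/zones/{zone}/instanceGroupManagers"
)

print(url)
print(json.dumps(body, indent=2))
```

You would then send this payload in an authenticated POST request, for example with a client library or curl.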

Apply the policy to an existing MIG

You can only apply a compact placement policy to an existing MIG if the MIG is located in the same region as the placement policy or, for zonal MIGs, in a zone in the same region as the placement policy.

To update a MIG to use an instance template that specifies a compact placement policy, select one of the following options:

gcloud

To update a MIG to use an instance template that specifies a compact placement policy, use the gcloud compute instance-groups managed rolling-action start-update command .

For example, to update a zonal MIG to use an instance template that specifies a compact placement policy, and to replace the MIG's existing compute instances with new instances that use the template's properties, run the following command:

 gcloud compute instance-groups managed rolling-action start-update MIG_NAME \
    --version=template=INSTANCE_TEMPLATE_NAME \
    --zone=ZONE

Replace the following:

  • MIG_NAME : the name of an existing MIG.

  • INSTANCE_TEMPLATE_NAME : the name of an existing global instance template that specifies a compact placement policy.

  • ZONE : the zone where the MIG is located. You can only apply the compact placement policy to a MIG located in the same region as the placement policy.

REST

To update a MIG to use an instance template that specifies a compact placement policy, and automatically apply the properties of the template and the placement policy to existing compute instances in the MIG, make a PATCH request to the instanceGroupManagers.patch method for a zonal MIG, or to the regionInstanceGroupManagers.patch method for a regional MIG.

For example, to update a zonal MIG to use a global instance template that specifies a compact placement policy, and to replace the MIG's existing compute instances with new instances that use the template's properties, make the following PATCH request:

 PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers/MIG_NAME

 {
   "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE_NAME"
 }

Replace the following:

  • PROJECT_ID : the ID of the project where the MIG, the compact placement policy, and the instance template that specifies the placement policy are located.

  • ZONE : the zone where the MIG is located. You can only apply the compact placement policy to a MIG located in the same region as the placement policy.

  • MIG_NAME : the name of an existing MIG.

  • INSTANCE_TEMPLATE_NAME : the name of an existing global instance template that specifies a compact placement policy.

For more information about the configuration options to update the compute instances in a MIG, see Update and apply new configurations to instances in a MIG .
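As with MIG creation, the PATCH request above can be built programmatically. The following Python sketch assembles the URL and body, and also checks the constraint stated earlier: the MIG's zone must lie in the placement policy's region. All names and values are hypothetical placeholders:

```python
import json

# Hypothetical values -- substitute your own project, zone, and MIG name.
project_id = "example-project"
policy_region = "us-central1"
zone = "us-central1-a"
mig_name = "example-mig"

# The doc's constraint: the MIG's zone must be in the placement policy's
# region. A zone name is its region name plus a one-letter suffix.
assert zone.rsplit("-", 1)[0] == policy_region

# PATCH body that points the MIG at a new instance template,
# as in the REST example above.
patch_body = {
    "instanceTemplate": "global/instanceTemplates/example-compact-template"
}

patch_url = (
    f"https://compute.googleapis.com/compute/v1/projects/{project_id}"
    f"/zones/{zone}/instanceGroupManagers/{mig_name}"
)

print(patch_url)
print(json.dumps(patch_body))
```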

Verify the physical location of an instance

After applying a compact placement policy to a compute instance, you can view the compute instance's physical location in relation to other instances. This comparison is limited to compute instances that exist in your project and that specify the same compact placement policy. Viewing the physical location of a compute instance helps you do the following:

  • Confirm that the policy was successfully applied.

  • Identify which compute instances are closest to each other.

To view the physical location of a compute instance in relation to other compute instances that specify the same compact placement policy, select one of the following options:

gcloud

To view the physical location of a compute instance that specifies a compact placement policy, use the gcloud compute instances describe command with the --format flag.

 gcloud compute instances describe INSTANCE_NAME \
    --format="table[box,title=VM-Position](resourcePolicies.scope():sort=1,resourceStatus.physicalHost:label=location)" \
    --zone=ZONE

Replace the following:

  • INSTANCE_NAME : the name of an existing compute instance that specifies a compact placement policy.

  • ZONE : the zone where the compute instance exists.

The output is similar to the following:

 VM-Position

 RESOURCE_POLICIES: us-central1/resourcePolicies/example-policy
 PHYSICAL_HOST: /CCCCCCC/BBBBBB/AAAA

The value of the PHYSICAL_HOST field is composed of three parts, which represent the cluster, rack, and host where the compute instance exists.

When comparing the position of two compute instances that use the same compact placement policy in your project, the more parts of the PHYSICAL_HOST field the compute instances share, the closer they are physically located to each other. For example, assume that two compute instances both specify one of the following sample values for the PHYSICAL_HOST field:

  • /CCCCCCC/ xxxxxx/xxxx : the two compute instances are placed in the same cluster, which equals a maximum distance value of 2 . Compute instances placed in the same cluster experience low network latency.

  • /CCCCCCC/BBBBBB/ xxxx : the two compute instances are placed in the same rack, which equals a maximum distance value of 1 . Compute instances placed in the same rack experience lower network latency than compute instances placed in the same cluster.

  • /CCCCCCC/BBBBBB/AAAA : the two compute instances share the same host. Compute instances placed in the same host minimize network latency as much as possible.
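The prefix comparison described above can be automated. The following Python sketch (the sample values are illustrative, modeled on the output format above) counts how many leading parts two PHYSICAL_HOST values share; more shared parts means closer physical placement:

```python
def shared_placement_parts(host_a: str, host_b: str) -> int:
    """Count how many leading parts of two physicalHost values match.

    A physicalHost value looks like "/CCCCCCC/BBBBBB/AAAA", where the
    parts denote the cluster, rack, and host, in that order. A result
    of 1 means same cluster, 2 means same rack, 3 means same host.
    """
    parts_a = host_a.strip("/").split("/")
    parts_b = host_b.strip("/").split("/")
    shared = 0
    for a, b in zip(parts_a, parts_b):
        if a != b:
            break
        shared += 1
    return shared

# Illustrative sample values, not real physicalHost identifiers.
same_rack = shared_placement_parts("/CCCCCCC/BBBBBB/AAAA", "/CCCCCCC/BBBBBB/DDDD")
same_cluster = shared_placement_parts("/CCCCCCC/BBBBBB/AAAA", "/CCCCCCC/EEEEEE/DDDD")

print(same_rack)     # 2: same cluster and same rack
print(same_cluster)  # 1: same cluster only
```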

REST

To view the physical location of a compute instance that specifies a compact placement policy, make a GET request to the instances.get method .

GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/INSTANCE_NAME

Replace the following:

  • PROJECT_ID : the ID of the project where the compute instance exists.

  • ZONE : the zone where the compute instance exists.

  • INSTANCE_NAME : the name of an existing compute instance that specifies a compact placement policy.

The output is similar to the following:

{
  ...
  "resourcePolicies": [
    "https://www.googleapis.com/compute/v1/projects/example-project/regions/us-central1/resourcePolicies/example-policy"
  ],
  "resourceStatus": {
    "physicalHost": "/xxxxxxxx/xxxxxx/xxxxx"
  },
  ...
}

The value of the physicalHost field is composed of three parts, which represent the cluster, rack, and host where the compute instance exists.

When comparing the position of two compute instances that use the same compact placement policy in your project, the more parts of the physicalHost field the compute instances share, the closer they are physically located to each other. For example, assume that two compute instances both specify one of the following sample values for the physicalHost field:

  • /CCCCCCC/ xxxxxx/xxxx : the two compute instances are placed in the same cluster, which equals a maximum distance value of 2 . Compute instances placed in the same cluster experience low network latency.

  • /CCCCCCC/BBBBBB/ xxxx : the two compute instances are placed in the same rack, which equals a maximum distance value of 1 . Compute instances placed in the same rack experience lower network latency than compute instances placed in the same cluster.

  • /CCCCCCC/BBBBBB/AAAA : the two compute instances share the same host. Compute instances placed in the same host minimize network latency as much as possible.

What's next?
