This document describes how to reduce network latency among your virtual machine (VM) instances by creating and applying compact placement policies to them.
A compact placement policy specifies that your VMs should be physically placed closer to each other. This can help improve performance and reduce network latency among your VMs when, for example, you run high performance computing (HPC), machine learning (ML), or database server workloads.
Before you begin
- If you haven't already, then set up authentication. Authentication is the process by which your identity is verified for access to Google Cloud services and APIs. To run code or samples from a local development environment, you can authenticate to Compute Engine by selecting one of the following options:

  Select the tab for how you plan to use the samples on this page:

  gcloud

  - Install the Google Cloud CLI, then initialize it by running the following command:

    ```
    gcloud init
    ```

  - Set a default region and zone.

  REST

  To use the REST API samples on this page in a local development environment, you use the credentials you provide to the gcloud CLI.

  Install the Google Cloud CLI, then initialize it by running the following command:

  ```
  gcloud init
  ```

  For more information, see Authenticate for using REST in the Google Cloud authentication documentation.
Required roles
To get the permissions that you need to create and apply a compact placement policy to VMs, ask your administrator to grant you the following IAM roles on your project:
- Compute Instance Admin (v1) (`roles/compute.instanceAdmin.v1`)
- To create a reservation: Compute Admin (`roles/compute.admin`)
For more information about granting roles, see Manage access to projects, folders, and organizations.
These predefined roles contain the permissions required to create and apply a compact placement policy to VMs. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to create and apply a compact placement policy to VMs:
- To create placement policies: `compute.resourcePolicies.create` on the project
- To apply a placement policy to existing VMs: `compute.instances.addResourcePolicies` on the project
- To create VMs:
  - `compute.instances.create` on the project
  - To use a custom image to create the VM: `compute.images.useReadOnly` on the image
  - To use a snapshot to create the VM: `compute.snapshots.useReadOnly` on the snapshot
  - To use an instance template to create the VM: `compute.instanceTemplates.useReadOnly` on the instance template
  - To assign a legacy network to the VM: `compute.networks.use` on the project
  - To specify a static IP address for the VM: `compute.addresses.use` on the project
  - To assign an external IP address to the VM when using a legacy network: `compute.networks.useExternalIp` on the project
  - To specify a subnet for the VM: `compute.subnetworks.use` on the project or on the chosen subnet
  - To assign an external IP address to the VM when using a VPC network: `compute.subnetworks.useExternalIp` on the project or on the chosen subnet
  - To set VM instance metadata for the VM: `compute.instances.setMetadata` on the project
  - To set tags for the VM: `compute.instances.setTags` on the VM
  - To set labels for the VM: `compute.instances.setLabels` on the VM
  - To set a service account for the VM to use: `compute.instances.setServiceAccount` on the VM
  - To create a new disk for the VM: `compute.disks.create` on the project
  - To attach an existing disk in read-only or read-write mode: `compute.disks.use` on the disk
  - To attach an existing disk in read-only mode: `compute.disks.useReadOnly` on the disk
- To create a reservation: `compute.reservations.create` on the project
- To create an instance template: `compute.instanceTemplates.create` on the project
- To create a managed instance group (MIG): `compute.instanceGroupManagers.create` on the project
- To view the details of a VM: `compute.instances.get` on the project
You might also be able to get these permissions with custom roles or other predefined roles .
Create a compact placement policy
Unless you want to apply a compact placement policy to N2 or N2D VMs, Google Cloud recommends specifying a maximum distance value when creating the policy. For more information, see How compact placement policies work .
To create a compact placement policy, select one of the following options:
gcloud
- To apply the compact placement policy to N2 or N2D VMs, create the policy using the `gcloud compute resource-policies create group-placement` command with the `--collocation=collocated` flag.

  ```
  gcloud compute resource-policies create group-placement POLICY_NAME \
      --collocation=collocated \
      --region=REGION
  ```

  Replace the following:

  - `POLICY_NAME`: the name of the compact placement policy.
  - `REGION`: the region in which to create the placement policy.
- To apply the compact placement policy to any other supported VMs, create the policy using the `gcloud beta compute resource-policies create group-placement` command with the `--collocation=collocated` and `--max-distance` flags.

  Preview — The `--max-distance` flag
  This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

  ```
  gcloud beta compute resource-policies create group-placement POLICY_NAME \
      --collocation=collocated \
      --max-distance=MAX_DISTANCE \
      --region=REGION
  ```

  Replace the following:

  - `POLICY_NAME`: the name of the compact placement policy.
  - `MAX_DISTANCE`: the maximum distance configuration for your VMs. The value must be between `1`, which specifies to place your VMs in the same rack for the lowest network latency possible, and `3`, which specifies to place your VMs in adjacent clusters. If you want to apply the compact placement policy to a reservation, then you can't specify a value of `1`.
  - `REGION`: the region in which to create the placement policy.
REST
- To apply the compact placement policy to N2 or N2D VMs, create the policy by making a `POST` request to the `resourcePolicies.insert` method. In the request body, include the `collocation` field and set it to `COLLOCATED`.

  ```
  POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/resourcePolicies

  {
    "name": "POLICY_NAME",
    "groupPlacementPolicy": {
      "collocation": "COLLOCATED"
    }
  }
  ```

  Replace the following:

  - `PROJECT_ID`: the ID of the project where you want to create the placement policy.
  - `REGION`: the region in which to create the placement policy.
  - `POLICY_NAME`: the name of the compact placement policy.
- To apply the compact placement policy to any other supported VMs, create the policy by making a `POST` request to the `beta.resourcePolicies.insert` method. In the request body, include the following:

  - The `collocation` field set to `COLLOCATED`.
  - The `maxDistance` field.

  Preview — The `maxDistance` field
  This feature is subject to the "Pre-GA Offerings Terms" in the General Service Terms section of the Service Specific Terms. Pre-GA features are available "as is" and might have limited support. For more information, see the launch stage descriptions.

  ```
  POST https://compute.googleapis.com/compute/beta/projects/PROJECT_ID/regions/REGION/resourcePolicies

  {
    "name": "POLICY_NAME",
    "groupPlacementPolicy": {
      "collocation": "COLLOCATED",
      "maxDistance": MAX_DISTANCE
    }
  }
  ```

  Replace the following:

  - `PROJECT_ID`: the ID of the project where you want to create the placement policy.
  - `REGION`: the region in which to create the placement policy.
  - `POLICY_NAME`: the name of the compact placement policy.
  - `MAX_DISTANCE`: the maximum distance configuration for your VMs. The value must be between `1`, which specifies to place your VMs in the same rack for the lowest network latency possible, and `3`, which specifies to place your VMs in adjacent clusters. If you want to apply the compact placement policy to a reservation, then you can't specify a value of `1`.
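As a quick illustration of the request bodies above, the following Python sketch builds the JSON body for creating a compact placement policy. The helper name `placement_policy_body` is hypothetical, not part of any Google Cloud client library; it only mirrors the fields shown in the examples.

```python
# Illustrative helper (not an official client): builds the request body for
# resourcePolicies.insert as shown above. Passing max_distance assumes the
# beta endpoint, since maxDistance is a Preview field.
def placement_policy_body(name, max_distance=None):
    body = {
        "name": name,
        "groupPlacementPolicy": {"collocation": "COLLOCATED"},
    }
    if max_distance is not None:
        if not 1 <= max_distance <= 3:
            raise ValueError("max_distance must be between 1 and 3")
        body["groupPlacementPolicy"]["maxDistance"] = max_distance
    return body
```

For example, `placement_policy_body("example-policy", max_distance=2)` produces a body you could send to the beta `resourcePolicies.insert` endpoint with your preferred HTTP client.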
Apply a compact placement policy
You can apply a compact placement policy to an existing VM or MIG, or when creating VMs, instance templates, MIGs, or reservations of VMs.
To apply a compact placement policy to a Compute Engine resource, select one of the following methods:
- Apply the policy to an existing VM .
- Apply the policy while creating a VM .
- Apply the policy while creating VMs in bulk .
- Apply the policy while creating a reservation .
- Apply the policy while creating an instance template .
- Apply the policy to VMs in a MIG .
After you apply a compact placement policy to a VM, you can verify the physical location of the VM in relation to other VMs that specify the same placement policy.
Apply the policy to an existing VM
Before applying a compact placement policy to an existing VM, make sure of the following:
- The VM and the compact placement policy must be located in the same region. For example, if the placement policy is located in region `us-central1`, then the VM must be located in a zone in `us-central1`. If you need to migrate a VM to another region, then see Move a VM between zones or regions.
- The VM must use a supported machine series and host maintenance policy. If the VM doesn't, then change its machine type or host maintenance policy first. Otherwise, applying the compact placement policy to the VM fails.

If the VM already specifies a placement policy and you want to replace it, then see Replace a placement policy in a VM instead.
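The same-region requirement above can be checked mechanically, because a Compute Engine zone name is its region name plus a zone suffix (for example, `us-central1-a` is in `us-central1`). The following sketch is an illustrative helper, not an official tool:

```python
# Illustrative check of the co-location requirement: a VM can use a compact
# placement policy only if the VM's zone lies in the policy's region.
def zone_in_region(zone, region):
    # Strip the trailing zone suffix (e.g. "-a") to get the region name.
    return zone.rsplit("-", 1)[0] == region

print(zone_in_region("us-central1-a", "us-central1"))   # True
print(zone_in_region("europe-west4-b", "us-central1"))  # False
```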
To apply a compact placement policy to an existing VM, select one of the following options:
gcloud
- To apply a compact placement policy to an existing VM, use the `gcloud compute instances add-resource-policies` command.

  ```
  gcloud compute instances add-resource-policies VM_NAME \
      --resource-policies=POLICY_NAME \
      --zone=ZONE
  ```

  Replace the following:

  - `VM_NAME`: the name of an existing VM.
  - `POLICY_NAME`: the name of an existing compact placement policy.
  - `ZONE`: the zone where the VM is located.
REST
- To apply a compact placement policy to an existing VM, make a `POST` request to the `instances.addResourcePolicies` method.

  ```
  POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME/addResourcePolicies

  {
    "resourcePolicies": [
      "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
    ]
  }
  ```

  Replace the following:

  - `PROJECT_ID`: the ID of the project where the compact placement policy and the VM are located.
  - `ZONE`: the zone where the VM is located.
  - `VM_NAME`: the name of an existing VM.
  - `REGION`: the region where the compact placement policy is located.
  - `POLICY_NAME`: the name of an existing compact placement policy.
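To make the URL and body structure above concrete, here is a short Python sketch that assembles both. The helper name `add_policy_request` is hypothetical; it simply reproduces the `instances.addResourcePolicies` request shape shown above:

```python
# Illustrative helper (not an official client): assembles the URL and JSON
# body for the instances.addResourcePolicies request shown above.
BASE = "https://compute.googleapis.com/compute/v1"

def add_policy_request(project, zone, vm, region, policy):
    url = f"{BASE}/projects/{project}/zones/{zone}/instances/{vm}/addResourcePolicies"
    body = {
        "resourcePolicies": [
            f"projects/{project}/regions/{region}/resourcePolicies/{policy}"
        ]
    }
    return url, body
```

You could pass the returned URL and body to any HTTP client along with an access token (for example, one obtained with `gcloud auth print-access-token`).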
Apply the policy while creating a VM
You can only create a VM that specifies a compact placement policy in the same region as the placement policy.
To create a VM that specifies a compact placement policy, select one of the following options:
gcloud
To create a VM that specifies a compact placement policy, use the `gcloud compute instances create` command with the `--maintenance-policy` and `--resource-policies` flags.

```
gcloud compute instances create VM_NAME \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --resource-policies=POLICY_NAME \
    --zone=ZONE
```

Replace the following:

- `VM_NAME`: the name of the VM to create.
- `MACHINE_TYPE`: a supported machine type for compact placement policies.
- `MAINTENANCE_POLICY`: the host maintenance policy of the VM. If the compact placement policy you specify uses a maximum distance value of `1` or `2`, then you can only specify `TERMINATE`. Otherwise, you can specify either `MIGRATE` or `TERMINATE`.
- `POLICY_NAME`: the name of an existing compact placement policy.
- `ZONE`: the zone in which to create the VM.
REST
To create a VM that specifies a compact placement policy, make a `POST` request to the `instances.insert` method. In the request body, include the `onHostMaintenance` and `resourcePolicies` fields.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances

{
  "name": "VM_NAME",
  "machineType": "zones/ZONE/machineTypes/MACHINE_TYPE",
  "disks": [
    {
      "boot": true,
      "initializeParams": {
        "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
      }
    }
  ],
  "networkInterfaces": [
    {
      "network": "global/networks/default"
    }
  ],
  "resourcePolicies": [
    "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
  ],
  "scheduling": {
    "onHostMaintenance": "MAINTENANCE_POLICY"
  }
}
```

Replace the following:

- `PROJECT_ID`: the ID of the project where the compact placement policy is located.
- `ZONE`: the zone in which to create the VM and where the machine type is located. You can only specify a zone within the region of the compact placement policy.
- `VM_NAME`: the name of the VM to create.
- `MACHINE_TYPE`: a supported machine type for compact placement policies.
- `IMAGE_PROJECT`: the image project that contains the image—for example, `debian-cloud`. For more information about the supported image projects, see Public images.
- `IMAGE`: specify one of the following:
  - A specific version of the OS image—for example, `debian-12-bookworm-v20240617`.
  - An image family, which must be formatted as `family/IMAGE_FAMILY`. This specifies the most recent, non-deprecated OS image. For example, if you specify `family/debian-12`, the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices.
- `REGION`: the region where the compact placement policy is located.
- `POLICY_NAME`: the name of an existing compact placement policy.
- `MAINTENANCE_POLICY`: the host maintenance policy of the VM. If the compact placement policy you specify uses a maximum distance value of `1` or `2`, you can only specify `TERMINATE`. Otherwise, you can specify either `MIGRATE` or `TERMINATE`.
For more information about the configuration options to create a VM, see Create and start a VM instance .
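The host maintenance constraint described for `MAINTENANCE_POLICY` can be summarized in a few lines of code. This is a minimal sketch of that rule only; the function name is illustrative, not part of any API:

```python
# Illustrative encoding of the rule above: with a maximum distance of 1 or 2,
# only TERMINATE is allowed; otherwise both MIGRATE and TERMINATE are allowed.
def allowed_maintenance_policies(max_distance):
    if max_distance in (1, 2):
        return {"TERMINATE"}
    return {"MIGRATE", "TERMINATE"}
```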
Apply the policy while creating VMs in bulk
You can only create VMs in bulk with a compact placement policy in the same region as the placement policy.
To create VMs in bulk that specify a compact placement policy, select one of the following options:
gcloud
To create VMs in bulk that specify a compact placement policy, use the `gcloud compute instances bulk create` command with the `--maintenance-policy` and `--resource-policies` flags.

For example, to create VMs in bulk in a single zone and specify a name pattern for the VMs, run the following command:

```
gcloud compute instances bulk create \
    --count=COUNT \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --name-pattern=NAME_PATTERN \
    --resource-policies=POLICY_NAME \
    --zone=ZONE
```

Replace the following:

- `COUNT`: the number of VMs to create, which can't be higher than the supported maximum number of VMs of the specified compact placement policy.
- `MACHINE_TYPE`: a supported machine type for compact placement policies.
- `MAINTENANCE_POLICY`: the host maintenance policy of the VM. If the compact placement policy you specify uses a maximum distance value of `1` or `2`, then you can only specify `TERMINATE`. Otherwise, you can specify either `MIGRATE` or `TERMINATE`.
- `NAME_PATTERN`: the name pattern for the VMs. To replace a sequence of numbers in a VM name, use a sequence of hash (`#`) characters. For example, using `vm-#` for the name pattern generates VMs with names starting with `vm-1`, `vm-2`, and continuing up to the number of VMs specified by `COUNT`.
- `POLICY_NAME`: the name of an existing compact placement policy.
- `ZONE`: the zone in which to create the VMs in bulk.
REST
To create VMs in bulk that specify a compact placement policy, make a `POST` request to the `instances.bulkInsert` method. In the request body, include the `onHostMaintenance` and `resourcePolicies` fields.

For example, to create VMs in bulk in a single zone and specify a name pattern for the VMs, make a `POST` request as follows:

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/bulkInsert

{
  "count": "COUNT",
  "namePattern": "NAME_PATTERN",
  "instanceProperties": {
    "machineType": "MACHINE_TYPE",
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
        }
      }
    ],
    "networkInterfaces": [
      {
        "network": "global/networks/default"
      }
    ],
    "resourcePolicies": [
      "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
    ],
    "scheduling": {
      "onHostMaintenance": "MAINTENANCE_POLICY"
    }
  }
}
```

Replace the following:

- `PROJECT_ID`: the ID of the project where the compact placement policy is located.
- `ZONE`: the zone in which to create the VMs in bulk.
- `COUNT`: the number of VMs to create, which can't be higher than the supported maximum number of VMs of the specified compact placement policy.
- `NAME_PATTERN`: the name pattern for the VMs. To replace a sequence of numbers in a VM name, use a sequence of hash (`#`) characters. For example, using `vm-#` for the name pattern generates VMs with names starting with `vm-1`, `vm-2`, and continuing up to the number of VMs specified by `COUNT`.
- `MACHINE_TYPE`: a supported machine type for compact placement policies.
- `IMAGE_PROJECT`: the image project that contains the image—for example, `debian-cloud`. For more information about the supported image projects, see Public images.
- `IMAGE`: specify one of the following:
  - A specific version of the OS image—for example, `debian-12-bookworm-v20240617`.
  - An image family, which must be formatted as `family/IMAGE_FAMILY`. This specifies the most recent, non-deprecated OS image. For example, if you specify `family/debian-12`, the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices.
- `REGION`: the region where the compact placement policy is located.
- `POLICY_NAME`: the name of an existing compact placement policy.
- `MAINTENANCE_POLICY`: the host maintenance policy of the VM. If the compact placement policy you specify uses a maximum distance value of `1` or `2`, you can only specify `TERMINATE`. Otherwise, you can specify either `MIGRATE` or `TERMINATE`.
For more information about the configuration options to create VMs in bulk, see Create VMs in bulk .
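To illustrate how a name pattern such as `vm-#` expands into VM names, here is a short Python sketch. It is illustrative only, not part of the gcloud CLI; it assumes each run of `#` characters becomes a counter zero-padded to the run's width:

```python
import re

# Illustrative expansion of a bulk-create name pattern: each run of '#'
# becomes a counter starting at 1, zero-padded to the number of '#'s.
def expand_name_pattern(pattern, count):
    match = re.search(r"#+", pattern)
    if not match:
        raise ValueError("pattern must contain at least one '#'")
    width = len(match.group())
    return [
        pattern[:match.start()] + str(i).zfill(width) + pattern[match.end():]
        for i in range(1, count + 1)
    ]

print(expand_name_pattern("vm-#", 3))  # ['vm-1', 'vm-2', 'vm-3']
```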
Apply the policy while creating a reservation
If you want to create an on-demand, single-project reservation that specifies a compact placement policy, then you must create a specifically targeted reservation . When you create VMs to consume the reservation, make sure of the following:
- The VMs must specify the same compact placement policy applied to the reservation.
- The VMs must specifically target the reservation to consume it. For more information, see Consume VMs from a specific reservation.
To create a single-project reservation with a compact placement policy, select one of the following methods:
- Create the reservation by specifying properties directly as described in this section.
- Apply the policy while creating an instance template as described in this document, and then create a single-project reservation by specifying the newly created instance template.
To create a single-project reservation with a compact placement policy by specifying properties directly, select one of the following options:
gcloud
To create a single-project reservation with a compact placement policy by specifying properties directly, use the `gcloud compute reservations create` command with the `--require-specific-reservation` and `--resource-policies=policy` flags.

```
gcloud compute reservations create RESERVATION_NAME \
    --machine-type=MACHINE_TYPE \
    --require-specific-reservation \
    --resource-policies=policy=POLICY_NAME \
    --vm-count=NUMBER_OF_VMS \
    --zone=ZONE
```

Replace the following:

- `RESERVATION_NAME`: the name of the reservation.
- `MACHINE_TYPE`: a supported machine type for compact placement policies.
- `POLICY_NAME`: the name of an existing compact placement policy.
- `NUMBER_OF_VMS`: the number of VMs to reserve, which can't be higher than the supported maximum number of VMs of the specified compact placement policy.
- `ZONE`: the zone in which to reserve VMs. You can only reserve VMs in a zone within the region of the specified compact placement policy.
REST
To create a single-project reservation with a compact placement policy by specifying properties directly, make a `POST` request to the `reservations.insert` method. In the request body, include the `resourcePolicies` field, and the `specificReservationRequired` field set to `true`.

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/reservations

{
  "name": "RESERVATION_NAME",
  "resourcePolicies": {
    "policy": "projects/PROJECT_ID/regions/REGION/resourcePolicies/POLICY_NAME"
  },
  "specificReservation": {
    "count": "NUMBER_OF_VMS",
    "instanceProperties": {
      "machineType": "MACHINE_TYPE"
    }
  },
  "specificReservationRequired": true
}
```

Replace the following:

- `PROJECT_ID`: the ID of the project where the compact placement policy is located.
- `ZONE`: the zone in which to reserve VMs. You can only reserve VMs in a zone within the region of the specified compact placement policy.
- `RESERVATION_NAME`: the name of the reservation.
- `REGION`: the region where the compact placement policy is located.
- `POLICY_NAME`: the name of an existing compact placement policy.
- `NUMBER_OF_VMS`: the number of VMs to reserve, which can't be higher than the supported maximum number of VMs of the specified compact placement policy.
- `MACHINE_TYPE`: a supported machine type for compact placement policies.
For more information about the configuration options to create single-project reservations, see Create a reservation for a single project .
Apply the policy while creating an instance template
If you want to create a regional instance template, then you must create the template in the same region as the compact placement policy. Otherwise, creating the instance template fails.
After creating an instance template that specifies a compact placement policy, you can use the template to create VMs or MIGs that specify that policy.
To create an instance template that specifies a compact placement policy, select one of the following options:
gcloud
To create an instance template that specifies a compact placement policy, use the `gcloud compute instance-templates create` command with the `--maintenance-policy` and `--resource-policies` flags.

For example, to create a global instance template that specifies a compact placement policy, run the following command:

```
gcloud compute instance-templates create INSTANCE_TEMPLATE_NAME \
    --machine-type=MACHINE_TYPE \
    --maintenance-policy=MAINTENANCE_POLICY \
    --resource-policies=POLICY_NAME
```

Replace the following:

- `INSTANCE_TEMPLATE_NAME`: the name of the instance template.
- `MACHINE_TYPE`: a supported machine type for compact placement policies.
- `MAINTENANCE_POLICY`: the host maintenance policy of the VM. If the compact placement policy you specify uses a maximum distance value of `1` or `2`, then you can only specify `TERMINATE`. Otherwise, you can specify either `MIGRATE` or `TERMINATE`.
- `POLICY_NAME`: the name of an existing compact placement policy.
REST
To create an instance template that specifies a compact placement policy, make a `POST` request to one of the following methods:

- To create a global instance template: `instanceTemplates.insert` method.
- To create a regional instance template: `regionInstanceTemplates.insert` method.

In the request body, include the `onHostMaintenance` and `resourcePolicies` fields.

For example, to create a global instance template that specifies a compact placement policy, make a `POST` request as follows:

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

{
  "name": "INSTANCE_TEMPLATE_NAME",
  "properties": {
    "disks": [
      {
        "boot": true,
        "initializeParams": {
          "sourceImage": "projects/IMAGE_PROJECT/global/images/IMAGE"
        }
      }
    ],
    "machineType": "MACHINE_TYPE",
    "networkInterfaces": [
      {
        "network": "global/networks/default"
      }
    ],
    "resourcePolicies": [
      "POLICY_NAME"
    ],
    "scheduling": {
      "onHostMaintenance": "MAINTENANCE_POLICY"
    }
  }
}
```

Replace the following:

- `PROJECT_ID`: the ID of the project where the compact placement policy is located.
- `INSTANCE_TEMPLATE_NAME`: the name of the instance template.
- `IMAGE_PROJECT`: the image project that contains the image—for example, `debian-cloud`. For more information about the supported image projects, see Public images.
- `IMAGE`: specify one of the following:
  - A specific version of the OS image—for example, `debian-12-bookworm-v20240617`.
  - An image family, which must be formatted as `family/IMAGE_FAMILY`. This specifies the most recent, non-deprecated OS image. For example, if you specify `family/debian-12`, the latest version in the Debian 12 image family is used. For more information about using image families, see Image families best practices.
- `MACHINE_TYPE`: a supported machine type for compact placement policies.
- `POLICY_NAME`: the name of an existing compact placement policy.
- `MAINTENANCE_POLICY`: the host maintenance policy of the VM. If the compact placement policy you specify uses a maximum distance value of `1` or `2`, then you can only specify `TERMINATE`. Otherwise, you can specify either `MIGRATE` or `TERMINATE`.
For more information about the configuration options to create an instance template, see Create instance templates .
Apply the policy to VMs in a MIG
After you create an instance template that specifies a compact placement policy, you can use the template to create a new MIG or to update an existing MIG, as described in the following sections.
Apply the policy while creating a MIG
You can only create VMs that specify a compact placement policy if the VMs are located in the same region as the placement policy.
To create a MIG using an instance template that specifies a compact placement policy, select one of the following options:
gcloud
To create a MIG using an instance template that specifies a compact placement policy, use the `gcloud compute instance-groups managed create` command.

For example, to create a zonal MIG using a global instance template that specifies a compact placement policy, run the following command:

```
gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
    --size=SIZE \
    --template=INSTANCE_TEMPLATE_NAME \
    --zone=ZONE
```

Replace the following:

- `INSTANCE_GROUP_NAME`: the name of the MIG to create.
- `SIZE`: the size of the MIG.
- `INSTANCE_TEMPLATE_NAME`: the name of an existing global instance template that specifies a compact placement policy.
- `ZONE`: the zone in which to create the MIG, which must be within the region where the compact placement policy is located.
REST
To create a MIG using an instance template that specifies a compact placement policy, make a `POST` request to one of the following methods:

- To create a zonal MIG: `instanceGroupManagers.insert` method.
- To create a regional MIG: `regionInstanceGroupManagers.insert` method.

For example, to create a zonal MIG using a global instance template that specifies a compact placement policy, make a `POST` request as follows:

```
POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers

{
  "name": "INSTANCE_GROUP_NAME",
  "targetSize": SIZE,
  "versions": [
    {
      "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE_NAME"
    }
  ]
}
```

Replace the following:

- `PROJECT_ID`: the ID of the project where the compact placement policy and the instance template that specifies the placement policy are located.
- `ZONE`: the zone in which to create the MIG, which must be within the region where the compact placement policy is located.
- `INSTANCE_GROUP_NAME`: the name of the MIG to create.
- `INSTANCE_TEMPLATE_NAME`: the name of an existing global instance template that specifies a compact placement policy.
- `SIZE`: the size of the MIG.
For more information about the configuration options to create MIGs, see Basic scenarios for creating MIGs .
Apply the policy to an existing MIG
You can only apply a compact placement policy to an existing MIG if the MIG is located in the same region as the placement policy or, for zonal MIGs, in a zone within the same region as the placement policy.
To update a MIG to use an instance template that specifies a compact placement policy, select one of the following options:
gcloud
To update a MIG to use an instance template that specifies a compact placement policy, use the `gcloud compute instance-groups managed rolling-action start-update` command.

For example, to update a zonal MIG to use an instance template that specifies a compact placement policy, and replace the existing VMs in the MIG with new VMs that specify the template's properties, run the following command:

```
gcloud compute instance-groups managed rolling-action start-update MIG_NAME \
    --version=template=INSTANCE_TEMPLATE_NAME \
    --zone=ZONE
```

Replace the following:

- `MIG_NAME`: the name of an existing MIG.
- `INSTANCE_TEMPLATE_NAME`: the name of an existing global instance template that specifies a compact placement policy.
- `ZONE`: the zone where the MIG is located. You can only apply the compact placement policy to a MIG located within the same region as the placement policy.
REST
To update a MIG to use an instance template that specifies a compact placement policy, and automatically apply the properties of the template and the placement policy to existing VMs in the MIG, make a `PATCH` request to one of the following methods:

- To update a zonal MIG: `instanceGroupManagers.patch` method.
- To update a regional MIG: `regionInstanceGroupManagers.patch` method.

For example, to update a zonal MIG to use a global instance template that specifies a compact placement policy, and replace the existing VMs in the MIG with new VMs that specify the template's properties, make the following `PATCH` request:

```
PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instanceGroupManagers/MIG_NAME

{
  "instanceTemplate": "global/instanceTemplates/INSTANCE_TEMPLATE_NAME"
}
```

Replace the following:

- `PROJECT_ID`: the ID of the project where the MIG, the compact placement policy, and the instance template that specifies the placement policy are located.
- `ZONE`: the zone where the MIG is located. You can only apply the compact placement policy to a MIG located within the same region as the placement policy.
- `MIG_NAME`: the name of an existing MIG.
- `INSTANCE_TEMPLATE_NAME`: the name of an existing global instance template that specifies a compact placement policy.
For more information about the configuration options to update the VMs in a MIG, see Update and apply new configurations to VMs in a MIG .
Verify the physical location of a VM
After applying a compact placement policy to a VM, you can view the VM's physical location in relation to other VMs. This comparison is limited to VMs located in your project and that specify the same compact placement policy. Viewing the physical location of a VM helps you to do the following:
- Confirm that the policy was successfully applied.
- Identify which VMs are closest to each other.
To view the physical location of a VM in relation to other VMs that specify the same compact placement policy, select one of the following options:
gcloud
To view the physical location of a VM that specifies a compact placement policy, use the `gcloud compute instances describe` command with the `--format` flag.

```
gcloud compute instances describe VM_NAME \
    --format="table[box,title=VM-Position](resourcePolicies.scope():sort=1,resourceStatus.physicalHost:label=location)" \
    --zone=ZONE
```

Replace the following:

- `VM_NAME`: the name of an existing VM that specifies a compact placement policy.
- `ZONE`: the zone where the VM is located.

The output is similar to the following:

```
VM-Position
RESOURCE_POLICIES: us-central1/resourcePolicies/example-policy
PHYSICAL_HOST: /CCCCCCC/BBBBBB/AAAA
```

The value of the `PHYSICAL_HOST` field is composed of three parts, which represent the cluster, rack, and host where the VM is located. When comparing the position of two VMs that use the same compact placement policy within your project, the more parts of the `PHYSICAL_HOST` field the VMs share, the closer they are physically located to each other. For example, assume that two VMs both specify one of the following sample values for the `PHYSICAL_HOST` field:

- `/CCCCCCC/xxxxxx/xxxx`: the two VMs are placed in the same cluster, which equals a maximum distance value of `2`. VMs placed in the same cluster experience low network latency.
- `/CCCCCCC/BBBBBB/xxxx`: the two VMs are placed in the same rack, which equals a maximum distance value of `1`. VMs placed in the same rack experience lower network latency than VMs placed in the same cluster.
- `/CCCCCCC/BBBBBB/AAAA`: the two VMs share the same host. VMs placed on the same host minimize network latency as much as possible.
REST
To view the physical location of a VM that specifies a compact placement policy, make a `GET` request to the `instances.get` method.

```
GET https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/ZONE/instances/VM_NAME
```

Replace the following:

- `PROJECT_ID`: the ID of the project where the VM is located.
- `ZONE`: the zone where the VM is located.
- `VM_NAME`: the name of an existing VM that specifies a compact placement policy.

The output is similar to the following:

```
{
  ...
  "resourcePolicies": [
    "https://www.googleapis.com/compute/v1/projects/example-project/regions/us-central1/resourcePolicies/example-policy"
  ],
  "resourceStatus": {
    "physicalHost": "/xxxxxxxx/xxxxxx/xxxxx"
  },
  ...
}
```

The value of the `physicalHost` field is composed of three parts, which represent the cluster, rack, and host where the VM is located. When comparing the position of two VMs that use the same compact placement policy within your project, the more parts of the `physicalHost` field the VMs share, the closer they are physically located to each other. For example, assume that two VMs both specify one of the following sample values for the `physicalHost` field:

- `/CCCCCCC/xxxxxx/xxxx`: the two VMs are placed in the same cluster, which equals a maximum distance value of `2`. VMs placed in the same cluster experience low network latency.
- `/CCCCCCC/BBBBBB/xxxx`: the two VMs are placed in the same rack, which equals a maximum distance value of `1`. VMs placed in the same rack experience lower network latency than VMs placed in the same cluster.
- `/CCCCCCC/BBBBBB/AAAA`: the two VMs share the same host. VMs placed on the same host minimize network latency as much as possible.
What's next?
- Learn how to view placement policies.
- Learn how to replace, remove, or delete placement policies.
- Learn how to do the following with a VM that specifies a placement policy: