This document provides instructions for configuring a regional external Application Load Balancer for your services that run on Compute Engine VMs.
Because regional external Application Load Balancers let you create load balancers in specific regions, they are often used for workloads that have jurisdictional compliance requirements. Workloads that require access to Standard Network Tier egress are another common use case, because these load balancers support both the Premium and Standard Network Service Tiers.
Before following this guide, familiarize yourself with the following:
Note: Regional external Application Load Balancers support both the Premium and Standard Network Service Tiers. This procedure demonstrates the setup with Standard Tier.
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project owner or editor, or you must have all of the following Compute Engine IAM roles.
| Task | Required role | 
|---|---|
| Create networks, subnets, and load balancer components | Network Admin | 
| Add and remove firewall rules | Security Admin | 
| Create instances | Instance Admin | 
For more information, see the following guides:
Optional: Use BYOIP addresses
With bring your own IP (BYOIP), you can import your own public addresses to Google Cloud to use the addresses with Google Cloud resources. For example, if you import your own IPv4 addresses, you can assign one to the forwarding rule when you configure your load balancer. When you follow the instructions in this document to configure the load balancer, provide the BYOIP address as the IP address.
For more information about using BYOIP, see Bring your own IP addresses.
Setup overview
You can configure a regional external Application Load Balancer as described in the following high-level configuration flow. The numbered steps refer to the numbers in the diagram.
As shown in the diagram, this example creates a regional external Application Load Balancer in a VPC network in region us-west1, with one backend service and two backend instance groups.
The diagram shows the following:
-  A VPC network with two subnets:
   -  One subnet is used for backends (instance groups). Its primary IP address range is 10.1.2.0/24.
   -  One subnet is a proxy-only subnet in the us-west1 region. You must create one proxy-only subnet in each region of a VPC network where you use regional external Application Load Balancers. The region's proxy-only subnet is shared among all regional load balancers in the region. Source addresses of packets sent from the load balancers to your service's backends are allocated from the proxy-only subnet. In this example, the proxy-only subnet for the region has a primary IP address range of 10.129.0.0/23, which is the recommended subnet size. For more information, see Proxy-only subnets.
-  A firewall rule that permits proxy-only subnet traffic flows in your network. This means adding one rule that allows TCP port 80, 443, and 8080 traffic from 10.129.0.0/23 (the range of the proxy-only subnet in this example). Another firewall rule is needed for the health check probes.
-  Backend instances. 
-  Instance groups:
   - Managed or unmanaged instance groups for Compute Engine VM deployments
   - NEGs for GKE deployments
   In each zone, you can have a combination of backend group types based on the requirements of your deployment.
-  A regional health check that reports the readiness of your backends. 
-  A regional backend service that monitors the usage and health of backends. 
-  A regional URL map that parses the URL of a request and forwards requests to specific backend services based on the host and path of the request URL. 
-  A regional target HTTP or HTTPS proxy, which receives a request from the user and forwards it to the URL map. If you configure HTTPS load balancing, configure a regional SSL certificate resource; the target proxy uses either a Compute Engine SSL certificate or a Certificate Manager certificate to decrypt SSL traffic. The target proxy can forward traffic to your instances by using HTTP or HTTPS.
-  A forwarding rule, which has the external IP address of your load balancer, to forward each incoming request to the target proxy. The external IP address that is associated with the forwarding rule is reserved by using the gcloud compute addresses create command, as described in Reserving the load balancer's IP address.
Configure the network and subnets
You need a VPC network with two subnets: one for the load balancer's backends and the other for the load balancer's proxies. Because a regional external Application Load Balancer is regional, traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:
-  Network. The network is a custom-mode VPC network named lb-network.
-  Subnet for backends. A subnet named backend-subnet in the us-west1 region uses 10.1.2.0/24 for its primary IP range.
-  Subnet for proxies. A subnet named proxy-only-subnet in the us-west1 region uses 10.129.0.0/23 for its primary IP range.
Configure the network and subnet for backends
Console
-  In the Google Cloud console, go to the VPC networks page.
-  Click Create VPC network.
-  For Name, enter lb-network.
-  In the Subnets section:
   - Set Subnet creation mode to Custom.
   - In the New subnet section, enter the following information:
     -  Name: backend-subnet
     -  Region: us-west1
     -  IP address range: 10.1.2.0/24
   - Click Done.
-  Click Create.
gcloud
-  Create the custom VPC network with the gcloud compute networks create command:

   gcloud compute networks create lb-network --subnet-mode=custom

-  Create a subnet in the lb-network network in the us-west1 region with the gcloud compute networks subnets create command:

   gcloud compute networks subnets create backend-subnet \
       --network=lb-network \
       --range=10.1.2.0/24 \
       --region=us-west1
Terraform
To create the VPC network, use the google_compute_network resource.
To create the VPC subnet in the lb-network network, use the google_compute_subnetwork resource.
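As a rough sketch, the network and backend subnet for this example might be declared as follows in Terraform, assuming a Google provider already configured for your project; verify argument names against your provider version.

    resource "google_compute_network" "lb_network" {
      name                    = "lb-network"
      auto_create_subnetworks = false
    }

    resource "google_compute_subnetwork" "backend_subnet" {
      name          = "backend-subnet"
      region        = "us-west1"
      network       = google_compute_network.lb_network.id
      ip_cidr_range = "10.1.2.0/24"
    }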
API
-  Make a POST request to the networks.insert method, replacing PROJECT_ID with your project ID.

   POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/networks

   {
     "routingConfig": {
       "routingMode": "REGIONAL"
     },
     "name": "lb-network",
     "autoCreateSubnetworks": false
   }

-  Make a POST request to the subnetworks.insert method, replacing PROJECT_ID with your project ID.

   POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

   {
     "name": "backend-subnet",
     "network": "projects/PROJECT_ID/global/networks/lb-network",
     "ipCidrRange": "10.1.2.0/24",
     "region": "projects/PROJECT_ID/regions/us-west1"
   }
Configure the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based regional load balancers in the same region of the lb-network VPC network. There can only be one active proxy-only subnet per region, per network.
Console
If you're using the Google Cloud console, you can also wait and create the proxy-only subnet later on the Load balancing page.
If you want to create the proxy-only subnet now, use the following steps:
-  In the Google Cloud console, go to the VPC networks page.
-  Click the name of the VPC network: lb-network.
-  Click Add subnet.
-  For Name, enter proxy-only-subnet.
-  For Region, select us-west1.
-  Set Purpose to Regional Managed Proxy.
-  For IP address range, enter 10.129.0.0/23.
-  Click Add.
gcloud
Create the proxy-only subnet with the gcloud compute networks subnets create command.

gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=us-west1 \
    --network=lb-network \
    --range=10.129.0.0/23
Terraform
To create the VPC proxy-only subnet in the lb-network network, use the google_compute_subnetwork resource.
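A hedged sketch of the proxy-only subnet for this example, reusing the lb_network resource name from the earlier network sketch:

    resource "google_compute_subnetwork" "proxy_only_subnet" {
      name          = "proxy-only-subnet"
      region        = "us-west1"
      network       = google_compute_network.lb_network.id
      ip_cidr_range = "10.129.0.0/23"
      purpose       = "REGIONAL_MANAGED_PROXY"
      role          = "ACTIVE"  # marks this as the active proxy-only subnet for the region
    }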
API
Create the proxy-only subnet with the subnetworks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/subnetworks

{
  "name": "proxy-only-subnet",
  "ipCidrRange": "10.129.0.0/23",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "region": "projects/PROJECT_ID/regions/us-west1",
  "purpose": "REGIONAL_MANAGED_PROXY",
  "role": "ACTIVE"
}
Configure firewall rules
This example uses the following firewall rules:
-  fw-allow-health-check. An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in 130.211.0.0/22 and 35.191.0.0/16). This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.
-  fw-allow-proxies. An ingress rule, applicable to the instances being load balanced, that allows TCP traffic on ports 80, 443, and 8080 from the regional external Application Load Balancer's managed proxies. This example uses the target tag load-balanced-backend to identify the VMs that the firewall rule applies to.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
The target tags identify which backend instances the firewall rules apply to. Without the target tags, the firewall rules apply to all of your instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Creating a managed instance group.
Console
-  In the Google Cloud console, go to the Firewall policies page.
-  Click Create firewall rule to create the rule to allow Google Cloud health checks:
   -  Name: fw-allow-health-check
   -  Network: lb-network
   -  Direction of traffic: Ingress
   -  Action on match: Allow
   -  Targets: Specified target tags
   -  Target tags: load-balanced-backend
   -  Source filter: IPv4 ranges
   -  Source IPv4 ranges: 130.211.0.0/22 and 35.191.0.0/16
   -  Protocols and ports:
      - Choose Specified protocols and ports.
      - Select the TCP checkbox, and then enter 80 for the port number. As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use tcp:80 for the protocol and port, Google Cloud can use HTTP on port 80 to contact your VMs, but it cannot use HTTPS on port 443 to contact them.
-  Click Create.
-  Click Create firewall rule to create the rule to allow the load balancer's proxy servers to connect to the backends:
   -  Name: fw-allow-proxies
   -  Network: lb-network
   -  Direction of traffic: Ingress
   -  Action on match: Allow
   -  Targets: Specified target tags
   -  Target tags: load-balanced-backend
   -  Source filter: IPv4 ranges
   -  Source IPv4 ranges: 10.129.0.0/23
   -  Protocols and ports:
      - Choose Specified protocols and ports.
      - Select the TCP checkbox, and then enter 80, 443, 8080 for the port numbers.
-  Click Create.
gcloud
-  Create the fw-allow-health-check rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

   gcloud compute firewall-rules create fw-allow-health-check \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --source-ranges=130.211.0.0/22,35.191.0.0/16 \
       --target-tags=load-balanced-backend \
       --rules=tcp

-  Create the fw-allow-proxies rule to allow the regional external Application Load Balancer's proxies to connect to your backends. Set SOURCE_RANGE to the allocated range of your proxy-only subnet, for example, 10.129.0.0/23.

   gcloud compute firewall-rules create fw-allow-proxies \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --source-ranges=SOURCE_RANGE \
       --target-tags=load-balanced-backend \
       --rules=tcp:80,tcp:443,tcp:8080
Terraform
To create the firewall rules, use the google_compute_firewall resource.
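A minimal sketch of the two firewall rules for this example, assuming the network resource from the earlier sketch:

    resource "google_compute_firewall" "fw_allow_health_check" {
      name          = "fw-allow-health-check"
      network       = google_compute_network.lb_network.id
      direction     = "INGRESS"
      source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]  # Google Cloud health check ranges
      target_tags   = ["load-balanced-backend"]

      allow {
        protocol = "tcp"
      }
    }

    resource "google_compute_firewall" "fw_allow_proxies" {
      name          = "fw-allow-proxies"
      network       = google_compute_network.lb_network.id
      direction     = "INGRESS"
      source_ranges = ["10.129.0.0/23"]  # the proxy-only subnet range
      target_tags   = ["load-balanced-backend"]

      allow {
        protocol = "tcp"
        ports    = ["80", "443", "8080"]
      }
    }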
API
Create the fw-allow-health-check firewall rule by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-health-check",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "sourceRanges": [
    "130.211.0.0/22",
    "35.191.0.0/16"
  ],
  "targetTags": [
    "load-balanced-backend"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp"
    }
  ],
  "direction": "INGRESS"
}
Create the fw-allow-proxies firewall rule to allow TCP traffic from the proxy-only subnet by making a POST request to the firewalls.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/firewalls

{
  "name": "fw-allow-proxies",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "sourceRanges": [
    "10.129.0.0/23"
  ],
  "targetTags": [
    "load-balanced-backend"
  ],
  "allowed": [
    {
      "IPProtocol": "tcp",
      "ports": ["80"]
    },
    {
      "IPProtocol": "tcp",
      "ports": ["443"]
    },
    {
      "IPProtocol": "tcp",
      "ports": ["8080"]
    }
  ],
  "direction": "INGRESS"
}
Configure a regional external Application Load Balancer with a VM-based service
This section shows the configuration required for services that run on Compute Engine VMs. Client VMs connect to the IP address and port that you configure in the forwarding rule. When your client applications send traffic to this IP address and port, their requests are forwarded to your backend virtual machines (VMs) according to your regional external Application Load Balancer's URL map.
The example on this page explicitly creates a reserved external IP address for the regional external Application Load Balancer's forwarding rule, rather than allowing an ephemeral external IP address to be allocated. As a best practice, we recommend reserving IP addresses for forwarding rules.
Create a managed instance group backend
This section shows how to create a template and a managed instance group. The managed instance group provides VM instances running the backend servers of an example regional external Application Load Balancer. Traffic from clients is load balanced to these backend servers. For demonstration purposes, backends serve their own hostnames.
Console
-  Create an instance template. In the Google Cloud console, go to the Instance templates page.
   - Click Create instance template.
   - For Name, enter l7-xlb-backend-template.
   - Ensure that Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as apt-get.
   - Click Advanced options.
   - Click Networking and configure the following fields:
     - For Network tags, enter load-balanced-backend.
     - For Network interfaces, select the following:
       -  Network: lb-network
       -  Subnet: backend-subnet
   - Click Management. Enter the following script into the Startup script field.

     #! /bin/bash
     apt-get update
     apt-get install apache2 -y
     a2ensite default-ssl
     a2enmod ssl
     vm_hostname="$(curl -H "Metadata-Flavor:Google" \
     http://metadata.google.internal/computeMetadata/v1/instance/name)"
     echo "Page served from: $vm_hostname" | \
     tee /var/www/html/index.html
     systemctl restart apache2

   - Click Create.
-  Create a managed instance group. In the Google Cloud console, go to the Instance groups page.
   - Click Create instance group.
   - Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs.
   - For Name, enter l7-xlb-backend-example.
   - For Location, select Single zone.
   - For Region, select us-west1.
   - For Zone, select us-west1-a.
   - For Instance template, select l7-xlb-backend-template.
   - For Autoscaling mode, select On: add and remove instances to the group. Set Minimum number of instances to 2, and set Maximum number of instances to 2 or more.
   - Click Create.
gcloud
The gcloud instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

-  Create a VM instance template with an HTTP server by using the gcloud compute instance-templates create command.

   gcloud compute instance-templates create l7-xlb-backend-template \
       --region=us-west1 \
       --network=lb-network \
       --subnet=backend-subnet \
       --tags=load-balanced-backend \
       --image-family=debian-12 \
       --image-project=debian-cloud \
       --metadata=startup-script='#! /bin/bash
     apt-get update
     apt-get install apache2 -y
     a2ensite default-ssl
     a2enmod ssl
     vm_hostname="$(curl -H "Metadata-Flavor:Google" \
     http://metadata.google.internal/computeMetadata/v1/instance/name)"
     echo "Page served from: $vm_hostname" | \
     tee /var/www/html/index.html
     systemctl restart apache2'

-  Create a managed instance group in the zone with the gcloud compute instance-groups managed create command.

   gcloud compute instance-groups managed create l7-xlb-backend-example \
       --zone=us-west1-a \
       --size=2 \
       --template=l7-xlb-backend-template
Terraform
To create the instance template, use the google_compute_instance_template resource.
To create the managed instance group, use the google_compute_instance_group_manager resource.
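A hedged sketch of the instance template and managed instance group for this example, reusing the network and subnet resource names from the earlier sketches:

    resource "google_compute_instance_template" "backend_template" {
      name         = "l7-xlb-backend-template"
      machine_type = "e2-standard-2"
      tags         = ["load-balanced-backend"]

      disk {
        source_image = "debian-cloud/debian-12"
        boot         = true
        auto_delete  = true
      }

      network_interface {
        network    = google_compute_network.lb_network.id
        subnetwork = google_compute_subnetwork.backend_subnet.id
        access_config {}  # gives each VM an ephemeral external IP for package installs
      }

      # Same startup script as the gcloud and console examples.
      metadata_startup_script = <<-EOT
        #! /bin/bash
        apt-get update
        apt-get install apache2 -y
        a2ensite default-ssl
        a2enmod ssl
        vm_hostname="$(curl -H "Metadata-Flavor:Google" http://metadata.google.internal/computeMetadata/v1/instance/name)"
        echo "Page served from: $vm_hostname" | tee /var/www/html/index.html
        systemctl restart apache2
      EOT
    }

    resource "google_compute_instance_group_manager" "backend_mig" {
      name               = "l7-xlb-backend-example"
      zone               = "us-west1-a"
      base_instance_name = "l7-xlb-backend-example"
      target_size        = 2

      version {
        instance_template = google_compute_instance_template.backend_template.id
      }
    }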
API
-  Create the instance template with the instanceTemplates.insert method, replacing PROJECT_ID with your project ID.

   POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/global/instanceTemplates

   {
     "name": "l7-xlb-backend-template",
     "properties": {
       "machineType": "e2-standard-2",
       "tags": {
         "items": [
           "load-balanced-backend"
         ]
       },
       "metadata": {
         "kind": "compute#metadata",
         "items": [
           {
             "key": "startup-script",
             "value": "#! /bin/bash\napt-get update\napt-get install apache2 -y\na2ensite default-ssl\na2enmod ssl\nvm_hostname=\"$(curl -H \"Metadata-Flavor:Google\" \\\nhttp://metadata.google.internal/computeMetadata/v1/instance/name)\"\necho \"Page served from: $vm_hostname\" | \\\ntee /var/www/html/index.html\nsystemctl restart apache2"
           }
         ]
       },
       "networkInterfaces": [
         {
           "network": "projects/PROJECT_ID/global/networks/lb-network",
           "subnetwork": "regions/us-west1/subnetworks/backend-subnet",
           "accessConfigs": [
             {
               "type": "ONE_TO_ONE_NAT"
             }
           ]
         }
       ],
       "disks": [
         {
           "index": 0,
           "boot": true,
           "initializeParams": {
             "sourceImage": "projects/debian-cloud/global/images/family/debian-12"
           },
           "autoDelete": true
         }
       ]
     }
   }

-  Create a managed instance group in each zone with the instanceGroupManagers.insert method, replacing PROJECT_ID with your project ID.

   POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/zones/us-west1-a/instanceGroupManagers

   {
     "name": "l7-xlb-backend-example",
     "zone": "projects/PROJECT_ID/zones/us-west1-a",
     "instanceTemplate": "projects/PROJECT_ID/global/instanceTemplates/l7-xlb-backend-template",
     "baseInstanceName": "l7-xlb-backend-example",
     "targetSize": 2
   }
Add a named port to the instance group
For your instance group, define an HTTP service and map a port name to the relevant port. The backend service of the load balancer forwards traffic to the named port.
Console
-  In the Google Cloud console, go to the Instance groups page.
-  Click the name of your instance group (in this example, l7-xlb-backend-example).
-  On the instance group's Overview page, click Edit.
-  Click Specify port name mapping.
-  Click Add item.
-  For the port name, enter http. For the port number, enter 80.
-  Click Save.
gcloud
Use the gcloud compute instance-groups set-named-ports command.
gcloud compute instance-groups set-named-ports l7-xlb-backend-example \
    --named-ports http:80 \
    --zone us-west1-a 
Terraform
The named_port attribute is included in the managed instance group sample.
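For reference, a named port on the managed instance group sketched earlier would look roughly like this:

    resource "google_compute_instance_group_manager" "backend_mig" {
      # ...arguments from the managed instance group sketch...

      named_port {
        name = "http"
        port = 80
      }
    }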
Reserve the load balancer's IP address
Reserve a static IP address for the load balancer.
Console
-  In the Google Cloud console, go to the Reserve a static address page.
-  Choose a Name for the new address.
-  For Network Service Tier, select Standard.
-  For IP version, select IPv4. IPv6 addresses can only be global and can only be used with global load balancers.
-  For Type, select Regional.
-  For Region, select us-west1.
-  Leave the Attached to option set to None. After you create the load balancer, this IP address will be attached to the load balancer's forwarding rule.
-  Click Reserve to reserve the IP address.
gcloud
-  To reserve a static external IP address, use the gcloud compute addresses create command.

   gcloud compute addresses create ADDRESS_NAME \
       --region=us-west1 \
       --network-tier=STANDARD

   Replace ADDRESS_NAME with the name that you want to call this address. The region must be the same region as the load balancer. All regional IP addresses are IPv4.

-  Use the gcloud compute addresses describe command to view the result:

   gcloud compute addresses describe ADDRESS_NAME \
       --region=us-west1
Terraform
To reserve the IP address, use the google_compute_address resource.
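A minimal sketch for this example; ADDRESS_NAME is a placeholder for the name you choose:

    resource "google_compute_address" "lb_ipv4" {
      name         = "ADDRESS_NAME"
      region       = "us-west1"
      address_type = "EXTERNAL"
      network_tier = "STANDARD"
    }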
To learn how to apply or remove a Terraform configuration, see Basic Terraform commands .
API
To create a regional IPv4 address, call the regional addresses.insert method:

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/REGION/addresses

Your request body should contain the following:

{
  "name": "ADDRESS_NAME",
  "networkTier": "STANDARD",
  "region": "us-west1"
}
Replace the following:
-  ADDRESS_NAME: the name for the address
-  REGION: the name of the region for this request
-  PROJECT_ID: the project ID for this request
Configure the load balancer
This example shows you how to create the following regional external Application Load Balancer resources:
- HTTP health check
- Backend service with a managed instance group as the backend
- A URL map 
 - Make sure to refer to a regional URL map if a region is defined for the target HTTP(S) proxy. A regional URL map routes requests to a regional backend service based on rules that you define for the host and path of an incoming URL. A regional URL map can be referenced by a regional target proxy rule in the same region only.
 
- SSL certificate (for HTTPS)
- Target proxy
- Forwarding rule
Proxy availability
Sometimes Google Cloud regions don't have enough proxy capacity for a new load balancer. If this happens, the Google Cloud console provides a proxy availability warning message when you are creating your load balancer. To resolve this issue, you can do one of the following:
- Select a different region for your load balancer. This can be a practical option if you have backends in another region.
- Select a VPC network that already has an allocated proxy-only subnet.
-  Wait for the capacity issue to be resolved. 
Console
Select the load balancer type
-  In the Google Cloud console, go to the Load balancing page. 
- Click Create load balancer .
- For Type of load balancer , select Application Load Balancer (HTTP/HTTPS) and click Next .
- For Public facing or internal , select Public facing (external) and click Next .
- For Global or single region deployment , select Best for regional workloads and click Next .
- Click Configure .
Basic configuration
- For the name of the load balancer, enter regional-l7-xlb.
- For Region, select us-west1.
- For Network, select lb-network.
Reserve a proxy-only subnet
For a regional external Application Load Balancer, reserve a proxy-only subnet:
- Click Reserve subnet.
- For Name, enter proxy-only-subnet.
- For IP address range, enter 10.129.0.0/23.
- Click Add.
Configure the frontend
For HTTP:
- Click Frontend configuration.
- Set Name to l7-xlb-forwarding-rule.
- Set Protocol to HTTP.
- Set Network service tier to Standard.
- Set Port to 80.
- Select the IP address that you created in Reserving the load balancer's IP address.
- Click Done.
For HTTPS:
- Click Frontend configuration.
- In the Name field, enter l7-xlb-forwarding-rule.
- In the Protocol field, select HTTPS (includes HTTP/2).
- Set Network service tier to Standard.
- Ensure that the Port is set to 443.
- Select the IP address that you created in Reserving the load balancer's IP address.
- To assign an SSL certificate to the target HTTPS proxy of the load balancer, you can use either a Compute Engine SSL certificate or a Certificate Manager certificate.
  -  To attach a Certificate Manager certificate to the target HTTPS proxy of the load balancer, in the Choose certificate repository section, select Certificates.
     If you already have an existing Certificate Manager certificate to select, do the following:
     - Click Add Certificate.
     - Click Select an existing certificate and select the certificate from the list of certificates.
     - Click Select.
     After you select the Certificate Manager certificate, it appears in the list of certificates.
     To create a new Certificate Manager certificate, do the following:
     - Click Add Certificate.
     - Click Create a new certificate.
     - To create a new certificate, follow the steps starting from step 3 as outlined in any one of the configuration methods in the Certificate Manager documentation.
     After you create the new Certificate Manager certificate, it appears in the list of certificates.
  -  To attach a Compute Engine SSL certificate to the target HTTPS proxy of the load balancer, in the Choose certificate repository section, select Classic Certificates.
     - In the Certificate list, do the following:
       - If you already have a Compute Engine self-managed SSL certificate resource, select the primary SSL certificate.
       - To create a new certificate, click Create a new certificate:
         - In the Name field, enter l7-xlb-cert.
         - In the appropriate fields, upload your PEM-formatted files:
           - Certificate
           - Private key
         - Click Create.
     - Optional: To add certificates in addition to the primary SSL certificate:
       - Click Add certificate.
       - If you already have a certificate, select it from the Certificates list.
       - Optional: Click Create a new certificate and follow the instructions as specified in the previous step.
- Select an SSL policy from the SSL policy list. Optionally, to create an SSL policy, do the following:
  - In the SSL policy list, select Create a policy.
  - Enter a name for the SSL policy.
  - Select a minimum TLS version. The default value is TLS 1.0.
  - Select one of the pre-configured Google-managed profiles or select a Custom profile that lets you select SSL features individually. The Enabled features and Disabled features are displayed.
  - Click Save.
  If you have not created any SSL policies, a default Google Cloud SSL policy is applied.
- Click Done.
Configure the backend service
- Click Backend configuration.
- From the Create or select backend services menu, select Create a backend service.
- Set the name of the backend service to l7-xlb-backend-service.
- For Protocol, select HTTP.
- For Named Port, enter http.
- Set Backend type to Instance group.
- In the Health check list, click Create a health check, and then enter the following information:
  - In the Name field, enter l7-xlb-basic-check.
  - In the Protocol list, select HTTP.
  - In the Port field, enter 80.
  - Click Create.
- In the New backend section:
  - Set Instance group to l7-xlb-backend-example.
  - Set Port numbers to 80.
  - Set Balancing mode to Utilization.
  - Click Done.
- Click Create.
Configure the routing rules
- Click Routing rules.
- For Mode, select Simple host and path rule.
- Ensure that l7-xlb-backend-service is the only backend service for any unmatched host and any unmatched path.
Review the configuration
- Click Review and finalize.
- Review your load balancer configuration settings.
- Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
- Click Create.
gcloud
-  Define the HTTP health check with the gcloud compute health-checks create http command.

   gcloud compute health-checks create http l7-xlb-basic-check \
       --region=us-west1 \
       --request-path='/' \
       --use-serving-port

-  Define the backend service with the gcloud compute backend-services create command.

   gcloud compute backend-services create l7-xlb-backend-service \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --protocol=HTTP \
       --port-name=http \
       --health-checks=l7-xlb-basic-check \
       --health-checks-region=us-west1 \
       --region=us-west1

-  Add backends to the backend service with the gcloud compute backend-services add-backend command.

   gcloud compute backend-services add-backend l7-xlb-backend-service \
       --balancing-mode=UTILIZATION \
       --instance-group=l7-xlb-backend-example \
       --instance-group-zone=us-west1-a \
       --region=us-west1

-  Create the URL map with the gcloud compute url-maps create command.

   gcloud compute url-maps create regional-l7-xlb-map \
       --default-service=l7-xlb-backend-service \
       --region=us-west1

-  Create the target proxy.

   For HTTP:

   For an HTTP load balancer, create the target proxy with the gcloud compute target-http-proxies create command.

   gcloud compute target-http-proxies create l7-xlb-proxy \
       --url-map=regional-l7-xlb-map \
       --url-map-region=us-west1 \
       --region=us-west1

   For HTTPS:

   You can create either Compute Engine or Certificate Manager certificates. Use any of the following methods to create certificates using Certificate Manager:

   - Regional self-managed certificates. For information about creating and using regional self-managed certificates, see Deploy a regional self-managed certificate. Certificate maps aren't supported.
   - Regional Google-managed certificates. Certificate maps aren't supported. The following types of regional Google-managed certificates are supported by Certificate Manager:
     - Regional Google-managed certificates with per-project DNS authorization. For more information, see Deploy a regional Google-managed certificate with DNS authorization.
     - Regional Google-managed (private) certificates with Certificate Authority Service. For more information, see Deploy a regional Google-managed certificate with Certificate Authority Service.

   After you create certificates, attach the certificate directly to the target proxy.

   -  Assign your file paths to variable names.

      export LB_CERT=PATH_TO_PEM_FORMATTED_CERTIFICATE_FILE
      export LB_PRIVATE_KEY=PATH_TO_PEM_FORMATTED_PRIVATE_KEY_FILE

   -  Create a regional SSL certificate using the gcloud compute ssl-certificates create command.

      gcloud compute ssl-certificates create l7-xlb-cert \
          --certificate=$LB_CERT \
          --private-key=$LB_PRIVATE_KEY \
          --region=us-west1

   -  Use the regional SSL certificate to create a target proxy with the gcloud compute target-https-proxies create command.

      gcloud compute target-https-proxies create l7-xlb-proxy \
          --url-map=regional-l7-xlb-map \
          --region=us-west1 \
          --ssl-certificates=l7-xlb-cert

-  Create the forwarding rule.

   For HTTP:

   Use the gcloud compute forwarding-rules create command with the correct flags.

   gcloud compute forwarding-rules create l7-xlb-forwarding-rule \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --network-tier=STANDARD \
       --network=lb-network \
       --address=ADDRESS_NAME \
       --ports=80 \
       --region=us-west1 \
       --target-http-proxy=l7-xlb-proxy \
       --target-http-proxy-region=us-west1

   For HTTPS:

   Create the forwarding rule with the gcloud compute forwarding-rules create command with the correct flags.

   gcloud compute forwarding-rules create l7-xlb-forwarding-rule \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --network-tier=STANDARD \
       --network=lb-network \
       --address=ADDRESS_NAME \
       --ports=443 \
       --region=us-west1 \
       --target-https-proxy=l7-xlb-proxy \
       --target-https-proxy-region=us-west1
Terraform
To create the health check, use the google_compute_region_health_check resource.
To create the backend service, use the google_compute_region_backend_service resource.
To create the URL map, use the google_compute_region_url_map resource.
To create the target HTTP proxy, use the google_compute_region_target_http_proxy resource.
To create the forwarding rule, use the google_compute_forwarding_rule resource.
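A hedged sketch of these resources wired together for the HTTP variant of this example, reusing resource names from the earlier sketches; verify argument names against your provider version.

    resource "google_compute_region_health_check" "default" {
      name   = "l7-xlb-basic-check"
      region = "us-west1"

      http_health_check {
        request_path       = "/"
        port_specification = "USE_SERVING_PORT"
      }
    }

    resource "google_compute_region_backend_service" "default" {
      name                  = "l7-xlb-backend-service"
      region                = "us-west1"
      protocol              = "HTTP"
      port_name             = "http"
      load_balancing_scheme = "EXTERNAL_MANAGED"
      health_checks         = [google_compute_region_health_check.default.id]

      backend {
        group           = google_compute_instance_group_manager.backend_mig.instance_group
        balancing_mode  = "UTILIZATION"
        capacity_scaler = 1.0
      }
    }

    resource "google_compute_region_url_map" "default" {
      name            = "regional-l7-xlb-map"
      region          = "us-west1"
      default_service = google_compute_region_backend_service.default.id
    }

    resource "google_compute_region_target_http_proxy" "default" {
      name    = "l7-xlb-proxy"
      region  = "us-west1"
      url_map = google_compute_region_url_map.default.id
    }

    resource "google_compute_forwarding_rule" "default" {
      name                  = "l7-xlb-forwarding-rule"
      region                = "us-west1"
      ip_protocol           = "TCP"
      port_range            = "80"
      load_balancing_scheme = "EXTERNAL_MANAGED"
      network_tier          = "STANDARD"
      network               = google_compute_network.lb_network.id
      ip_address            = google_compute_address.lb_ipv4.id
      target                = google_compute_region_target_http_proxy.default.id

      # The proxy-only subnet must exist before the forwarding rule is created.
      depends_on = [google_compute_subnetwork.proxy_only_subnet]
    }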
To learn how to apply or remove a Terraform configuration, see Basic Terraform commands .
API
Create the health check by making a POST request to the regionHealthChecks.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/healthChecks

{
  "name": "l7-xlb-basic-check",
  "type": "HTTP",
  "httpHealthCheck": {
    "portSpecification": "USE_SERVING_PORT"
  }
}
 
 
Create the regional backend service by making a POST request to the regionBackendServices.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/backendServices

{
  "name": "l7-xlb-backend-service",
  "backends": [
    {
      "group": "projects/PROJECT_ID/zones/us-west1-a/instanceGroups/l7-xlb-backend-example",
      "balancingMode": "UTILIZATION"
    }
  ],
  "healthChecks": [
    "projects/PROJECT_ID/regions/us-west1/healthChecks/l7-xlb-basic-check"
  ],
  "loadBalancingScheme": "EXTERNAL_MANAGED"
}
 
 
Create the URL map by making a POST request to the regionUrlMaps.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/urlMaps

{
  "name": "regional-l7-xlb-map",
  "defaultService": "projects/PROJECT_ID/regions/us-west1/backendServices/l7-xlb-backend-service"
}
 
 
Create the target HTTP proxy by making a POST request to the regionTargetHttpProxies.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/targetHttpProxies

{
  "name": "l7-xlb-proxy",
  "urlMap": "projects/PROJECT_ID/regions/us-west1/urlMaps/regional-l7-xlb-map",
  "region": "us-west1"
}
Create the forwarding rule by making a POST request to the forwardingRules.insert method, replacing PROJECT_ID with your project ID.

POST https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/forwardingRules

{
  "name": "l7-xlb-forwarding-rule",
  "IPAddress": "IP_ADDRESS",
  "IPProtocol": "TCP",
  "portRange": "80-80",
  "target": "projects/PROJECT_ID/regions/us-west1/targetHttpProxies/l7-xlb-proxy",
  "loadBalancingScheme": "EXTERNAL_MANAGED",
  "network": "projects/PROJECT_ID/global/networks/lb-network",
  "networkTier": "STANDARD"
}

Replace IP_ADDRESS with the external IP address that you reserved in Reserving the load balancer's IP address.
Connect your domain to your load balancer
After the load balancer is created, note the IP address that is associated with the load balancer (for example, 30.90.80.100). To point your domain to your load balancer, create an A record by using your domain registration service. If you added multiple domains to your SSL certificate, you must add an A record for each one, all pointing to the load balancer's IP address. For example, to create A records for www.example.com and example.com, use the following:

| NAME | TYPE | DATA | 
|---|---|---|
| www | A | 30.90.80.100 | 
| @ | A | 30.90.80.100 | 
If you use Cloud DNS as your DNS provider, see Add, modify, and delete records .
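If you manage your Cloud DNS records with Terraform, they might look roughly like the following sketch; the managed zone name example-zone is hypothetical and must already exist.

    resource "google_dns_record_set" "www" {
      name         = "www.example.com."
      managed_zone = "example-zone"
      type         = "A"
      ttl          = 300
      rrdatas      = ["30.90.80.100"]  # the load balancer's IP address
    }

    resource "google_dns_record_set" "apex" {
      name         = "example.com."
      managed_zone = "example-zone"
      type         = "A"
      ttl          = 300
      rrdatas      = ["30.90.80.100"]
    }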
Test the load balancer
Now that the load balancing service is running, you can send traffic to the forwarding rule and watch the traffic be dispersed to different instances.
Console
-  In the Google Cloud console, go to the Load balancing page. 
- Select the load balancer that you just created.
- In the Backend section, confirm that the VMs are healthy. The Healthy column should be populated, indicating that both VMs are healthy (2/2). If you see otherwise, first try reloading the page. It can take a few moments for the Google Cloud console to indicate that the VMs are healthy. If the backends do not appear healthy after a few minutes, review the firewall configuration and the network tag assigned to your backend VMs.
- After the Google Cloud console shows that the backend instances are healthy, you can test your load balancer using a web browser by going to https://IP_ADDRESS (or http://IP_ADDRESS). Replace IP_ADDRESS with the load balancer's IP address.
- If you used a self-signed certificate for testing HTTPS, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate.
- Your browser should render a page with content showing the name of the instance that served the page, along with its zone (for example, Page served from: l7-xlb-backend-example-xxxx). If your browser doesn't render this page, review the configuration settings in this guide.
gcloud
Note the IPv4 address that was reserved:

gcloud compute addresses describe ADDRESS_NAME \
    --format="get(address)" \
    --region="us-west1"

You can test your load balancer using a web browser by going to https://IP_ADDRESS (or http://IP_ADDRESS). Replace IP_ADDRESS with the load balancer's IP address.

If you used a self-signed certificate for testing HTTPS, your browser displays a warning. You must explicitly instruct your browser to accept a self-signed certificate.

Your browser should render a page with minimal information about the backend instance. If your browser doesn't render this page, review the configuration settings in this guide.
Additional configuration options
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Enable session affinity
These procedures show you how to update a backend service for the example regional external Application Load Balancer so that the backend service uses generated cookie affinity, header field affinity, or HTTP cookie affinity.
When generated cookie affinity is enabled, the load balancer issues a cookie on the first request. For each subsequent request with the same cookie, the load balancer directs the request to the same backend VM or endpoint. For regional external Application Load Balancers, the cookie is named GCILB.

When header field affinity is enabled, the load balancer routes requests to backend VMs or endpoints in a NEG based on the value of the HTTP header named in the --custom-request-header flag. Header field affinity is only valid if the load balancing locality policy is either RING_HASH or MAGLEV and the backend service's consistent hash specifies the name of the HTTP header.

When HTTP cookie affinity is enabled, the load balancer routes requests to backend VMs or endpoints in a NEG based on an HTTP cookie named in the HTTP_COOKIE flag with the optional --affinity-cookie-ttl flag. If the client does not provide the cookie in its HTTP request, the proxy generates the cookie and returns it to the client in a Set-Cookie header. HTTP cookie affinity is only valid if the load balancing locality policy is either RING_HASH or MAGLEV and the backend service's consistent hash specifies the HTTP cookie.
Console
To enable or change session affinity for a backend service:
-  In the Google Cloud console, go to the Load balancing page.
-  Select the load balancer that you just created.
-  Click Backends.
-  Click l7-xlb-backend-service (the name of the backend service you created for this example) and click Edit.
-  On the Backend service details page, click Advanced configuration.
-  For Session affinity, select the type of session affinity you want from the menu.
-  Click Update.
gcloud
Use the following command to update the l7-xlb-backend-service backend service to a different type of session affinity:

gcloud compute backend-services update l7-xlb-backend-service \
    --session-affinity=[GENERATED_COOKIE | HEADER_FIELD | HTTP_COOKIE | CLIENT_IP] \
    --region=us-west1
API
To set session affinity, make a PATCH request to the regionBackendServices/patch method.

PATCH https://compute.googleapis.com/compute/v1/projects/PROJECT_ID/regions/us-west1/regionBackendServices/l7-xlb-backend-service

{
  "sessionAffinity": "GENERATED_COOKIE" | "HEADER_FIELD" | "HTTP_COOKIE" | "CLIENT_IP"
}
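If you manage the backend service with Terraform, session affinity is an argument on the google_compute_region_backend_service resource; a hedged sketch:

    resource "google_compute_region_backend_service" "default" {
      # ...arguments from the backend service sketch...

      # One of GENERATED_COOKIE, HEADER_FIELD, HTTP_COOKIE, or CLIENT_IP.
      # As noted above, HEADER_FIELD and HTTP_COOKIE also require a RING_HASH or
      # MAGLEV locality policy and a consistent-hash configuration.
      session_affinity = "GENERATED_COOKIE"
    }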
 
 
Update client HTTP keepalive timeout
The load balancer created in the previous steps has been configured with a default value for the client HTTP keepalive timeout. To update the client HTTP keepalive timeout, use the following instructions.
Console
-  In the Google Cloud console, go to the Load balancing page. 
- Click the name of the load balancer that you want to modify.
- Click Edit .
- Click Frontend configuration .
- Expand Advanced features . For HTTP keepalive timeout , enter a timeout value.
- Click Update .
- To review your changes, click Review and finalize , and then click Update .
gcloud
For an HTTP load balancer, update the target HTTP proxy by using the gcloud compute target-http-proxies update command.

gcloud compute target-http-proxies update TARGET_HTTP_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --region=REGION

For an HTTPS load balancer, update the target HTTPS proxy by using the gcloud compute target-https-proxies update command.

gcloud compute target-https-proxies update TARGET_HTTPS_PROXY_NAME \
    --http-keep-alive-timeout-sec=HTTP_KEEP_ALIVE_TIMEOUT_SEC \
    --region=REGION
Replace the following:
-  TARGET_HTTP_PROXY_NAME: the name of the target HTTP proxy.
-  TARGET_HTTPS_PROXY_NAME: the name of the target HTTPS proxy.
-  HTTP_KEEP_ALIVE_TIMEOUT_SEC: the HTTP keepalive timeout value from 5 to 600 seconds.
Enable IAP on the external Application Load Balancer
You can configure IAP to be enabled or disabled (default). If enabled, you must provide values for oauth2-client-id and oauth2-client-secret.

To enable IAP, update the backend service to include the --iap=enabled flag with the oauth2-client-id and oauth2-client-secret values.
Optionally, you can enable IAP for a Compute Engine resource by using the Google Cloud console, gcloud CLI, or API.
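If you manage the backend service with Terraform, the equivalent is an iap block on the google_compute_region_backend_service resource; a hedged sketch (argument names, and whether an explicit enabled argument is required, depend on your provider version):

    resource "google_compute_region_backend_service" "default" {
      # ...arguments from the backend service sketch...

      iap {
        oauth2_client_id     = "OAUTH2_CLIENT_ID"      # placeholder
        oauth2_client_secret = "OAUTH2_CLIENT_SECRET"  # placeholder
      }
    }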

