A regional external proxy Network Load Balancer is a proxy-based regional Layer 4 load balancer that lets you run and scale your TCP service traffic in a single region behind an external regional IP address. These load balancers distribute external TCP traffic from the internet to backends in the same region.
Before you begin, read the External proxy Network Load Balancer overview.
This guide contains instructions to set up a regional external proxy Network Load Balancer with a managed instance group (MIG) backend. For this example, you configure the deployment shown in the following diagram.
Note: Regional external proxy Network Load Balancers support both the Premium and Standard Network Service Tiers. This procedure demonstrates the setup with Standard Tier.
For this example, you use the load balancer to distribute TCP traffic across backend VMs in two zonal managed instance groups in Region A. For the purposes of the example, the service is a set of Apache servers configured to respond on port `110`. Many browsers don't allow port `110`, so the testing section uses `curl`.
A regional external proxy Network Load Balancer is a regional load balancer. All load balancer components must be in the same region as the load balancer.
SNI-based routing: This page also shows you an alternative deployment architecture that you can use to configure SNI-based routing. For SNI-based routing, you use TLS routes to define how traffic is distributed. For details, see Create a load balancer with TLS routes.
Permissions
To follow this guide, you must be able to create instances and modify a network in a project. You must be either a project Owner or Editor, or you must have all of the following Compute Engine IAM roles.
| Task | Required role |
|---|---|
| Create networks, subnets, and load balancer components | Compute Network Admin (`roles/compute.networkAdmin`) |
| Add and remove firewall rules | Compute Security Admin (`roles/compute.securityAdmin`) |
| Create instances | Compute Instance Admin (`roles/compute.instanceAdmin`) |
Optional: Use BYOIP addresses
With bring your own IP (BYOIP), you can import your own public addresses to Google Cloud to use the addresses with Google Cloud resources. For example, if you import your own IPv4 addresses, you can assign one to the forwarding rule when you configure your load balancer. When you follow the instructions in this document to configure the load balancer, provide the BYOIP address as the IP address.
For more information about using BYOIP, see Bring your own IP addresses .
Configure the network and subnets
You need a VPC network with two subnets, one for the load balancer's backends and the other for the load balancer's proxies. This load balancer is regional. Traffic within the VPC network is routed to the load balancer if the traffic's source is in a subnet in the same region as the load balancer.
This example uses the following VPC network, region, and subnets:
- Network: a custom-mode VPC network named `lb-network`
- Subnet for backends: a subnet named `backend-subnet` in Region A that uses `10.1.2.0/24` for its primary IP address range
- Subnet for proxies: a subnet named `proxy-only-subnet` in Region A that uses `10.129.0.0/23` for its primary IP address range
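The backend and proxy-only ranges must not overlap. As a quick sanity check, the following shell sketch (an illustrative helper, not part of the load balancer setup) converts each CIDR block to a numeric range and confirms the two don't intersect:

```shell
# Sanity-check that two IPv4 CIDR blocks don't overlap.
# Illustrative helper only; not part of the load balancer setup.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

cidr_overlap() {                  # usage: cidr_overlap A.B.C.D/N W.X.Y.Z/M
  local net1="${1%/*}" len1="${1#*/}" net2="${2%/*}" len2="${2#*/}"
  local start1 end1 start2 end2
  start1=$(ip_to_int "$net1"); end1=$(( start1 + (1 << (32 - len1)) - 1 ))
  start2=$(ip_to_int "$net2"); end2=$(( start2 + (1 << (32 - len2)) - 1 ))
  if (( start1 <= end2 && start2 <= end1 )); then echo overlap; else echo ok; fi
}

cidr_overlap 10.1.2.0/24 10.129.0.0/23   # prints "ok"
```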
Create the network and subnets
Console

1. In the Google Cloud console, go to the VPC networks page.
2. Click Create VPC network.
3. For Name, enter `lb-network`.
4. In the Subnets section, set the Subnet creation mode to Custom.
5. Create a subnet for the load balancer's backends. In the New subnet section, enter the following information:
   - Name: `backend-subnet`
   - Region: `REGION_A`
   - IP address range: `10.1.2.0/24`
6. Click Done.
7. Click Create.
gcloud

1. To create the custom VPC network, use the `gcloud compute networks create` command:

   ```
   gcloud compute networks create lb-network --subnet-mode=custom
   ```

2. To create a subnet in the `lb-network` network in the `REGION_A` region, use the `gcloud compute networks subnets create` command:

   ```
   gcloud compute networks subnets create backend-subnet \
       --network=lb-network \
       --range=10.1.2.0/24 \
       --region=REGION_A
   ```
Create the proxy-only subnet
A proxy-only subnet provides a set of IP addresses that Google uses to run Envoy proxies on your behalf. The proxies terminate connections from the client and create new connections to the backends.
This proxy-only subnet is used by all Envoy-based load balancers in Region A of the `lb-network` VPC network.
Console

If you're using the Google Cloud console, you can wait and create the proxy-only subnet later on the Load balancing page.

If you want to create the proxy-only subnet now, use the following steps:

1. In the Google Cloud console, go to the VPC networks page.
2. Click the name of the VPC network: `lb-network`.
3. Click Add subnet.
4. For Name, enter `proxy-only-subnet`.
5. For Region, select `REGION_A`.
6. Set Purpose to Regional Managed Proxy.
7. For IP address range, enter `10.129.0.0/23`.
8. Click Add.
gcloud

To create the proxy-only subnet, use the `gcloud compute networks subnets create` command:

```
gcloud compute networks subnets create proxy-only-subnet \
    --purpose=REGIONAL_MANAGED_PROXY \
    --role=ACTIVE \
    --region=REGION_A \
    --network=lb-network \
    --range=10.129.0.0/23
```
Create firewall rules
In this example, you create the following firewall rules:
-
fw-allow-ssh.An ingress rule, applicable to the instances being load balanced, that allows incoming SSH connectivity on TCP port22from any address. You can choose a more restrictive source IP range for this rule; for example, you can specify only the IP ranges of the system from which you initiate SSH sessions. This example uses the target tagallow-ssh. -
fw-allow-health-check.An ingress rule, applicable to the instances being load balanced, that allows all TCP traffic from the Google Cloud health checking systems (in130.211.0.0/22and35.191.0.0/16). This example uses the target tagallow-health-check. -
fw-allow-proxy-only-subnet.An ingress rule that allows connections from the proxy-only subnet to reach the backends.
Without these firewall rules, the default deny ingress rule blocks incoming traffic to the backend instances.
The target tags define the backend instances. Without the target tags, the firewall rules apply to all of your backend instances in the VPC network. When you create the backend VMs, make sure to include the specified target tags, as shown in Create a managed instance group .
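When you inspect traffic arriving on a backend VM, you can distinguish health check probes from other clients by their source ranges. The following shell sketch (an illustrative helper, not part of the setup) tests whether a source IP falls inside the documented prober ranges:

```shell
# Return "prober" if the IP is in 130.211.0.0/22 or 35.191.0.0/16,
# the source ranges Google Cloud health checks use; else "other".
# Illustrative helper only.
ip_to_int() {
  local IFS=.
  set -- $1
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

in_cidr() {                        # usage: in_cidr IP NET/LEN
  local ip net="${2%/*}" len="${2#*/}" mask
  ip=$(ip_to_int "$1")
  mask=$(( (0xFFFFFFFF << (32 - len)) & 0xFFFFFFFF ))
  (( (ip & mask) == ($(ip_to_int "$net") & mask) ))
}

classify_source() {
  if in_cidr "$1" 130.211.0.0/22 || in_cidr "$1" 35.191.0.0/16; then
    echo prober
  else
    echo other
  fi
}

classify_source 130.211.1.17   # prints "prober"
```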
Console

1. In the Google Cloud console, go to the Firewall policies page.
2. Click Create firewall rule to create the rule to allow incoming SSH connections. Complete the following fields:
   - Name: `fw-allow-ssh`
   - Network: `lb-network`
   - Direction of traffic: Ingress
   - Action on match: Allow
   - Targets: Specified target tags
   - Target tags: `allow-ssh`
   - Source filter: IPv4 ranges
   - Source IPv4 ranges: `0.0.0.0/0`
   - Protocols and ports:
     - Choose Specified protocols and ports.
     - Select the TCP checkbox, and then enter `22` for the port number.
3. Click Create.
4. Click Create firewall rule a second time to create the rule to allow Google Cloud health checks:
   - Name: `fw-allow-health-check`
   - Network: `lb-network`
   - Direction of traffic: Ingress
   - Action on match: Allow
   - Targets: Specified target tags
   - Target tags: `allow-health-check`
   - Source filter: IPv4 ranges
   - Source IPv4 ranges: `130.211.0.0/22` and `35.191.0.0/16`
   - Protocols and ports:
     - Choose Specified protocols and ports.
     - Select the TCP checkbox, and then enter `80` for the port number.

     As a best practice, limit this rule to just the protocols and ports that match those used by your health check. If you use `tcp:80` for the protocol and port, Google Cloud can use HTTP on port `80` to contact your VMs, but it cannot use HTTPS on port `443` to contact them.
5. Click Create.
6. Click Create firewall rule a third time to create the rule to allow the load balancer's proxy servers to connect to the backends:
   - Name: `fw-allow-proxy-only-subnet`
   - Network: `lb-network`
   - Direction of traffic: Ingress
   - Action on match: Allow
   - Targets: Specified target tags
   - Target tags: `allow-proxy-only-subnet`
   - Source filter: IPv4 ranges
   - Source IPv4 ranges: `10.129.0.0/23`
   - Protocols and ports:
     - Choose Specified protocols and ports.
     - Select the TCP checkbox, and then enter `80` for the port number.
7. Click Create.
gcloud

1. Create the `fw-allow-ssh` firewall rule to allow SSH connectivity to VMs with the network tag `allow-ssh`. When you omit `source-ranges`, Google Cloud interprets the rule to mean any source.

   ```
   gcloud compute firewall-rules create fw-allow-ssh \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --target-tags=allow-ssh \
       --rules=tcp:22
   ```

2. Create the `fw-allow-health-check` rule to allow Google Cloud health checks. This example allows all TCP traffic from health check probers; however, you can configure a narrower set of ports to meet your needs.

   ```
   gcloud compute firewall-rules create fw-allow-health-check \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --source-ranges=130.211.0.0/22,35.191.0.0/16 \
       --target-tags=allow-health-check \
       --rules=tcp:80
   ```

3. Create the `fw-allow-proxy-only-subnet` rule to allow the region's Envoy proxies to connect to your backends. Set `--source-ranges` to the allocated ranges of your proxy-only subnet; in this example, `10.129.0.0/23`.

   ```
   gcloud compute firewall-rules create fw-allow-proxy-only-subnet \
       --network=lb-network \
       --action=allow \
       --direction=ingress \
       --source-ranges=10.129.0.0/23 \
       --target-tags=allow-proxy-only-subnet \
       --rules=tcp:80
   ```
Reserve the load balancer's IP address
Reserve a static IP address for the load balancer.
Console

1. In the Google Cloud console, go to the Reserve a static address page.
2. Choose a name for the new address.
3. For Network Service Tier, select Standard.
4. For IP version, select IPv4. IPv6 addresses are not supported.
5. For Type, select Regional.
6. For Region, select `REGION_A`.
7. Leave the Attached to option set to None. After you create the load balancer, this IP address is attached to the load balancer's forwarding rule.
8. Click Reserve to reserve the IP address.
gcloud

1. To reserve a static external IP address, use the `gcloud compute addresses create` command:

   ```
   gcloud compute addresses create ADDRESS_NAME \
       --region=REGION_A \
       --network-tier=STANDARD
   ```

   Replace `ADDRESS_NAME` with the name that you want to call this address.

2. To view the result, use the `gcloud compute addresses describe` command:

   ```
   gcloud compute addresses describe ADDRESS_NAME
   ```
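If you want just the literal address for use in later commands, the gcloud `--format` projection can extract it. A sketch, using this page's `ADDRESS_NAME` and `REGION_A` placeholders, which requires an authenticated gcloud session:

```shell
# Capture only the reserved IPv4 address into a shell variable
# (sketch; requires an authenticated gcloud and the address created above).
LB_IP_ADDRESS=$(gcloud compute addresses describe ADDRESS_NAME \
    --region=REGION_A \
    --format="get(address)")
echo "$LB_IP_ADDRESS"
```

You can then pass `$LB_IP_ADDRESS` to the forwarding rule and test commands later on this page.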
Create a managed instance group
This section shows you how to create two managed instance group (MIG) backends in Region A for the load balancer. The MIGs provide VM instances running the backend Apache servers for this example. Typically, a regional external proxy Network Load Balancer isn't used for HTTP traffic, but Apache software is commonly used for testing.
Console

Create an instance template

1. In the Google Cloud console, go to the Instance templates page.
2. Click Create instance template.
3. For Name, enter `ext-reg-tcp-proxy-backend-template`.
4. Ensure that the Boot disk is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that are only available on Debian, such as `apt-get`.
5. Click Advanced options.
6. Click Networking and configure the following fields:
   - For Network tags, enter `allow-ssh`, `allow-health-check`, and `allow-proxy-only-subnet`.
   - For Network interfaces, select the following:
     - Network: `lb-network`
     - Subnet: `backend-subnet`
7. Click Management. Enter the following script into the Startup script field:

   ```
   #! /bin/bash
   apt-get update
   apt-get install apache2 -y
   a2ensite default-ssl
   a2enmod ssl
   vm_hostname="$(curl -H "Metadata-Flavor:Google" \
   http://metadata.google.internal/computeMetadata/v1/instance/name)"
   echo "Page served from: $vm_hostname" | \
   tee /var/www/html/index.html
   systemctl restart apache2
   ```

8. Click Create.
Create a managed instance group

1. In the Google Cloud console, go to the Instance groups page.
2. Click Create instance group.
3. Select New managed instance group (stateless). For more information, see Create a MIG with stateful disks.
4. For Name, enter `mig-a`.
5. For Location, select Single zone.
6. For Region, select `REGION_A`.
7. For Zone, select `ZONE_A`.
8. For Instance template, select `ext-reg-tcp-proxy-backend-template`.
9. Specify the number of instances that you want to create in the group.

   For this example, specify the following options for Autoscaling:
   - For Autoscaling mode, select Off: do not autoscale.
   - For Maximum number of instances, enter `2`.
10. For Port mapping, click Add port.
    - For Port name, enter `tcp80`.
    - For Port number, enter `80`.
11. Click Create.
12. To create a second managed instance group, repeat the preceding steps and use the following settings:
    - Name: `mig-b`
    - Zone: `ZONE_B`

    Keep all the other settings the same.
gcloud

The Google Cloud CLI instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.

1. To create a VM instance template with an HTTP server, use the `gcloud compute instance-templates create` command:

   ```
   gcloud compute instance-templates create ext-reg-tcp-proxy-backend-template \
       --region=REGION_A \
       --network=lb-network \
       --subnet=backend-subnet \
       --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
       --image-family=debian-12 \
       --image-project=debian-cloud \
       --metadata=startup-script='#! /bin/bash
   apt-get update
   apt-get install apache2 -y
   a2ensite default-ssl
   a2enmod ssl
   vm_hostname="$(curl -H "Metadata-Flavor:Google" \
   http://metadata.google.internal/computeMetadata/v1/instance/name)"
   echo "Page served from: $vm_hostname" | \
   tee /var/www/html/index.html
   systemctl restart apache2'
   ```

2. Create a managed instance group in the `ZONE_A` zone:

   ```
   gcloud compute instance-groups managed create mig-a \
       --zone=ZONE_A \
       --size=2 \
       --template=ext-reg-tcp-proxy-backend-template
   ```

3. Create a managed instance group in the `ZONE_B` zone:

   ```
   gcloud compute instance-groups managed create mig-b \
       --zone=ZONE_B \
       --size=2 \
       --template=ext-reg-tcp-proxy-backend-template
   ```
Configure the load balancer
Console

Start your configuration

1. In the Google Cloud console, go to the Load balancing page.
2. Click Create load balancer.
3. For Type of load balancer, select Network Load Balancer (TCP/UDP/SSL) and click Next.
4. For Proxy or passthrough, select Proxy load balancer and click Next.
5. For Public facing or internal, select Public facing (external) and click Next.
6. For Global or single region deployment, select Best for regional workloads and click Next.
7. Click Configure.

Basic configuration

1. For Name, enter `my-ext-tcp-lb`.
2. For Region, select `REGION_A`.
3. For Network, select `lb-network`.

Reserve a proxy-only subnet

1. Click Reserve.
2. In the Name field, enter `proxy-only-subnet`.
3. In the IP address range field, enter `10.129.0.0/23`.
4. Click Add.
Configure the backends

1. Click Backend configuration.
2. In the Backend type list, select Instance group.
3. In the Protocol list, select TCP.
4. In the Named port field, enter `tcp80`.
5. Configure the health check:
   - In the Health check list, select Create a health check.
   - In the Name field, enter `tcp-health-check`.
   - In the Protocol list, select TCP.
   - In the Port field, enter `80`.
   - Click Create.
6. Configure the first backend:
   - For New backend, select instance group `mig-a`.
   - For Port numbers, enter `80`.
   - Retain the remaining default values, and then click Done.
7. Configure the second backend:
   - Click Add backend.
   - For New backend, select instance group `mig-b`.
   - For Port numbers, enter `80`.
   - Retain the remaining default values, and then click Done.
8. Retain the remaining default values, and then click Save.
9. In the Google Cloud console, verify that there is a check mark next to Backend configuration. If not, double-check that you have completed all of the steps.
Configure the frontend

1. Click Frontend configuration.
2. For Name, enter `ext-reg-tcp-forwarding-rule`.
3. For Network Service Tier, select Standard.
4. For IP address, select the IP address reserved previously: `LB_IP_ADDRESS`.
5. For Port number, enter `110`. The forwarding rule only forwards packets with a matching destination port.
6. For Proxy protocol, select Off because the PROXY protocol doesn't work with the Apache HTTP Server software. For more information, see PROXY protocol.
7. Click Done.
8. In the Google Cloud console, verify that there is a check mark next to Frontend configuration. If not, double-check that you have completed all the previous steps.

Review and finalize

1. Click Review and finalize.
2. Review your load balancer configuration settings.
3. Optional: Click Equivalent code to view the REST API request that is used to create the load balancer.
4. Click Create.
gcloud

1. Create a regional health check:

   ```
   gcloud compute health-checks create tcp tcp-health-check \
       --region=REGION_A \
       --use-serving-port
   ```

2. Create a backend service:

   ```
   gcloud compute backend-services create ext-reg-tcp-proxy-bs \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --protocol=TCP \
       --port-name=tcp80 \
       --region=REGION_A \
       --health-checks=tcp-health-check \
       --health-checks-region=REGION_A
   ```

3. Add instance groups to your backend service:

   ```
   gcloud compute backend-services add-backend ext-reg-tcp-proxy-bs \
       --region=REGION_A \
       --instance-group=mig-a \
       --instance-group-zone=ZONE_A \
       --balancing-mode=UTILIZATION \
       --max-utilization=0.8
   ```

   ```
   gcloud compute backend-services add-backend ext-reg-tcp-proxy-bs \
       --region=REGION_A \
       --instance-group=mig-b \
       --instance-group-zone=ZONE_B \
       --balancing-mode=UTILIZATION \
       --max-utilization=0.8
   ```

4. Create a target TCP proxy:

   ```
   gcloud compute target-tcp-proxies create ext-reg-tcp-target-proxy \
       --backend-service=ext-reg-tcp-proxy-bs \
       --proxy-header=NONE \
       --region=REGION_A
   ```

   If you want to turn on the proxy header, set it to `PROXY_V1` instead of `NONE`. In this example, don't enable the PROXY protocol because it doesn't work with the Apache HTTP Server software. For more information, see PROXY protocol.

5. Create the forwarding rule. For `--ports`, specify a single port number from 1-65535. This example uses port `110`. The forwarding rule only forwards packets with a matching destination port.

   ```
   gcloud compute forwarding-rules create ext-reg-tcp-forwarding-rule \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --network-tier=STANDARD \
       --network=lb-network \
       --region=REGION_A \
       --target-tcp-proxy=ext-reg-tcp-target-proxy \
       --target-tcp-proxy-region=REGION_A \
       --address=LB_IP_ADDRESS \
       --ports=110
   ```
Test the load balancer

Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.

1. Get the load balancer's IP address. To get the IPv4 address, run the following command:

   ```
   gcloud compute addresses describe ADDRESS_NAME
   ```

2. Send traffic to your load balancer by running the following command. Replace `LB_IP_ADDRESS` with your load balancer's IPv4 address.

   ```
   curl -m1 LB_IP_ADDRESS:110
   ```
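Because each backend's startup script writes its own VM name into the served page, repeated requests let you watch traffic spread across the instance groups. A sketch that wraps the probe in a reusable function; the host and port arguments are whatever your deployment uses:

```shell
# Probe an HTTP endpoint repeatedly and print each response body, so you can
# watch requests rotate across backend VMs. Sketch only; point it at your
# load balancer's IP address and frontend port.
probe() {
  local host="$1" port="$2" count="${3:-6}"
  for _ in $(seq "$count"); do
    curl -m 2 -s "http://${host}:${port}/" || echo "request failed"
  done
}

# Example: probe LB_IP_ADDRESS 110
```

If session affinity is not configured, the "Page served from" lines should alternate between VM names over enough requests.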
Additional configuration options
This section expands on the configuration example to provide alternative and additional configuration options. All of the tasks are optional. You can perform them in any order.
Create a load balancer with TLS routes
This section shows you how to create a load balancer that can use SNI-based routing. SNI-based routing lets your proxy Network Load Balancers route traffic to specific backend services based on the Server Name Indication (SNI) hostname provided during the TLS handshake.
To create this load balancer, you use the same networks, subnets, and firewall rules created previously on this page. You configure the deployment shown in the following diagram:
Create a managed instance group backend
This section shows you how to create a managed instance group (MIG) backend for the load balancer. The MIG provides VM instances running the backend servers for this example.

The Google Cloud CLI instructions in this guide assume that you are using Cloud Shell or another environment with bash installed.
1. Create an instance template with an "echo" HTTPS service that is exposed on port 443:

   ```
   gcloud compute instance-templates create INSTANCE_TEMPLATE_NAME \
       --region=REGION_A \
       --network=NETWORK \
       --subnet=SUBNET_A \
       --stack-type=IPv4_ONLY \
       --tags=allow-ssh,allow-health-check,allow-proxy-only-subnet \
       --image-family=debian-12 \
       --image-project=debian-cloud \
       --metadata=startup-script='#! /bin/bash
   sudo sed -i "s/^#DNS=.*/DNS=8.8.8.8 8.8.4.4/" /etc/systemd/resolved.conf
   sudo systemctl restart systemd-resolved
   sudo rm -rf /var/lib/apt/lists/*
   sudo apt-get -y clean
   sudo apt-get -y update
   sudo apt-get -y install ca-certificates curl gnupg software-properties-common
   sudo curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
   sudo add-apt-repository -y "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
   sudo apt-get -y update
   sudo apt-get -y install docker-ce
   sudo which docker
   echo "{ \"registry-mirrors\": [\"https://mirror.gcr.io\"] }" | sudo tee -a /etc/docker/daemon.json
   sudo service docker restart
   sudo docker run -e HTTPS_PORT=9999 -p 443:9999 --rm -dt mendhak/http-https-echo:22'
   ```

   Replace the following:
   - `INSTANCE_TEMPLATE_NAME`: a name for the instance template.
   - `REGION_A`: the region for the instance template.
   - `NETWORK`: the name of the network.
   - `SUBNET_A`: the name of the subnetwork.

2. Create a managed instance group based on the instance template:

   ```
   gcloud compute instance-groups managed create INSTANCE_GROUP_NAME \
       --zone=ZONE_A \
       --size=2 \
       --template=INSTANCE_TEMPLATE_NAME
   ```

   Replace `ZONE_A` with the zone for the instance group.

3. Set the name of the serving port for the managed instance group:

   ```
   gcloud compute instance-groups managed set-named-ports INSTANCE_GROUP_NAME \
       --named-ports=PORT_NAME:PORT_NUMBER \
       --zone=ZONE_A
   ```

   Replace the following:
   - `PORT_NAME`: a name for the serving port; for example, `tcp443`.
   - `PORT_NUMBER`: a port number for the serving port; for example, `443`.
Configure the firewall

Configure a firewall rule to allow traffic from the load balancer and from the health check probes to the backend instances.

```
gcloud compute firewall-rules create FIREWALL_RULE_NAME \
    --network=NETWORK \
    --action=allow \
    --direction=ingress \
    --source-ranges=130.211.0.0/22,35.191.0.0/16 \
    --target-tags=allow-health-check \
    --rules=tcp:443
```

Replace `FIREWALL_RULE_NAME` with a name for the firewall rule.
Configure the load balancer

1. Create an HTTPS health check:

   ```
   gcloud compute health-checks create https HTTPS_HEALTH_CHECK_NAME \
       --region=REGION_A \
       --port=HC_PORT
   ```

   Replace the following:
   - `HTTPS_HEALTH_CHECK_NAME`: a name for the health check.
   - `HC_PORT`: the port for the health check; for example, `443`.
   - `REGION_A`: the region for the health check.

2. Create a backend service:

   ```
   gcloud compute backend-services create BACKEND_SERVICE_NAME \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --protocol=TCP \
       --port-name=PORT_NAME \
       --health-checks=HTTPS_HEALTH_CHECK_NAME \
       --health-checks-region=REGION_A \
       --region=REGION_A
   ```

   Replace the following:
   - `BACKEND_SERVICE_NAME`: a name for the backend service.
   - `PORT_NAME`: the port name for the backend service. Use the same named port configured on the instance group; for example, `tcp443`.
   - `HTTPS_HEALTH_CHECK_NAME`: the name of the HTTPS health check.

3. Add the backend instance group to your backend service:

   ```
   gcloud compute backend-services add-backend BACKEND_SERVICE_NAME \
       --balancing-mode=UTILIZATION \
       --max-utilization=0.8 \
       --instance-group=INSTANCE_GROUP_NAME \
       --instance-group-zone=ZONE_A \
       --region=REGION_A
   ```

   Replace the following:
   - `INSTANCE_GROUP_NAME`: the name of the backend instance group.
   - `ZONE_A`: the zone of the instance group.

4. Create a target TCP proxy:

   ```
   gcloud beta compute target-tcp-proxies create TARGET_TCP_PROXY_NAME \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --proxy-header=NONE \
       --region=REGION_A
   ```

   Replace `TARGET_TCP_PROXY_NAME` with the name of the target TCP proxy.

5. Create a TLS route specification and save it to a YAML file:

   ```
   cat <<EOF | tee YAML_FILE_NAME
   name: TLS_ROUTE_NAME
   targetProxies:
   - projects/PROJECT_NUMBER/locations/REGION_A/targetTcpProxies/TARGET_TCP_PROXY_NAME
   rules:
   - matches:
     - sniHost:
       - example.com
     action:
       destinations:
       - serviceName: projects/PROJECT_NUMBER/locations/REGION_A/backendServices/BACKEND_SERVICE_NAME
   EOF
   ```

   Replace the following:
   - `YAML_FILE_NAME`: a name for the YAML file; for example, `tls-route.yaml`.
   - `TLS_ROUTE_NAME`: a name for the TLS route.
   - `PROJECT_NUMBER`: the project number.

6. Use the YAML specification file to create the TLS route resource:

   ```
   gcloud network-services tls-routes import TLS_ROUTE_NAME \
       --source=YAML_FILE_NAME \
       --location=REGION_A
   ```

7. Create the forwarding rule:

   ```
   gcloud compute forwarding-rules create FORWARDING_RULE_NAME \
       --load-balancing-scheme=EXTERNAL_MANAGED \
       --network-tier=STANDARD \
       --network=NETWORK \
       --region=REGION_A \
       --target-tcp-proxy=TARGET_TCP_PROXY_NAME \
       --target-tcp-proxy-region=REGION_A \
       --address=IP_ADDRESS \
       --ports=PORT_NUMBER
   ```

   Replace the following:
   - `FORWARDING_RULE_NAME`: a name for the forwarding rule.
   - `NETWORK`: the name of the network.
   - `IP_ADDRESS`: the IP address of the load balancer.
   - `PORT_NUMBER`: the port used by the forwarding rule; for example, `443`.
Test the load balancer

Now that you have configured your load balancer, you can test sending traffic to the load balancer's IP address.

1. Verify that you can access the HTTPS service through the load balancer:

   ```
   curl https://example.com --resolve example.com:443:IP_ADDRESS -k
   ```

   The command returns a response from one of the VMs in the managed instance group, with output similar to the following printed to the console:

   ```
   "path": "/",
   "headers": {
     "host": "example.com",
     "user-agent": "curl/7.81.0",
     "accept": "*/*"
   },
   "method": "GET",
   "body": "",
   "fresh": false,
   "hostname": "example.com",
   "ip": "::ffff:10.142.0.2",
   "ips": [],
   "protocol": "https",
   "query": {},
   "subdomains": [],
   "xhr": false,
   "os": {
     "hostname": "0cd3aec9b351"
   },
   "connection": {
     "servername": "example.com"
   }
   ```

   You can further verify that if you provide a different SNI hostname that doesn't match a TLS route, or if you don't provide an SNI hostname at all, the request is dropped.

2. Run a test with an SNI hostname that doesn't match example.com, to ensure that the connection is rejected:

   ```
   curl https://unknown.com --resolve unknown.com:443:IP_ADDRESS -k
   ```

3. Run a test with a plain-text connection without TLS, to ensure that the connection is rejected:

   ```
   curl example.com:443 --resolve example.com:443:IP_ADDRESS -k
   ```

   These commands return the following error:

   ```
   curl: (35) OpenSSL SSL_connect: Connection reset by peer in connection
   ```

   You'll see the `connection_refused` error code in the `proxyStatus` logs when the load balancer refuses such invalid connections.
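Another way to exercise SNI routing is `openssl s_client`, which lets you set the SNI servername explicitly and inspect the handshake. A sketch, using this page's `IP_ADDRESS` placeholder for your deployed load balancer:

```shell
# Open a TLS connection with an explicit SNI servername to see whether the
# load balancer's TLS route accepts it. Sketch; substitute your IP_ADDRESS.
openssl s_client -connect IP_ADDRESS:443 -servername example.com </dev/null

# With an SNI hostname that no TLS route matches, the handshake is expected
# to fail rather than complete:
openssl s_client -connect IP_ADDRESS:443 -servername unknown.com </dev/null
```

The first command should print the certificate and cipher negotiated by the backend's echo service; the second should terminate without completing the handshake.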
Enable session affinity
The example configuration creates a backend service without session affinity.
These procedures show you how to update the backend service for the example load balancer created previously so that the backend service uses client IP affinity.

When client IP affinity is enabled, the load balancer directs a particular client's requests to the same backend VM based on a hash created from the client's IP address and the load balancer's IP address (the IP address of the load balancer's forwarding rule).
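Conceptually, the selection is deterministic: while the backend set is stable, the same (client IP, load balancer IP) pair always hashes to the same backend. The following shell sketch illustrates the idea only; the hashing here is illustrative, not Google Cloud's actual algorithm:

```shell
# Illustrative 2-tuple affinity: hash(client IP + LB IP) -> stable backend
# index. Not Google Cloud's real algorithm; just the concept.
affinity_backend() {
  local client_ip="$1" lb_ip="$2" num_backends="$3" h
  h=$(printf '%s:%s' "$client_ip" "$lb_ip" | md5sum | cut -c1-8)
  echo $(( 0x$h % num_backends ))
}

# The same inputs always yield the same index:
affinity_backend 203.0.113.7 198.51.100.1 4
```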
To enable client IP session affinity, complete the following steps.
Console

1. In the Google Cloud console, go to the Load balancing page.
2. Click Backends.
3. Click `ext-reg-tcp-proxy-bs` (the name of the backend service that you created for this example), and then click Edit.
4. On the Backend service details page, click Advanced configuration.
5. For Session affinity, select Client IP.
6. Click Update.
gcloud

To update the `ext-reg-tcp-proxy-bs` backend service and specify client IP session affinity, use the `gcloud compute backend-services update` command:

```
gcloud compute backend-services update ext-reg-tcp-proxy-bs \
    --region=REGION_A \
    --session-affinity=CLIENT_IP
```