This guide provides instructions for creating an external passthrough Network Load Balancer deployment by using a regional backend service. This example creates an external passthrough Network Load Balancer that supports either TCP or UDP traffic. If you want to create an external passthrough Network Load Balancer that load-balances TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic (not just TCP or UDP), see Set up an external passthrough Network Load Balancer for multiple IP protocols.
In this example, we'll use the load balancer to distribute TCP traffic across
backend VMs in two zonal managed instance groups in the us-central1
region. An
equally valid approach would be to use a single regional managed instance group
for the us-central1
region.
This scenario distributes TCP traffic across healthy instances. To support this example, TCP health checks are configured to ensure that traffic is sent only to healthy instances. Note that TCP health checks are only supported with a backend service-based load balancer. Target pool-based load balancers can only use legacy HTTP health checks.
This example load balances TCP traffic, but you can use backend service-based external passthrough Network Load Balancers to load balance TCP, UDP, ESP, GRE, ICMP, and ICMPv6 traffic.
The external passthrough Network Load Balancer is a regional load balancer. All load balancer components (backend VMs, backend service, and forwarding rule) must be in the same region.
Before you begin
Install the Google Cloud CLI. For a complete overview of the tool, see the gcloud CLI overview . You can find commands related to load balancing in the API and gcloud references .
If you haven't run the Google Cloud CLI previously, first run gcloud init
to authenticate.
This guide assumes that you are familiar with bash.
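Optionally, set gcloud defaults so that you don't have to repeat the project flag in later commands. This is a convenience sketch; PROJECT_ID is a placeholder for your own project ID, and the commands in this guide still pass the region explicitly.
# Optional: set a default project and region for subsequent gcloud commands.
gcloud config set project PROJECT_ID
gcloud config set compute/region us-central1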
Set up the network and subnets
The example on this page uses a custom mode VPC
network
named lb-network
. You can use an auto
mode VPC network if you only want to handle IPv4 traffic.
However, IPv6 traffic requires a custom mode
subnet
.
IPv6 traffic also requires a dual-stack
subnet ( stack-type
set to IPv4_IPv6
). When you create a dual stack subnet on a custom mode VPC network,
you choose an IPv6 access type
for the
subnet. For this example, we set the subnet's ipv6-access-type
parameter to EXTERNAL
. This means new VMs on this subnet can be assigned both external
IPv4 addresses and external IPv6 addresses. The forwarding rules can also be
assigned both external IPv4 addresses and external IPv6 addresses.
The backends and the load balancer components used for this example are located in this region and subnet:
- Region:
us-central1
- Subnet:
lb-subnet
, with primary IPv4 address range 10.1.2.0/24
. Although you choose which IPv4 address range is configured on the subnet, the IPv6 address range is assigned automatically. Google provides a fixed size (/64) IPv6 CIDR block.
To create the example network and subnet, follow these steps.
Console
To support both IPv4 and IPv6 traffic, use the following steps:
-
In the Google Cloud console, go to the VPC networks page.
-
Click Create VPC network.
-
Enter a Name of
lb-network
. -
In the Subnets section:
- Set the Subnet creation mode to Custom.
- In the New subnet section, configure the following fields and click Done:
- Name:
lb-subnet
- Region:
us-central1
- IP stack type: IPv4 and IPv6 (dual-stack)
- IPv4 range:
10.1.2.0/24
Although you can configure an IPv4 range of addresses for the subnet, you cannot choose the range of the IPv6 addresses for the subnet. Google provides a fixed size (/64) IPv6 CIDR block.
- IPv6 access type: External
-
Click Create.
To support IPv4 traffic only, use the following steps:
-
In the Google Cloud console, go to the VPC networks page.
-
Click Create VPC network.
-
Enter a Name of
lb-network
. -
In the Subnets section:
- Set the Subnet creation mode to Custom.
- In the New subnet section, configure the following fields and click Done:
- Name:
lb-subnet
- Region:
us-central1
- IP stack type: IPv4 (single-stack)
- IPv4 range:
10.1.2.0/24
-
Click Create.
gcloud
-
Create the custom mode VPC network:
gcloud compute networks create lb-network \
    --subnet-mode=custom
-
Within the
lb-network
network, create a subnet for backends in the us-central1
region. For both IPv4 and IPv6 traffic, use the following command to create a dual-stack subnet:
gcloud compute networks subnets create lb-subnet \
    --stack-type=IPV4_IPV6 \
    --ipv6-access-type=EXTERNAL \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-central1
For IPv4 traffic only, use the following command:
gcloud compute networks subnets create lb-subnet \
    --network=lb-network \
    --range=10.1.2.0/24 \
    --region=us-central1
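Optionally, verify that the subnet was created with the stack type that you expect. This is a verification sketch; the stackType and ipv6AccessType fields are the ones to check for a dual-stack subnet.
# Optional: confirm the subnet's IPv4 range, stack type, and IPv6 access type.
gcloud compute networks subnets describe lb-subnet \
    --region=us-central1 \
    --format="yaml(ipCidrRange, stackType, ipv6AccessType)"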
Create the zonal managed instance groups
For this load balancing scenario, you create two Compute Engine zonal managed instance groups and install an Apache web server on each instance.
To handle both IPv4 and IPv6 traffic, configure the backend VMs to be
dual-stack. Set the VM's stack-type
to IPv4_IPv6
. The VMs also inherit the ipv6-access-type
setting (in this example, EXTERNAL
) from the subnet. For
more details about IPv6 requirements, see the External passthrough Network Load Balancer overview:
Forwarding
rules
.
To use existing VMs as backends, update the VMs to be dual-stack by
using the gcloud compute instances network-interfaces
update
command
.
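For example, a minimal sketch of that update for a hypothetical existing VM named my-backend-vm in us-central1-a might look like the following; the VM's subnet must already be dual-stack.
# Hypothetical example: convert an existing VM's nic0 interface to dual-stack.
gcloud compute instances network-interfaces update my-backend-vm \
    --zone=us-central1-a \
    --network-interface=nic0 \
    --stack-type=IPV4_IPV6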
Instances that participate as backend VMs for external passthrough Network Load Balancers must be running the appropriate Linux Guest Environment , Windows Guest Environment , or other processes that provide equivalent capability.
Setting up the instances
Console
-
Create an instance template. In the Google Cloud console, go to the Instance templates page.
- Click Create instance template.
- For Name, enter
ig-us-template
. - In the Boot disk section, ensure that the Image is set to a Debian image, such as Debian GNU/Linux 12 (bookworm). These instructions use commands that
are only available on Debian, such as
apt-get
. - Click Advanced options.
- Click Networking.
- For Network tags, enter
lb-tag
. - For Network interfaces, click the default interface and
configure the following fields:
- Network:
lb-network
- Subnetwork:
lb-subnet
- Click Done.
-
Click Management and copy the following script into the Startup script field.
#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2
-
Click Create.
-
Create a managed instance group. In the Google Cloud console, go to the Instance groups page.
- Click Create instance group.
- Select New managed instance group (stateless). For more information, see Stateless or stateful MIGs .
- For Name, enter
ig-us-1
. - For Instance template, select
ig-us-template
. - For Location, select Single zone.
- For Region, select
us-central1
. - For Zone, select
us-central1-a
. -
Specify the number of instances that you want to create in the group.
For this example, specify the following options in the Autoscaling section:
- For Autoscaling mode, select
Off: do not autoscale
. - For Maximum number of instances, enter
2
.
-
Click Create.
-
Repeat the previous steps to create a second managed instance group in the
us-central1-c
zone with the following specifications:
- Name:
ig-us-2
- Zone:
us-central1-c
- Instance template: Use the same
ig-us-template
template created in the previous section.
gcloud
The gcloud
instructions in this guide assume that you are using Cloud
Shell
or another environment with bash installed.
-
Create a VM instance template with an HTTP server by using the
gcloud compute instance-templates create
command. To handle both IPv4 and IPv6 traffic, use the following command:
gcloud compute instance-templates create ig-us-template \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --ipv6-network-tier=PREMIUM \
    --stack-type=IPV4_IPV6 \
    --tags=lb-tag \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
To handle IPv4 traffic only, use the following command:
gcloud compute instance-templates create ig-us-template \
    --region=us-central1 \
    --network=lb-network \
    --subnet=lb-subnet \
    --tags=lb-tag \
    --image-family=debian-12 \
    --image-project=debian-cloud \
    --metadata=startup-script='#! /bin/bash
apt-get update
apt-get install apache2 -y
a2ensite default-ssl
a2enmod ssl
vm_hostname="$(curl -H "Metadata-Flavor:Google" \
http://metadata.google.internal/computeMetadata/v1/instance/name)"
echo "Page served from: $vm_hostname" | \
tee /var/www/html/index.html
systemctl restart apache2'
-
Create a managed instance group in the us-central1-a zone with the
gcloud compute instance-groups managed create
command:
gcloud compute instance-groups managed create ig-us-1 \
    --zone us-central1-a \
    --size 2 \
    --template ig-us-template
-
Create a second managed instance group in the
us-central1-c
zone:
gcloud compute instance-groups managed create ig-us-2 \
    --zone us-central1-c \
    --size 2 \
    --template ig-us-template
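Optionally, confirm that both groups have created their VMs before you continue. This is a verification sketch.
# Optional: list the managed instances in each group; each group should show two VMs.
gcloud compute instance-groups managed list-instances ig-us-1 \
    --zone us-central1-a
gcloud compute instance-groups managed list-instances ig-us-2 \
    --zone us-central1-c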
Configuring firewall rules
Create firewall rules that allow external traffic (which includes health check probes) to reach the backend instances.
This example creates a firewall rule that allows TCP traffic from all source ranges to reach your backend instances on port 80. If you want to create separate firewall rules specifically for the health check probes, use the source IP address ranges documented in the Health checks overview: Probe IP ranges and firewall rules .
Console
-
In the Google Cloud console, go to the Firewall policies page.
-
To allow IPv4 traffic, perform the following steps:
- Click Create firewall rule.
- For Name, enter
allow-network-lb-ipv4
. - In the Network list, select
lb-network
. - For Targets, select Specified target tags.
- In the Target tags field, enter
lb-tag
. - For Source filter, select IPv4 ranges.
- Set the Source IPv4 ranges to
0.0.0.0/0
. This allows IPv4 traffic from any source. This also allows Google's health check probes to reach the backend instances. - For Specified protocols and ports, select the TCP checkbox
and enter
80
. - Click Create. It might take a moment for the Google Cloud console to display the new firewall rule, or you might have to click Refresh to see the rule.
-
To allow IPv6 traffic, perform the following steps:
- Click Create firewall rule again.
- For Name, enter
allow-network-lb-ipv6
. - In the Network list, select
lb-network
. - For Targets, select Specified target tags.
- In the Target tags field, enter
lb-tag
. - For Source filter, select IPv6 ranges.
- Set the Source IPv6 ranges to
::/0
. This allows IPv6 traffic from any source. This also allows Google's health check probes to reach the backend instances. - For Specified protocols and ports, select the TCP checkbox
and enter
80
. - Click Create. It might take a moment for the Google Cloud console to display the new firewall rule, or you might have to click Refresh to see the rule.
gcloud
-
To allow IPv4 traffic, run the following command:
gcloud compute firewall-rules create allow-network-lb-ipv4 \
    --network=lb-network \
    --target-tags=lb-tag \
    --allow=tcp:80 \
    --source-ranges=0.0.0.0/0
-
To allow IPv6 traffic, run the following command:
gcloud compute firewall-rules create allow-network-lb-ipv6 \
    --network=lb-network \
    --target-tags=lb-tag \
    --allow=tcp:80 \
    --source-ranges=::/0
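Optionally, confirm that both rules exist in the lb-network network. This is a verification sketch; the filter expression is one reasonable way to scope the output.
# Optional: list the firewall rules that apply to the lb-network network.
gcloud compute firewall-rules list \
    --filter="network:lb-network"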
Configure the load balancer
Next, set up the load balancer.
When you configure the load balancer, your virtual machine (VM) instances
will receive packets that are destined for the static external IP address you
configure. If you are using an image provided by Compute Engine
,
your instances are automatically configured to handle this IP address. If
you are using any other image, you must configure this address as
an alias on eth0
or as a loopback on each instance.
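As an illustration only, on a Linux VM created from a custom image you might bind the load balancer address to the loopback interface with iproute2. LB_IP_ADDRESS is a placeholder for the forwarding rule's address, the change does not persist across reboots, and images provided by Compute Engine handle this for you automatically.
# Hypothetical example for a custom image: accept traffic addressed to the load balancer IP.
sudo ip addr add LB_IP_ADDRESS/32 dev lo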
To set up the load balancer, use the following instructions.
Console
Start your configuration
-
In the Google Cloud console, go to the Load balancing page.
- Click Create load balancer .
- For Type of load balancer , select Network Load Balancer (TCP/UDP/SSL) and click Next .
- For Proxy or passthrough , select Passthrough load balancer and click Next .
- For Public facing or internal , select Public facing (external) and click Next .
- Click Configure .
Backend configuration
- On the Create external passthrough Network Load Balancer page, enter
the name
tcp-network-lb
for the new load balancer. - For Region, select
us-central1
. - Click Backend configuration.
- On the Backend configuration page, make the following changes:
- For New Backend, select the IP stack type. If you created dual-stack backends to handle both IPv4 and IPv6 traffic, select IPv4 and IPv6 (dual-stack). To handle IPv4 traffic only, select IPv4 (single-stack).
- In the Instance group list, select
ig-us-1
, and then click Done. - Click Add a backend and repeat this step to add
ig-us-2
. - For Health check, click Create a health check or Create another health check, and then enter the following information:
- Name:
tcp-health-check
- Protocol:
TCP
- Port:
80
- Click Save.
- Verify that there is a blue checkmark next to Backend configuration before continuing.
Frontend configuration
- Click Frontend configuration.
- For Name, enter
network-lb-forwarding-rule
. - To handle IPv4 traffic, use the following steps:
- For IP version, select IPv4.
- In the Internal IP purpose section, in the IP address list,
select Create IP address.
- On the Reserve a new static IP address page, for Name, enter
network-lb-ipv4
. - Click Reserve.
- For Ports, choose Single. For Port number, enter
80
. - Click Done.
-
To handle IPv6 traffic, use the following steps:
- For IP version, select IPv6.
- For Subnetwork, select lb-subnet.
- In the IPv6 range list, select Create
IP address.
- On the Reserve a new static IP address page, for Name, enter
network-lb-ipv6
. - Click Reserve.
- For Ports, select Single. For Port number, enter
80
. - Click Done.
A blue circle with a checkmark to the left of Frontend configuration indicates a successful setup.
Review the configuration
- Click Review and finalize.
- Review your load balancer configuration settings.
- Optional: Click Equivalent code to view the REST API request that will be used to create the load balancer.
-
Click Create.
On the load balancing page, under the Backend column for your new load balancer, you should see a green checkmark showing that the new load balancer is healthy.
gcloud
-
Reserve a static external IP address.
For IPv4 traffic: Create a static external IPv4 address for your load balancer.
gcloud compute addresses create network-lb-ipv4 \
    --region us-central1
For IPv6 traffic: Create a static external IPv6 address range for your load balancer. The subnet used must be a dual-stack subnet with an external IPv6 range.
gcloud compute addresses create network-lb-ipv6 \
    --region us-central1 \
    --subnet lb-subnet \
    --ip-version IPV6 \
    --endpoint-type NETLB
-
Create a TCP health check.
gcloud compute health-checks create tcp tcp-health-check \
    --region us-central1 \
    --port 80
-
Create a backend service .
gcloud compute backend-services create network-lb-backend-service \
    --protocol TCP \
    --health-checks tcp-health-check \
    --health-checks-region us-central1 \
    --region us-central1
-
Add the instance groups to the backend service.
gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group ig-us-1 \
    --instance-group-zone us-central1-a \
    --region us-central1
gcloud compute backend-services add-backend network-lb-backend-service \
    --instance-group ig-us-2 \
    --instance-group-zone us-central1-c \
    --region us-central1
-
Create the forwarding rules depending on whether you want to handle IPv4 traffic or IPv6 traffic. Create both forwarding rules to handle both types of traffic.
-
For IPv4 traffic: Create a forwarding rule to route incoming TCP traffic to the backend service. Use the IPv4 address reserved in step 1 as the static external IP address of the load balancer.
gcloud compute forwarding-rules create network-lb-forwarding-rule-ipv4 \
    --load-balancing-scheme EXTERNAL \
    --region us-central1 \
    --ports 80 \
    --address network-lb-ipv4 \
    --backend-service network-lb-backend-service
-
For IPv6 traffic: Create a forwarding rule to handle IPv6 traffic. Use the IPv6 address range reserved in step 1 as the static external IP address of the load balancer. The subnet used must be a dual-stack subnet with an external IPv6 subnet range.
gcloud compute forwarding-rules create network-lb-forwarding-rule-ipv6 \
    --load-balancing-scheme EXTERNAL \
    --region us-central1 \
    --network-tier PREMIUM \
    --ip-version IPV6 \
    --subnet lb-subnet \
    --address network-lb-ipv6 \
    --ports 80 \
    --backend-service network-lb-backend-service
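Before testing, you can optionally confirm that the backends have passed their health checks. This is a verification sketch; each instance should eventually report a healthState of HEALTHY.
# Optional: check the health status reported for each backend instance.
gcloud compute backend-services get-health network-lb-backend-service \
    --region us-central1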
Test the load balancer
Now that the load balancing service is configured, you can start sending traffic to the load balancer's external IP address and watch traffic get distributed to the backend instances.
Look up the load balancer's external IP address
Console
-
On the Load balancing components page, go to the Forwarding rules tab.
-
Locate the forwarding rule used by the load balancer.
-
In the External IP address column, note the external IP address listed.
gcloud: IPv4
Enter the following command to view the external IPv4 address of the network-lb-forwarding-rule-ipv4
forwarding rule used by the load balancer.
gcloud compute forwarding-rules describe network-lb-forwarding-rule-ipv4 \
    --region us-central1
gcloud: IPv6
Enter the following command to view the external IPv6 address of the network-lb-forwarding-rule-ipv6
forwarding rule used by the load balancer.
gcloud compute forwarding-rules describe network-lb-forwarding-rule-ipv6 \
    --region us-central1
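If you only need the address itself, for example to reuse it in the test commands that follow, a format expression such as the following is one option; IPAddress is the field name on the forwarding rule resource.
# Optional: capture just the forwarding rule's external IP address in a shell variable.
IPV4_ADDRESS=$(gcloud compute forwarding-rules describe network-lb-forwarding-rule-ipv4 \
    --region us-central1 \
    --format="get(IPAddress)")
echo "$IPV4_ADDRESS"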
Send traffic to the load balancer
Make web requests to the load balancer using curl
to contact its IP
address.
-
From clients with IPv4 connectivity, run the following command:
$ while true; do curl -m1 IPV4_ADDRESS ; done
-
From clients with IPv6 connectivity, run the following command:
$ while true; do curl -m1 http:// IPV6_ADDRESS ; done
For example, if the assigned IPv6 address is
[2001:db8:1:1:1:1:1:1/96]:80
, the command should look like:
$ while true; do curl -m1 http://[2001:db8:1:1:1:1:1:1]:80; done
Note the text returned by the curl
command. The name of the backend VM
generating the response is displayed in that text; for example: Page served
from: VM_NAME
The response from the curl
command alternates randomly among the backend
instances. If your response is initially unsuccessful, you might need to wait
approximately 30 seconds for the configuration to be fully loaded and for
your instances to be marked healthy before trying again.
Additional configuration options
This section expands on the configuration example to provide instructions about how to further customize your external passthrough Network Load Balancer. These tasks are optional. You can perform them in any order.
Configure session affinity
The example configuration creates a backend service with session affinity
disabled (value set to NONE
). This section shows you how to update the backend
service to change the load balancer's session affinity setting.
For supported session affinity types, see Session affinity options .
Console
-
In the Google Cloud console, go to the Load balancing page.
-
In the Load balancers tab, click the name of the backend service, and then click Edit.
-
On the Edit external passthrough Network Load Balancer page, click Backend configuration.
-
Select an option from the Session affinity list.
-
Click Update.
gcloud
Use the following gcloud
command to update session affinity for the
backend service:
gcloud compute backend-services update BACKEND_SERVICE \
    --region=REGION \
    --session-affinity=SESSION_AFFINITY_OPTION
Replace the placeholders with valid values:
-
BACKEND_SERVICE
: the backend service that you're updating -
SESSION_AFFINITY_OPTION
: the session affinity option that you want to set. For the list of supported values for an external passthrough Network Load Balancer, see Session affinity options .
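For example, a sketch that pins connections from the same client IP address to the same backend for this guide's example load balancer could look like the following; CLIENT_IP is one of the supported session affinity values.
# Example: enable client-IP session affinity on the example backend service.
gcloud compute backend-services update network-lb-backend-service \
    --region=us-central1 \
    --session-affinity=CLIENT_IP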
Configure a connection tracking policy
The example configuration creates a backend service with the default settings for its connection tracking policy. This section shows you how to update the backend service to change the load balancer's default connection tracking policy.
A connection tracking policy includes the following settings:
- Tracking mode
- Connection persistence on unhealthy backends
- Idle timeout (60 seconds, not configurable)
gcloud
Use the following gcloud compute
backend-services
command to update the connection tracking policy for the backend service:
gcloud compute backend-services update BACKEND_SERVICE \
    --region=REGION \
    --tracking-mode=TRACKING_MODE \
    --connection-persistence-on-unhealthy-backends=CONNECTION_PERSISTENCE_BEHAVIOR
Replace the placeholders with valid values:
-
BACKEND_SERVICE
: the backend service that you're updating -
TRACKING_MODE
: the connection tracking mode to be used for incoming packets. For the list of supported values, see Tracking mode . -
CONNECTION_PERSISTENCE_BEHAVIOR
: the connection persistence behavior when backends are unhealthy. For the list of supported values, see Connection persistence on unhealthy backends .
Configure traffic steering
This section shows you how to update a load balancer's frontend configuration to set up source IP-based traffic steering. For details about how traffic steering works, see Traffic steering .
These instructions assume that you already have the parent base forwarding rule created. This example creates a second forwarding rule, which is the steering forwarding rule, with the same IP address, IP protocol, and ports as the parent. This steering forwarding rule is configured with source IP ranges so that you can customize how packets from those source IP ranges are forwarded.
gcloud
Use the following command to create a steering forwarding rule that points to a backend service:
gcloud compute forwarding-rules create STEERING_FORWARDING_RULE_BS \
    --load-balancing-scheme=EXTERNAL \
    --backend-service=BACKEND_SERVICE \
    --address=LOAD_BALANCER_VIP \
    --ip-protocol=IP_PROTOCOL \
    --ports=PORTS \
    --region=REGION \
    --source-ip-ranges=SOURCE_IP_ADDRESS_RANGES
Use the following command to create a steering forwarding rule that points to a target instance:
gcloud compute forwarding-rules create STEERING_FORWARDING_RULE_TI \
    --load-balancing-scheme=EXTERNAL \
    --target-instance=TARGET_INSTANCE \
    --address=LOAD_BALANCER_VIP \
    --ip-protocol=IP_PROTOCOL \
    --ports=PORTS \
    --region=REGION \
    --source-ip-ranges=SOURCE_IP_ADDRESS_RANGES
Replace the placeholders with valid values:
-
STEERING_FORWARDING_RULE_BS or STEERING_FORWARDING_RULE_TI
: the name of the steering forwarding rule that you're creating. -
BACKEND_SERVICE
orTARGET_INSTANCE
: the name of the backend service or target instance to which this steering forwarding rule will send traffic. Even if the parent forwarding rule points to a backend service, you can create steering forwarding rules that point to target instances. -
LOAD_BALANCER_VIP
,IP_PROTOCOL
,PORTS
: the IP address, IP protocol, and ports, respectively, for the steering forwarding rule that you're creating. These settings should match a pre-existing base forwarding rule. -
REGION
: the region of the forwarding rule that you're creating. -
SOURCE_IP_ADDRESS_RANGES
: comma-separated list of IP addresses or IP address ranges. This forwarding rule will only forward traffic when the source IP address of the incoming packet falls into one of the IP ranges set here.
Use the following command to delete a steering forwarding rule. You must delete any steering forwarding rules that are being used by a load balancer before you can delete the load balancer itself.
gcloud compute forwarding-rules delete STEERING_FORWARDING_RULE \
    --region=REGION
Configure failover policy
To configure the failover policy, see Configure failover for external passthrough Network Load Balancers .
Configure weighted load balancing
To configure weighted load balancing, see Configure weighted load balancing .
Create an IPv6 forwarding rule with BYOIP
The load balancer created in the previous steps has been configured with
forwarding rules with IP version
as IPv4
or IPv6
. This section provides
instructions to create an IPv6 forwarding rule with bring your own IP (BYOIP)
addresses.
Bring your own IP addresses lets you provision and use your own public IPv6 addresses for Google Cloud resources. For more information, see Bring your own IP addresses .
Before you start configuring an IPv6 forwarding rule with BYOIP addresses, you must complete the following steps:
- Create a public advertised IPv6 prefix
- Create public delegated prefixes
- Create IPv6 sub-prefixes
- Announce the prefix
To create a new forwarding rule, follow these steps:
Console
-
In the Google Cloud console, go to the Load balancing page.
- Click the name of the load balancer that you want to modify.
- Click Edit .
- Click Frontend configuration .
- Click Add frontend IP and port .
- In the New Frontend IP and port section, specify the following:
- The Protocol is TCP .
- In the IP version field, select IPv6 .
- In the Source of IPv6 range field, select BYOIP .
- In the IP collection list, select a sub-prefix created in the previous steps with the forwarding rule option enabled.
- In the IPv6 range field, enter the IPv6 address range. The IPv6 address range must adhere to the IPv6 sub-prefix specifications .
- In the Ports field, enter a port number.
- Click Done .
- Click Update .
Google Cloud CLI
Create the forwarding rule by using the gcloud compute forwarding-rules create
command
:
gcloud compute forwarding-rules create FWD_RULE_NAME \
    --load-balancing-scheme EXTERNAL \
    --ip-protocol PROTOCOL \
    --ports ALL \
    --ip-version IPV6 \
    --region REGION_A \
    --address IPV6_CIDR_RANGE \
    --backend-service BACKEND_SERVICE \
    --ip-collection PDP_NAME
Replace the following:
-
FWD_RULE_NAME
: the name of the forwarding rule -
PROTOCOL
: the IP protocol for the forwarding rule. The default is TCP
. The IP protocol can be one of TCP
, UDP
, or L3_DEFAULT
. -
REGION_A
: region for the forwarding rule -
IPV6_CIDR_RANGE
: the IPv6 address range that the forwarding rule serves. The IPv6 address range must adhere to the IPv6 sub-prefix specifications . -
BACKEND_SERVICE
: the name of the backend service -
PDP_NAME
: the name of the public delegated prefix. The PDP must be a sub-prefix in the EXTERNAL_IPV6_FORWARDING_RULE_CREATION mode.
What's next
- To learn how to migrate an external passthrough Network Load Balancer from a target pool backend to a regional backend service, see Migrate external passthrough Network Load Balancers from target pools to backend services .
- To configure an external passthrough Network Load Balancer for multiple IP protocols (supporting IPv4 and IPv6 traffic), see Set up an external passthrough Network Load Balancer for multiple IP protocols .
- To configure an external passthrough Network Load Balancer with zonal network endpoint group (NEG) backends
that let you forward packets to non-
nic0
network interfaces of VM instances, see Set up an external passthrough Network Load Balancer with zonal NEG backends.
- To configure advanced network DDoS protection for an external passthrough Network Load Balancer by using Google Cloud Armor, see Configure advanced network DDoS protection.
- To delete resources, see Clean up the load balancer setup .