This page helps you choose the most suitable API for deploying load balancers to distribute traffic across a fleet of Google Kubernetes Engine (GKE) clusters.
You can attach a load balancer to your fleet of GKE clusters in the following ways:
- Use the Multi Cluster Ingress API (the `MultiClusterIngress` and `MultiClusterService` resources).
- Use the Gateway API (the `GatewayClass`, `Gateway`, `HTTPRoute`, `Policy`, `ServiceExport`, and `ServiceImport` resources).
- Set up an Application Load Balancer by using the Google Cloud console, the gcloud CLI, the API, Terraform, or Config Connector, and attach standalone NEGs to user-managed backend services.
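As an illustrative sketch of the first option, a Multi Cluster Ingress deployment pairs a `MultiClusterService` (which derives a Service in each member cluster) with a `MultiClusterIngress` that routes to it. The names, namespace, and ports below are placeholders, not values from this page:

```yaml
# Sketch: a MultiClusterService selecting Pods labeled app: foo
# in every cluster of the fleet.
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: foo-mcs
  namespace: default
spec:
  template:
    spec:
      selector:
        app: foo
      ports:
      - name: web
        protocol: TCP
        port: 80
        targetPort: 8080
---
# Sketch: a MultiClusterIngress sending all traffic to the
# MultiClusterService above as its default backend.
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: foo-ingress
  namespace: default
spec:
  template:
    spec:
      backend:
        serviceName: foo-mcs
        servicePort: 80
```

Both resources are applied only to the fleet's config membership cluster; the Multi Cluster Ingress controller then programs the load balancer and member clusters.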
The following table lists the ways that you can attach a load balancer to your fleet of GKE clusters. Features that appear on the Load balancer feature comparison page but not in the following table should work with a user-managed load balancer that uses standalone NEGs, rather than with the Kubernetes-native load balancing APIs.
[Comparison table not recoverable from this extract. It compares Multi Cluster Ingress, multi-cluster Gateway, and standalone NEGs across: the Google Cloud infrastructure components each approach manages (forwarding rule, target proxy, URL map, backend services, health checks); cluster settings on Standard; supported NEG types (zonal NEGs only for the Kubernetes-native APIs, with an annotation required on the Kubernetes Service in one case); firewall rule handling (VPC firewall rules only versus managed rules, and firewall rules permissions in the host project); additional backend types (Cloud Storage, public external endpoints with internet NEGs, private external endpoints with hybrid NEGs, Private Service Connect NEGs, and Cloud Run with serverless NEGs); IPv6 support (load balancer-to-backend traffic remains IPv4); and URL path matching (prefix and exact match).]
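For the Gateway API option, a multi-cluster deployment exports a Service from member clusters and routes to the resulting `ServiceImport` from the config cluster. The following is a minimal sketch; the `store` names, namespace, and port are placeholders, and `gke-l7-global-external-managed-mc` is one of the GKE multi-cluster GatewayClasses:

```yaml
# Sketch: export the Service named "store" from each member cluster.
apiVersion: net.gke.io/v1
kind: ServiceExport
metadata:
  name: store
  namespace: store
---
# Sketch: a multi-cluster external Gateway in the config cluster.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: Gateway
metadata:
  name: external-http
  namespace: store
spec:
  gatewayClassName: gke-l7-global-external-managed-mc
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
# Sketch: an HTTPRoute whose backend is the fleet-wide ServiceImport,
# not a single-cluster Service.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-route
  namespace: store
spec:
  parentRefs:
  - kind: Gateway
    name: external-http
  rules:
  - backendRefs:
    - group: net.gke.io
      kind: ServiceImport
      name: store
      port: 8080
```

The `ServiceExport`/`ServiceImport` pairing is what distinguishes a multi-cluster route from a single-cluster one, where the backend would be a plain `Service`.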
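For the standalone NEG option, you create and wire the load balancer components yourself. The following gcloud sketch assumes a zonal NEG named `example-neg` (populated by a GKE Service annotated for standalone NEGs) already exists in `us-central1-a`; all resource names here are hypothetical:

```shell
# Sketch: user-managed backend service attached to a standalone zonal NEG.
# Health check against each endpoint's serving port.
gcloud compute health-checks create http example-hc --use-serving-port

# Global backend service for an external Application Load Balancer.
gcloud compute backend-services create example-bes \
    --global \
    --protocol=HTTP \
    --load-balancing-scheme=EXTERNAL_MANAGED \
    --health-checks=example-hc

# Attach the NEG as a backend; RATE balancing mode is required for NEGs.
gcloud compute backend-services add-backend example-bes \
    --global \
    --network-endpoint-group=example-neg \
    --network-endpoint-group-zone=us-central1-a \
    --balancing-mode=RATE \
    --max-rate-per-endpoint=100
```

You would then create the URL map, target proxy, and forwarding rule on top of this backend service; unlike the Ingress and Gateway options, none of these components are managed by a Kubernetes controller.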