# Data plane extensibility with EnvoyFilter
You can use the `EnvoyFilter` API to extend data plane capabilities in Cloud Service Mesh that are not otherwise achievable with other Istio APIs. With the `EnvoyFilter` API, you can customize the Envoy configuration generated from other policies applied to the workloads, such as adding filters to the HTTP filter chain.
## Important considerations
- The API surface is tied to internal implementation details, so take special care when using this feature: incorrect configurations could destabilize the mesh. Use the `EnvoyFilter` API only if other Istio APIs don't suit your needs.
- The `EnvoyFilter` API is supported with specific restrictions on which fields and extensions can be used, for reliability and supportability purposes. For an exhaustive list of supported features in the `EnvoyFilter` API, refer to Supported features using Istio APIs (managed control plane).
- The scope of support Google offers is limited to propagating the user-provided configuration to the workloads with Envoy sidecars; it does not extend to the correctness of the configuration specified using per-extension APIs.
## Supported API Fields

The `EnvoyFilter` API is supported with the `TRAFFIC_DIRECTOR` control plane implementation only, with limited support as follows:
- `targetRefs`: not supported
- `configPatches[].applyTo`: only `HTTP_FILTER` is supported
- `configPatches[].patch.operation`: only `INSERT_FIRST` and `INSERT_BEFORE` when used with the route filter are supported
- `configPatches[].patch.value.type_url`: refer to Supported Extensions
- `configPatches[].patch.filterClass`: not supported
- `configPatches[].match.proxy`: not supported
- `configPatches[].match.routeConfiguration`: not supported
- `configPatches[].match.cluster`: not supported
- The following fields are supported for the `INSERT_BEFORE` operation only:
  - `configPatches[].match.listener`: only `filter` is supported
  - `configPatches[].match.listener.filter.name`: only `envoy.filters.network.http_connection_manager` is supported
  - `configPatches[].match.listener.filter.subFilter.name`: only `envoy.filters.http.router` is supported
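Taken together, these restrictions mean a patch targets the HTTP filter chain and anchors on the router filter inside the HTTP connection manager. The following is a minimal sketch of that shape, not a definitive example: the metadata names are placeholders, and the `GrpcWeb` extension is used only as one example of a supported `type_url`.

```yaml
# Sketch of an EnvoyFilter satisfying the restrictions above.
# Names in metadata are illustrative placeholders.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: example-http-filter
  namespace: example-namespace
spec:
  configPatches:
    - applyTo: HTTP_FILTER                # the only supported applyTo value
      match:
        listener:
          filter:
            name: envoy.filters.network.http_connection_manager
            subFilter:
              name: envoy.filters.http.router
      patch:
        operation: INSERT_BEFORE          # or INSERT_FIRST
        value:
          name: envoy.filters.http.grpc_web
          typed_config:
            "@type": type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb
```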
## Supported Extensions

The following is the list of supported extensions along with their supported API fields across the various release channels. The API definitions and their semantics can be found in the official Envoy documentation.
### `type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit`

| Field | Rapid | Regular | Stable |
|---|---|---|---|
| `stat_prefix` | ✓ | ✓ | ✓ |
| `status` | ✓ | ✓ | ✓ |
| `token_bucket` | ✓ | ✓ | ✓ |
| `filter_enabled` | ✓ | ✓ | ✓ |
| `filter_enforced` | ✓ | ✓ | ✓ |
| `response_headers_to_add` | ✓ | ✓ | ✓ |
| `request_headers_to_add_when_not_enforced` | ✓ | ✓ | ✓ |
| `local_rate_limit_per_downstream_connection` | ✓ | ✓ | ✓ |
| `enable_x_ratelimit_headers` | ✓ | ✓ | ✓ |
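To illustrate some of the fields beyond those used in the tutorial below, a `LocalRateLimit` configuration fragment might look as follows. This is a hedged sketch: the status code, header name, and values are illustrative examples, not recommendations from this document.

```yaml
# Illustrative LocalRateLimit fragment using fields from the table above.
stat_prefix: demo_rate_limiter
status:
  code: TooManyRequests            # status returned when the limit is hit
token_bucket:
  max_tokens: 10                   # burst capacity
  tokens_per_fill: 10              # tokens restored each interval
  fill_interval: 60s
response_headers_to_add:
  - header:
      key: x-rate-limited          # example header marking limited responses
      value: "true"
enable_x_ratelimit_headers: DRAFT_VERSION_03
```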
### `type.googleapis.com/envoy.extensions.filters.http.grpc_web.v3.GrpcWeb`

This extension has no configurable fields.
## Sample Usage

In this tutorial, you learn how to use Envoy's built-in local rate limiting to dynamically limit the traffic to a service using the `EnvoyFilter` API.
## Costs

This tutorial uses the following billable components of Google Cloud:

When you finish this tutorial, you can avoid ongoing costs by deleting the resources you created. For more information, see Clean up.
## Before you begin

- Ensure billing is enabled for your project.
- Provision Cloud Service Mesh on a GKE cluster.
- Clone the repository:

  ```shell
  git clone https://github.com/GoogleCloudPlatform/anthos-service-mesh-samples
  cd anthos-service-mesh-samples
  ```
## Deploy an ingress gateway

1. Set the current context for `kubectl` to the cluster:

   ```shell
   gcloud container clusters get-credentials CLUSTER_NAME \
       --project=PROJECT_ID \
       --zone=CLUSTER_LOCATION
   ```

2. Create a namespace for your ingress gateway:

   ```shell
   kubectl create namespace asm-ingress
   ```

3. Enable the namespace for injection. The steps depend on your control plane implementation. Apply the default injection label to the namespace:

   ```shell
   kubectl label namespace asm-ingress \
       istio.io/rev- istio-injection=enabled --overwrite
   ```

4. Deploy the example gateway in the `anthos-service-mesh-samples` repository:

   ```shell
   kubectl apply -n asm-ingress \
       -f docs/shared/asm-ingress-gateway
   ```

   Expected output:

   ```none
   serviceaccount/asm-ingressgateway configured
   service/asm-ingressgateway configured
   deployment.apps/asm-ingressgateway configured
   gateway.networking.istio.io/asm-ingressgateway configured
   ```
## Deploy the Online Boutique sample application

1. If you haven't already, set the current context for `kubectl` to the cluster:

   ```shell
   gcloud container clusters get-credentials CLUSTER_NAME \
       --project=PROJECT_ID \
       --zone=CLUSTER_LOCATION
   ```

2. Create the namespace for the sample application:

   ```shell
   kubectl create namespace onlineboutique
   ```

3. Label the `onlineboutique` namespace to automatically inject Envoy proxies:

   ```shell
   kubectl label namespace onlineboutique \
       istio.io/rev- istio-injection=enabled --overwrite
   ```

4. Deploy the sample app, the `VirtualService` for the frontend, and service accounts for the workloads. For this tutorial, you deploy Online Boutique, a microservices demo app.

   ```shell
   kubectl apply \
       -n onlineboutique \
       -f docs/shared/online-boutique/virtual-service.yaml
   kubectl apply \
       -n onlineboutique \
       -f docs/shared/online-boutique/service-accounts
   ```
## View your services

1. View the pods in the `onlineboutique` namespace:

   ```shell
   kubectl get pods -n onlineboutique
   ```

   Expected output:

   ```none
   NAME                                     READY   STATUS    RESTARTS   AGE
   adservice-85598d856b-m84m6               2/2     Running   0          2m7s
   cartservice-c77f6b866-m67vd              2/2     Running   0          2m8s
   checkoutservice-654c47f4b6-hqtqr         2/2     Running   0          2m10s
   currencyservice-59bc889674-jhk8z         2/2     Running   0          2m8s
   emailservice-5b9fff7cb8-8nqwz            2/2     Running   0          2m10s
   frontend-77b88cc7cb-mr4rp                2/2     Running   0          2m9s
   loadgenerator-6958f5bc8b-55q7w           2/2     Running   0          2m8s
   paymentservice-68dd9755bb-2jmb7          2/2     Running   0          2m9s
   productcatalogservice-84f95c95ff-c5kl6   2/2     Running   0          114s
   recommendationservice-64dc9dfbc8-xfs2t   2/2     Running   0          2m9s
   redis-cart-5b569cd47-cc2qd               2/2     Running   0          2m7s
   shippingservice-5488d5b6cb-lfhtt         2/2     Running   0          2m7s
   ```

   All of the pods for your application should be up and running, with `2/2` in the `READY` column. This indicates that the pods have an Envoy sidecar proxy injected successfully. If it doesn't show `2/2` after a couple of minutes, visit the Troubleshooting guide.

2. Get the external IP and set it to a variable:

   ```shell
   kubectl get services -n asm-ingress
   export FRONTEND_IP=$(kubectl --namespace asm-ingress \
       get service --output jsonpath='{.items[0].status.loadBalancer.ingress[0].ip}' \
   )
   ```

   You see output similar to the following:

   ```none
   NAME                 TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)                                      AGE
   asm-ingressgateway   LoadBalancer   10.19.247.233   35.239.7.64   80:31380/TCP,443:31390/TCP,31400:31400/TCP   27m
   ```

3. Visit the `EXTERNAL-IP` address in your web browser. You should see the Online Boutique shop in your browser.
## Apply Rate Limit Config

This section applies an `EnvoyFilter` resource to limit all traffic to the `frontend` service to 5 requests per minute.
1. Apply the CR to the `frontend` service:

   ```shell
   kubectl apply -f - <<EOF
   apiVersion: networking.istio.io/v1alpha3
   kind: EnvoyFilter
   metadata:
     name: frontend-local-ratelimit
     namespace: onlineboutique
   spec:
     workloadSelector:
       labels:
         app: frontend
     configPatches:
       - applyTo: HTTP_FILTER
         match:
           context: SIDECAR_INBOUND
           listener:
             filterChain:
               filter:
                 name: "envoy.filters.network.http_connection_manager"
                 subFilter:
                   name: "envoy.filters.http.router"
         patch:
           operation: INSERT_BEFORE
           value:
             name: envoy.filters.http.local_ratelimit
             typed_config:
               "@type": type.googleapis.com/udpa.type.v1.TypedStruct
               type_url: type.googleapis.com/envoy.extensions.filters.http.local_ratelimit.v3.LocalRateLimit
               value:
                 stat_prefix: http_local_rate_limiter
                 token_bucket:
                   max_tokens: 5
                   tokens_per_fill: 5
                   fill_interval: 60s
                 filter_enabled:
                   runtime_key: local_rate_limit_enabled
                   default_value:
                     numerator: 100
                     denominator: HUNDRED
                 filter_enforced:
                   runtime_key: local_rate_limit_enforced
                   default_value:
                     numerator: 100
                     denominator: HUNDRED
   EOF
   ```

   Expected output:

   ```none
   envoyfilter.networking.istio.io/frontend-local-ratelimit created
   ```
2. Verify that the CR status doesn't report any errors:

   ```shell
   kubectl get envoyfilter -n onlineboutique frontend-local-ratelimit -o yaml
   ```

   Expected output:

   ```none
   ...
   status:
     conditions:
     - lastTransitionTime: "2025-06-30T14:29:25.467017594Z"
       message: This resource has been accepted. This does not mean it has been propagated
         to all proxies yet
       reason: Accepted
       status: "True"
       type: Accepted
   ```
3. Remove the `loadgenerator` deployment because it calls the service multiple times, which consumes tokens:

   ```shell
   kubectl delete -n onlineboutique deployment loadgenerator
   ```

   Expected output:

   ```none
   deployment.apps/loadgenerator deleted
   ```
4. Using `curl`, verify that no more than 5 requests are allowed in 60 seconds. The `429` status code indicates that the rate limit is being enforced.

   ```shell
   for i in {1..10}; do curl -s "http://${FRONTEND_IP}" -o /dev/null -w "%{http_code}\n"; sleep 1; done
   ```

   Expected output:

   ```none
   200
   200
   200
   200
   200
   429
   429
   429
   429
   429
   ```
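The five `200`s followed by five `429`s follow directly from the token bucket parameters: `max_tokens: 5` tokens are available up front, and no refill (`tokens_per_fill: 5` per `fill_interval: 60s`) occurs during the ten-second probe. The following is a minimal Python sketch of this accounting, a simplified model for intuition rather than Envoy's actual implementation:

```python
def simulate(request_times, max_tokens=5, tokens_per_fill=5, fill_interval=60.0):
    """Simplified token-bucket model: one token per request, periodic refill."""
    tokens = max_tokens
    last_fill = 0.0
    statuses = []
    for t in request_times:
        # Apply any whole refill intervals that elapsed since the last fill.
        fills = int((t - last_fill) // fill_interval)
        if fills:
            tokens = min(max_tokens, tokens + fills * tokens_per_fill)
            last_fill += fills * fill_interval
        if tokens > 0:
            tokens -= 1
            statuses.append(200)  # request allowed, one token consumed
        else:
            statuses.append(429)  # bucket empty: request rate limited
    return statuses

# Ten requests, one second apart, all within a single 60s fill interval.
print(simulate([float(i) for i in range(10)]))
# [200, 200, 200, 200, 200, 429, 429, 429, 429, 429]
```

A request arriving after the next `fill_interval` boundary would find the bucket refilled and be allowed again.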
## Clean up

To avoid incurring continuing charges to your Google Cloud account for the resources used in this tutorial, you can either delete the project or delete the individual resources.
### Delete the project
- Everything in the project is deleted. If you used an existing project for this tutorial, when you delete it, you also delete any other work you've done in the project.
- Custom project IDs are lost. When you created this project, you might have created a custom project ID that you want to use in the future. To preserve the URLs that use the project ID, such as an appspot.com URL, delete selected resources inside the project instead of deleting the whole project.
In Cloud Shell, delete the project:

```shell
gcloud projects delete PROJECT_ID
```
### Delete the resources
- If you want to keep your cluster and remove the Online Boutique sample:

  1. Delete the application namespace:

     ```shell
     kubectl delete namespace onlineboutique
     ```

     Expected output:

     ```none
     namespace "onlineboutique" deleted
     ```

  2. Delete the Ingress Gateway namespace:

     ```shell
     kubectl delete namespace asm-ingress
     ```

     Expected output:

     ```none
     namespace "asm-ingress" deleted
     ```

- If you want to prevent additional charges, delete the cluster:

  ```shell
  gcloud container clusters delete CLUSTER_NAME \
      --project=PROJECT_ID \
      --zone=CLUSTER_LOCATION
  ```

