Resolving resource limit issues in Cloud Service Mesh
This section explains common Cloud Service Mesh problems and how to resolve them.
If you need additional assistance, see Getting support.
Cloud Service Mesh resource limit problems can be caused by any of the following:
LimitRange objects created in the istio-system namespace or any namespace with automatic sidecar injection enabled (see the example commands after this list).
User-defined limits that are set too low.
Nodes that run out of memory or other resources.
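To check whether a LimitRange object is constraining sidecar resources, list and inspect the LimitRange objects in the affected namespaces. The commands below are a minimal sketch; the istio-injection=enabled label is one common way to find namespaces with automatic injection, and your mesh might use a revision label such as istio.io/rev instead.

  # List and inspect LimitRange objects in istio-system.
  kubectl get limitrange -n istio-system
  kubectl describe limitrange -n istio-system

  # Find namespaces with automatic sidecar injection enabled.
  kubectl get namespaces -l istio-injection=enabled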
Potential symptoms of resource problems:
Cloud Service Mesh repeatedly not receiving configuration from istiod, indicated by
the error Envoy proxy NOT ready. Seeing this error a few times at startup is
normal, but otherwise it is a concern.
Networking problems with some pods or nodes that become unreachable.
istioctl proxy-status showing STALE statuses in the output.
OOMKilled messages in the logs of a node.
Memory usage by containers: kubectl top pod POD_NAME --containers.
Memory usage by pods inside a node: kubectl top node NODE_NAME.
Envoy out of memory: kubectl get pods shows the status OOMKilled in the output.
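The following commands are a minimal sketch for confirming these symptoms; POD_NAME and YOUR_NAMESPACE are placeholders for a pod and namespace with an injected sidecar.

  # Check sidecar sync status; STALE entries indicate proxies that have not
  # received recent configuration from istiod.
  istioctl proxy-status

  # Inspect a pod's readiness events and sidecar logs for the
  # Envoy proxy NOT ready error.
  kubectl describe pod POD_NAME -n YOUR_NAMESPACE
  kubectl logs POD_NAME -n YOUR_NAMESPACE -c istio-proxy

  # Show the last termination reason (for example, OOMKilled) for each container.
  kubectl get pods -n YOUR_NAMESPACE \
    -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.containerStatuses[*].lastState.terminated.reason}{"\n"}{end}'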
Istio sidecars take a long time to receive configuration
Slow configuration propagation can occur due to insufficient resources allocated
to istiod or an excessively large cluster size.
There are several possible solutions to this problem:
If your monitoring tools (Prometheus, Stackdriver, and so on) show high
utilization of a resource by istiod, increase the allocation of that resource;
for example, increase the CPU or memory limit of the istiod deployment (see the
example after these steps). This is a temporary solution, and we recommend that
you investigate methods for reducing resource consumption.
If you encounter this issue in a large cluster or deployment, reduce the amount
of configuration state pushed to each proxy by configuring Sidecar resources (an
example manifest follows these steps).
If the problem persists, try horizontally scaling istiod (see the scaling example after these steps).
If all other troubleshooting steps fail to resolve the problem, report a bug
detailing your deployment and the observed problems. Follow these steps to
include a CPU/memory profile in the bug report if possible, along with a
detailed description of cluster size, number of pods, number of services, and so on.
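To increase the istiod resource allocation, the following is a minimal sketch that assumes an in-cluster control plane whose istiod deployment runs in istio-system with a container named discovery; ISTIOD_DEPLOYMENT_NAME is a placeholder (with a revisioned Cloud Service Mesh installation the deployment name includes the revision), and the values are illustrative only.

  # Raise CPU and memory requests and limits for the istiod container
  # (illustrative values; tune them to your cluster).
  kubectl set resources deployment ISTIOD_DEPLOYMENT_NAME -n istio-system \
    -c discovery \
    --requests=cpu=500m,memory=2Gi \
    --limits=cpu=1,memory=4Gi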
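To reduce the configuration pushed to each proxy, a Sidecar resource named default in a workload namespace limits egress visibility for that namespace's proxies. The following manifest is a common starting point rather than a recommendation for every mesh; YOUR_NAMESPACE is a placeholder.

  apiVersion: networking.istio.io/v1beta1
  kind: Sidecar
  metadata:
    name: default
    namespace: YOUR_NAMESPACE
  spec:
    egress:
    - hosts:
      - "./*"            # services in the same namespace
      - "istio-system/*" # the control plane namespace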
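To horizontally scale istiod, again assuming an in-cluster istiod deployment, you can add replicas directly; in practice you might prefer a HorizontalPodAutoscaler.

  # Scale istiod to three replicas (illustrative).
  kubectl scale deployment ISTIOD_DEPLOYMENT_NAME -n istio-system --replicas=3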