GKE Inference Gateway

This document describes how to configure your Google Kubernetes Engine deployment so that you can use Google Cloud Managed Service for Prometheus to collect metrics from GKE Inference Gateway. This document shows you how to do the following:

  • Set up GKE Inference Gateway to report metrics.
  • Access a dashboard in Cloud Monitoring to view the metrics.

These instructions apply only if you are using managed collection with Managed Service for Prometheus. If you are using self-deployed collection, then see the GKE Inference Gateway documentation for installation information.

These instructions are provided as an example and are expected to work in most Kubernetes environments. If you are having trouble installing an application or exporter due to restrictive security or organizational policies, then we recommend you consult open-source documentation for support.

For information about GKE Inference Gateway, see GKE Inference Gateway.

Prerequisites

To collect metrics from the GKE Inference Gateway exporter by using Managed Service for Prometheus and managed collection, your deployment must meet the following requirements:

  • Your cluster must be running Google Kubernetes Engine version 1.28.15-gke.2475000 or later.
  • You must be running Managed Service for Prometheus with managed collection enabled. For more information, see Get started with managed collection .
The GKE Inference Gateway exporter exposes Prometheus-format metrics automatically; you don't have to install it separately.

To verify that the GKE Inference Gateway exporter is emitting metrics on the expected endpoints, do the following:

  1. Add a Secret, ServiceAccount, ClusterRole, and ClusterRoleBinding. The GKE Inference Gateway exporter observability endpoints are protected by an auth token. To obtain credentials, the client requires a Secret that maps to a ServiceAccount bound to a ClusterRole containing the rule nonResourceURLs: ["/metrics"], verbs: ["get"]. For more information, see Create a secret for a service account.

  2. Set up port forwarding by using the following command:

    kubectl -n NAMESPACE_NAME port-forward POD_NAME 9090
  3. In another window, do the following:

    1. Fetch the token by running the following command:

      TOKEN=$(kubectl -n default get secret inference-gateway-sa-metrics-reader-secret -o jsonpath='{.data.token}' | base64 --decode)
    2. Access the endpoint localhost:9090/metrics using the curl utility:

      curl -H "Authorization: Bearer $TOKEN" localhost:9090/metrics
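As an aside, the decode step in the token-fetch command exists because Kubernetes stores service account tokens base64-encoded in the Secret's data field. The following is a minimal local illustration of that round trip, using a sample string rather than a real token:

```shell
# Illustrative only: encode a sample value the way Kubernetes stores Secret
# data, then decode it the way the token-fetch command does.
SAMPLE_ENCODED=$(printf 'sample-bearer-token' | base64)
printf '%s' "$SAMPLE_ENCODED" | base64 --decode
# prints: sample-bearer-token
```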

Create a secret for a service account

For the protected GKE Inference Gateway exporter endpoint, the Managed Service for Prometheus operator requires a Secret that authorizes metric collection in the gmp-system namespace.

If your cluster is using Autopilot mode, then replace gmp-system with gke-gmp-system .
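If you keep a single copy of the manifest, one way to make that substitution before applying it is a simple stream edit. This is a sketch only; adjust it to your own workflow and file names:

```shell
# Rewrite the gmp-system namespace to gke-gmp-system for Autopilot clusters.
# Shown on a sample line here; in practice you would pipe the manifest file,
# for example: sed 's/gmp-system/gke-gmp-system/g' secret.yaml | kubectl apply -f -
printf 'namespace: gmp-system\n' | sed 's/gmp-system/gke-gmp-system/g'
# prints: namespace: gke-gmp-system
```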

You can use the following Secret, ServiceAccount, ClusterRole and ClusterRoleBinding configuration:

  # Copyright 2025 Google LLC
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     https://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: inference-gateway-metrics-reader
  rules:
  - nonResourceURLs:
    - /metrics
    verbs:
    - get
  ---
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: inference-gateway-sa-metrics-reader
    namespace: default
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: inference-gateway-sa-metrics-reader-role-binding
    namespace: default
  subjects:
  - kind: ServiceAccount
    name: inference-gateway-sa-metrics-reader
    namespace: default
  roleRef:
    kind: ClusterRole
    name: inference-gateway-metrics-reader
    apiGroup: rbac.authorization.k8s.io
  ---
  apiVersion: v1
  kind: Secret
  metadata:
    name: inference-gateway-sa-metrics-reader-secret
    namespace: default
    annotations:
      kubernetes.io/service-account.name: inference-gateway-sa-metrics-reader
  type: kubernetes.io/service-account-token
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: inference-gateway-sa-metrics-reader-secret-read
  rules:
  - resources:
    - secrets
    apiGroups: [""]
    verbs: ["get", "list", "watch"]
    resourceNames: ["inference-gateway-sa-metrics-reader-secret"]
  ---
  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRoleBinding
  metadata:
    name: gmp-system:collector:inference-gateway-sa-metrics-reader-secret-read
    namespace: default
  roleRef:
    name: inference-gateway-sa-metrics-reader-secret-read
    kind: ClusterRole
    apiGroup: rbac.authorization.k8s.io
  subjects:
  - name: collector
    namespace: gmp-system
    kind: ServiceAccount

For more information, see the exporter's Metric & Observability guide .

To apply configuration changes from a local file, run the following command:

kubectl apply -n NAMESPACE_NAME -f FILE_NAME

You can also use Terraform to manage your configurations.

Define a ClusterPodMonitoring resource

For target discovery, the Managed Service for Prometheus Operator requires a ClusterPodMonitoring resource that corresponds to the GKE Inference Gateway exporter in the same namespace.

You can use the following ClusterPodMonitoring configuration:

  # Copyright 2025 Google LLC
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     https://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.
  apiVersion: monitoring.googleapis.com/v1
  kind: ClusterPodMonitoring
  metadata:
    name: inference-optimized-gateway-monitoring
    labels:
      app.kubernetes.io/name: inference-optimized-gateway
      app.kubernetes.io/part-of: google-cloud-managed-prometheus
  spec:
    endpoints:
    - port: metrics
      scheme: http
      interval: 5s
      path: /metrics
      authorization:
        type: Bearer
        credentials:
          secret:
            name: inference-gateway-sa-metrics-reader-secret
            key: token
            namespace: default
    selector:
      matchLabels:
        app: inference-gateway-ext-proc

GKE Inference Gateway uses the ClusterPodMonitoring resource instead of the PodMonitoring resource because it needs to access the secret from another namespace.

In the matchLabels selector of the ClusterPodMonitoring configuration, you can replace the app value of inference-gateway-ext-proc with labels from your GKE Inference Gateway deployment. Ensure that the values of the port and matchLabels fields match those of the GKE Inference Gateway pods you want to monitor.
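For example, if your gateway pods carried a hypothetical label of app: my-inference-gateway, only the selector portion of the configuration would change (illustrative values only; the port name must still match the container port name on your pods):

```yaml
# Hypothetical selector -- replace the label key and value with the labels
# used by your own GKE Inference Gateway deployment.
  selector:
    matchLabels:
      app: my-inference-gateway
```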

To apply configuration changes from a local file, run the following command:

kubectl apply -n NAMESPACE_NAME -f FILE_NAME

You can also use Terraform to manage your configurations.

Verify the configuration

You can use Metrics Explorer to verify that you correctly configured the GKE Inference Gateway exporter. It might take one or two minutes for Cloud Monitoring to ingest your metrics.

To verify the metrics are ingested, do the following:

  1. In the Google Cloud console, go to the Metrics explorer page:

    Go to Metrics explorer

    If you use the search bar to find this page, then select the result whose subheading is Monitoring .

  2. In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL .
  3. Verify that PromQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
  4. Enter and run the following query:
    inference_model_request_total{cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}
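Beyond verifying that raw samples exist, you might also chart the per-model request rate with a query like the following. This is a sketch that assumes the same metric and labels as the verification query; the 5-minute window is an arbitrary choice:

```
# Per-second request rate over a 5-minute window, assuming
# inference_model_request_total is a counter.
rate(inference_model_request_total{cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}[5m])
```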

View dashboards

The Cloud Monitoring integration includes the GKE Inference Gateway Prometheus Overview dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.

To view an installed dashboard, do the following:

  1. In the Google Cloud console, go to the Dashboards page:

    Go to Dashboards

    If you use the search bar to find this page, then select the result whose subheading is Monitoring .

  2. Select the Dashboard List tab.
  3. Choose the Integrations category.
  4. Click the name of the dashboard, for example, GKE Inference Gateway Prometheus Overview .

To view a static preview of the dashboard, do the following:

  1. In the Google Cloud console, go to the Integrations page:

    Go to Integrations

    If you use the search bar to find this page, then select the result whose subheading is Monitoring .

  2. Click the Kubernetes Engine deployment-platform filter.
  3. Locate the GKE Inference Gateway integration and click View Details .
  4. Select the Dashboards tab.

Troubleshooting

For information about troubleshooting metric ingestion problems, see Problems with collection from exporters in Troubleshooting ingestion-side problems .
