TorchServe

This document describes how to configure your Google Kubernetes Engine deployment so that you can use Google Cloud Managed Service for Prometheus to collect metrics from TorchServe. This document shows you how to do the following:

  • Set up TorchServe to report metrics.
  • Configure a PodMonitoring resource for Managed Service for Prometheus to collect the exported metrics.
  • Access a dashboard in Cloud Monitoring to view the metrics.

These instructions apply only if you are using managed collection with Managed Service for Prometheus. If you are using self-deployed collection, then see the TorchServe documentation for installation information.

These instructions are provided as an example and are expected to work in most Kubernetes environments. If you are having trouble installing an application or exporter due to restrictive security or organizational policies, then we recommend you consult open-source documentation for support.

For information about TorchServe, see TorchServe. For information about setting up TorchServe on Google Kubernetes Engine, see the GKE guide for TorchServe.

Prerequisites

To collect metrics from TorchServe by using Managed Service for Prometheus and managed collection, your deployment must meet the following requirements:

  • Your cluster must be running Google Kubernetes Engine version 1.21.4-gke.300 or later.
  • You must be running Managed Service for Prometheus with managed collection enabled. For more information, see Get started with managed collection.

Install TorchServe

TorchServe exposes Prometheus-format metrics automatically when the metrics_mode flag is specified either in the config.properties file or as an environment variable.

If you are setting up TorchServe yourself, then we recommend making the following additions to your config.properties file.

If you are following the Google Kubernetes Engine document Serve scalable LLMs on GKE with TorchServe, then these additions are part of the default setup.

  # Copyright 2025 Google LLC
  #
  # Licensed under the Apache License, Version 2.0 (the "License");
  # you may not use this file except in compliance with the License.
  # You may obtain a copy of the License at
  #
  #     https://www.apache.org/licenses/LICENSE-2.0
  #
  # Unless required by applicable law or agreed to in writing, software
  # distributed under the License is distributed on an "AS IS" BASIS,
  # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # See the License for the specific language governing permissions and
  # limitations under the License.

  inference_address=http://0.0.0.0:8080
  management_address=http://0.0.0.0:8081
+ metrics_address=http://0.0.0.0:8082
+ metrics_mode=prometheus
  number_of_netty_threads=32
  job_queue_size=1000
  install_py_dep_per_model=true
  model_store=/home/model-server/model-store
  load_models=all

Also, when deploying this image to GKE, modify your deployment and service YAML to expose the added metrics port:

  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: t5-inference
    labels:
      model: t5
      version: v1.0
      machine: gpu
  spec:
    replicas: 1
    selector:
      matchLabels:
        model: t5
        version: v1.0
        machine: gpu
    template:
      metadata:
        labels:
          model: t5
          version: v1.0
          machine: gpu
      spec:
        nodeSelector:
          cloud.google.com/gke-accelerator: nvidia-l4
        containers:
        - name: inference
          ...
          args: ["torchserve", "--start", "--foreground"]
          resources:
            ...
          ports:
          - containerPort: 8080
            name: http
          - containerPort: 8081
            name: management
+         - containerPort: 8082
+           name: metrics
  ---
  apiVersion: v1
  kind: Service
  metadata:
    name: t5-inference
    labels:
      model: t5
      version: v1.0
      machine: gpu
  spec:
    ...
    ports:
    - port: 8080
      name: http
      targetPort: http
    - port: 8081
      name: management
      targetPort: management
+   - port: 8082
+     name: metrics
+     targetPort: metrics

To verify that TorchServe is emitting metrics on the expected endpoints, do the following:

  1. Set up port forwarding by using the following command:
     kubectl -n NAMESPACE_NAME port-forward SERVICE_NAME 8082
  2. Access the endpoint localhost:8082/metrics by using the browser or the curl utility in another terminal session.
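A successful scrape returns metrics in the Prometheus text exposition format. The following sketch shows what inspecting such output can look like, using a made-up sample file rather than a live endpoint (the metric names match TorchServe's standard Prometheus counters, but the label values and numbers here are illustrative):

```shell
# Write an illustrative sample of TorchServe's Prometheus text output.
# A real scrape would come from http://localhost:8082/metrics.
cat <<'EOF' > /tmp/torchserve_metrics_sample.txt
# HELP ts_inference_requests_total Total number of inference requests.
# TYPE ts_inference_requests_total counter
ts_inference_requests_total{model_name="t5",model_version="default"} 42.0
# HELP ts_queue_latency_microseconds Cumulative queue time in microseconds.
# TYPE ts_queue_latency_microseconds counter
ts_queue_latency_microseconds{model_name="t5",model_version="default"} 1514.0
EOF

# Count the TorchServe counter samples present (lines starting with ts_).
grep -c '^ts_' /tmp/torchserve_metrics_sample.txt   # prints: 2
```

If the scrape returns no `ts_*` series, recheck that metrics_mode is set to prometheus and that you forwarded the metrics port (8082), not the inference port.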

Define a PodMonitoring resource

For target discovery, the Managed Service for Prometheus Operator requires a PodMonitoring resource that corresponds to TorchServe in the same namespace.

You can use the following PodMonitoring configuration:

  apiVersion: monitoring.googleapis.com/v1
  kind: PodMonitoring
  metadata:
    name: torchserve
    labels:
      app.kubernetes.io/name: torchserve
      app.kubernetes.io/part-of: google-cloud-managed-prometheus
  spec:
    endpoints:
    - port: 8082
      scheme: http
      interval: 30s
      path: /metrics
    selector:
      matchLabels:
        model: t5
        version: v1.0
        machine: gpu
Ensure that the values of the port and matchLabels fields match those of the TorchServe pods you want to monitor.

To apply configuration changes from a local file, run the following command:

kubectl apply -n NAMESPACE_NAME -f FILE_NAME

You can also use Terraform to manage your configurations.

Verify the configuration

You can use Metrics Explorer to verify that you correctly configured TorchServe. It might take one or two minutes for Cloud Monitoring to ingest your metrics.

To verify that the metrics are ingested, do the following:

  1. In the Google Cloud console, go to the Metrics explorer page:

    Go to Metrics explorer

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. In the toolbar of the query-builder pane, select the button whose name is either MQL or PromQL.
  3. Verify that PromQL is selected in the Language toggle. The language toggle is in the same toolbar that lets you format your query.
  4. Enter and run the following query:
    up{job="torchserve", cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}
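
Once the up check succeeds, you can also query TorchServe's own series. For example, a request-rate query over the same labels (ts_inference_requests_total is one of TorchServe's standard Prometheus counters; adjust the label matchers to your deployment):

```promql
rate(ts_inference_requests_total{cluster="CLUSTER_NAME", namespace="NAMESPACE_NAME"}[5m])
```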

View dashboards

The Cloud Monitoring integration includes the TorchServe Prometheus Overview dashboard. Dashboards are automatically installed when you configure the integration. You can also view static previews of dashboards without installing the integration.

To view an installed dashboard, do the following:

  1. In the Google Cloud console, go to the Dashboards page:

    Go to Dashboards

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Select the Dashboard List tab.
  3. Choose the Integrations category.
  4. Click the name of the dashboard, for example, TorchServe Prometheus Overview.

To view a static preview of the dashboard, do the following:

  1. In the Google Cloud console, go to the Integrations page:

    Go to Integrations

    If you use the search bar to find this page, then select the result whose subheading is Monitoring.

  2. Click the Kubernetes Engine deployment-platform filter.
  3. Locate the TorchServe integration and click View Details.
  4. Select the Dashboards tab.

Troubleshooting

For information about troubleshooting metric ingestion problems, see Problems with collection from exporters in Troubleshooting ingestion-side problems.
