This page explains Gemini Enterprise Agent Platform's PyTorch integration and provides resources that show you how to use PyTorch on the platform. The integration makes it easier for you to train, deploy, and orchestrate PyTorch models in production.
Run code in notebooks
Agent Platform provides two options for running your code in notebooks: Colab Enterprise and Vertex AI Workbench. To learn more about these options, see Choose a notebook solution.
Prebuilt containers for training
Gemini Enterprise Agent Platform provides prebuilt Docker container images for model training. These containers are organized by machine learning frameworks and framework versions and include common dependencies that you might want to use in your training code. To learn about which PyTorch versions have prebuilt training containers and how to train models with a prebuilt training container, see Prebuilt containers for custom training .
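A training script that you run in a prebuilt container is ordinary PyTorch code; the platform sets environment variables such as `AIP_MODEL_DIR` to tell the script where to export its artifacts. The following is a minimal sketch of such a script, assuming that convention; the tiny linear model and synthetic data are placeholders for your own training code, and the local fallback directory is only there so the script also runs outside a training job.

```python
# Minimal training-script sketch for a prebuilt PyTorch training container.
# Assumption: AIP_MODEL_DIR is set by the platform to the artifact export
# location; the model and synthetic data below are illustrative stand-ins.
import os

import torch
from torch import nn


def main() -> str:
    # Synthetic regression data standing in for a real dataset.
    X = torch.randn(256, 4)
    y = X @ torch.tensor([1.0, -2.0, 0.5, 3.0]) + 0.1 * torch.randn(256)

    model = nn.Linear(4, 1)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
    loss_fn = nn.MSELoss()

    for _ in range(100):
        optimizer.zero_grad()
        loss = loss_fn(model(X).squeeze(-1), y)
        loss.backward()
        optimizer.step()

    # Export the trained weights where the platform expects them, falling
    # back to a local directory for runs outside a training job.
    model_dir = os.environ.get("AIP_MODEL_DIR", "/tmp/model")
    os.makedirs(model_dir, exist_ok=True)
    path = os.path.join(model_dir, "model.pth")
    torch.save(model.state_dict(), path)
    return path


if __name__ == "__main__":
    main()
```

Because the container image already bundles PyTorch and common dependencies, a script like this can be submitted as-is without building a custom image.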
Prebuilt containers for serving inferences
Gemini Enterprise Agent Platform provides prebuilt Docker container images for serving both batch and online inferences. These containers are organized by machine learning frameworks and framework versions and include common dependencies that you might want to use in your inference code. To learn about which PyTorch versions have prebuilt inference containers and how to serve models with a prebuilt inference container, see Prebuilt containers for inference.
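Online inference requests to a prebuilt serving container are JSON bodies with an `"instances"` array. The sketch below builds such a payload, assuming that request envelope; the endpoint URL in the comment is a placeholder, not a real address.

```python
# Sketch of the JSON request body accepted by prebuilt inference containers.
# Assumption: requests use the {"instances": [...]} envelope of the online
# inference API; the endpoint URL mentioned below is a placeholder.
import json


def build_request(batch):
    """Wrap a batch of model inputs in the inference request envelope."""
    return json.dumps({"instances": [list(map(float, x)) for x in batch]})


body = build_request([[1.0, 2.0, 3.0, 4.0]])
# The body would then be POSTed to the endpoint's predict URL, for example:
# requests.post(endpoint_url, data=body,
#               headers={"Content-Type": "application/json"})
```

The container deserializes each element of `instances`, runs it through the deployed model, and returns a matching `"predictions"` array.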
Distributed training
You can run distributed training of PyTorch models on Gemini Enterprise Agent Platform. For multi-worker training, you can use Reduction Server to optimize performance of all-reduce collective operations. To learn more about distributed training on Gemini Enterprise Agent Platform, see Distributed training.
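At the code level, multi-worker PyTorch training typically wraps the model in `DistributedDataParallel`, which all-reduces gradients across workers during the backward pass. The sketch below runs as a single process with the CPU `gloo` backend so it works anywhere; in a real multi-worker job, one such process runs per worker and the rank, world size, and rendezvous address are provided by the job environment rather than the hard-coded stand-ins used here.

```python
# Single-process sketch of distributed data-parallel training with the
# CPU-friendly gloo backend. Assumption: in a real job the MASTER_ADDR,
# MASTER_PORT, rank, and world size come from the training environment;
# the values below are local stand-ins so the sketch runs on one machine.
import os

import torch
import torch.distributed as dist
from torch import nn
from torch.nn.parallel import DistributedDataParallel as DDP


def run_one_step() -> float:
    os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group("gloo", rank=0, world_size=1)

    # DDP synchronizes gradients across all workers via all-reduce.
    model = DDP(nn.Linear(8, 1))
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    X, y = torch.randn(16, 8), torch.randn(16, 1)
    loss = nn.functional.mse_loss(model(X), y)
    loss.backward()   # gradient all-reduce happens during backward
    optimizer.step()

    dist.destroy_process_group()
    return loss.item()


if __name__ == "__main__":
    run_one_step()
```

Those all-reduce operations during `backward()` are exactly what Reduction Server accelerates in multi-worker jobs.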
Resources for using PyTorch on Gemini Enterprise Agent Platform
To learn more and start using PyTorch in Gemini Enterprise Agent Platform, see the following resources:
- How to train and tune PyTorch models on Gemini Enterprise Agent Platform: Learn how to use Gemini Enterprise Agent Platform Training to build and train a sentiment text classification model using PyTorch, and Gemini Enterprise Agent Platform Hyperparameter Tuning to tune hyperparameters of PyTorch models.
- How to deploy PyTorch models on Gemini Enterprise Agent Platform: Walk through deploying a PyTorch model using TorchServe as a custom container, by deploying the model artifacts to a Vertex AI Inference service.
- Orchestrating PyTorch ML Workflows on Gemini Enterprise Agent Platform Pipelines: See how to build and orchestrate ML pipelines for training and deploying PyTorch models using Gemini Enterprise Agent Platform Pipelines.
- Scalable ML Workflows using PyTorch on Kubeflow Pipelines and Vertex Pipelines: Take a look at examples of PyTorch-based ML workflows on open source Kubeflow Pipelines (part of the Kubeflow project) and Gemini Enterprise Agent Platform Pipelines. We also share new PyTorch built-in components added to Kubeflow Pipelines.
- Serving PyTorch image models with prebuilt containers on Agent Platform: This notebook deploys a PyTorch image classification model on Agent Platform using prebuilt PyTorch serving images.
What's next
- Tutorial: Use Gemini Enterprise Agent Platform to train a PyTorch image classification model in one of Gemini Enterprise Agent Platform's prebuilt container environments by using the Google Cloud console.