TensorFlow integration

This page explains Gemini Enterprise Agent Platform's TensorFlow integration and provides resources that show you how to use TensorFlow on Gemini Enterprise Agent Platform. Gemini Enterprise Agent Platform's TensorFlow integration makes it easier for you to train, deploy, and orchestrate TensorFlow models in production.

Run code in notebooks

Agent Platform provides two options for running your code in notebooks: Colab Enterprise and Vertex AI Workbench. To learn more about these options, see Choose a notebook solution.

Prebuilt containers for training

Gemini Enterprise Agent Platform provides prebuilt Docker container images for model training. These containers are organized by machine learning frameworks and framework versions and include common dependencies that you might want to use in your training code.

To learn about which TensorFlow versions have prebuilt training containers and how to train models with a prebuilt training container, see Prebuilt containers for custom training.
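As a sketch of how a prebuilt training container is referenced, a custom training job's worker pool spec can point at one of these images. The image URI, machine type, and module path below are illustrative placeholders, not values confirmed by this page:

```yaml
# Illustrative worker pool spec for a custom training job.
# All values here are hypothetical examples.
workerPoolSpecs:
  - machineSpec:
      machineType: n1-standard-8
    replicaCount: 1
    containerSpec:
      # A prebuilt TensorFlow training image (version tag is hypothetical).
      imageUri: us-docker.pkg.dev/vertex-ai/training/tf-cpu.2-12:latest
      command: ["python", "-m", "trainer.task"]
```

Because the framework and its dependencies are baked into the image, your training code only needs to be packaged and pointed at by the container's entry command.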

Distributed training

You can run distributed training of TensorFlow models on Gemini Enterprise Agent Platform. For multi-worker training, you can use Reduction Server to further optimize the performance of all-reduce collective operations. To learn more about distributed training on Gemini Enterprise Agent Platform, see Distributed training.
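Multi-worker TensorFlow training coordinates replicas through the `TF_CONFIG` environment variable, which a managed training service sets for each worker before it starts. A minimal sketch of the value one worker would see (host names are hypothetical):

```python
import json
import os

# Cluster layout for a two-worker job; host names are illustrative.
tf_config = {
    "cluster": {
        "worker": ["worker-0.example:12345", "worker-1.example:12345"],
    },
    # This replica is worker 0; with no separate chief listed,
    # worker 0 conventionally takes on chief duties.
    "task": {"type": "worker", "index": 0},
}

# The training service exports this before launching the process;
# tf.distribute.MultiWorkerMirroredStrategy reads it at construction time.
os.environ["TF_CONFIG"] = json.dumps(tf_config)
```

When you use the platform's distributed training support, this variable is populated for you, so your training script only needs to construct the distribution strategy.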

Prebuilt containers for inference

Similar to prebuilt containers for training, Gemini Enterprise Agent Platform provides prebuilt container images for serving inferences and explanations from TensorFlow models that you created either within or outside of Gemini Enterprise Agent Platform. These images provide HTTP inference servers that you can use to serve inferences with minimal configuration.
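These HTTP inference servers typically follow the TensorFlow Serving REST convention of a JSON request body containing an `instances` list, with one entry per input example. A minimal sketch of building such a body (the feature values and input shape are made up):

```python
import json

# Each entry in "instances" is one input example; the three feature
# values per entry are placeholders for a model that expects a
# 3-element input vector.
payload = {"instances": [[0.1, 0.2, 0.3], [1.0, 2.0, 3.0]]}

# Serialize to the JSON body that an HTTP client would POST to the
# inference server's predict endpoint.
body = json.dumps(payload)
```

The server returns a JSON response whose `predictions` list is aligned index-by-index with the submitted instances.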

To learn about which TensorFlow versions have prebuilt inference containers and how to use them to serve inferences, see Prebuilt containers for inference.

Optimized TensorFlow runtime

The optimized TensorFlow runtime uses model optimizations and new proprietary Google technologies to improve the speed and lower the cost of inferences compared to Gemini Enterprise Agent Platform's standard prebuilt inference containers for TensorFlow.

TensorFlow Cloud Profiler integration

Train models faster and at lower cost by monitoring and optimizing the performance of your training job using Gemini Enterprise Agent Platform's TensorFlow Cloud Profiler integration. TensorFlow Cloud Profiler helps you understand the resource consumption of training operations so you can identify and eliminate performance bottlenecks.

To learn more about the Agent Platform TensorFlow Cloud Profiler integration, see Profile model training performance using Profiler.

Resources for using TensorFlow on Gemini Enterprise Agent Platform

To learn more and start using TensorFlow in Gemini Enterprise Agent Platform, see the following resources.
