TPU v2

This document describes the architecture and supported configurations of Cloud TPU v2.

System architecture

Architectural details and performance characteristics of TPU v2 are available in A Domain Specific Supercomputer for Training Deep Neural Networks.

Configurations

A TPU v2 slice is composed of TPU chips interconnected with reconfigurable high-speed links; the largest v2 configuration contains 512 TensorCores. To create a TPU v2 slice, use the --accelerator-type flag in the TPU creation command (gcloud compute tpus tpu-vm create). The accelerator type encodes the TPU version and the number of TensorCores. For example, for a single v2 TPU with 8 TensorCores, use --accelerator-type=v2-8. For a v2 slice with 128 TensorCores, use --accelerator-type=v2-128.

The following command shows how to create a v2 TPU slice with 128 TensorCores:

  $ gcloud compute tpus tpu-vm create tpu-name \
      --zone=us-central1-a \
      --accelerator-type=v2-128 \
      --version=tpu-ubuntu2204-base
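After the create command returns, you can confirm the slice's state and accelerator type, and clean it up when you are done. This sketch reuses the tpu-name placeholder and us-central1-a zone from the create command above:

```shell
# Show the state, accelerator type, and network endpoints of the slice.
gcloud compute tpus tpu-vm describe tpu-name \
    --zone=us-central1-a

# Delete the slice when you no longer need it to stop incurring charges.
gcloud compute tpus tpu-vm delete tpu-name \
    --zone=us-central1-a
```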

For more information about managing TPUs, see Manage TPUs. For more information about the Cloud TPU system architecture, see System architecture.

The following table lists the supported v2 TPU types:

TPU type    Support ends
v2-8        (End date not yet set)
v2-32       (End date not yet set)
v2-128      (End date not yet set)
v2-256      (End date not yet set)
v2-512      (End date not yet set)
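Not every zone offers every TPU type. To check which accelerator types you can actually request in a given zone, list them with the gcloud CLI (the output depends on the zone; us-central1-a is used here only as an example):

```shell
# List the TPU accelerator types available in a given zone;
# the v2 types from the table above appear only in zones that offer v2.
gcloud compute tpus accelerator-types list \
    --zone=us-central1-a
```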