Performance benchmarks

This page shows the performance limits of a single Google Cloud NetApp Volumes volume from multiple client virtual machines. Use the information on this page to size your workloads.

Random I/O versus sequential I/O

Workloads that are primarily random in nature can't drive the same throughput as sequential I/O workloads.

Performance testing

The following test results display performance limits. In these tests, the volume has sufficient assigned capacity so that the capacity-based throughput limit doesn't affect benchmark testing. Allocating capacity to a single volume beyond the following throughput numbers doesn't yield additional performance gains.

Note that performance testing was completed using Fio.
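
Fio is available in the package repositories of most Linux distributions. As a minimal sketch, assuming Red Hat-based client virtual machines like the ones used in these tests, it could be installed as follows (the package name and repository availability are assumptions to verify for your distribution):

  # Install the Fio benchmarking tool on a Red Hat-based client VM
  # (package availability assumed; adjust for your distribution).
  sudo dnf install -y fio

  # Confirm the installed version.
  fio --version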

For the performance testing results, be aware of the following considerations:

  • The Standard, Premium, and Extreme service levels scale throughput with volume capacity until their limits are reached.

  • The Flex service level with custom performance allows independent scaling of capacity, IOPS, and throughput.

  • IOPS results are purely informational.

  • The numbers used to produce the following results are set up to show maximum results. Consider the following results an estimate of the maximum throughput achievable at the maximum capacity assignment.

  • Using multiple fast volumes in the same project may be subject to per-project limits.

  • The following performance testing results cover only the NFSv3 and SMB protocols. Other protocols, such as NFSv4.1, weren't used to test NetApp Volumes performance.

Volume throughput limits for NFSv3 access

The following sections provide details on volume throughput limits for NFSv3 access.

The tests were run using the Flex service level with custom performance and the Extreme service level. The following sections show the results that were captured.

Flex service level with custom performance

The following tests were run with a single volume in a Flex custom performance zonal storage pool. The pool was configured with the maximum throughput and IOPS, and the results were captured.

64 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 64 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 96 GiB working set for each virtual machine with a combined total of 576 GiB

  • nconnect mount option configured on each host for a value of 16

  • rsize and wsize mount options configured at 65536

  • Volume size was 10 TiB of the Flex service level with custom performance. For testing, the custom performance was set to its maximum values of 5,120 MiBps and 160,000 IOPS.
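
As an illustration of the settings above, the mount command and one sequential-read Fio job might look like the following sketch. The server IP address, export path, mount point, queue depth, and runtime are placeholders and assumptions rather than values taken from the tests:

  # Mount the volume over NFSv3 with the documented client options
  # (placeholder server IP, export path, and mount point).
  sudo mkdir -p /mnt/netapp
  sudo mount -t nfs -o vers=3,nconnect=16,rsize=65536,wsize=65536 \
      10.0.0.10:/flex-vol1 /mnt/netapp

  # 64 KiB sequential reads: 8 jobs per VM at 12 GiB each
  # (8 x 12 GiB = 96 GiB working set per VM), direct I/O.
  fio --name=seq-read-64k \
      --directory=/mnt/netapp \
      --rw=read \
      --bs=64k \
      --size=12G \
      --numjobs=8 \
      --direct=1 \
      --ioengine=libaio \
      --iodepth=16 \
      --time_based --runtime=300 \
      --group_reporting

Equivalent write and mixed-workload jobs can be derived by changing --rw and adding --rwmixread.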

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 4,300 MiBps of pure sequential reads and 1,480 MiBps of pure sequential writes with a 64 KiB block size over NFSv3.

Benchmark results for NFS 64 KiB sequential I/O with 6 n2-standard-32 Red Hat 9 VMs

  Read/Write mix             Read MiBps     Write MiBps
  100% read, 0% write        4,304          0
  75% read, 25% write        2,963          989
  50% read, 50% write        1,345          1,344
  25% read, 75% write        464            1,390
  0% read, 100% write        0              1,476

8 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 8 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 96 GiB working set for each virtual machine with a combined total of 576 GiB

  • nconnect mount option configured on each host for a value of 16

  • rsize and wsize mount options on each host configured at 65536

  • Volume size was 10 TiB of the Flex service level with custom performance. For testing, the custom performance was set to its maximum values of 5,120 MiBps and 160,000 IOPS.
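
For the mixed read/write columns in the following table, a random-I/O Fio job with an explicit read percentage is the natural construct. The sketch below shows the 75% read and 25% write case and assumes the mount point from the previous example; the queue depth and runtime are illustrative, not taken from the tests:

  # 8 KiB random I/O with a 75/25 read/write mix, 8 jobs per VM,
  # 12 GiB per job for the 96 GiB per-VM working set.
  fio --name=rand-rw-8k \
      --directory=/mnt/netapp \
      --rw=randrw \
      --rwmixread=75 \
      --bs=8k \
      --size=12G \
      --numjobs=8 \
      --direct=1 \
      --ioengine=libaio \
      --iodepth=32 \
      --time_based --runtime=300 \
      --group_reporting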

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 126,400 pure random read IOPS and 78,600 pure random write IOPS with an 8 KiB block size over NFSv3.

Benchmark results for NFS 8 KiB random I/O with 6 n2-standard-32 Red Hat 9 VMs

  Read/Write mix             Read IOPS      Write IOPS
  100% read, 0% write        126,397        0
  75% read, 25% write        101,740        33,916
  50% read, 50% write        57,223         57,217
  25% read, 75% write        23,600         70,751
  0% read, 100% write        0              78,582

Extreme service level

The following tests were run with a single volume in an Extreme service level storage pool, and the results were captured.

64 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 64 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • nconnect mount option configured on each host for a value of 16

  • Volume size was 75 TiB of the Extreme service level
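
To produce a full read/write sweep like the one in the following table, the same job can be repeated with different read percentages. This is a hedged sketch rather than the exact harness used for these results; the mount point, per-job size, queue depth, and runtime are assumptions:

  # Sweep the read/write mix for a 64 KiB sequential workload.
  # rw=rw issues mixed sequential reads and writes; rwmixread sets
  # the read percentage for each run. 8 jobs x 128 GiB = 1 TiB per VM.
  for MIX in 100 75 50 25 0; do
      fio --name=seq-mix-${MIX}r \
          --directory=/mnt/netapp \
          --rw=rw \
          --rwmixread=${MIX} \
          --bs=64k \
          --size=128G \
          --numjobs=8 \
          --direct=1 \
          --ioengine=libaio \
          --iodepth=16 \
          --time_based --runtime=300 \
          --group_reporting
  done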

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 5,240 MiBps of pure sequential reads and 2,180 MiBps of pure sequential writes with a 64 KiB block size over NFSv3.

Benchmark results for NFS 64 KiB sequential I/O with 6 n2-standard-32 Red Hat 9 VMs

  Read/Write mix             Read MiBps     Write MiBps
  100% read, 0% write        5,237          0
  75% read, 25% write        2,284          764
  50% read, 50% write        1,415          1,416
  25% read, 75% write        610            1,835
  0% read, 100% write        0              2,172

256 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 256 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • nconnect mount option configured on each host for a value of 16

  • Volume size was 75 TiB of the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 4,930 MiBps of pure sequential reads and 2,440 MiBps of pure sequential writes with a 256 KiB block size over NFSv3.

Benchmark results for NFS 256 KiB sequential I/O with 6 n2-standard-32 Red Hat 9 VMs

  Read/Write mix             Read MiBps     Write MiBps
  100% read, 0% write        4,928          0
  75% read, 25% write        2,522          839
  50% read, 50% write        1,638          1,640
  25% read, 75% write        677            2,036
  0% read, 100% write        0              2,440

4 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 4 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • nconnect mount option configured on each host for a value of 16

  • Volume size was 75 TiB of the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 380,000 pure random read IOPS and 120,000 pure random write IOPS with a 4 KiB block size over NFSv3.

Benchmark results for NFS 4 KiB random I/O with 6 n2-standard-32 Red Hat 9 VMs

  Read/Write mix             Read IOPS      Write IOPS
  100% read, 0% write        380,000        0
  75% read, 25% write        172,000        57,300
  50% read, 50% write        79,800         79,800
  25% read, 75% write        32,000         96,200
  0% read, 100% write        0              118,000

8 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 8 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Red Hat 9 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • nconnect mount option configured on each host for a value of 16

  • Volume size was 75 TiB of the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 270,000 pure random read IOPS and 110,000 pure random write IOPS with an 8 KiB block size over NFSv3.

Benchmark results for NFS 8 KiB random I/O with 6 n2-standard-32 Red Hat 9 VMs

  Read/Write mix             Read IOPS      Write IOPS
  100% read, 0% write        265,000        0
  75% read, 25% write        132,000        44,100
  50% read, 50% write        66,900         66,900
  25% read, 75% write        30,200         90,500
  0% read, 100% write        0              104,000

Volume throughput limits for SMB access

The following sections provide details on volume throughput limits for SMB access.

64 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 64 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Windows 2022 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • SMB Connection Count Per RSS Network Interface client-side option configured on each virtual machine for a value of 16

  • Volume size was 75 TiB of the Extreme service level
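
Fio also runs on the Windows clients. The following job-file sketch assumes the SMB share is mapped to drive Z: and that a test directory exists on it; the drive letter, path, queue depth, and runtime are assumptions rather than the exact test configuration. The SMB Connect Count Per RSS Network Interface value maps to the Windows SMB client setting ConnectionCountPerRssNetworkInterface.

  ; smb-seq-read-64k.fio -- 64 KiB sequential reads against an SMB
  ; share mapped to Z: (the drive-letter colon is escaped for Fio).
  ; 8 jobs x 128 GiB = 1 TiB working set per VM.
  [global]
  directory=Z\:\fio
  bs=64k
  size=128G
  direct=1
  ioengine=windowsaio
  thread
  iodepth=16
  time_based
  runtime=300
  group_reporting

  [seq-read-64k]
  rw=read
  numjobs=8

Run it with fio smb-seq-read-64k.fio; mixed and random variants follow by changing rw and adding rwmixread, as in the NFS examples.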

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 5,130 MiBps of pure sequential reads and 1,790 MiBps of pure sequential writes with a 64 KiB block size over SMB.

Benchmark results for SMB 64 KiB sequential I/O with 6 n2-standard-32 Windows 2022 VMs

  Read/Write mix             Read MiBps     Write MiBps
  100% read, 0% write        5,128          0
  75% read, 25% write        2,675          892
  50% read, 50% write        1,455          1,454
  25% read, 75% write        559            1,676
  0% read, 100% write        0              1,781

256 KiB block size (Sequential I/O)

These results were captured using Fio with the following settings:

  • 256 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Windows 2022 OS

  • 1 TiB working set for each virtual machine with a combined total of 6 TiB

  • SMB Connection Count Per RSS Network Interface client-side option configured on each host for a value of 16

  • Volume size was 75 TiB of the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 4,620 MiBps of pure sequential reads and 1,830 MiBps of pure sequential writes with a 256 KiB block size over SMB.

Benchmark results for SMB 256 KiB sequential I/O with 6 n2-standard-32 Windows 2022 VMs

  Read/Write mix             Read MiBps     Write MiBps
  100% read, 0% write        4,617          0
  75% read, 25% write        2,708          900
  50% read, 50% write        1,533          1,534
  25% read, 75% write        584            1,744
  0% read, 100% write        0              1,826

4 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 4 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Windows 2022 OS

  • 1 TiB working set for each virtual machine for a combined total of 6 TiB

  • SMB Connection Count Per RSS Network Interface client-side option configured on each host for a value of 16

  • Volume size was 75 TiB of the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 390,000 pure random read IOPS and 110,000 pure random write IOPS with a 4 KiB block size over SMB.

Benchmark results for SMB 4 KiB random I/O with 6 n2-standard-32 Windows 2022 VMs

  Read/Write mix             Read IOPS      Write IOPS
  100% read, 0% write        390,900        0
  75% read, 25% write        164,700        54,848
  50% read, 50% write        84,200         84,200
  25% read, 75% write        32,822         98,500
  0% read, 100% write        0              109,300

8 KiB block size (Random I/O)

These results were captured using Fio with the following settings:

  • 8 KiB block size against a single volume with six n2-standard-32 virtual machines

  • Windows 2022 OS

  • 1 TiB working set for each virtual machine for a combined total of 6 TiB

  • SMB Connection Count Per RSS Network Interface client-side option configured on each host for a value of 16

  • Volume size was 75 TiB of the Extreme service level

Fio was run with 8 jobs on each virtual machine for a total of 48 jobs. The following table demonstrates that a single volume is estimated to be capable of handling approximately 280,000 pure random read IOPS and 90,000 pure random write IOPS with an 8 KiB block size over SMB.

Benchmark results for SMB 8 KiB random I/O with 6 n2-standard-32 Windows 2022 VMs

  Read/Write mix             Read IOPS      Write IOPS
  100% read, 0% write        271,800        0
  75% read, 25% write        135,900        45,293
  50% read, 50% write        65,700         65,900
  25% read, 75% write        28,093         84,400
  0% read, 100% write        0              85,500

Electronic design automation workload benchmark

NetApp Volumes large volume support offers high-performance parallel file systems that are ideal for electronic design automation workloads. These file systems provide up to 1 PiB of capacity and deliver high I/O and throughput rates at low latency.

Electronic design automation workloads have different performance requirements between the frontend and backend phases. The frontend phase prioritizes metadata and IOPS, while the backend phase focuses on throughput.

An industry-standard electronic design automation benchmark with mixed frontend and backend workloads, using a large volume with multiple NFSv3 clients that are evenly distributed over 6 IP addresses, can achieve up to 21.5 GiBps throughput and up to 1,350,000 IOPS.
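
Spreading the clients evenly across the large volume's IP addresses is part of how the benchmark reaches these aggregate numbers. As a hedged illustration, a fleet of Linux clients might round-robin their NFSv3 mounts over the six IP addresses as follows; the IP addresses, export path, and mount options are placeholders rather than values from the benchmark:

  # Pick one of the large volume's six IP addresses per client,
  # for example by hashing the hostname, then mount over NFSv3.
  IPS=(10.0.0.11 10.0.0.12 10.0.0.13 10.0.0.14 10.0.0.15 10.0.0.16)
  IDX=$(( $(hostname | cksum | cut -d' ' -f1) % ${#IPS[@]} ))
  sudo mkdir -p /mnt/eda
  sudo mount -t nfs -o vers=3,nconnect=16,rsize=65536,wsize=65536 \
      "${IPS[$IDX]}":/eda-vol /mnt/eda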

What's next

Monitor performance.
