Local SSD performance limits provided in the Choose a storage option section were achieved by using specific settings on the Local SSD instance. If your virtual machine (VM) instance is having trouble reaching these performance limits, and you have already configured the instance using the recommended Local SSD settings, you can compare your performance against the published limits by replicating the settings used by the Compute Engine team.
These instructions assume that you are using a Linux operating system with the apt package manager installed.
Create a VM with one Local SSD device
The number of Local SSD disks that a VM can have depends on the machine type you use to create the VM. For details, see Choosing a valid number of Local SSDs.
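After a VM is created, one quick way to confirm how many Local SSD devices are attached is to count the udev symlinks from inside the guest. A minimal sketch; it prints 0 on a machine with no Local SSD devices attached:

```shell
# Count the Local SSD NVMe device symlinks visible to the guest.
# The /dev/disk/by-id directory may not exist on machines without
# attached disks, so errors are suppressed.
count=$(find /dev/disk/by-id -name 'google-local-nvme-ssd-*' 2>/dev/null | wc -l)
echo "attached Local SSD devices: $count"
```

Compare the printed count against the number of devices you expect for your machine type.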
-  Create a Local SSD instance that has four or eight vCPUs for each device, depending on your workload. For example, the following command creates a C3 VM with 4 vCPUs and 1 Local SSD:

```
gcloud compute instances create c3-ssd-test-instance \
    --machine-type "c3-standard-4-lssd"
```

   For second generation and earlier machine types, specify the number of Local SSD disks to attach to the VM with the --local-ssd flag. The following command creates an N2 VM with 8 vCPUs and 1 Local SSD that uses the NVMe disk interface:

```
gcloud compute instances create ssd-test-instance \
    --machine-type "n2-standard-8" \
    --local-ssd interface=nvme
```
-  Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the performance section. Note that the --bs parameter defines the block size, which affects the results for different types of read and write operations.

```
# install tools
sudo apt-get -y update
sudo apt-get install -y fio util-linux

# discard Local SSD sectors
sudo blkdiscard /dev/disk/by-id/google-local-nvme-ssd-0

# full write pass - measures write bandwidth with 1M blocksize
sudo fio --name=writefile \
  --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --bs=1M --nrfiles=1 \
  --direct=1 --sync=0 --randrepeat=0 --rw=write --end_fsync=1 \
  --iodepth=128 --ioengine=libaio

# rand read - measures max read IOPS with 4k blocks
sudo fio --time_based --name=readbenchmark --runtime=30 --ioengine=libaio \
  --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --randrepeat=0 \
  --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
  --numjobs=4 --rw=randread --blocksize=4k --group_reporting

# rand write - measures max write IOPS with 4k blocks
sudo fio --time_based --name=writebenchmark --runtime=30 --ioengine=libaio \
  --filename=/dev/disk/by-id/google-local-nvme-ssd-0 --randrepeat=0 \
  --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
  --numjobs=4 --rw=randwrite --blocksize=4k --group_reporting
```
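If you save fio's human-readable output to a log, the headline IOPS figure can be pulled out with standard tools. A small sketch; the sample line below is an assumption about fio's output format, so adjust the pattern to match your fio version:

```shell
# Hypothetical sample line, in the style of fio's human-readable summary.
sample='read: IOPS=680k, BW=2655MiB/s (2784MB/s)(77.8GiB/30001msec)'

# Extract the value between "IOPS=" and the following comma.
iops=$(printf '%s\n' "$sample" | sed -n 's/.*IOPS=\([^,]*\),.*/\1/p')
echo "max read IOPS: $iops"
```

For machine-readable results, fio also supports --output-format=json, which is easier to post-process than scraping the summary text.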
Create a VM with the maximum number of Local SSD
-  If you want to attach 24 or more Local SSD devices to an instance, use a machine type with 32 or more vCPUs. The following commands create a VM with the maximum allowed number of Local SSD disks using the NVMe interface.

   Attach Local SSD to VM:

```
gcloud compute instances create ssd-test-instance \
    --machine-type "n1-standard-32" \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme \
    --local-ssd interface=nvme
```

   Use -lssd machine types:

   Newer machine series offer -lssd machine types that come with a predetermined number of Local SSD disks. For example, to benchmark a VM with 32 Local SSD disks (12 TiB capacity), use the following command:

```
gcloud compute instances create ssd-test-instance \
    --machine-type "c3-standard-176-lssd"
```
-  Install the mdadm tool. The install process for mdadm includes a user prompt that halts scripts, so run the process manually:

   Debian and Ubuntu:

```
sudo apt update && sudo apt install mdadm --no-install-recommends
```

   CentOS and RHEL:

```
sudo yum install mdadm -y
```

   SLES and openSUSE:

```
sudo zypper install -y mdadm
```
-  Use the find command to identify all of the Local SSDs that you want to mount together:

```
find /dev/ | grep google-local-nvme-ssd
```

   The output looks similar to the following:

```
/dev/disk/by-id/google-local-nvme-ssd-23
/dev/disk/by-id/google-local-nvme-ssd-22
/dev/disk/by-id/google-local-nvme-ssd-21
/dev/disk/by-id/google-local-nvme-ssd-20
/dev/disk/by-id/google-local-nvme-ssd-19
/dev/disk/by-id/google-local-nvme-ssd-18
/dev/disk/by-id/google-local-nvme-ssd-17
/dev/disk/by-id/google-local-nvme-ssd-16
/dev/disk/by-id/google-local-nvme-ssd-15
/dev/disk/by-id/google-local-nvme-ssd-14
/dev/disk/by-id/google-local-nvme-ssd-13
/dev/disk/by-id/google-local-nvme-ssd-12
/dev/disk/by-id/google-local-nvme-ssd-11
/dev/disk/by-id/google-local-nvme-ssd-10
/dev/disk/by-id/google-local-nvme-ssd-9
/dev/disk/by-id/google-local-nvme-ssd-8
/dev/disk/by-id/google-local-nvme-ssd-7
/dev/disk/by-id/google-local-nvme-ssd-6
/dev/disk/by-id/google-local-nvme-ssd-5
/dev/disk/by-id/google-local-nvme-ssd-4
/dev/disk/by-id/google-local-nvme-ssd-3
/dev/disk/by-id/google-local-nvme-ssd-2
/dev/disk/by-id/google-local-nvme-ssd-1
/dev/disk/by-id/google-local-nvme-ssd-0
```

   find does not guarantee an ordering. It's fine if the devices are listed in a different order, as long as the number of output lines matches the expected number of SSD devices. NVMe devices are all of the form google-local-nvme-ssd-#, and SCSI devices are all of the form google-local-ssd-#. If you are using SCSI devices, use the following find command:

```
find /dev/ | grep google-local-ssd
```
-  Use the mdadm tool to combine multiple Local SSD devices into a single array named /dev/md0. The following example merges twenty-four Local SSD devices that use the NVMe interface. For Local SSD devices that use SCSI, use the device names returned from the find command in step 3.

```
sudo mdadm --create /dev/md0 --level=0 --raid-devices=24 \
  /dev/disk/by-id/google-local-nvme-ssd-0 \
  /dev/disk/by-id/google-local-nvme-ssd-1 \
  /dev/disk/by-id/google-local-nvme-ssd-2 \
  /dev/disk/by-id/google-local-nvme-ssd-3 \
  /dev/disk/by-id/google-local-nvme-ssd-4 \
  /dev/disk/by-id/google-local-nvme-ssd-5 \
  /dev/disk/by-id/google-local-nvme-ssd-6 \
  /dev/disk/by-id/google-local-nvme-ssd-7 \
  /dev/disk/by-id/google-local-nvme-ssd-8 \
  /dev/disk/by-id/google-local-nvme-ssd-9 \
  /dev/disk/by-id/google-local-nvme-ssd-10 \
  /dev/disk/by-id/google-local-nvme-ssd-11 \
  /dev/disk/by-id/google-local-nvme-ssd-12 \
  /dev/disk/by-id/google-local-nvme-ssd-13 \
  /dev/disk/by-id/google-local-nvme-ssd-14 \
  /dev/disk/by-id/google-local-nvme-ssd-15 \
  /dev/disk/by-id/google-local-nvme-ssd-16 \
  /dev/disk/by-id/google-local-nvme-ssd-17 \
  /dev/disk/by-id/google-local-nvme-ssd-18 \
  /dev/disk/by-id/google-local-nvme-ssd-19 \
  /dev/disk/by-id/google-local-nvme-ssd-20 \
  /dev/disk/by-id/google-local-nvme-ssd-21 \
  /dev/disk/by-id/google-local-nvme-ssd-22 \
  /dev/disk/by-id/google-local-nvme-ssd-23
```

   The response is similar to the following:

```
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
```

   You can confirm the details of the array with mdadm --detail. Adding the --prefer=by-id flag lists the devices using the /dev/disk/by-id paths:

```
sudo mdadm --detail --prefer=by-id /dev/md0
```

   The output should look similar to the following for each device in the array:

```
...
Number   Major   Minor   RaidDevice   State
   0       259      0        0        active sync   /dev/disk/by-id/google-local-nvme-ssd-0
...
```
-  Run the following script on your VM. The script replicates the settings used to achieve the SSD performance figures provided in the performance section. Note that the --bs parameter defines the block size, which affects the results for different types of read and write operations.

```
# install tools
sudo apt-get -y update
sudo apt-get install -y fio util-linux

# full write pass - measures write bandwidth with 1M blocksize
sudo fio --name=writefile \
  --filename=/dev/md0 --bs=1M --nrfiles=1 \
  --direct=1 --sync=0 --randrepeat=0 --rw=write --end_fsync=1 \
  --iodepth=128 --ioengine=libaio

# rand read - measures max read IOPS with 4k blocks
sudo fio --time_based --name=benchmark --runtime=30 \
  --filename=/dev/md0 --ioengine=libaio --randrepeat=0 \
  --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
  --numjobs=48 --rw=randread --blocksize=4k --group_reporting --norandommap

# rand write - measures max write IOPS with 4k blocks
sudo fio --time_based --name=benchmark --runtime=30 \
  --filename=/dev/md0 --ioengine=libaio --randrepeat=0 \
  --iodepth=128 --direct=1 --invalidate=1 --verify=0 --verify_fatal=0 \
  --numjobs=48 --rw=randwrite --blocksize=4k --group_reporting --norandommap
```
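As a sanity check after creating the array, the expected size of /dev/md0 can be computed from the standard 375 GB Local SSD partition size. A quick arithmetic sketch, not a gcloud or mdadm command:

```shell
# Each Local SSD partition is 375 GB, so a RAID 0 array of 24 devices
# should expose roughly 24 * 375 GB. Compare this figure against the
# Array Size that `sudo mdadm --detail /dev/md0` reports.
devices=24
per_device_gb=375
total_gb=$((devices * per_device_gb))
echo "expected /dev/md0 capacity: ~${total_gb} GB"
```

If the reported array size is far below this figure, check that all devices from the find output were included in the mdadm --create command.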
Benchmarking Storage Optimized VMs
-  Storage Optimized VMs (like the Z3 family) should be benchmarked directly against the device partitions. You can get the partition names with lsblk:

```
lsblk -o name,size -lpn | grep 2.9T | awk '{print $1}'
```

   The output looks similar to the following:

```
/dev/nvme1n1
/dev/nvme2n1
/dev/nvme3n1
/dev/nvme4n1
/dev/nvme5n1
/dev/nvme6n1
/dev/nvme7n1
/dev/nvme8n1
/dev/nvme9n1
/dev/nvme10n1
/dev/nvme11n1
/dev/nvme12n1
```
-  Run the benchmarks directly against the Local SSD partitions, passing each partition as its own fio job:

```
# install benchmarking tools
sudo apt-get -y update
sudo apt-get install -y fio util-linux

# Full write pass.
# SOVM achieves max read performance on previously written/discarded ranges.
sudo fio --readwrite=write --blocksize=1m --iodepth=4 --ioengine=libaio \
  --direct=1 --group_reporting \
  --name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
  --name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
  --name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
  --name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
  --name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
  --name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1

# rand read - measures max read IOPS with 4k blocks
sudo fio --readwrite=randread --blocksize=4k --iodepth=128 \
  --numjobs=4 --direct=1 --runtime=30 --group_reporting --ioengine=libaio \
  --name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
  --name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
  --name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
  --name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
  --name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
  --name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1

# rand write - measures max write IOPS with 4k blocks
sudo fio --readwrite=randwrite --blocksize=4k --iodepth=128 \
  --numjobs=4 --direct=1 --runtime=30 --group_reporting --ioengine=libaio \
  --name=job1 --filename=/dev/nvme1n1 --name=job2 --filename=/dev/nvme2n1 \
  --name=job3 --filename=/dev/nvme3n1 --name=job4 --filename=/dev/nvme4n1 \
  --name=job5 --filename=/dev/nvme5n1 --name=job6 --filename=/dev/nvme6n1 \
  --name=job7 --filename=/dev/nvme7n1 --name=job8 --filename=/dev/nvme8n1 \
  --name=job9 --filename=/dev/nvme9n1 --name=job10 --filename=/dev/nvme10n1 \
  --name=job11 --filename=/dev/nvme11n1 --name=job12 --filename=/dev/nvme12n1
```
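The long --name/--filename argument lists above can also be generated with a loop rather than written out by hand. A minimal sketch; only three devices are listed here, so extend the list to every partition reported by lsblk:

```shell
# Build the repeated fio per-device job arguments programmatically.
args=""
i=1
for dev in /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1; do
  args="$args --name=job$i --filename=$dev"
  i=$((i + 1))
done

# Print the assembled argument string; pass it to fio unquoted,
# e.g. sudo fio --readwrite=randread ... $args
echo "$args"
```

Generating the arguments this way avoids copy-paste mistakes, such as repeating a device path or skipping a job number.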
What's next
- Learn about Local SSD pricing.

