This guide helps you assess the storage requirements of your cloud workload, understand the available storage options in Google Cloud, and design a storage strategy that provides optimal business value.
For a visual summary of the main design recommendations, see the decision tree diagram.
For information about selecting storage services for AI and ML workloads, see Design storage for AI and ML workloads in Google Cloud.
Overview of the design process
As a cloud architect, when you plan storage for a cloud workload, you first need to consider the workload's functional characteristics, security constraints, resilience requirements, performance expectations, and cost goals. Next, you review the available storage services and features in Google Cloud. Then, based on your requirements and the available options, you select the storage services and features that you need. The following diagram shows this three-phase design process:
Define your requirements
Use the questionnaires in this section to define the key storage requirements of the workload that you want to deploy in Google Cloud.
Guidelines for defining storage requirements
When answering the questionnaires, consider the following guidelines:
- Define requirements granularly. For example, if your application needs Network File System (NFS)-based file storage, identify the required NFS version.
- Consider future requirements. For example, your current deployment might serve users in countries within Asia, but you might plan to expand the business to other continents. In this case, consider any storage-related regulatory requirements of the new business territories.
- Consider cloud-specific opportunities and requirements:
  - Take advantage of cloud-specific opportunities. For example, to optimize the storage cost for data stored in Cloud Storage, you can control the storage duration by using data retention policies and lifecycle configurations.
  - Consider cloud-specific requirements. For example, the on-premises data might exist in a single data center, and you might need to replicate the migrated data across two Google Cloud locations for redundancy.
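As an illustration of controlling storage duration, a lifecycle configuration like the following sketch could transition aging objects to a colder class and eventually delete them. The bucket name, age thresholds, and retention period are placeholders, and the commands assume an authenticated gcloud CLI.

```shell
# Illustrative lifecycle policy: move objects to Nearline after 30 days,
# then delete them after 365 days. Bucket name is hypothetical.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
     "condition": {"age": 30}},
    {"action": {"type": "Delete"},
     "condition": {"age": 365}}
  ]
}
EOF
gcloud storage buckets update gs://example-archive-bucket \
    --lifecycle-file=lifecycle.json

# Optionally enforce a minimum retention period (90 days here) so that
# objects can't be deleted or overwritten before they age out.
gcloud storage buckets update gs://example-archive-bucket \
    --retention-period=90d
```

The lifecycle policy and the retention policy are independent controls: the first automates cost optimization, the second enforces compliance.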
Questionnaires
The questionnaires that follow are not exhaustive checklists for planning. Use them as a starting point to systematically analyze all the storage requirements of the workload that you want to deploy to Google Cloud.
Assess your workload's characteristics
- What kind of data do you need to store? Examples:
  - Static website content
  - Backups and archives for disaster recovery
  - Audit logs for compliance
  - Large data objects that users download directly
  - Transactional data
  - Unstructured and heterogeneous data
- How much capacity do you need? Consider your current and future requirements.
- Should capacity scale automatically with usage?
- What are the access requirements? For example, should the data be accessible from outside Google Cloud?
- What are the expected read-write patterns? Examples:
  - Frequent writes and reads
  - Frequent writes, but occasional reads
  - Occasional writes and reads
  - Occasional writes, but frequent reads
- Does the workload need file-based access, using NFS for example?
- Should multiple clients be able to read or write data simultaneously?
Identify security constraints
- What are your data-encryption requirements? For example, do you need to use keys that you control?
- Are there any data-residency requirements?
Define data-resilience requirements
- Does your workload need low-latency caching or scratch space?
- Do you need to replicate the data in the cloud for redundancy?
- Do you need strict read-write consistency for replicated datasets?
Set performance expectations
- What is the required I/O rate?
- What levels of read and write throughput does your application need?
- What environments do you need storage for? For a given workload, you might need high-performance storage for the production environment, but could choose a lower-performance option for the non-production environments.
Review the storage options
Google Cloud offers storage services for all the key storage formats: block, file, and object. Review and evaluate the features, design options, and relative advantages of the services available for each storage format.
Overview
Block storage
The data that you store in block storage is divided into chunks, each stored as a separate block with a unique address. Applications access data by referencing the appropriate block addresses. Block storage is optimized for high-IOPS workloads, such as transaction processing. It's similar to on-premises storage area network (SAN) and direct-attached storage (DAS) systems.
The block storage options in Google Cloud are a part of the Compute Engine service.
| Option | Overview |
| --- | --- |
| Persistent Disk | Durable block storage, backed by hard-disk drives (HDD) or solid-state drives (SSD), for enterprise and database applications deployed to Compute Engine VMs and Google Kubernetes Engine (GKE) clusters. |
| Google Cloud Hyperdisk | Fast and redundant network storage for Compute Engine VMs, with configurable performance and volumes that can be dynamically resized. |
| Local SSD | Ephemeral, locally attached block storage for high-performance applications. |
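As a sketch of how these block storage options are provisioned, the following commands create and attach disks. The disk names, VM name, sizes, performance values, and zone are placeholders, and the commands assume an authenticated gcloud CLI.

```shell
# Create a balanced Persistent Disk (name, size, and zone are placeholders).
gcloud compute disks create example-data-disk \
    --type=pd-balanced \
    --size=500GB \
    --zone=us-central1-a

# Create a Hyperdisk Balanced volume with explicitly provisioned
# performance (the IOPS and throughput values are illustrative).
gcloud compute disks create example-hyperdisk \
    --type=hyperdisk-balanced \
    --size=1TB \
    --provisioned-iops=5000 \
    --provisioned-throughput=250 \
    --zone=us-central1-a

# Attach a disk to an existing VM (placeholder VM name).
gcloud compute instances attach-disk example-vm \
    --disk=example-data-disk \
    --zone=us-central1-a
```

Note that with Hyperdisk, IOPS and throughput are provisioned independently of capacity, whereas Persistent Disk performance scales with disk size and type.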
File storage
Data is organized and represented in a hierarchy of files that are stored in folders, similar to on-premises network-attached storage (NAS). File systems can be mounted on clients using protocols such as NFS and Server Message Block (SMB). Applications access data using the relevant filename and directory path.
Google Cloud provides a range of fully managed and third-party solutions for file storage.
| Solution | Overview |
| --- | --- |
| Filestore | File-based storage using NFS file servers for Compute Engine VMs and Google Kubernetes Engine clusters. You can choose a service tier (Basic, Zonal, or Regional) that suits your use case. |
| Parallelstore | Low-latency parallel file system for AI, high performance computing (HPC), and data-intensive applications. |
| NetApp Volumes | File-based storage using NFS or SMB. You can choose a service level (Flex, Standard, Premium, or Extreme) that suits your use case. |
| More options | See Summary of file server options. |
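For example, a Filestore instance can be created and mounted as follows. The instance name, share name, capacity, zone, and client IP address are placeholders, and the commands assume an authenticated gcloud CLI and a client VM on the same VPC network.

```shell
# Create a Basic HDD Filestore instance (all names and values are
# placeholders).
gcloud filestore instances create example-nfs-server \
    --zone=us-central1-a \
    --tier=BASIC_HDD \
    --file-share=name=vol1,capacity=1TB \
    --network=name=default

# Mount the exported NFS share from a client VM.
# Replace 10.0.0.2 with the Filestore instance's IP address.
sudo mount -t nfs 10.0.0.2:/vol1 /mnt/filestore
```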
Object storage
Data is stored as objects in a flat hierarchy of buckets. Each object is assigned a globally unique ID. Objects can have system-assigned and user-defined metadata, to help you organize and manage the data. Applications access data by referencing the object IDs, using REST APIs or client libraries.
Cloud Storage provides low-cost, highly durable, no-limit object storage for diverse data types. The data you store in Cloud Storage can be accessed from anywhere, within and outside Google Cloud. Optional redundancy across regions provides maximum reliability. You can select a storage class that suits your data-retention and access-frequency requirements.
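A minimal sketch of creating a bucket and uploading an object follows. The bucket name, file, and locations are placeholders, and the commands assume an authenticated gcloud CLI.

```shell
# Create a dual-region bucket with the Standard storage class
# (bucket name and location are placeholders; nam4 is the
# Iowa + South Carolina dual-region).
gcloud storage buckets create gs://example-media-bucket \
    --location=nam4 \
    --default-storage-class=STANDARD \
    --uniform-bucket-level-access

# Upload an object, then list it.
gcloud storage cp ./video.mp4 gs://example-media-bucket/media/video.mp4
gcloud storage ls gs://example-media-bucket/media/
```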
Comparative analysis
The following list summarizes the key capabilities of the storage services in Google Cloud.

- Persistent Disk
  - Capacity: 10 GiB to 64 TiB per disk; up to 257 TiB per VM
  - Scaling: scale up; add and remove disks
  - Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied
  - Redundancy: zonal and regional replication; snapshots (manual or scheduled); disk cloning
- Google Cloud Hyperdisk
  - Capacity: 4 GiB to 64 TiB per disk; up to 512 TiB per VM; 10 TiB to 1 PiB per storage pool
  - Scaling: dynamically resizable volumes
  - Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied
- Local SSD
  - Capacity: 375 GiB per disk; up to 12 TiB per VM
- Filestore
  - Scaling: Basic tier scales up; Zonal and Regional tiers scale up and down
  - Encryption keys: Google-owned and Google-managed, or customer-managed (Zonal and Regional tiers)
  - Performance: Basic tier provides consistent performance; Zonal and Regional tiers scale performance linearly with capacity
- NetApp Volumes
  - Capacity: 1 TiB to 10 PiB per storage pool; 1 GiB to 100 TiB per volume
  - Encryption keys: Google-owned and Google-managed, or customer-managed
  - Redundancy: regional (Flex) or zonal (all service levels); backups; snapshots; cross-region replication
  - Performance: scalable; expectations depend on the service level
- Cloud Storage
  - Capacity: no fixed limit; scales automatically
  - Access: read/write from anywhere; integrates with Cloud CDN and third-party CDNs
  - Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied
  - Redundancy: data is redundant across zones, with options for redundancy across regions
The following table lists the workload types that each Google Cloud storage option is appropriate for:

| Storage option | Appropriate workload types |
| --- | --- |
| Persistent Disk | IOPS-intensive or latency-sensitive applications; databases; shared read-only storage; rapid, durable VM backups |
| Google Cloud Hyperdisk | Performance-intensive workloads; scale-out analytics; flash-optimized databases |
| Local SSD | Hot-caching for analytics; scratch disk |
| Filestore | Lift-and-shift of on-premises file systems; shared configuration files; common tooling and utilities; centralized logs |
| Parallelstore | AI and ML workloads; HPC |
| NetApp Volumes | Lift-and-shift of on-premises file systems; shared configuration files; common tooling and utilities; centralized logs; Windows workloads |
| Cloud Storage | Streaming videos; media asset libraries; high-throughput data lakes; backups and archives; long-tail content |
Choose a storage option
There are two parts to selecting a storage option:
- Deciding which storage services you need.
- Choosing the required features and design options in a given service.
Examples of service-specific features and design options
Persistent Disk
- Deployment region and zone
- Regional replication
- Disk type, size, and IOPS (for Extreme Persistent Disk)
- Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied
- Snapshot schedule
Hyperdisk
- Deployment zone
- Disk type, size, throughput (for Hyperdisk Throughput) and IOPS (for Hyperdisk Extreme)
- Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied
- Snapshot schedule
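As an illustration of the snapshot schedule option for Persistent Disk and Hyperdisk, a schedule is defined once as a resource policy and then attached to disks. The policy and disk names, region, timing, and retention below are placeholders, and the commands assume an authenticated gcloud CLI.

```shell
# Create a daily snapshot schedule with 14-day retention
# (names and timings are illustrative).
gcloud compute resource-policies create snapshot-schedule example-daily-snapshots \
    --region=us-central1 \
    --max-retention-days=14 \
    --daily-schedule \
    --start-time=04:00

# Attach the schedule to an existing zonal disk.
gcloud compute disks add-resource-policies example-data-disk \
    --resource-policies=example-daily-snapshots \
    --zone=us-central1-a
```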
Filestore
- Deployment region and zone
- Instance tier
- Capacity
- IP range: auto-allocated or custom
- Access control
NetApp Volumes
- Deployment region
- Service level for the storage pool
- Pool and volume capacity
- Volume protocol
- Volume export rules
Cloud Storage
- Location: multi-region, dual-region, or single region
- Storage class: Standard, Nearline, Coldline, or Archive
- Access control: uniform or fine-grained
- Encryption keys: Google-owned and Google-managed, customer-managed, or customer-supplied
- Retention policy
Storage recommendations
Use the following recommendations as a starting point to choose the storage services and features that meet your requirements. For guidance that's specific to AI and ML workloads, see Design storage for AI and ML workloads in Google Cloud.
General storage recommendations are also presented as a decision tree later in this document.
- For applications that need a parallel file system, use Parallelstore.
- For applications that need file-based access, choose a suitable file storage service based on your requirements for access protocol, availability, and performance.

  | Access protocol | Recommendation |
  | --- | --- |
  | NFS | If you need regional availability and high performance that scales with capacity, use Filestore Regional. If zonal availability is sufficient, but you need high performance that scales with capacity, use Filestore Zonal or NetApp Volumes Premium or Extreme. Otherwise, use either Filestore Basic or NetApp Volumes. For information about the differences between the Filestore service tiers, see Service tiers. |
  | SMB | Use NetApp Volumes. |

- For workloads that need primary storage with high performance, use Local SSD, Persistent Disk, or Hyperdisk depending on your requirements.
  | Requirement | Recommendation |
  | --- | --- |
  | Fast scratch disk or cache | Use Local SSD disks (ephemeral). |
  | Sequential IOPS | Use Persistent Disks with the `pd-standard` disk type. |
  | IOPS-intensive workload | Use Persistent Disks with the `pd-extreme` or `pd-ssd` disk type. |
  | Balance between performance and cost | Use Persistent Disks with the `pd-balanced` disk type. |
  | Scale performance and capacity dynamically | Use Hyperdisk. |
Choose a suitable Hyperdisk type:
- Hyperdisk Balanced is recommended for general-purpose workloads and highly available applications that need shared write access.
- Hyperdisk Extreme is recommended for workloads that need high I/O, such as high-performance databases.
- Hyperdisk Throughput is recommended for scale-out analytics, data drives for cost-sensitive apps, and for cold storage.
- Hyperdisk ML is recommended for ML workloads that need high throughput to multiple VMs in read-only mode.
For more information, see About Google Cloud Hyperdisk.
- Depending on your redundancy requirements, choose between zonal and regional disks.

  | Requirement | Recommendation |
  | --- | --- |
  | Redundancy within a single zone in a region | Use zonal Persistent Disks or Hyperdisks. |
  | Redundancy across multiple zones within a region | Use regional Persistent Disks. |
- For unlimited-scale and globally available storage, use Cloud Storage. Depending on the data-access frequency and the storage duration, choose a suitable Cloud Storage class.

  | Requirement | Recommendation |
  | --- | --- |
  | Access frequency varies, or the data-retention period is unknown or not predictable. | Use the Autoclass feature to automatically transition objects in a bucket to appropriate storage classes based on each object's access pattern. |
  | Storage for data that's accessed frequently, including for high-throughput analytics, data lakes, websites, streaming videos, and mobile apps. | Use the Standard storage class. To cache frequently accessed data and serve it from locations that are close to the clients, use Cloud CDN. |
  | Low-cost storage for infrequently accessed data that can be stored for at least 30 days (for example, backups and long-tail multimedia content). | Use the Nearline storage class. |
  | Low-cost storage for infrequently accessed data that can be stored for at least 90 days (for example, disaster recovery). | Use the Coldline storage class. |
  | Lowest-cost storage for infrequently accessed data that can be stored for at least 365 days, including regulatory archives. | Use the Archive storage class. |

  For a detailed comparative analysis, see Cloud Storage classes.
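The storage class choices above can be applied when you create a bucket. The bucket names and location below are placeholders, and the commands assume an authenticated gcloud CLI.

```shell
# Enable Autoclass for data with varying or unpredictable access patterns
# (bucket name is a placeholder).
gcloud storage buckets create gs://example-mixed-access-bucket \
    --location=us-central1 \
    --enable-autoclass

# Or set an explicit default class when the access pattern is known,
# for example Archive for long-term regulatory data.
gcloud storage buckets create gs://example-regulatory-archive \
    --location=us-central1 \
    --default-storage-class=ARCHIVE
```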
Data transfer options
After you choose appropriate Google Cloud storage services, you need to transfer your data to Google Cloud so that you can deploy and run your workloads. The data that you need to transfer might exist on-premises or on other cloud platforms.
You can use the following methods to transfer data to Google Cloud:
- Transfer data online by using Storage Transfer Service: Automate the transfer of large amounts of data between object and file storage systems, including Cloud Storage, Amazon S3, Azure storage services, and on-premises data sources.
- Transfer data offline by using Transfer Appliance: Transfer and load large amounts of data offline to Google Cloud in situations where network connectivity and bandwidth are unavailable, limited, or expensive.
- Upload data to Cloud Storage: Upload data online to Cloud Storage buckets by using the Google Cloud console, the gcloud CLI, the Cloud Storage APIs, or the client libraries.
When you choose a data transfer method, consider factors like the data size, time constraints, bandwidth availability, cost goals, and security and compliance requirements. For information about planning and implementing data transfers to Google Cloud, see Migrate to Google Cloud: Transfer your large datasets.
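For example, the online transfer methods can be invoked as follows. The bucket names and local path are placeholders; the Storage Transfer Service command assumes the transfer API is enabled and that credentials for the S3 source are configured.

```shell
# Online transfer from Amazon S3 with Storage Transfer Service
# (bucket names are placeholders).
gcloud transfer jobs create s3://example-source-bucket gs://example-dest-bucket

# Direct online upload of a local directory to Cloud Storage.
gcloud storage cp --recursive ./local-data gs://example-dest-bucket/data/
```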
Storage options decision tree
The following decision tree diagram guides you through the Google Cloud storage recommendations discussed earlier. For guidance that's specific to AI and ML workloads, see Design storage for AI and ML workloads in Google Cloud.
What's next
- Estimate storage cost by using the Google Cloud Pricing Calculator.
- Learn about the best practices for building a cloud topology that's optimized for security, resilience, cost, and performance.
- Learn when to use parallel file systems like Lustre for HPC workloads.
Contributors
Author: Kumar Dhanagopal | Cross-Product Solution Developer
Other contributors:
- Brennan Doyle | Solutions Architect
- Dean Hildebrand | Technical Director, Office of the CTO
- Geoffrey Noer | Group Product Manager
- Jack Zhou | Technical Writer
- Jason Wu | Director, Product Management
- Jeff Allen | Solutions Architect
- Sean Derrington | Group Outbound Product Manager, Storage