Google Cloud Data Portability and Switching Procedures

Google Cloud is dedicated to offering customers transparent and flexible methods for managing and moving their data. This page outlines the procedures, methods and formats available for switching and porting data to and from Google Cloud services, along with relevant technical specifications and limitations. (This page does not apply to Google Workspace. Google Workspace customers should refer to the Google Workspace Admin Help Center for information about how to move their data.)

1. Transitioning to Google Cloud: Your Guide to Data Management and Analytics

For customers considering a switch to Google Cloud services, our platform is designed to provide powerful, flexible, and secure solutions for managing and analyzing your data and workloads. Google Cloud offers a comprehensive suite of services and tools to facilitate seamless migration and robust operation once you're on board.


2. Transitioning from Google Cloud: Your Guide to Switching and Portability Procedures

Google Cloud provides various mechanisms to facilitate data movement. These mechanisms are primarily implemented through user interface frontends (e.g., Cloud Console), publicly accessible and documented APIs (e.g., OnePlatform APIs), and client libraries (SDKs).

Google Cloud Exit Program

For customers planning to migrate all Google Cloud workloads and data to another cloud provider or an on-premises data center and subsequently terminate their Google Cloud agreement, a free data transfer program is available. This program covers data residing in Google Cloud data storage and data management products, including BigQuery, Cloud Bigtable, Cloud SQL, Cloud Storage, Datastore, Filestore, Spanner, and Persistent Disk. The process involves submitting an Exit Notice form, initiating the migration within the defined "Initiation Period", and informing Google by submitting a Completion Notice when the migration has been successfully completed. See the Steps to exit Google Cloud with free data transfer and, for customers with billing addresses in the European Economic Area, the EU Data Act Terms in the General Service Terms. In either case, the Exit Notice must be submitted prior to the termination of the Google Cloud agreement.

General Guidelines

  • Data Residency Considerations: Some exports, such as those from Security Command Center, are subject to data residency controls and must be configured within the Security Command Center location selected through those controls. This feature helps customers meet their data residency commitments.
  • VPC Service Controls Impact: Projects protected by VPC Service Controls might encounter restrictions or require specific configurations or manual processes for certain exports, such as Compute Engine images or Cloud Billing data.
  • Time-Based Limitations: Compute Engine image exports that leverage Cloud Build are limited to a maximum session duration of 24 hours; larger images may necessitate manual export. Cloud Asset Inventory prevents new export requests to the same destination if a prior request initiated less than 15 minutes previously is still active.
  • Data Consistency for Security Command Center Exports: Due to the application of filters, snapshots of findings exported from Security Command Center to BigQuery may not always precisely reflect the absolute latest state of findings within Security Command Center. This can lead to discrepancies in counts of active findings.

IAM Security Posture and Data Exfiltration Controls for Exports

When engineering a data export process, your Identity and Access Management (IAM) configuration is a critical security boundary. A misconfigured identity can lead to unauthorized data access or exfiltration. We recommend following the guidelines below:

1. Enforce the Principle of Least Privilege (PoLP) with Granular Roles

Avoid using primitive roles like roles/owner or roles/editor for export tasks, as they grant excessive permissions beyond the scope of the operation.

  • Use Predefined Roles: Assign highly specific, predefined roles. For an export from BigQuery to Cloud Storage, the exporting identity should, at a minimum, have roles/bigquery.dataViewer on the source dataset and roles/storage.objectCreator on the destination bucket (a minimal grant sketch follows this list).
  • Create Custom IAM Roles: For maximum control, create a custom IAM role. This allows you to bundle the exact permissions required for the task, such as bigquery.tables.getData, bigquery.jobs.create, and storage.objects.create. This ensures the identity has no ancillary permissions, effectively creating a purpose-built "key" that only opens the required doors.
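
The following sketch illustrates the predefined-role guidance above: it grants a hypothetical export service account only roles/storage.objectCreator on a destination bucket, using the Cloud Storage Python client library. The project, bucket, and service account names are placeholders, not values from this page.

    from google.cloud import storage

    client = storage.Client(project="my-project")  # placeholder project ID
    bucket = client.bucket("export-destination-bucket")  # placeholder destination bucket

    # Fetch the current bucket-level IAM policy (version 3 supports conditional bindings).
    policy = bucket.get_iam_policy(requested_policy_version=3)

    # Grant only the permission needed to write export objects; no read or delete rights.
    policy.bindings.append(
        {
            "role": "roles/storage.objectCreator",
            "members": {"serviceAccount:bq-export@my-project.iam.gserviceaccount.com"},
        }
    )
    bucket.set_iam_policy(policy)

A corresponding roles/bigquery.dataViewer binding would be granted on the source dataset rather than project-wide, keeping both sides of the export scoped to the minimum.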

2. Utilize Dedicated Service Accounts for Programmatic Access

For any automated or recurring export pipelines, do not use user credentials. Instead, leverage a dedicated service account.

  • Isolate Identity: Service accounts are non-human identities designed for application-level access. Using a dedicated service account for each export pipeline isolates the process and ensures that its permissions and audit trail are not conflated with other infrastructure or user activities.
  • Leverage Impersonation: For added security, grant a primary identity (like a Cloud Build service account or a GKE workload identity) the permission to impersonate this dedicated export service account (iam.serviceAccounts.getAccessToken permission). This allows the primary identity to generate short-lived credentials for the export service account, reducing the risk associated with static, long-lived keys.
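
One way to apply the impersonation pattern described above is with the google-auth library: the sketch below mints short-lived credentials for a dedicated export service account and then writes through the Cloud Storage client as that identity. The account, project, and bucket names are hypothetical, and the calling identity must already hold the token-creator permission on the target account.

    import google.auth
    from google.auth import impersonated_credentials
    from google.cloud import storage

    # Credentials of the primary identity (for example, a Cloud Build or GKE workload identity).
    source_credentials, _ = google.auth.default()

    # Mint short-lived credentials for the dedicated export service account; no static key is created.
    export_credentials = impersonated_credentials.Credentials(
        source_credentials=source_credentials,
        target_principal="bq-export@my-project.iam.gserviceaccount.com",  # placeholder
        target_scopes=["https://www.googleapis.com/auth/cloud-platform"],
        lifetime=600,  # seconds
    )

    # Subsequent writes to the destination bucket run as the export service account.
    storage_client = storage.Client(project="my-project", credentials=export_credentials)
    storage_client.bucket("export-destination-bucket").blob("healthcheck.txt").upload_from_string("ok")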

3. Harden the Export Destination and Network Perimeter

Securing the data's destination is as critical as securing the source. The integrity of the export is compromised if the destination bucket is publicly accessible or vulnerable to exfiltration.

  • Enforce Uniform Bucket-Level Access: On your destination Cloud Storage buckets, enable Uniform Bucket-Level Access. This disables legacy object-level ACLs and ensures that only bucket-level IAM policies govern access, simplifying permission management and preventing misconfigurations (see the sketch after this list).
  • Utilize VPC Service Controls: To prevent data exfiltration, place both the source service (e.g., BigQuery) and the destination service (e.g., Cloud Storage) within the same VPC Service Controls perimeter. This creates a virtual security boundary designed to block the exported data from being moved to an unauthorized bucket or project outside the perimeter, even if an identity has valid IAM permissions.
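
As a minimal sketch of the first recommendation, assuming a destination bucket already exists, the snippet below enables Uniform Bucket-Level Access through the Cloud Storage Python client so that only IAM policies, not legacy object ACLs, govern access. The names are placeholders.

    from google.cloud import storage

    client = storage.Client(project="my-project")  # placeholder project ID
    bucket = client.get_bucket("export-destination-bucket")  # placeholder destination bucket

    # Turn off legacy object-level ACLs; bucket-level IAM becomes the single source of truth.
    bucket.iam_configuration.uniform_bucket_level_access_enabled = True
    bucket.patch()

VPC Service Controls perimeters are defined at the organization level (for example with Access Context Manager) rather than on individual buckets, so they are configured separately from this snippet.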

4. Implement Comprehensive Auditing and Monitoring

A complete audit trail is necessary for understanding activities and accesses within your Google Cloud resources and investigating security incidents.

  • Enable Data Access Audit Logs: By default, Cloud Audit Logs track administrative changes and system events. You must manually enable Data Access audit logs for your source and destination services. These logs track who accessed the data (DATA_READ) and where it was written (DATA_WRITE), providing the granular detail needed to monitor export activity.
  • Configure Immutable Log Sinks: Use the Logs Explorer to query and analyze these logs. For long-term retention, configure a log sink to export all relevant audit logs to an immutable storage location, such as a separate, locked-down BigQuery dataset or a Cloud Storage bucket with a strict retention policy and object holds enabled.
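
As one possible implementation of the sink recommendation above, the sketch below uses the Cloud Logging Python client to route Data Access audit entries for a destination bucket into a separate archive bucket. All names are placeholders; the retention policy and object holds, as well as the grant for the sink's writer identity, are configured on the archive bucket afterwards.

    from google.cloud import logging

    client = logging.Client(project="my-project")  # placeholder project ID

    # Match Data Access audit log entries for writes to the export destination bucket.
    audit_filter = (
        'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" '
        'AND resource.type="gcs_bucket" '
        'AND resource.labels.bucket_name="export-destination-bucket"'
    )

    # Route matching entries to a locked-down Cloud Storage bucket for long-term retention.
    sink = client.sink(
        "export-audit-archive",
        filter_=audit_filter,
        destination="storage.googleapis.com/audit-log-archive-bucket",  # placeholder
    )
    if not sink.exists():
        sink.create()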

Key Google Cloud Services and Their Data Portability Methods

  • Batch: Data portability for Batch typically involves bringing in data for processing and exporting the results, usually through other Google Cloud storage or database services.
  • Compute Engine
  • Image Export: You can export custom boot disk images to Cloud Storage as tar.gz files for backup or for sharing with projects that don't have direct access to the image. This can be done via the Google Cloud console, the Google Cloud CLI, or the REST API. Other supported formats include vmdk, vhdx, vpc, vdi, and qcow2.
  • Limitations: Image export, which uses Cloud Build, has a 24-hour limit; larger images might need manual export. VPC Service Controls can also require manual exemption or specific configurations. If the default Compute Engine service account is disabled, a custom service account with roles/compute.storageAdmin and roles/storage.objectAdmin is needed for export.
  • Google Cloud VMware Engine (GCVE): GCVE is a fully managed, native VMware Cloud Foundation software stack. Data movement generally uses standard VMware tools or integrations with Google Cloud storage services. The Migrate to Virtual Machines service can be used to migrate VMs from a GCVE source.
  • Cloud Storage
  • Data Insights Export: You can export inventory reports for comprehensive data insights across buckets.
  • Usage and Storage Logs: Cloud Storage provides hourly usage logs and daily storage logs in CSV format for download. These are automatically stored in a designated bucket and can be analyzed in BigQuery. While Cloud Audit Logs are for continuous API tracking, usage logs help track public access, Object Lifecycle Management changes, and detailed request info.
  • Direct Object Management: You can download objects as files or into memory, perform sliced downloads, or use streaming downloads. Objects can be uploaded from files or memory, with support for resumable, XML API multipart, parallel composite, and streaming uploads.
  • Persistent Disk: Data portability for Persistent Disk, which provides block storage for VM instances, is primarily achieved via Compute Engine's image export or by mounting the disk to a VM for file transfers.
  • Cloud Filestore: Filestore offers scalable file storage. There isn't a single "export" button or API; instead, data export is performed by mounting the file share, typically on a Compute Engine VM instance, and copying files with standard transfer tools such as the gcloud storage commands (the successor to gsutil) or gcloud compute scp.
  • Cloud Storage for Firebase: This service provides object storage for user-generated content, with portability methods similar to Cloud Storage, which it uses.
  • AlloyDB: AlloyDB for PostgreSQL is a fully managed, PostgreSQL-compatible database. BigQuery supports federated queries to AlloyDB, and resource-level tags for AlloyDB instances are in BigQuery billing exports. Because AlloyDB is fully PostgreSQL-compatible, exporting data is accomplished with standard, native PostgreSQL tools and commands (for example, pg_dump, or psql with the \copy command), or by moving data directly into other Google Cloud services such as BigQuery via federated queries. Another option is Datastream, which continuously exports data changes (inserts, updates, deletes) from AlloyDB to destinations like BigQuery, Cloud Storage, or Cloud Spanner.
  • Cloud Bigtable: BigQuery query results can be exported to Bigtable, and BigQuery can query Cloud Bigtable data. Resource-level tags for Bigtable instances are in BigQuery billing exports. Dataflow, Google Cloud's service for large-scale data processing, is the most powerful and flexible Google Cloud method for exporting Bigtable data into standard, readable formats like CSV, Parquet, or Avro.
  • Datastore: BigQuery supports batch-loading data from Datastore exports. Exporting data from Datastore is handled by the managed export and import service, which creates a consistent, point-in-time copy of your Datastore entities and saves it to a Cloud Storage bucket.
  • Firestore: Customers can export data from Firestore by using the managed export and import service. This process, typically initiated via the gcloud command-line tool, creates a complete, point-in-time copy of either the entire database or specific collections. The service exports the data into a new folder within a designated Cloud Storage bucket. This output is specifically formatted to be used either for restoring to Firestore or for loading into other Google Cloud services like BigQuery.
  • Memorystore: Memorystore offers fully managed Redis and Memcached services. Resource-level tags for Memorystore for Redis instances are in BigQuery billing exports. For Memorystore for Redis, Google provides a managed export feature that performs a BGSAVE operation and saves a point-in-time snapshot of the instance's data as a Redis Database Backup (.rdb) file to a designated Cloud Storage bucket. In contrast, Memorystore for Memcached does not support a data export feature; because Memcached is a volatile cache by design, data persistence and backup are the responsibility of the client application, which should treat the cache as ephemeral.
  • Cloud Spanner: Customers export data from Cloud Spanner by using a Google-provided Dataflow template. This method is designed for bulk export and creates a consistent, point-in-time snapshot of a database. The process involves running a Dataflow job that reads from the Spanner database and writes the data into a set of Avro files in a specified Cloud Storage bucket. This export can be of an entire database or the results of a specific SQL query, and it can be initiated via the Google Cloud Console, the gcloud command-line tool, or the REST API.
  • Cloud SQL: Customers can export data from Cloud SQL by using the fully managed export feature, which is integrated directly into the Google Cloud Console. This process allows users to export an entire database or specific tables into a SQL dump file (.sql) or a CSV file. The export operation saves the resulting file to a designated Cloud Storage bucket, from where it can be downloaded or imported into other systems. For automation, the same export functionality can be triggered via the gcloud command-line tool or the Cloud SQL Admin API. BigQuery can also query Cloud SQL data.
  • Virtual Private Cloud (VPC): VPC is a networking service and doesn't inherently store data. Its role in portability is facilitating secure network connections for data transfer by other services. VPC Service Controls can affect BigQuery exports, sometimes requiring manual exemption.
  • BigQuery
  • Data Export: Query results can be exported to Cloud Storage, Bigtable, Spanner, or Pub/Sub. The bq extract command-line tool supports exporting table data to Cloud Storage, and export to Azure Blob Storage is also supported (a client-library sketch follows this list).
  • Data Formats and Standards : Uses GoogleSQL, an ANSI-standard SQL:2011 compliant dialect, and supports open table formats like Apache Iceberg, Delta, and Hudi.
  • APIs and Client Libraries : Exposes REST and RPC APIs and provides client libraries for Python, Java, JavaScript, Go, C#, and Ruby. ODBC and JDBC drivers are available for integration.
  • Data Catalog: Data Catalog is a metadata management service that helps discover and curate data and manage metadata and data quality. Because it stores metadata rather than the underlying user data, "exporting" from this service means exporting the metadata itself.
  • Dataform: Dataform helps build, version control, and deploy SQL workflows in BigQuery. Dataform itself doesn't store user data or move data between different systems; its role is to orchestrate the transformation of data within BigQuery. By defining these transformations (which can include moving data from staging tables to production tables, or restructuring data), Dataform facilitates the logical flow and availability of data within the BigQuery ecosystem, thereby indirectly supporting the preparation of data for export or consumption by other services.
  • Dataflow: Dataflow is a streaming analytics service. Cloud Spanner uses Dataflow to export data to Cloud Storage. Dataflow acts as a processing engine that enables data movement between various storage and messaging services. Since Dataflow is a transient processing engine and does not persistently store data itself, "exporting" from Dataflow means defining the output destination (a "sink") for your data processing pipeline. This destination can be a Cloud Storage bucket where data is saved as files (e.g., CSV, Parquet), a table in BigQuery, a topic in Pub/Sub, or many other supported storage and messaging systems. The export is therefore an integral part of the pipeline's logic, not a separate action taken on the service itself.
  • Dataproc: Dataproc is a service for running Apache Spark and Apache Hadoop clusters. It facilitates data processing pipelines that can read data from one storage location, transform data, and write data to another storage location. Since Dataproc is a data processing engine and does not persistently store user data itself, "exporting" from Dataproc means defining the output destination within the code of the job that runs on the cluster.
  • Dataproc Metastore: Dataproc Metastore mainly manages metadata for big data processing frameworks. Since Dataproc Metastore manages metadata (like table schemas, partitions, and locations of data files) rather than the actual user data, "exporting" from this service typically refers to creating a backup of the metadata database. Customers can initiate a managed backup operation for their Dataproc Metastore service, which creates a snapshot of the metadata database and stores it in a Cloud Storage bucket.
  • Datastream: Datastream is a serverless change data capture and replication service that enables real-time data synchronization between databases and other destinations. Because Datastream streams data from a source to a destination and does not permanently store data itself, "exporting" from Datastream means configuring the destination ("sink") of the data stream.
  • Looker (Google Cloud core): Customers export data from Looker by downloading the content they have created, such as a Look (a saved report) or the results from an Explore page. From a Look or an Explore, users can download the data behind a visualization in various file formats, including CSV, JSON, Microsoft Excel, and TXT. For Dashboards, users can download the entire dashboard as a PDF or download the data from its individual tiles as a collection of CSV files in a zip folder. This functionality allows users to take the results of their analysis out of Looker for use in presentations, spreadsheets, or other tools.
  • Looker Studio: Customers export from Looker Studio in two primary ways. They can download an entire report as a static PDF file to preserve its visual layout for presentations or offline sharing. Alternatively, to export the underlying data from a specific chart or table within the report, they can use the "Export data" option. This allows them to save the data directly to Google Sheets or download it as a CSV or Excel file, making the summarized and filtered data from that visualization portable for further analysis in other tools. BigQuery data can be visualized with Looker Studio, as can Cloud Billing data.
  • Looker Studio Pro: Looker Studio Pro offers the same portability methods as the standard Looker Studio. Customers can download an entire report as a static PDF file to save a visual snapshot for offline use, or right-click on a chart or table and select the "Export data" option to save the underlying data for that chart directly to Google Sheets or download it as a CSV or Excel file, making the data portable for analysis in other applications.
  • Pub/Sub: Pub/Sub is a real-time messaging service and not a permanent data store, meaning you don't "export" data from it in the traditional sense. Instead, you consume the messages. A customer "exports" data by creating a subscriber application that pulls messages from a Pub/Sub subscription and writes them to a persistent storage location. A common and easy way to do this is by using a pre-built, managed subscriber in the form of a Dataflow template, such as the "Cloud Pub/Sub to Cloud Storage" template, which automatically reads messages from a topic and saves them as text files in a Cloud Storage bucket. 
  • Google Kubernetes Engine (GKE): GKE is a managed environment for running containerized apps. You can enable cost allocation for GKE in Cloud Billing detailed exports to see cluster cost breakdowns. Backup for GKE is available for data resource protection. As GKE itself is an orchestration platform and does not directly store user data, "exporting data" from GKE primarily refers to backing up or moving data from the persistent volumes used by applications running within the cluster. Customers use the Backup for GKE service to create consistent backups of their cluster configuration and the data stored in Persistent Volumes. These backups are stored in a Cloud Storage bucket, which effectively serves as the "export" mechanism for application data that requires persistence, allowing for restoration, migration, or external analysis.
  • Config Sync: Config Sync manages configurations for Kubernetes clusters. It doesn't store user data directly. Config Sync does not store any data itself but instead synchronizes configuration files from a source of truth. "Exporting" from this service means accessing the source Git repository. All the configurations, policies, and resource definitions that Config Sync applies to your clusters are stored as standard YAML or JSON files within that Git repository. Therefore, a customer can export all their configurations simply by cloning, pulling, or browsing the Git repository, which provides a complete, version-controlled history of all managed configurations.
  • BigQuery Omni: BigQuery Omni is a Google-Managed Multi-Cloud service that allows BigQuery to query data in other cloud environments like AWS and Azure Blob Storage, enhancing portability by enabling analytics on multi-cloud data without explicit transfers. BigQuery Omni's primary function is to query data that resides in other clouds (like AWS S3 or Azure Blob Storage) without moving it, so a traditional "export" is not its main purpose. However, after running a query, a customer can export the query results. This is done by using the EXPORT DATA SQL statement, similar to standard BigQuery. The EXPORT DATA statement will save the results of your Omni query to a location in your designated Cloud Storage bucket. From there, the data can be downloaded or moved to another service as needed.
  • Cloud Deployment Manager: Cloud Deployment Manager is an "Infrastructure as Code" service. Customers "export" from it by copying their configuration files. These are the human-readable text files (typically YAML, with optional Python or Jinja2 templates) that define the resources to be deployed. These files reside on the customer's local workstation or in their own version control system (like Git).
  • Cloud Shell: Cloud Shell is an interactive shell environment for managing Google Cloud services (like initiating data exports). Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual machine instance. This storage is on a per-user basis and is available across projects. All files you store in your home directory, including installed software, scripts, and user configuration files like .bashrc and .vimrc, persist between sessions and count towards the 5 GB limit. Data in Cloud Shell's home directory can be moved using standard Linux utilities or integrations with Cloud Storage. 
  • Cloud Console: The Cloud Console is a web-based user interface that does not directly store user data. It's the primary way users initiate data portability operations for other Google Cloud services (e.g., exporting Compute Engine images, Security Command Center findings/assets, or Migration Center reports).
  • Chronicle Security Operations - Primary Export Methods:
  • BigQuery Export: Chronicle offers a native, continuous export feature called "Bring Your Own BigQuery" (BYOBQ). This allows you to export Unified Data Model (UDM) events, detection data, and Indicators of Compromise (IOC) matches directly to a BigQuery dataset within your own Google Cloud project. This is the most powerful Google Cloud method for bulk analysis and long-term data retention.
  • Data Export API: For more targeted or on-demand exports, the Data Export API can be used to export raw logs to a Google Cloud Storage bucket. You can export a maximum of 10 TB of compressed data per request, with a maximum of three export requests in process at any time. This method is suitable for specific, point-in-time data pulls.
  • Data Formats and Standards:
  • When exported to BigQuery, data is stored in structured tables that adhere to the Unified Data Model (UDM) schema, making it optimized for analysis.
  • Data exported via the API is typically in JSON format.
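
As a client-library illustration of the BigQuery export path referenced above, the following sketch runs an extract job that writes a table to Cloud Storage as Parquet. The project, dataset, table, and bucket names are placeholders; changing destination_format produces CSV, newline-delimited JSON, or Avro instead.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # placeholder project ID

    job_config = bigquery.job.ExtractJobConfig(
        destination_format=bigquery.DestinationFormat.PARQUET
    )

    extract_job = client.extract_table(
        "my-project.analytics.events",                      # placeholder source table
        "gs://export-destination-bucket/events-*.parquet",  # the wildcard shards large tables
        job_config=job_config,
    )
    extract_job.result()  # block until the export job completes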

3. Data Structures, Data Formats, and Open Interoperability Specifications

Google Cloud services provide data in widely adopted, machine-readable formats that align with industry standards whenever applicable. This approach ensures high interoperability and practical utility for customers.

Specific Service Details on Data Formats and Structures:

BigQuery

  • Data Formats : Supports loading and exporting data in CSV, JSONL, Avro, and Parquet.
  • SQL Dialect : Uses GoogleSQL, an ANSI-compliant dialect, offering features like joins, nested fields, analytic functions, multi-statement queries, and geospatial functions.
  • Data Export : BigQuery offers flexible options for exporting data, primarily to Google Cloud Storage, allowing you to move your data for use in other applications, for long-term archival, or for sharing with external partners. The supported export formats are CSV, JSON (newline-delimited), Avro, and Parquet.
  • APIs and Client Libraries : Exposes REST and RPC APIs and provides client libraries for Python, Java, Go, C#, Node.js, PHP, and Ruby. ODBC and JDBC drivers are also available for integration.

Cloud Storage

  • Data Insights Export : Users can export inventory reports for comprehensive data insights across buckets.
  • Log Formats : Offers usage logs and storage logs as CSV files. Usage logs are generated hourly and cover bucket requests, while storage logs are generated daily and cover storage consumption. Usage logs and Data Access audit logs are two separate logging systems that complement each other and must be enabled independently; Google recommends enabling both for a complete record of activity. Data Access audit logs, specifically, are designed to capture detailed caller identity for audit purposes.
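
Assuming usage and storage logging is already configured to write into a logging bucket, the sketch below uses the Cloud Storage Python client to download the hourly usage-log CSV objects for one monitored bucket. The project and bucket names, and the object-name prefix, are placeholders.

    from google.cloud import storage

    client = storage.Client(project="my-project")  # placeholder project ID

    # Usage logs are plain CSV objects written into the configured logging bucket;
    # their names begin with the monitored bucket's name followed by "_usage_".
    for blob in client.list_blobs("my-usage-logs-bucket", prefix="example-bucket_usage"):
        blob.download_to_filename(blob.name)  # download each hourly CSV for local analysis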

Compute Engine

  • Image Export : Custom boot disk images can be exported to Cloud Storage as tar.gz files. Other supported formats include vmdk, vhdx, vpc, vdi, and qcow2. Exports can be done via the Cloud Console, Google Cloud CLI, or REST API.

Cloud Billing

  • Data Export : Billing data (standard and detailed costs, rebilling, pricing) can be automatically exported daily to a specified BigQuery dataset for financial analysis. Enabling this early is recommended for comprehensive data.
  • Data Schema : Standard usage cost data goes into gcp_billing_export_v1_<BILLING_ACCOUNT_ID> in BigQuery, including fields like account ID, invoice date, services, SKUs, project details, labels, locations, cost, usage, credits, adjustments, currency, and resource tags. Detailed usage data adds resource-level granularity for services like Compute Engine, GKE, and Cloud Run.
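
For illustration, the following sketch queries a standard billing export table from BigQuery and summarizes cost per service for a single invoice month. The dataset name and the billing account suffix in the table name are placeholders.

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # placeholder project ID

    # Placeholder table; the suffix is the billing account ID with dashes replaced by underscores.
    query = """
        SELECT service.description AS service, SUM(cost) AS total_cost
        FROM `my-project.billing.gcp_billing_export_v1_XXXXXX_XXXXXX_XXXXXX`
        WHERE invoice.month = '202501'
        GROUP BY service
        ORDER BY total_cost DESC
    """
    for row in client.query(query).result():
        print(f"{row.service}: {row.total_cost:.2f}")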

Cloud Asset Inventory

  • Metadata Export : Facilitates exporting asset snapshots from your organization, folders, or projects to a BigQuery table or Cloud Storage for analysis.
  • BigQuery Schemas : The schema for BigQuery exports is dynamic based on content type:
  • RESOURCE or Unspecified:
  • Single Table (when per-asset-type is false or unspecified): A single BigQuery table is created. The resource.data column contains the resource metadata as a JSON string.
  • Separate Tables (when per-asset-type is true): Separate tables are created for each asset type. The schema of each table includes RECORD-type columns that map to the nested fields in the Resource.data field, up to the 15 nested levels that BigQuery supports.
  • IAM_POLICY : A BigQuery table is created with the schema for IAM policies.
  • ORG_POLICY : A BigQuery table is created with the schema for organization policies.
  • ACCESS_POLICY : A BigQuery table is created with the schema for VPC Service Controls (VPC SC) policies.
  • OS_INVENTORY : A BigQuery table is created with the schema for OS Config instance inventory.
  • RELATIONSHIP : A BigQuery table is created with the schema for asset relationships.
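
To make the snapshot export concrete, here is a sketch using the Cloud Asset Inventory Python client that writes a RESOURCE-type snapshot of a project to a BigQuery table. The project, dataset, and table names are placeholders.

    from google.cloud import asset_v1

    client = asset_v1.AssetServiceClient()

    output_config = asset_v1.OutputConfig(
        bigquery_destination=asset_v1.BigQueryDestination(
            dataset="projects/my-project/datasets/asset_inventory",  # placeholder dataset
            table="resource_snapshot",                               # placeholder table
            force=True,  # overwrite the table if it already exists
            # separate_tables_per_asset_type=True would create one table per asset type instead.
        )
    )

    operation = client.export_assets(
        request={
            "parent": "projects/my-project",  # can also be an organization or folder
            "content_type": asset_v1.ContentType.RESOURCE,
            "output_config": output_config,
        }
    )
    operation.result()  # block until the snapshot export completes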

Security Command Center

  • Data Export Options: Offers one-time manual exports and continuous exports.
  • Export Formats: One-time exports of findings can be in JSON, JSONL, or CSV to a Cloud Storage bucket, or as a CSV download.
  • Streaming Data: Continuous exports use Pub/Sub for near real-time delivery of finding snapshots. Findings can also be streamed to BigQuery for direct analysis.
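
Because continuous exports deliver finding snapshots through Pub/Sub, consuming them is a standard subscriber task. The sketch below, with placeholder project, subscription, and bucket names, pulls notification messages and persists each one as a JSON object in Cloud Storage.

    from concurrent.futures import TimeoutError
    from google.cloud import pubsub_v1, storage

    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path("my-project", "scc-findings-sub")  # placeholders
    bucket = storage.Client(project="my-project").bucket("export-destination-bucket")   # placeholder

    def persist(message: pubsub_v1.subscriber.message.Message) -> None:
        # Each notification carries a finding snapshot as JSON in the message body.
        blob = bucket.blob(f"scc-findings/{message.message_id}.json")
        blob.upload_from_string(message.data, content_type="application/json")
        message.ack()

    streaming_pull = subscriber.subscribe(subscription_path, callback=persist)
    try:
        streaming_pull.result(timeout=60)  # run for a bounded time in this sketch
    except TimeoutError:
        streaming_pull.cancel()
        streaming_pull.result()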

Migration Center

  • Report Formats: Generates reports on asset inventory (servers and databases) and performance data, available for download as CSV or Microsoft Excel files, or exportable to Google Sheets.

Cloud Logging

  • Log Export Formats: Logs can be downloaded in either CSV or JSON format. Cloud Logging adheres to a defined log entry data model. Audit log entries, for example, are LogEntry objects with a protoPayload field containing an AuditLog object.
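
As an illustration of the LogEntry model described above, this sketch reads recent Data Access audit entries with the Cloud Logging Python client and prints the caller identity from each entry's AuditLog payload. The project name and timestamp are placeholders, and Data Access audit logs must already be enabled for the services involved.

    from google.cloud import logging

    client = logging.Client(project="my-project")  # placeholder project ID

    audit_filter = (
        'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" '
        'AND timestamp>="2025-01-01T00:00:00Z"'  # placeholder lower bound
    )

    # Audit entries are LogEntry objects whose protoPayload holds an AuditLog record.
    for entry in client.list_entries(filter_=audit_filter, order_by=logging.DESCENDING, max_results=20):
        payload = entry.payload if isinstance(entry.payload, dict) else {}
        auth = payload.get("authenticationInfo", {})
        print(entry.timestamp, auth.get("principalEmail"), payload.get("methodName"))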

Policy Intelligence (IAM Role Recommendations)

  • Access Data Export: Aggregated IAM access data, used for role recommendations, can be exported to BigQuery via the BigQuery Data Transfer Service. The dataset location must be US or EU. The location is immutable once the dataset and transfer are created.

General Data Format Principles:

  • Machine-Readable Standards: For customer-facing data, if an industry standard format exists for a particular data type, the data should be made available in that format. If multiple industry standards are present, users should have the option to select one or more to ensure a high-quality product experience.
  • Fallback Formats: In situations where specific industry standards are not applicable, CSV or HTML are used for customer-facing data exports.
  • Internal Data Formats: For certain other kinds of exportable data, formats such as CSV, HTML, ASCII-Proto, or TXT are utilized.
  • Flexible Data Structures: For complex or evolving data, fields may incorporate a collection of JSON objects within a single unstructured string field. External documentation clarifies that the format and content of such exported "blob" data are subject to change and not strictly guaranteed.

API Interfaces and Design:

  • Programmatic Access: Google Cloud APIs serve as programmatic interfaces to Google Cloud services, enabling developers to integrate a wide array of cloud functionalities—from computing to machine learning—into their applications.
  • Interface Support: Cloud APIs support both JSON HTTP and gRPC interfaces.
  • Client Libraries: Google Cloud Client Libraries (available for Python, Java, Go, Node.js, Ruby, C++, C#, and PHP) leverage the gRPC interface for enhanced performance and usability. Support for third-party clients is also provided.
  • Security: All Cloud APIs exclusively accept secure requests that use TLS encryption. In-transit encryption is either managed automatically by the client libraries or, for custom clients, requires adherence to gRPC authentication guidelines.
  • Design Principles: All Cloud APIs are designed following resource-oriented principles , as detailed in the API Design Guide, to ensure a simple and consistent developer experience.

4. Google Cloud services covered by EU Data Act Terms / free data transfer program for Google Cloud exit

Compute
  • Batch
  • Compute Engine
  • Google Cloud VMware Engine (GCVE)

Storage
  • Cloud Storage
  • Persistent Disk
  • Cloud Filestore
  • Cloud Storage for Firebase

Databases
  • AlloyDB
  • Cloud Bigtable
  • Datastore
  • Firestore
  • Memorystore
  • Cloud Spanner
  • Cloud SQL
  • Firebase Data Connect (Gated Preview)

Networking
  • Cloud CDN
  • Cloud VPN
  • Media CDN
  • Network Connectivity Center
  • Network Service Tiers
  • Spectrum Access System
  • Virtual Private Cloud

Data Analytics
  • BigQuery
  • Cloud Composer
  • Cloud Data Fusion
  • Cloud Life Sciences (formerly Google Genomics)
  • Data Catalog
  • Dataform
  • Dataplex
  • Dataflow
  • Dataproc
  • Dataproc Metastore
  • Datastream
  • Looker (Google Cloud core)
  • Looker Studio
  • Looker Studio Pro
  • Pub/Sub

Container Services
  • Google Kubernetes Engine
  • GKE Enterprise
  • Config Sync
  • Connect

Google-Managed Multi-Cloud Services
  • BigQuery Omni

Management Tools
  • Google Cloud App
  • Cloud Deployment Manager
  • Cloud Shell

Console
  • Cloud Console

Hosting
  • Firebase App Hosting (Preview)

Last modified September 9, 2025