This page describes how to relocate buckets from one location to another. For conceptual information about bucket relocation, see Bucket relocation.
Before you begin
Before you can relocate buckets, complete the following steps:
- Check the quotas and limits to ensure that the destination location has sufficient quota to accommodate the bucket's data.
- Determine the bucket relocation type to understand whether write downtime is required.
- If you use inventory reports, save your configurations.
- Get the required roles, which are described in the following section.
Get required roles
To get the permissions that you need to relocate buckets, ask your administrator to grant you the Storage Admin (roles/storage.admin) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to relocate buckets. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to relocate buckets:
- To relocate a bucket: storage.buckets.relocate
- To view the status of a bucket relocation operation: storage.bucketOperations.get
- To view the list of bucket relocation operations for a project: storage.bucketOperations.list
- To cancel a bucket relocation operation: storage.bucketOperations.cancel
- To view the metadata of a bucket during the bucket relocation's dry run and incremental data copy phases: storage.buckets.get
- To get an object in a bucket you want to relocate: storage.objects.get
- To list the objects in a bucket you want to relocate: storage.objects.list
You might also be able to get these permissions with custom roles or other predefined roles.
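For example, an administrator can grant the Storage Admin role with the gcloud CLI. This is a sketch; the project ID and user email are placeholders for your own values:

gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:USER_EMAIL \
    --role=roles/storage.admin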
Relocate buckets
This section describes the process of relocating Cloud Storage buckets from one location to another. When you relocate a bucket, you initiate the incremental data copy process, monitor the process, and then initiate the final synchronization step. For more information about these steps, see Understand the bucket relocation process.
Perform a dry run
To minimize potential issues during the bucket relocation process, we recommend you perform a dry run. A dry run simulates the bucket relocation process without moving data, helping you to catch and resolve issues early on. The dry run checks for the following incompatibilities:
- Customer-managed encryption keys (CMEK) or Customer-supplied encryption keys (CSEK)
- Locked retention policies
- Objects with temporary holds
- Multipart uploads
A dry run can't identify every possible issue, because some issues only surface during the live migration due to factors such as real-time resource availability. However, it reduces the risk of encountering time-consuming issues during the actual relocation.
Command line
Simulate the bucket relocation with a dry run:
gcloud storage buckets relocate gs://BUCKET_NAME --location=LOCATION --dry-run
Where:
- BUCKET_NAME is the name of the bucket that you want to relocate.
- LOCATION is the destination location of the bucket.
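For example, the following command simulates relocating a hypothetical bucket named my-bucket to us-central1:

gcloud storage buckets relocate gs://my-bucket --location=us-central1 --dry-run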
After you initiate a dry run, a long-running operation starts. You'll receive an operation ID and a description of the operation. Track the progress and completion of the dry run by getting the details of the long-running operation.
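For example, you can get the operation details with the gcloud CLI. This sketch assumes the gcloud storage operations describe command, a hypothetical bucket name, and the OPERATION_ID returned by the dry-run command:

gcloud storage operations describe projects/_/buckets/my-bucket/operations/OPERATION_ID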
If the dry run reveals any issues, address them before proceeding to the Initiate incremental data copy step.
REST APIs
JSON API
- Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.
- Create a JSON file that contains the settings for the bucket, which must include the destinationLocation and validateOnly parameters. See the Buckets: relocate documentation for a complete list of settings. The following are common settings to include:

  {
    "destinationLocation": "DESTINATION_LOCATION",
    "destinationCustomPlacementConfig": {
      "dataLocations": [LOCATIONS, ...]
    },
    "validateOnly": "true"
  }

  Where:
  - DESTINATION_LOCATION is the destination location of the bucket.
  - LOCATIONS is a list of location codes to be used for the configurable dual-region.
  - validateOnly is set to true to perform a dry run.
- Use cURL to call the JSON API:

  curl -X POST --data-binary @JSON_FILE_NAME \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/relocate"

  Where:
  - JSON_FILE_NAME is the name of the JSON file you created.
  - BUCKET_NAME is the name of the bucket you want to relocate.

After you initiate a dry run, a long-running operation starts. The dry run succeeds when the following conditions are met:
- The dry run reports no errors.
- The operations resource returns a done field value of true:

  {
    "kind": "storage#operation",
    "name": "projects/_/buckets/bucket/operations/operation_id",
    "metadata": {
      "@type": OperationMetadataType,
      metadata OperationMetadata
    },
    "done": "true",
    "response": {
      "@type": ResponseResourceType,
      response ResponseResource
    }
  }

If the dry run reveals any issues, address them before proceeding to the Initiate incremental data copy step.
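To track the dry run with the JSON API, you can poll the operation until done is true. A minimal sketch, assuming the Operations: get endpoint shown below, a hypothetical bucket name, and the OPERATION_ID returned by the relocate call:

curl -X GET \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.googleapis.com/storage/v1/b/my-bucket/operations/OPERATION_ID"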
Initiate incremental data copy
Command line
Initiate the bucket relocation operation:
gcloud storage buckets relocate gs://BUCKET_NAME --location=LOCATION
Where:
- BUCKET_NAME is the name of the bucket that you want to relocate.
- LOCATION is the destination location of the bucket.
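For example, the following command starts relocating a hypothetical bucket named my-bucket to us-central1:

gcloud storage buckets relocate gs://my-bucket --location=us-central1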
REST APIs
JSON API
- Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.
- Create a JSON file that contains the settings for the bucket. See the Buckets: relocate documentation for a complete list of settings. The following are common settings to include:

  {
    "destinationLocation": "DESTINATION_LOCATION",
    "destinationCustomPlacementConfig": {
      "dataLocations": [LOCATIONS, ...]
    },
    "validateOnly": "false"
  }

  Where:
  - DESTINATION_LOCATION is the destination location of the bucket.
  - LOCATIONS is a list of location codes to be used for the configurable dual-region.
  - validateOnly is set to false to initiate the incremental data copy step of bucket relocation.
- Use cURL to call the JSON API:

  curl -X POST --data-binary @JSON_FILE_NAME \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/relocate"

  Where:
  - JSON_FILE_NAME is the name of the JSON file you created.
  - BUCKET_NAME is the name of the bucket you want to relocate.
Monitor incremental data copy
The bucket relocation process is a long-running operation that you should monitor to track its progress. You can regularly check the long-running operations list to see the status of the incremental data copy step. For information about how to get the details of a long-running operation, or how to list or cancel long-running operations, see Use long-running operations in Cloud Storage.
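For example, you can list the relocation operations for a bucket with the gcloud CLI; the bucket name here is hypothetical:

gcloud storage operations list gs://my-bucket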
The following example shows the output generated by an incremental data copy operation:
done: false
kind: storage#operation
metadata:
  '@type': type.googleapis.com/google.storage.control.v2.RelocateBucketMetadata
  commonMetadata:
    createTime: '2024-10-21T04:26:59.666Z'
    endTime: '2024-12-29T23:39:53.340Z'
    progressPercent: 99
    requestedCancellation: false
    type: relocate-bucket
    updateTime: '2024-10-21T04:27:03.289Z'
  destinationLocation: US-CENTRAL1
  finalizationState: 'READY'
  progress:
    byteProgressPercent: 100
    discoveredBytes: 200
    remainingBytes: 0
    discoveredObjectCount: 10
    remainingObjectCount: 8
    objectProgressPercent: 100
    discoveredSyncCount: 8
    remainingSyncCount: 0
    syncProgressPercent: 100
  relocationState: SYNCING
  sourceLocation: US
  validateOnly: false
  estimatedWriteDowntimeDuration: '7200s'
  writeDowntimeExpireTime: '2024-12-30T10:34:01.786Z'
name: projects/_/buckets/my-bucket1/operations/Bar7-1b0khdew@nhenUQRTF_R-Kk4dQ5V1f8fzezkFcPh3XMvlTqJ6xhnqJ1h_QXFIeAirrEqkjgu4zPKSRD6WSSG5UGXil6w
response:
  '@type': type.googleapis.com/google.storage.control.v2.RelocateBucketResponse
  selfLink: https://storage.googleusercontent.com/storage/v1_ds/b/my-bucket1/operations/Bar7-1b0khdew@nhenUQRTF_R-Kk4dQ5V1f8fzezkFcPh3XMvlTqJ6xhnqJ1h_QXFIeAirrEqkjgu4zPKSRD6WSSG5UGXil6w
The following list describes the key fields in the output generated by the incremental data copy operation:
- done: Whether the long-running operation has completed. Values: true, false.
- kind: The kind of item this is. For operations, this value is always storage#operation.
- metadata: Metadata associated with the long-running operation.
- metadata.@type: The type of the operation metadata.
- metadata.commonMetadata: Metadata common to long-running operations.
- metadata.commonMetadata.createTime: The time the long-running operation was created.
- metadata.commonMetadata.endTime: The time the long-running operation finished.
- metadata.commonMetadata.requestedCancellation: Whether cancellation of the operation has been requested. Values: true, false.
- metadata.commonMetadata.type: The type of the long-running operation, such as relocate-bucket.
- metadata.commonMetadata.updateTime: The time the long-running operation was last updated.
- metadata.destinationLocation: The destination location of the bucket.
- metadata.progress.discoveredBytes: The number of bytes discovered so far for copying.
- metadata.progress.discoveredObjectCount: The number of objects discovered so far for copying.
- metadata.progress.discoveredSyncCount: The number of sync operations discovered so far.
- metadata.progress.remainingBytes: The number of bytes that remain to be copied.
- metadata.progress.remainingObjectCount: The number of objects that remain to be copied.
- metadata.progress.remainingSyncCount: The number of sync operations that remain to be performed.
- metadata.relocationState: The overall state of the bucket relocation. Values:
  - SYNCING: Indicates that the incremental data copy step is actively copying objects from the source bucket to the destination bucket.
  - FINALIZING: Indicates that the finalization step has been initiated.
  - FAILED: Indicates that the incremental data copy step encountered an error and did not complete successfully.
  - SUCCEEDED: Indicates that the incremental data copy step has completed successfully.
  - CANCELLED: Indicates that the incremental data copy step was canceled.
- metadata.sourceLocation: The source location of the bucket.
- metadata.validateOnly: Whether the operation is a dry run. Values: true, false.
- metadata.estimatedWriteDowntimeDuration: The estimated duration of the write downtime, such as 7200s. This field is populated once finalizationState is READY.
- metadata.writeDowntimeExpireTime: The time at which the write downtime expires.
- name: The name of the long-running operation. Format: projects/_/buckets/bucket-name/operations/operation-id.
- response: The response of the long-running operation.
- response.@type: The type of the response resource.
- selfLink: A link to the long-running operation.
If you encounter issues when interacting with other Cloud Storage features, see Limitations.
Initiate the final synchronization step
The final synchronization step involves a period where you cannot perform write operations on the bucket. We recommend that you schedule the final synchronization step at a time that minimizes disruption to your applications.
Before you proceed, confirm that the bucket is fully prepared by checking the finalizationState value in the output of the incremental data copy step. The finalizationState value must be READY to proceed.
If you initiate the final synchronization step prematurely, the command returns the error message The relocate bucket operation is not ready to advance to finalization running state, but the relocation process continues.
We recommend that you wait until the progressPercent value is 99 before initiating the final synchronization step.
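If you want to automate this check, the following minimal shell sketch polls the operation until finalizationState is READY. It assumes the gcloud storage operations describe command, plus a hypothetical bucket name, operation ID, and polling interval:

while true; do
  state=$(gcloud storage operations describe \
    "projects/_/buckets/my-bucket/operations/OPERATION_ID" \
    --format="value(metadata.finalizationState)")
  echo "finalizationState: ${state}"
  if [ "${state}" = "READY" ]; then
    break
  fi
  sleep 60  # poll once a minute
done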
Command line
Initiate the final synchronization step of the bucket relocation operation once the finalizationState value is READY:
gcloud storage buckets relocate --finalize --operation=projects/_/buckets/BUCKET_NAME/operations/OPERATION_ID
Where:
- BUCKET_NAME is the name of the bucket that you want to relocate.
- OPERATION_ID is the ID of the long-running operation, which is returned in the response of methods you call. For example, the following response is returned from calling gcloud storage operations list, and the long-running operation ID is AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74:
  name: projects/_/buckets/my-bucket/operations/AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74
Set the --ttl flag to have greater control over the relocation process. For example:
gcloud storage buckets relocate --finalize --ttl TTL_DURATION --operation=projects/_/buckets/BUCKET_NAME/operations/OPERATION_ID
Where:
TTL_DURATION is the time to live (TTL) for the write downtime phase during the relocation process. It's expressed as a string, such as 12h for 12 hours. The TTL_DURATION value determines the maximum allowed duration of the write downtime phase. If the write downtime exceeds this limit, the relocation process automatically reverts to the incremental copy step, and write operations to the bucket are re-enabled. The value must be within the range of 6h (6 hours) to 48h (48 hours). If not specified, the default value is 12h (12 hours).
REST APIs
JSON API
- Have the gcloud CLI installed and initialized, which lets you generate an access token for the Authorization header.
- Create a JSON file that contains the settings for bucket relocation. See the Buckets: advanceRelocateBucket documentation for a complete list of settings. The following are common settings to include:

  {
    "expireTime": "EXPIRE_TIME",
    "ttl": "TTL_DURATION"
  }

  Where:
  - EXPIRE_TIME is the time at which the write downtime expires.
  - TTL_DURATION is the time to live (TTL) for the write downtime phase during the relocation process. It's expressed as a string, such as 12h for 12 hours. The TTL_DURATION value determines the maximum allowed duration of the write downtime phase. If the write downtime exceeds this limit, the relocation process automatically reverts to the incremental copy step, and write operations to the bucket are re-enabled. The value must be within the range of 6h (6 hours) to 48h (48 hours). If not specified, the default value is 12h (12 hours).
- Use cURL to call the JSON API:

  curl -X POST --data-binary @JSON_FILE_NAME \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    "https://storage.googleapis.com/storage/v1/b/BUCKET_NAME/operations/OPERATION_ID/advanceRelocateBucket"

  Where:
  - JSON_FILE_NAME is the name of the JSON file you created.
  - BUCKET_NAME is the name of the bucket you want to relocate.
  - OPERATION_ID is the ID of the long-running operation, which is returned in the response of methods you call. For example, the following response is returned from calling Operations: list, and the long-running operation ID is AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74.
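As a worked example, the following sketch initiates finalization with a 12-hour write downtime TTL, using the duration format described above. The bucket name, operation ID, and file name are hypothetical:

cat > advance.json <<'EOF'
{
  "ttl": "12h"
}
EOF

curl -X POST --data-binary @advance.json \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://storage.googleapis.com/storage/v1/b/my-bucket/operations/AbCJYd8jKT1n-Ciw1LCNXIcubwvij_TdqO-ZFjuF2YntK0r74/advanceRelocateBucket"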
Validate the bucket relocation process
After you initiate a relocation, verify that the data transfer completed successfully. Validate the success of the relocation process using the following methods:
- Poll long-running operations: Bucket relocation is a long-running operation. You can poll the long-running operation using the operation ID to monitor its progress and confirm its successful completion by verifying the success state. This involves periodically querying the operation's status until it reaches a terminal state. For information about monitoring long-running operations, see Use long-running operations in Cloud Storage.
- Analyze Cloud Audit Logs entries: Cloud Audit Logs provides a detailed record of events and operations in your Google Cloud environment. You can analyze the Cloud Audit Logs entries associated with the relocation to validate its success. Analyze the logs for any errors, warnings, or unexpected behavior that might indicate issues during the transfer. For information about viewing Cloud Audit Logs entries, see Viewing audit logs. The following log entries help you determine whether your move succeeded or failed:
  - Successful relocation: Relocate bucket succeeded. All existing objects are now in the new placement configuration.
  - Failed relocation: Relocate bucket has failed. Bucket location remains unchanged.
  Using Pub/Sub notifications, you can also set up alerts that notify you when a specific success or failure event appears in the logs. For information about setting up Pub/Sub notifications, see Configure Pub/Sub notifications for Cloud Storage. A sketch of a log query follows this list.
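For example, you might search recent audit log entries for a bucket with the gcloud CLI. This sketch assumes the gcs_bucket resource type used by Cloud Storage audit logs and a hypothetical bucket name; adjust the filter to match your log entries:

gcloud logging read \
  'resource.type="gcs_bucket" AND resource.labels.bucket_name="my-bucket"' \
  --limit=20 --freshness=7d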
Complete the post bucket relocation tasks
After you have successfully relocated your bucket, complete the following steps:
- Optional: Restore any tag-based access controls on your bucket.
- Existing inventory report configurations are not preserved during the relocation process, so you need to manually recreate them. For information about creating an inventory report configuration, see Create an inventory report configuration.
- Update your infrastructure-as-code configurations, such as Terraform and Google Kubernetes Engine Config Connector resources, to specify the bucket's new location.
- Regional endpoints are tied to specific locations, so if your applications use them, modify your application code to reflect the bucket's new endpoint, as shown in the sketch after this list.
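For example, if your bucket moved to us-central1 and your code calls a regional endpoint, requests need to target the endpoint for the new region. This sketch assumes the storage.REGION.rep.googleapis.com endpoint scheme for regional endpoints and a hypothetical bucket name:

curl -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://storage.us-central1.rep.googleapis.com/storage/v1/b/my-bucket"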
How to handle failed bucket relocation operations
Consider the following factors before handling failed bucket relocation operations:
-  A failed bucket relocation might leave obsolete resources, such as temporary files or incomplete data copies, at the destination. You must wait 7 to 14 days before initiating another bucket relocation to the same destination. You can initiate a bucket relocation to a different destination immediately. 
-  If the destination location is not the optimal location for your data, you might want to roll back the relocation. However, you cannot initiate a relocation immediately. A waiting period of up to 14 days is required before you can initiate the relocation process again. This restriction is in place to ensure stability and prevent data conflicts. 
What's next
- Learn about bucket relocation.

