For more information about creating and using tables, including getting table
information, listing tables, and controlling access to table data, see Creating and using tables.
Before you begin
Grant Identity and Access Management (IAM) roles that give users the necessary permissions
to perform each task in this document. The permissions required to perform a
task (if any) are listed in the "Required permissions" section of the task.
To get the permissions that you need to update table properties,
ask your administrator to grant you the Data Editor (`roles/bigquery.dataEditor`) IAM role on the table.
For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to update table properties.
To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to update table properties:
You can't add a description when you create a table using the
Google Cloud console. After the table is created, you can add a
description on the **Details** page.
1. In the **Explorer** panel, expand your project and dataset, then select
   the table.
2. In the details panel, click **Details**.
3. In the **Description** section, click the pencil icon to edit the description.
4. Enter a description in the box, and click **Update** to save.
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Issue the `bq update` command with the `--description` flag. If you are
updating a table in a project other than your default project, add the
project ID to the dataset name in the following format: `project_id:dataset`.
- `description`: the text describing the table, in quotes
- `project_id`: your project ID
- `dataset`: the name of the dataset that contains the table you're updating
- `table`: the name of the table you're updating
Examples:
To change the description of the `mytable` table in the `mydataset` dataset to
"Description of mytable", enter the following command. The `mydataset` dataset is in
your default project.
```
bq update --description "Description of mytable" mydataset.mytable
```
To change the description of the `mytable` table in the `mydataset` dataset to
"Description of mytable", enter the following command. The `mydataset` dataset is in
the `myotherproject` project, not your default project.
```
bq update \
--description "Description of mytable" \
myotherproject:mydataset.mytable
```
API
Call the `tables.patch` method and use the `description` property in the
table resource to update the table's description. Because the `tables.update` method
replaces the entire table resource, the `tables.patch` method is preferred.
```go
import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// updateTableDescription demonstrates how to fetch a table's metadata and
// update the Description metadata.
func updateTableDescription(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	tableRef := client.Dataset(datasetID).Table(tableID)
	meta, err := tableRef.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.TableMetadataToUpdate{
		Description: "Updated description.",
	}
	if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}
```
```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;

public class UpdateTableDescription {

  public static void runUpdateTableDescription() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    String newDescription = "this is the new table description";
    updateTableDescription(datasetName, tableName, newDescription);
  }

  public static void updateTableDescription(
      String datasetName, String tableName, String newDescription) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setDescription(newDescription).build());
      System.out.println("Table description updated successfully to " + newDescription);
    } catch (BigQueryException e) {
      System.out.println("Table description was not updated \n" + e.toString());
    }
  }
}
```
```python
# from google.cloud import bigquery
# client = bigquery.Client()
# project = client.project
# dataset_ref = bigquery.DatasetReference(project, dataset_id)
# table_ref = dataset_ref.table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.description == "Original description."
table.description = "Updated description."

table = client.update_table(table, ["description"])  # API request

assert table.description == "Updated description."
```
Update a table's expiration time
You can set a default table expiration time at the dataset level, or you can set
a table's expiration time when the table is created. A table's expiration time
is often referred to as "time to live" or TTL.
When a table expires, it is deleted along with all of the data it contains.
If necessary, you can undelete the expired table within the time travel window
specified for the dataset; see Restore deleted tables for more
information.
If you set the expiration when the table is created, the dataset's default table
expiration is ignored. If you do not set a default table expiration at the
dataset level, and you do not set a table expiration when the table is created,
the table never expires and you must delete the table
manually.
At any point after the table is created, you can update the table's expiration
time in the following ways:
- Using the Google Cloud console.
- Using a data definition language (DDL) `ALTER TABLE` statement.
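As a sketch of the DDL approach, an `ALTER TABLE ... SET OPTIONS` statement can set the expiration timestamp (the table name and interval here are illustrative):

```sql
-- Set mydataset.mytable to expire five days from now (illustrative names).
ALTER TABLE mydataset.mytable
SET OPTIONS (
  expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 5 DAY)
);
```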
You can't add an expiration time when you create a table using the
Google Cloud console. After a table is created, you can add or update a
table expiration on the **Table Details** page.
1. In the **Explorer** panel, expand your project and dataset, then select
   the table.
2. In the details panel, click **Details**.
3. Click the pencil icon next to **Table info**.
4. For **Table expiration**, select **Specify date**. Then select the
   expiration date using the calendar widget.
5. Click **Update** to save. The updated expiration time appears in the
   **Table info** section.
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Issue the `bq update` command with the `--expiration` flag. If you are
updating a table in a project other than your default project,
add the project ID to the dataset name in the following format: `project_id:dataset`.
- `integer`: the default lifetime (in seconds) for the table. The minimum
  value is 3600 seconds (one hour). The expiration time evaluates to the
  current time plus the integer value. If you specify `0`, the table
  expiration is removed, and the table never expires. Tables with no
  expiration must be manually deleted.
- `project_id`: your project ID.
- `dataset`: the name of the dataset that contains the table you're updating.
- `table`: the name of the table you're updating.
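As a quick sanity check on the `--expiration` values used in the examples below, a 5-day lifetime expressed in seconds works out as:

```python
# The --expiration flag takes a lifetime in seconds.
days = 5
seconds = days * 24 * 60 * 60  # 5 days expressed in seconds
print(seconds)  # 432000
```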
Examples:
To update the expiration time of the `mytable` table in the `mydataset` dataset to 5 days
(432000 seconds), enter the following command. The `mydataset` dataset is in your
default project.
```
bq update --expiration 432000 mydataset.mytable
```
To update the expiration time of the `mytable` table in the `mydataset` dataset to 5 days
(432000 seconds), enter the following command. The `mydataset` dataset is in the
`myotherproject` project, not your default project.

```
bq update --expiration 432000 myotherproject:mydataset.mytable
```
Call the `tables.patch` method and use the `expirationTime` property in the
table resource to update the table expiration, in milliseconds. Because the
`tables.update` method replaces the entire table resource, the `tables.patch`
method is preferred.
```go
import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

// updateTableExpiration demonstrates setting the table expiration of a table
// to a specific point in time in the future, at which time it will be deleted.
func updateTableExpiration(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	tableRef := client.Dataset(datasetID).Table(tableID)
	meta, err := tableRef.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.TableMetadataToUpdate{
		ExpirationTime: time.Now().Add(time.Duration(5*24) * time.Hour), // table expiration in 5 days.
	}
	if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}
```
```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;
import java.util.concurrent.TimeUnit;

public class UpdateTableExpiration {

  public static void runUpdateTableExpiration() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    // Update table expiration to one day.
    Long newExpiration = TimeUnit.MILLISECONDS.convert(1, TimeUnit.DAYS);
    updateTableExpiration(datasetName, tableName, newExpiration);
  }

  public static void updateTableExpiration(
      String datasetName, String tableName, Long newExpiration) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setExpirationTime(newExpiration).build());
      System.out.println("Table expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Table expiration was not updated \n" + e.toString());
    }
  }
}
```
```javascript
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateTableExpiration() {
  // Updates a table's expiration.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = 'my_dataset', // Existing dataset
  // const tableId = 'my_table', // Existing table
  // const expirationTime = Date.now() + 1000 * 60 * 60 * 24 * 5 // 5 days from current time in ms

  // Retrieve current table metadata
  const table = bigquery.dataset(datasetId).table(tableId);
  const [metadata] = await table.getMetadata();

  // Set new table expiration to 5 days from current time
  metadata.expirationTime = expirationTime.toString();
  const [apiResponse] = await table.setMetadata(metadata);
  const newExpirationTime = apiResponse.expirationTime;
  console.log(`${tableId} expiration: ${newExpirationTime}`);
}
```
```python
# Copyright 2022 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import datetime


def update_table_expiration(table_id, expiration):
    orig_table_id = table_id
    orig_expiration = expiration

    from google.cloud import bigquery

    client = bigquery.Client()

    # TODO(dev): Change table_id to the full name of the table you want to update.
    table_id = "your-project.your_dataset.your_table_name"

    # TODO(dev): Set table to expire the desired number of days from now.
    expiration = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(
        days=5
    )

    table_id = orig_table_id
    expiration = orig_expiration

    table = client.get_table(table_id)  # Make an API request.
    table.expires = expiration
    table = client.update_table(table, ["expires"])  # API request

    print(f"Updated {table_id}, expires {table.expires}.")
```
To update the default dataset partition expiration time:
```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;
import java.util.concurrent.TimeUnit;

// Sample to update partition expiration on a dataset.
public class UpdateDatasetPartitionExpiration {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    // Set the default partition expiration (applies to new tables, only) in
    // milliseconds. This example sets the default expiration to 90 days.
    Long newExpiration = TimeUnit.MILLISECONDS.convert(90, TimeUnit.DAYS);
    updateDatasetPartitionExpiration(datasetName, newExpiration);
  }

  public static void updateDatasetPartitionExpiration(String datasetName, Long newExpiration) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDefaultPartitionExpirationMs(newExpiration).build());
      System.out.println(
          "Dataset default partition expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Dataset partition expiration was not updated \n" + e.toString());
    }
  }
}
```
```python
# Copyright 2019 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


def update_dataset_default_partition_expiration(dataset_id: str) -> None:
    from google.cloud import bigquery

    # Construct a BigQuery client object.
    client = bigquery.Client()

    # TODO(developer): Set dataset_id to the ID of the dataset to fetch.
    # dataset_id = 'your-project.your_dataset'

    dataset = client.get_dataset(dataset_id)  # Make an API request.

    # Set the default partition expiration (applies to new tables, only) in
    # milliseconds. This example sets the default expiration to 90 days.
    dataset.default_partition_expiration_ms = 90 * 24 * 60 * 60 * 1000

    dataset = client.update_dataset(
        dataset, ["default_partition_expiration_ms"]
    )  # Make an API request.

    print(
        "Updated dataset {}.{} with new default partition expiration {}".format(
            dataset.project, dataset.dataset_id, dataset.default_partition_expiration_ms
        )
    )
```
When you add a `NUMERIC` or `BIGNUMERIC` field to a table and do not specify
a rounding mode, the rounding mode is automatically set to the table's default
rounding mode. Changing a table's default rounding mode doesn't alter the
rounding mode of existing fields.
Update a table's schema definition
For more information about updating a table's schema definition, see Modifying table schemas.
Rename a table
You can rename a table after it has been created by using the
`ALTER TABLE RENAME TO` statement.
The following example renames `mytable` to `mynewtable`:
```sql
ALTER TABLE mydataset.mytable RENAME TO mynewtable;
```
Limitations on renaming tables
- If you want to rename a table that has data streaming into it, you must stop
  the streaming and wait for BigQuery to indicate that streaming
  is not in use.
- A table can usually be renamed within 72 hours of the last streaming
  operation, but it might take longer.
- Existing table ACLs and row access policies are preserved, but table ACL and
  row access policy updates made during the table rename are not preserved.
- You can't concurrently rename a table and run a DML statement on that table.
- Call the `jobs.insert` API method and configure a `copy` job.
- Use the client libraries.
Limitations on copying tables
Table copy jobs are subject to the following limitations:
- When you copy a table, the name of the destination table must adhere to the
  same naming conventions as when you create a table.
- Table copies are subject to BigQuery limits on copy jobs.
- The Google Cloud console supports copying only one table at a time. You
  can't overwrite an existing table in the destination dataset. The table must
  have a unique name in the destination dataset.
- Copying multiple source tables into a destination table is not supported by
  the Google Cloud console.
- When copying multiple source tables to a destination table using the API,
  bq command-line tool, or the client libraries, all source tables must have
  identical schemas, including any partitioning or clustering.
- Certain table schema updates, such as dropping or renaming columns, can
  cause tables to have apparently identical schemas but different internal
  representations. This might cause a table copy job to fail with the error
  `Maximum limit on diverging physical schemas reached`. In this case, you
  can use the `CREATE TABLE LIKE` statement to ensure that your source
  table's schema matches the destination table's schema exactly.
- The time that BigQuery takes to copy tables might vary
  significantly across different runs because the underlying storage is
  managed dynamically.
- You can't copy and append a source table to a destination table that has
  more columns than the source table, if the additional columns have default
  values. Instead, you can run `INSERT destination_table SELECT * FROM
  source_table` to copy over the data.
- If the copy operation overwrites an existing table, then the table-level
  access for the existing table is maintained. Tags from the source table
  aren't copied to the overwritten table.
- If the copy operation creates a new table, then the table-level access for
  the new table is determined by the access policies of the dataset in which
  the new table is created. Additionally, tags are copied from the source
  table to the new table.
- When you copy multiple source tables to a destination table, all source
  tables must have identical tags.
Required roles
To perform the tasks in this document, you need the following permissions.
Roles to copy tables and partitions
To get the permissions that you need to copy tables and partitions,
ask your administrator to grant you the Data Editor (`roles/bigquery.dataEditor`) IAM role on the source and destination datasets.
For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the permissions required to copy tables and partitions.
To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to copy tables and partitions:
- `bigquery.tables.getData` on the source and destination datasets
- `bigquery.tables.get` on the source and destination datasets
To get the permission that you need to run a copy job,
ask your administrator to grant you the Job User (`roles/bigquery.jobUser`) IAM role on the source and destination datasets.
For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the `bigquery.jobs.create` permission,
which is required to run a copy job.
You can copy a single table in the following ways:
- Using the Google Cloud console.
- Using the bq command-line tool's `bq cp` command.
- Using a data definition language (DDL) `CREATE TABLE COPY` statement.
- Calling the `jobs.insert` API method, configuring a `copy` job, and specifying the `sourceTable` property.
- Using the client libraries.
The Google Cloud console and the `CREATE TABLE COPY` statement support only
one source table and one destination table in a copy job. To copy multiple
source tables to a destination table, you must use the bq command-line tool
or the API.
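For reference, a `CREATE TABLE COPY` statement for the single-table case might look like the following sketch (the table names are illustrative):

```sql
-- Copy mydataset.mytable into a new table mydataset2.mytable2.
CREATE TABLE mydataset2.mytable2
COPY mydataset.mytable;
```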
To copy a single source table:
Console
1. In the **Explorer** panel, expand your project and dataset, then select
   the table.
2. In the details panel, click **Copy table**.
3. In the **Copy table** dialog, under **Destination**:
   - For **Project name**, choose the project that will store the copied
     table.
   - For **Dataset name**, select the dataset where you want to store
     the copied table. The source and destination datasets must be in the
     same location.
   - For **Table name**, enter a name for the new table. The name must
     be unique in the destination dataset. You can't overwrite an existing
     table in the destination dataset using the Google Cloud console. For
     more information about table name requirements, see Table naming.
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Issue the `bq cp` command. Optional flags can be used to control the write
disposition of the destination table:

- `-a` or `--append_table` appends the data from the source table to an
  existing table in the destination dataset.
- `-f` or `--force` overwrites an existing table in the destination dataset
  and doesn't prompt you for confirmation.
- `-n` or `--no_clobber` returns the following error message if the table
  exists in the destination dataset: `Table 'project_id:dataset.table'
  already exists, skipping.` If `-n` is not specified, the default behavior
  is to prompt you to choose whether to replace the destination table.
- `--destination_kms_key` is the customer-managed Cloud KMS key
  used to encrypt the destination table.
If the source or destination dataset is in a project other than your default
project, add the project ID to the dataset names in the following format:
`project_id:dataset`.

(Optional) Supply the `--location` flag and set the value to your location.
- `location`: the name of your location. The `--location` flag is
  optional. For example, if you are using BigQuery in the
  Tokyo region, you can set the flag's value to `asia-northeast1`. You can
  set a default value for the location using the `.bigqueryrc` file.
- `project_id`: your project ID.
- `dataset`: the name of the source or destination dataset.
- `source_table`: the table you're copying.
- `destination_table`: the name of the table in the destination dataset.
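For illustration, a `.bigqueryrc` file that sets a default location and project might contain global flag lines like the following (the values are hypothetical):

```
--location=asia-northeast1
--project_id=myproject
```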
Examples:
To copy the `mydataset.mytable` table to the `mydataset2.mytable2` table,
enter the following command. Both datasets are in your default project.
```
bq cp mydataset.mytable mydataset2.mytable2
```
To copy the `mydataset.mytable` table and to overwrite a destination table
with the same name, enter the following command. The source dataset is in
your default project. The destination dataset is in the `myotherproject`
project. The `-f` shortcut is used to overwrite the destination table
without a prompt.

```
bq cp -f mydataset.mytable myotherproject:myotherdataset.mytable
```
To copy the `mydataset.mytable` table and to return an error if the
destination dataset contains a table with the same name, enter the following
command. The source dataset is in your default project. The destination
dataset is in the `myotherproject` project. The `-n` shortcut is used to
prevent overwriting a table with the same name.

```
bq cp -n mydataset.mytable myotherproject:myotherdataset.mytable
```
To copy the `mydataset.mytable` table and to append the data to a
destination table with the same name, enter the following command. The
source dataset is in your default project. The destination dataset is in
the `myotherproject` project. The `-a` shortcut is used to append to the
destination table.
```
bq cp -a mydataset.mytable myotherproject:myotherdataset.mytable
```
API
You can copy an existing table through the API by calling the
`bigquery.jobs.insert` method and configuring a `copy` job. Specify your
location in the `location` property in the `jobReference` section of the
job resource.
You must specify the following values in your job configuration:
Where `sourceTable` provides information about the table to be
copied, `destinationTable` provides information about the new
table, `createDisposition` specifies whether to create the
table if it doesn't exist, and `writeDisposition` specifies
whether to overwrite or append to an existing table.
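A minimal request body for such a copy job might look like the following sketch (the project, dataset, and table names are placeholders):

```json
{
  "jobReference": {
    "projectId": "my-project",
    "location": "US"
  },
  "configuration": {
    "copy": {
      "sourceTable": {
        "projectId": "my-project",
        "datasetId": "mydataset",
        "tableId": "mytable"
      },
      "destinationTable": {
        "projectId": "my-project",
        "datasetId": "mydataset2",
        "tableId": "mytable2"
      },
      "createDisposition": "CREATE_IF_NEEDED",
      "writeDisposition": "WRITE_TRUNCATE"
    }
  }
}
```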
```csharp
using Google.Apis.Bigquery.v2.Data;
using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryCopyTable
{
    public void CopyTable(
        string projectId = "your-project-id",
        string destinationDatasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        TableReference sourceTableRef = new TableReference()
        {
            TableId = "shakespeare",
            DatasetId = "samples",
            ProjectId = "bigquery-public-data"
        };
        TableReference destinationTableRef = client.GetTableReference(
            destinationDatasetId, "destination_table");
        BigQueryJob job = client.CreateCopyJob(sourceTableRef, destinationTableRef)
            .PollUntilCompleted() // Wait for the job to complete.
            .ThrowOnAnyError();

        // Retrieve destination table
        BigQueryTable destinationTable = client.GetTable(destinationTableRef);
        Console.WriteLine(
            $"Copied {destinationTable.Resource.NumRows} rows from table "
            + $"{sourceTableRef.DatasetId}.{sourceTableRef.TableId} "
            + $"to {destinationTable.FullyQualifiedId}."
        );
    }
}
```
```go
import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// copyTable demonstrates copying a table from a source to a destination, and
// allowing the copy to overwrite existing data by using truncation.
func copyTable(projectID, datasetID, srcID, dstID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// srcID := "sourcetable"
	// dstID := "destinationtable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	dataset := client.Dataset(datasetID)
	copier := dataset.Table(dstID).CopierFrom(dataset.Table(srcID))
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}
```
```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;

public class CopyTable {

  public static void runCopyTable() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DESTINATION_DATASET_NAME";
    String destinationTableId = "MY_DESTINATION_TABLE_NAME";
    String sourceDatasetName = "MY_SOURCE_DATASET_NAME";
    String sourceTableId = "MY_SOURCE_TABLE_NAME";
    copyTable(sourceDatasetName, sourceTableId, destinationDatasetName, destinationTableId);
  }

  public static void copyTable(
      String sourceDatasetName,
      String sourceTableId,
      String destinationDatasetName,
      String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId sourceTable = TableId.of(sourceDatasetName, sourceTableId);
      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(destinationTable, sourceTable).build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy table due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}
```
```javascript
// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTable() {
  // Copies src_dataset:src_table to dest_dataset:dest_table.

  /**
   * TODO(developer): Uncomment the following lines before running the sample
   */
  // const srcDatasetId = "my_src_dataset";
  // const srcTableId = "my_src_table";
  // const destDatasetId = "my_dest_dataset";
  // const destTableId = "my_dest_table";

  // Copy the table contents into another table
  const [job] = await bigquery
    .dataset(srcDatasetId)
    .table(srcTableId)
    .copy(bigquery.dataset(destDatasetId).table(destTableId));

  console.log(`Job ${job.id} completed.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    throw errors;
  }
}
```
```php
use Google\Cloud\BigQuery\BigQueryClient;
use Google\Cloud\Core\ExponentialBackoff;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $sourceTableId = 'The BigQuery table ID to copy from';
// $destinationTableId = 'The BigQuery table ID to copy to';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$sourceTable = $dataset->table($sourceTableId);
$destinationTable = $dataset->table($destinationTableId);
$copyConfig = $sourceTable->copy($destinationTable);
$job = $sourceTable->runJob($copyConfig);

// poll the job until it is complete
$backoff = new ExponentialBackoff(10);
$backoff->execute(function () use ($job) {
    print('Waiting for job to complete' . PHP_EOL);
    $job->reload();
    if (!$job->isComplete()) {
        throw new Exception('Job has not yet completed', 500);
    }
});

// check if the job has errors
if (isset($job->info()['status']['errorResult'])) {
    $error = $job->info()['status']['errorResult']['message'];
    printf('Error running job: %s' . PHP_EOL, $error);
} else {
    print('Table copied successfully' . PHP_EOL);
}
```
```python
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set source_table_id to the ID of the original table.
# source_table_id = "your-project.source_dataset.source_table"

# TODO(developer): Set destination_table_id to the ID of the destination table.
# destination_table_id = "your-project.destination_dataset.destination_table"

job = client.copy_table(source_table_id, destination_table_id)
job.result()  # Wait for the job to complete.

print("A copy of the table created.")
```
Copy multiple source tables
You can copy multiple source tables to a destination table in the following
ways:
- Using the bq command-line tool's `bq cp` command.
- Calling the `jobs.insert` method, configuring a `copy` job, and specifying the `sourceTables` property.
- Using the client libraries.
All source tables must have identical schemas and tags,
and only one destination table is allowed.
Source tables must be specified as a comma-separated list. You can't use
wildcards when you copy multiple source tables.
To copy multiple source tables, select one of the following choices:
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Issue the `bq cp` command and include multiple source tables as a
comma-separated list. Optional flags can be used to control the write
disposition of the destination table:

- `-a` or `--append_table` appends the data from the source tables to an
  existing table in the destination dataset.
- `-f` or `--force` overwrites an existing destination table in the
  destination dataset and doesn't prompt you for confirmation.
- `-n` or `--no_clobber` returns the following error message if the table
  exists in the destination dataset: `Table 'project_id:dataset.table'
  already exists, skipping.` If `-n` is not specified, the default
  behavior is to prompt you to choose whether to replace the destination
  table.
- `--destination_kms_key` is the customer-managed Cloud Key Management Service
  key used to encrypt the destination table.
If the source or destination dataset is in a project other than your default
project, add the project ID to the dataset names in the following format: project_id:dataset.
(Optional) Supply the --location flag and set the value to your location.
location: the name of your location.
The --location flag is
optional. For example, if you are using BigQuery in the
Tokyo region, you can set the flag's value to asia-northeast1. You can
set a default value for the location using the .bigqueryrc file.
project_id: your project ID.
dataset: the name of the source or destination
dataset.
source_table: the table that you're copying.
destination_table: the name of the table in the
destination dataset.
Examples:
To copy the mydataset.mytable table and the mydataset.mytable2 table to the mydataset2.tablecopy table, enter the following command. All datasets are
in your default project.
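The command for this example was lost in formatting; a minimal sketch following the bq cp syntax described above:

```shell
bq cp \
  mydataset.mytable,mydataset.mytable2 \
  mydataset2.tablecopy
```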
To copy the mydataset.mytable table and the mydataset.mytable2 table to the myotherdataset.mytable table and to overwrite a destination table with the
same name, enter the following command. The destination dataset is in the myotherproject project, not your default project. The -f shortcut is
used to overwrite the destination table without a prompt.
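A sketch of that command, with the project-qualified destination as described:

```shell
bq cp -f \
  mydataset.mytable,mydataset.mytable2 \
  myotherproject:myotherdataset.mytable
```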
To copy the myproject:mydataset.mytable table and the myproject:mydataset.mytable2 table and to return an error if the
destination dataset contains a table with the same name, enter the following
command. The destination dataset is in the myotherproject project. The -n shortcut is used to prevent overwriting a table with the same name.
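A sketch of that command; the destination dataset and table names (myotherdataset.mytable) are assumed for illustration since the example text does not state them:

```shell
bq cp -n \
  myproject:mydataset.mytable,myproject:mydataset.mytable2 \
  myotherproject:myotherdataset.mytable
```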
To copy the mydataset.mytable table and the mydataset.mytable2 table and
to append the data to a destination table with the same name, enter the
following command. The source dataset is in your default project. The
destination dataset is in the myotherproject project. The -a shortcut is
used to append to the destination table.
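A sketch of that command; the destination dataset name mydataset is assumed to mirror the source, per "a destination table with the same name":

```shell
bq cp -a \
  mydataset.mytable,mydataset.mytable2 \
  myotherproject:mydataset.mytable
```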
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class CopyMultipleTables {

  public static void runCopyMultipleTables() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DATASET_NAME";
    String destinationTableId = "MY_TABLE_NAME";
    copyMultipleTables(destinationDatasetName, destinationTableId);
  }

  public static void copyMultipleTables(String destinationDatasetName, String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(
                  destinationTable,
                  Arrays.asList(
                      TableId.of(destinationDatasetName, "table1"),
                      TableId.of(destinationDatasetName, "table2")))
              .build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy tables due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTableMultipleSource() {
  // Copy multiple source tables to a given destination.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // sourceTable = 'my_table';
  // destinationTable = 'testing';

  // Create a client
  const dataset = bigquery.dataset(datasetId);

  const metadata = {
    createDisposition: 'CREATE_NEVER',
    writeDisposition: 'WRITE_TRUNCATE',
  };

  // Create table references
  const table = dataset.table(sourceTable);
  const yourTable = dataset.table(destinationTable);

  // Copy table
  const [apiResponse] = await table.copy(yourTable, metadata);
  console.log(apiResponse.configuration.copy);
}
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dest_table_id to the ID of the destination table.
# dest_table_id = "your-project.your_dataset.your_table_name"

# TODO(developer): Set table_ids to the list of the IDs of the original tables.
# table_ids = ["your-project.your_dataset.your_table_name", ...]

job = client.copy_table(table_ids, dest_table_id)  # Make an API request.
job.result()  # Wait for the job to complete.

print("The tables {} have been appended to {}".format(table_ids, dest_table_id))
Copy tables across regions
You can copy a table, table snapshot, or table clone from one BigQuery
region or multi-region to another. This includes any
tables that have a customer-managed Cloud KMS key (CMEK) applied. Doing so
incurs additional charges according to BigQuery
pricing.
To copy a table across regions, select one of the following options:
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, aCloud Shellsession starts and displays a command-line prompt. Cloud Shell is a shell environment
with the Google Cloud CLI
already installed and with values already set for
your current project. It can take a few seconds for the session to initialize.
SOURCE_PROJECT: source project ID. If the source dataset is in a project other than your default
project, add the project ID to the source dataset name.
DESTINATION_PROJECT: destination project ID. If the destination dataset is in a project other than your default
project, add the project ID to the destination dataset name.
SOURCE_DATASET: the name of the source dataset.
DESTINATION_DATASET: the name of the destination dataset.
SOURCE_TABLE: the table that you are copying.
DESTINATION_TABLE: the name of the table in
the destination dataset.
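Assembled from the placeholders above, a sketch of the cross-region copy command:

```shell
bq cp \
  SOURCE_PROJECT:SOURCE_DATASET.SOURCE_TABLE \
  DESTINATION_PROJECT:DESTINATION_DATASET.DESTINATION_TABLE
```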
Examples:
To copy the mydataset_us.mytable table from the us multi-region to
the mydataset_eu.mytable2 table in the eu multi-region, enter the
following command. Both datasets are in your default project.
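A minimal sketch of that command:

```shell
bq cp mydataset_us.mytable mydataset_eu.mytable2
```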
To copy a CMEK-enabled table, you can either create a key using Cloud KMS and specify the key in the bq cp command, or use a destination dataset with a default CMEK configured. The following example specifies the destination CMEK in the bq cp command.
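A sketch of such a command; the Cloud KMS key resource path follows the standard projects/locations/keyRings/cryptoKeys format, and all names in it are placeholders:

```shell
bq cp \
  --destination_kms_key=projects/PROJECT_ID/locations/LOCATION/keyRings/KEY_RING/cryptoKeys/KEY \
  mydataset_us.mytable mydataset_eu.mytable2
```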
using Google.Apis.Bigquery.v2.Data;
using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryCopyTable
{
    public void CopyTable(
        string projectId = "your-project-id",
        string destinationDatasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        TableReference sourceTableRef = new TableReference()
        {
            TableId = "shakespeare",
            DatasetId = "samples",
            ProjectId = "bigquery-public-data"
        };
        TableReference destinationTableRef = client.GetTableReference(
            destinationDatasetId, "destination_table");
        BigQueryJob job = client.CreateCopyJob(sourceTableRef, destinationTableRef)
            .PollUntilCompleted()  // Wait for the job to complete.
            .ThrowOnAnyError();

        // Retrieve destination table
        BigQueryTable destinationTable = client.GetTable(destinationTableRef);
        Console.WriteLine(
            $"Copied {destinationTable.Resource.NumRows} rows from table "
            + $"{sourceTableRef.DatasetId}.{sourceTableRef.TableId} "
            + $"to {destinationTable.FullyQualifiedId}.");
    }
}
import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// copyTable demonstrates copying a table from a source to a destination, and
// allowing the copy to overwrite existing data by using truncation.
func copyTable(projectID, datasetID, srcID, dstID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// srcID := "sourcetable"
	// dstID := "destinationtable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	dataset := client.Dataset(datasetID)
	copier := dataset.Table(dstID).CopierFrom(dataset.Table(srcID))
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;

public class CopyTable {

  public static void runCopyTable() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DESTINATION_DATASET_NAME";
    String destinationTableId = "MY_DESTINATION_TABLE_NAME";
    String sourceDatasetName = "MY_SOURCE_DATASET_NAME";
    String sourceTableId = "MY_SOURCE_TABLE_NAME";
    copyTable(sourceDatasetName, sourceTableId, destinationDatasetName, destinationTableId);
  }

  public static void copyTable(
      String sourceDatasetName,
      String sourceTableId,
      String destinationDatasetName,
      String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId sourceTable = TableId.of(sourceDatasetName, sourceTableId);
      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(destinationTable, sourceTable).build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy table due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTable() {
  // Copies src_dataset:src_table to dest_dataset:dest_table.

  /**
   * TODO(developer): Uncomment the following lines before running the sample
   */
  // const srcDatasetId = "my_src_dataset";
  // const srcTableId = "my_src_table";
  // const destDatasetId = "my_dest_dataset";
  // const destTableId = "my_dest_table";

  // Copy the table contents into another table
  const [job] = await bigquery
    .dataset(srcDatasetId)
    .table(srcTableId)
    .copy(bigquery.dataset(destDatasetId).table(destTableId));

  console.log(`Job ${job.id} completed.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    throw errors;
  }
}
use Google\Cloud\BigQuery\BigQueryClient;
use Google\Cloud\Core\ExponentialBackoff;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $sourceTableId = 'The BigQuery table ID to copy from';
// $destinationTableId = 'The BigQuery table ID to copy to';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$sourceTable = $dataset->table($sourceTableId);
$destinationTable = $dataset->table($destinationTableId);
$copyConfig = $sourceTable->copy($destinationTable);
$job = $sourceTable->runJob($copyConfig);

// poll the job until it is complete
$backoff = new ExponentialBackoff(10);
$backoff->execute(function () use ($job) {
    print('Waiting for job to complete' . PHP_EOL);
    $job->reload();
    if (!$job->isComplete()) {
        throw new Exception('Job has not yet completed', 500);
    }
});
// check if the job has errors
if (isset($job->info()['status']['errorResult'])) {
    $error = $job->info()['status']['errorResult']['message'];
    printf('Error running job: %s' . PHP_EOL, $error);
} else {
    print('Table copied successfully' . PHP_EOL);
}
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set source_table_id to the ID of the original table.
# source_table_id = "your-project.source_dataset.source_table"

# TODO(developer): Set destination_table_id to the ID of the destination table.
# destination_table_id = "your-project.destination_dataset.destination_table"

job = client.copy_table(source_table_id, destination_table_id)
job.result()  # Wait for the job to complete.

print("A copy of the table created.")
Limitations
Copying a table across regions is subject to the following limitations:
You can't copy a table using the Google Cloud console or the TABLE COPY
DDL statement.
You can't copy a table if there are any policy tags on the source table.
You can't copy IAM policies associated with the tables. You can apply the same policies to the destination after the copy is completed.
You can't copy multiple source tables into a single destination table.
You can't copy tables in append mode.
Time travel information is not copied to the destination region.
Table clones are converted to a full copy at the destination region.
View current quota usage
You can view your current usage of query, load, extract, or copy jobs by running
an INFORMATION_SCHEMA query to view metadata about the jobs run over a
specified time period. You can compare your current usage against the quota
limit to determine your quota usage for a particular type of job. The following
example query uses the INFORMATION_SCHEMA.JOBS view to list the number of
query, load, extract, and copy jobs by project:
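The example query was lost in formatting; a sketch of one such query, run through the bq CLI. The region qualifier (region-us) and the one-day time window are assumptions you should adapt to your project:

```shell
bq query --use_legacy_sql=false '
SELECT
  SUM(CASE WHEN job_type = "QUERY"   THEN 1 ELSE 0 END) AS query_cnt,
  SUM(CASE WHEN job_type = "LOAD"    THEN 1 ELSE 0 END) AS load_cnt,
  SUM(CASE WHEN job_type = "EXTRACT" THEN 1 ELSE 0 END) AS extract_cnt,
  SUM(CASE WHEN job_type = "COPY"    THEN 1 ELSE 0 END) AS copy_cnt
FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
WHERE DATE(creation_time) = CURRENT_DATE()'
```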
When you delete a table, any data in the table is also deleted. To automatically
delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
Deleting a table also deletes any permissions associated with this table. When
you recreate a deleted table, you must also manually reconfigure any access permissions previously associated with it.
Required roles
To get the permissions that you need to delete a table,
ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor) IAM role on the dataset.
For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains
the permissions required to delete a table. To see the exact permissions that are
required, expand the Required permissions section:
Required permissions
The following permissions are required to delete a table:
At the bottom of the Google Cloud console, aCloud Shellsession starts and displays a command-line prompt. Cloud Shell is a shell environment
with the Google Cloud CLI
already installed and with values already set for
your current project. It can take a few seconds for the session to initialize.
Use the bq rm command with the --table flag (or -t shortcut) to delete
a table. When you use the bq command-line tool to remove a table, you must confirm the
action. You can use the --force flag (or -f shortcut) to skip
confirmation.
If the table is in a dataset in a project other than your default
project, add the project ID to the dataset name in the following format: project_id:dataset.
bq rm \
-f \
-t \
project_id:dataset.table
Replace the following:
project_id: your project ID
dataset: the name of the dataset that contains the
table
table: the name of the table that you're deleting
Examples:
To delete the mytable table from the mydataset dataset, enter the
following command. The mydataset dataset is in your default project.
bq rm -t mydataset.mytable
To delete the mytable table from the mydataset dataset, enter the
following command. The mydataset dataset is in the myotherproject project, not your default project.
bq rm -t myotherproject:mydataset.mytable
To delete the mytable table from the mydataset dataset, enter the
following command. The mydataset dataset is in your default project. The
command uses the -f shortcut to bypass confirmation.
bq rm -f -t mydataset.mytable
API
Call the tables.delete API method and specify the table to delete using the tableId parameter.
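As a sketch, the underlying REST call can be issued with curl against the standard tables.delete endpoint; the project, dataset, and table IDs are placeholders:

```shell
curl -X DELETE \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  "https://bigquery.googleapis.com/bigquery/v2/projects/PROJECT_ID/datasets/DATASET_ID/tables/TABLE_ID"
```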
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.TableId;

public class DeleteTable {

  public static void runDeleteTable() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    deleteTable(datasetName, tableName);
  }

  public static void deleteTable(String datasetName, String tableName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      boolean success = bigquery.delete(TableId.of(datasetName, tableName));
      if (success) {
        System.out.println("Table deleted successfully");
      } else {
        System.out.println("Table was not found");
      }
    } catch (BigQueryException e) {
      System.out.println("Table was not deleted. \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function deleteTable() {
  // Deletes "my_table" from "my_dataset".

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // const tableId = "my_table";

  // Delete the table
  await bigquery
    .dataset(datasetId)
    .table(tableId)
    .delete();

  console.log(`Table ${tableId} deleted.`);
}
use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $tableId = 'The BigQuery table ID';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$table = $dataset->table($tableId);
$table->delete();
printf('Deleted table %s.%s' . PHP_EOL, $datasetId, $tableId);
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to fetch.
# table_id = 'your-project.your_dataset.your_table'

# If the table does not exist, delete_table raises
# google.api_core.exceptions.NotFound unless not_found_ok is True.
client.delete_table(table_id, not_found_ok=True)  # Make an API request.
print("Deleted table '{}'.".format(table_id))
You can undelete a table within the time travel window specified for the
dataset, including explicit deletions and implicit deletions due to
table expiration. You can configure the time travel window.
To undelete an entire dataset, seeUndelete datasets.
The time travel window can have a duration between two and seven days. After
the time travel window has passed, it is not possible to undelete a table
using any method, including opening a support ticket.
When you restore a table from historical data, tags from
the source table aren't copied to the destination table.
You can restore a table that was deleted but is still within the time travel
window by copying the table to a new table, using the @<time> time decorator.
To copy the table, you can use the bq command-line tool or the client libraries:
Console
You can't undelete a table by using the Google Cloud console.
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, aCloud Shellsession starts and displays a command-line prompt. Cloud Shell is a shell environment
with the Google Cloud CLI
already installed and with values already set for
your current project. It can take a few seconds for the session to initialize.
To restore a table, first determine a UNIX timestamp of when the table existed
(in milliseconds). You can use the Linux date command to generate the Unix timestamp from a regular timestamp value:
date -d '2023-08-04 16:00:34.456789Z' +%s000
Then, use the bq cp command with the @<time> time travel decorator to perform the table copy operation.
For example, enter the following command to copy
the mydataset.mytable table at the time 1418864998000 into a new table mydataset.newtable.
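That command, written out as a sketch following the bq cp decorator syntax:

```shell
bq cp mydataset.mytable@1418864998000 mydataset.newtable
```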
import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

// deleteAndUndeleteTable demonstrates how to recover a deleted table by copying it from a point in time
// that predates the deletion event.
func deleteAndUndeleteTable(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	ds := client.Dataset(datasetID)
	if _, err := ds.Table(tableID).Metadata(ctx); err != nil {
		return err
	}
	// Record the current time. We'll use this as the snapshot time
	// for recovering the table.
	snapTime := time.Now()

	// "Accidentally" delete the table.
	if err := client.Dataset(datasetID).Table(tableID).Delete(ctx); err != nil {
		return err
	}

	// Construct the restore-from tableID using a snapshot decorator.
	snapshotTableID := fmt.Sprintf("%s@%d", tableID, snapTime.UnixNano()/1e6)
	// Choose a new table ID for the recovered table data.
	recoverTableID := fmt.Sprintf("%s_recovered", tableID)

	// Construct and run a copy job.
	copier := ds.Table(recoverTableID).CopierFrom(ds.Table(snapshotTableID))
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}

	ds.Table(recoverTableID).Delete(ctx)
	return nil
}
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;

// Sample to undeleting a table
public class UndeleteTable {

  public static void runUndeleteTable() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_TABLE";
    String recoverTableName = "MY_RECOVER_TABLE_TABLE";
    undeleteTable(datasetName, tableName, recoverTableName);
  }

  public static void undeleteTable(String datasetName, String tableName, String recoverTableName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      // "Accidentally" delete the table.
      bigquery.delete(TableId.of(datasetName, tableName));

      // Record the current time. We'll use this as the snapshot time
      // for recovering the table.
      long snapTime = System.currentTimeMillis();

      // Construct the restore-from tableID using a snapshot decorator.
      String snapshotTableId = String.format("%s@%d", tableName, snapTime);

      // Construct and run a copy job.
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(
                  // Choose a new table ID for the recovered table data.
                  TableId.of(datasetName, recoverTableName),
                  TableId.of(datasetName, snapshotTableId))
              .build();

      Job job = bigquery.create(JobInfo.of(configuration));
      job = job.waitFor();
      if (job.isDone() && job.getStatus().getError() == null) {
        System.out.println("Undelete table recovered successfully.");
      } else {
        System.out.println(
            "BigQuery was unable to copy the table due to an error: \n"
                + job.getStatus().getError());
        return;
      }
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table not found. \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function undeleteTable() {
  // Undeletes "my_table_to_undelete" from "my_dataset".

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // const tableId = "my_table_to_undelete";
  // const recoveredTableId = "my_recovered_table";

  /**
   * TODO(developer): Choose an appropriate snapshot point as epoch milliseconds.
   * For this example, we choose the current time as we're about to delete the
   * table immediately afterwards.
   */
  const snapshotEpoch = Date.now();

  // Delete the table
  await bigquery
    .dataset(datasetId)
    .table(tableId)
    .delete();

  console.log(`Table ${tableId} deleted.`);

  // Construct the restore-from table ID using a snapshot decorator.
  const snapshotTableId = `${tableId}@${snapshotEpoch}`;

  // Construct and run a copy job.
  await bigquery
    .dataset(datasetId)
    .table(snapshotTableId)
    .copy(bigquery.dataset(datasetId).table(recoveredTableId));

  console.log(`Copied data from deleted table ${tableId} to ${recoveredTableId}`);
}
import time

from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Choose a table to recover.
# table_id = "your-project.your_dataset.your_table"

# TODO(developer): Choose a new table ID for the recovered table data.
# recovered_table_id = "your-project.your_dataset.your_table_recovered"

# TODO(developer): Choose an appropriate snapshot point as epoch
# milliseconds. For this example, we choose the current time as we're about
# to delete the table immediately afterwards.
snapshot_epoch = int(time.time() * 1000)

# ...

# "Accidentally" delete the table.
client.delete_table(table_id)  # Make an API request.

# Construct the restore-from table ID using a snapshot decorator.
snapshot_table_id = "{}@{}".format(table_id, snapshot_epoch)

# Construct and run a copy job.
job = client.copy_table(
    snapshot_table_id,
    recovered_table_id,
    # Must match the source and destination tables location.
    location="US",
)  # Make an API request.

job.result()  # Wait for the job to complete.

print(
    "Copied data from deleted table {} to {}".format(table_id, recovered_table_id)
)
If you anticipate that you might want to restore a table later than what is
allowed by the time travel window, then create a table snapshot of the table.
For more information, see Table snapshots.