For information about how to restore (or undelete) a deleted table, see Restore deleted tables.
For more information about creating and using tables, including getting table
information, listing tables, and controlling access to table data, see Creating and using tables.
Before you begin
Grant Identity and Access Management (IAM) roles that give users the necessary permissions
to perform each task in this document. The permissions required to perform a
task (if any) are listed in the "Required permissions" section of the task.
To get the permissions that you need to update table properties,
ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor)
IAM role on a table.
For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to update table properties.
To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to update table properties:
bigquery.tables.update
bigquery.tables.get
You can't add a description when you create a table using the
Google Cloud console. After the table is created, you can add a
description on the Details page.
In the Explorer panel, expand your project and dataset, then select
the table.
In the details panel, click Details.
In the Description section, click the pencil icon to edit the description.
Enter a description in the box, and click Update to save.
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Issue the bq update command with the --description flag. If you are
updating a table in a project other than your default project, add the
project ID to the dataset name in the following format: project_id:dataset.
bq update --description "description" project_id:dataset.table
Replace the following:
description: the text describing the table, in quotes
project_id: your project ID
dataset: the name of the dataset that contains the table you're updating
table: the name of the table you're updating
Examples:
To change the description of the mytable table in the mydataset dataset to
"Description of mytable", enter the following command. The mydataset dataset is in
your default project.
bq update --description "Description of mytable" mydataset.mytable
To change the description of the mytable table in the mydataset dataset to
"Description of mytable", enter the following command. The mydataset dataset is in the myotherproject project, not your default project.
bq update \
--description "Description of mytable" \
myotherproject:mydataset.mytable
API
Call the tables.patch method and use the description property in the table resource to update the table's description. Because the tables.update method
replaces the entire table resource, the tables.patch method is preferred.
import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// updateTableDescription demonstrates how to fetch a table's metadata and update the description.
func updateTableDescription(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	tableRef := client.Dataset(datasetID).Table(tableID)
	meta, err := tableRef.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.TableMetadataToUpdate{
		Description: "Updated description.",
	}
	if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;

public class UpdateTableDescription {

  public static void runUpdateTableDescription() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    String newDescription = "this is the new table description";
    updateTableDescription(datasetName, tableName, newDescription);
  }

  public static void updateTableDescription(
      String datasetName, String tableName, String newDescription) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setDescription(newDescription).build());
      System.out.println("Table description updated successfully to " + newDescription);
    } catch (BigQueryException e) {
      System.out.println("Table description was not updated \n" + e.toString());
    }
  }
}
# from google.cloud import bigquery
# client = bigquery.Client()
# project = client.project
# dataset_ref = bigquery.DatasetReference(project, dataset_id)
# table_ref = dataset_ref.table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.description == "Original description."
table.description = "Updated description."

table = client.update_table(table, ["description"])  # API request

assert table.description == "Updated description."
Gemini
You can generate a table description with Gemini in
BigQuery by using data insights. Data insights is an automated
way to explore, understand, and curate your data.
For more information about data insights, including setup steps, required
IAM roles, and best practices to improve the accuracy of the
generated insights, see Generate data insights in BigQuery.
In the Google Cloud console, go to the BigQuery page.
In the Explorer panel, expand your project and dataset, then select
the table.
In the details panel, click the Schema tab.
Click Generate.
Gemini generates a table description and insights about
the table. It takes a few minutes for the information to be
populated. You can view the generated insights on the table's Insights tab.
To edit and save the generated table description, do the following:
Click View column descriptions.
The current table description and the generated description are
displayed.
In the Table description section, click Save to details.
To replace the current description with the generated description,
click Copy suggested description.
Edit the table description as necessary, and then click Save to details.
The table description is updated immediately.
To close the Preview descriptions panel, click Close.
Update a table's expiration time
You can set a default table expiration time at the dataset level, or you can set
a table's expiration time when the table is created. A table's expiration time
is often referred to as "time to live" or TTL.
When a table expires, it is deleted along with all of the data it contains.
If necessary, you can undelete the expired table within the time travel window
specified for the dataset. For more information, see Restore deleted tables.
If you set the expiration when the table is created, the dataset's default table
expiration is ignored. If you do not set a default table expiration at the
dataset level, and you do not set a table expiration when the table is created,
the table never expires and you must delete the table
manually.
At any point after the table is created, you can update the table's expiration
time in the following ways:
Using the Google Cloud console.
Using a data definition language (DDL) ALTER TABLE statement, as sketched after this list.
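For example, the following ALTER TABLE statement is a minimal sketch, assuming an illustrative table named mydataset.mytable, that sets the table to expire five days from now:
-- Set an absolute expiration timestamp on the table.
ALTER TABLE mydataset.mytable
SET OPTIONS (
  expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 5 DAY)
);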
You can't add an expiration time when you create a table using the
Google Cloud console. After a table is created, you can add or update a
table expiration on the Table Details page.
In the Explorer panel, expand your project and dataset, then select
the table.
In the details panel, click Details.
Click the pencil icon next to Table info.
For Table expiration, select Specify date. Then select the
expiration date using the calendar widget.
Click Update to save. The updated expiration time appears in the Table info section.
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Issue the bq update command with the --expiration flag. If you are
updating a table in a project other than your default project,
add the project ID to the dataset name in the following format: project_id:dataset.
bq update --expiration integer project_id:dataset.table
Replace the following:
integer: the default lifetime (in seconds) for the table. The minimum value is
3600 seconds (one hour). The expiration time evaluates to the current time plus
the integer value. If you specify 0, the table expiration is removed, and the
table never expires. Tables with no expiration must be manually deleted.
project_id: your project ID.
dataset: the name of the dataset that contains the table you're updating.
table: the name of the table you're updating.
Examples:
To update the expiration time of the mytable table in the mydataset dataset to 5 days
(432000 seconds), enter the following command. The mydataset dataset is in your
default project.
bq update --expiration 432000 mydataset.mytable
To update the expiration time of the mytable table in the mydataset dataset to 5 days
(432000 seconds), enter the following command. The mydataset dataset is in the myotherproject project, not your default project.
bq update --expiration 432000 myotherproject:mydataset.mytable
Call the tables.patch method and use the expirationTime property in the table resource to update the table expiration in milliseconds. Because the tables.update method replaces the entire table resource, the tables.patch method is
preferred.
import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/bigquery"
)

// updateTableExpiration demonstrates setting the table expiration of a table to a specific
// point in time in the future, at which time it will be deleted.
func updateTableExpiration(projectID, datasetID, tableID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// tableID := "mytable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	tableRef := client.Dataset(datasetID).Table(tableID)
	meta, err := tableRef.Metadata(ctx)
	if err != nil {
		return err
	}
	update := bigquery.TableMetadataToUpdate{
		ExpirationTime: time.Now().Add(time.Duration(5*24) * time.Hour), // table expiration in 5 days.
	}
	if _, err = tableRef.Update(ctx, update, meta.ETag); err != nil {
		return err
	}
	return nil
}
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;
import java.util.concurrent.TimeUnit;

public class UpdateTableExpiration {

  public static void runUpdateTableExpiration() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    // Update table expiration to one day.
    Long newExpiration = TimeUnit.MILLISECONDS.convert(1, TimeUnit.DAYS);
    updateTableExpiration(datasetName, tableName, newExpiration);
  }

  public static void updateTableExpiration(
      String datasetName, String tableName, Long newExpiration) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setExpirationTime(newExpiration).build());
      System.out.println("Table expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Table expiration was not updated \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateTableExpiration() {
  // Updates a table's expiration.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = 'my_dataset', // Existing dataset
  // const tableId = 'my_table', // Existing table
  // const expirationTime = Date.now() + 1000 * 60 * 60 * 24 * 5 // 5 days from current time in ms

  // Retrieve current table metadata
  const table = bigquery.dataset(datasetId).table(tableId);
  const [metadata] = await table.getMetadata();

  // Set new table expiration to 5 days from current time
  metadata.expirationTime = expirationTime.toString();
  const [apiResponse] = await table.setMetadata(metadata);

  const newExpirationTime = apiResponse.expirationTime;
  console.log(`${tableId} expiration: ${newExpirationTime}`);
}
import datetime

from google.cloud import bigquery

client = bigquery.Client()

# TODO(dev): Change table_id to the full name of the table you want to update.
table_id = "your-project.your_dataset.your_table_name"

# TODO(dev): Set the table to expire the desired number of days from now.
expiration = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=5)

table = client.get_table(table_id)  # Make an API request.
table.expires = expiration
table = client.update_table(table, ["expires"])  # API request

print(f"Updated {table_id}, expires {table.expires}.")
To update the default dataset partition expiration time:
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;
import java.util.concurrent.TimeUnit;

// Sample to update partition expiration on a dataset.
public class UpdateDatasetPartitionExpiration {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    // Set the default partition expiration (applies to new tables, only) in
    // milliseconds. This example sets the default expiration to 90 days.
    Long newExpiration = TimeUnit.MILLISECONDS.convert(90, TimeUnit.DAYS);
    updateDatasetPartitionExpiration(datasetName, newExpiration);
  }

  public static void updateDatasetPartitionExpiration(String datasetName, Long newExpiration) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDefaultPartitionExpirationMs(newExpiration).build());
      System.out.println(
          "Dataset default partition expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Dataset partition expiration was not updated \n" + e.toString());
    }
  }
}
def update_dataset_default_partition_expiration(dataset_id: str) -> None:
    from google.cloud import bigquery

    # Construct a BigQuery client object.
    client = bigquery.Client()

    # TODO(developer): Set dataset_id to the ID of the dataset to fetch.
    # dataset_id = 'your-project.your_dataset'

    dataset = client.get_dataset(dataset_id)  # Make an API request.

    # Set the default partition expiration (applies to new tables, only) in
    # milliseconds. This example sets the default expiration to 90 days.
    dataset.default_partition_expiration_ms = 90 * 24 * 60 * 60 * 1000

    dataset = client.update_dataset(
        dataset, ["default_partition_expiration_ms"]
    )  # Make an API request.

    print(
        "Updated dataset {}.{} with new default partition expiration {}".format(
            dataset.project, dataset.dataset_id, dataset.default_partition_expiration_ms
        )
    )
When you add a NUMERIC or BIGNUMERIC field to a table and do not specify
a rounding mode, the field's rounding mode is automatically set to the table's
default rounding mode. Changing a table's default rounding mode doesn't alter
the rounding mode of existing fields.
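As an illustrative sketch (the table name is hypothetical), a DDL statement along the following lines changes a table's default rounding mode; NUMERIC and BIGNUMERIC fields added afterward without an explicit rounding mode inherit it:
-- Change the table's default rounding mode for newly added fields.
ALTER TABLE mydataset.mytable
SET OPTIONS (
  default_rounding_mode = 'ROUND_HALF_EVEN'
);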
Update a table's schema definition
For more information about updating a table's schema definition, see Modifying table schemas.
Rename a table
You can rename a table after it has been created by using the ALTER TABLE RENAME TO statement.
The following example renames mytable to mynewtable:
ALTER TABLE mydataset.mytable RENAME TO mynewtable;
Limitations on renaming tables
If you want to rename a table that has data streaming into it, you must stop
the streaming, commit any pending streams, and wait
for BigQuery to indicate that streaming
is not in use.
While a table can usually be renamed 5 hours after the last streaming
operation, it might take longer.
Existing table ACLs and row access policies are preserved, but table ACL and
row access policy updates made during the table rename are not preserved.
You can't concurrently rename a table and run a DML statement on that table.
Call the jobs.insert API method and configure a copy job.
Use the client libraries.
Limitations on copying tables
Table copy jobs are subject to the following limitations:
You can't stop a table copy operation after you start it. A table copy
operation runs asynchronously and doesn't stop even when you cancel the job.
You are also charged for data transfer for a cross-region table copy and for
storage in the destination region.
When you copy a table, the name of the destination table must adhere to the
same naming conventions as when you create a table.
Table copies are subject to BigQuery limits on copy jobs.
The Google Cloud console supports copying only one table at a time. You
can't overwrite an existing table in the destination dataset. The table must
have a unique name in the destination dataset.
Copying multiple source tables into a destination table is not supported by
the Google Cloud console.
When copying multiple source tables to a destination table using the API,
bq command-line tool, or the client libraries, all source tables must have identical
schemas, including any partitioning or clustering.
Certain table schema updates, such as dropping or renaming
columns, can cause tables to have apparently identical schemas but different
internal representations. This might cause a table copy job to fail with the
error Maximum limit on diverging physical schemas reached. In this case, you
can use the CREATE TABLE LIKE statement to ensure that your source table's schema matches the destination table's
schema exactly.
The time that BigQuery takes to copy tables might vary
significantly across different runs because the underlying storage is managed
dynamically.
You can't copy and append a source table to a destination table that has more
columns than the source table, and the additional columns have default values. Instead, you can run INSERT destination_table SELECT * FROM source_table to copy over the data.
If the copy operation overwrites an existing table, then the table-level
access for the existing table is maintained. Tags from
the source table aren't copied to the overwritten table, while tags on the
existing table are retained. However, when you copy tables across regions,
tags on the existing table are removed.
If the copy operation creates a new table, then the table-level access for the
new table is determined by the access policies of the dataset in which the new
table is created. Additionally, tags are copied from
the source table to the new table.
When you copy multiple source tables to a destination table, all source tables
must have identical tags.
Required roles
To perform the tasks in this document, you need the following permissions.
Roles to copy tables and partitions
To get the permissions that you need to copy tables and partitions,
ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor)
IAM role on the source and destination datasets.
For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to copy tables and partitions.
To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to copy tables and partitions:
bigquery.tables.getData on the source and destination datasets
bigquery.tables.get on the source and destination datasets
To get the permission that you need to run a copy job,
ask your administrator to grant you the Job User (roles/bigquery.jobUser)
IAM role on the source and destination datasets.
For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the bigquery.jobs.create permission,
which is required to run a copy job.
You can copy a single table in the following ways:
Using the Google Cloud console.
Using the bq command-line tool's bq cp command.
Using a data definition language (DDL) CREATE TABLE COPY statement.
Calling the jobs.insert API method, configuring a copy job, and specifying the sourceTable property.
Using the client libraries.
The Google Cloud console and the CREATE TABLE COPY statement support only
one source table and one destination table in a copy job. To copy multiple
source tables to a destination table, you must use the bq command-line tool or the API.
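For example, the following CREATE TABLE COPY statement is a minimal sketch (the table names are illustrative) that copies mydataset.mytable into a new table in the same dataset:
-- Create a new table as a copy of an existing table.
CREATE TABLE mydataset.mytable_copy
COPY mydataset.mytable;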
To copy a single source table:
Console
In the Explorer panel, expand your project and dataset, then select
the table.
In the details panel, click Copy table.
In the Copy table dialog, under Destination:
For Project name, choose the project that will store the copied
table.
For Dataset name, select the dataset where you want to store
the copied table. The source and destination datasets must be in the
same location.
For Table name, enter a name for the new table. The name must
be unique in the destination dataset. You can't overwrite an existing
table in the destination dataset using the Google Cloud console. For
more information about table name requirements, see Table naming.
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Issue the bq cp command. Optional flags can be used to control the write
disposition of the destination table:
-a or --append_table appends the data from the source table to an
existing table in the destination dataset.
-f or --force overwrites an existing table in the destination dataset
and doesn't prompt you for confirmation.
-n or --no_clobber returns the following error message if the table
exists in the destination dataset: Table 'project_id:dataset.table' already exists, skipping. If -n is not specified, the default behavior is to prompt you to choose
whether to replace the destination table.
--destination_kms_key is the customer-managed Cloud KMS key
used to encrypt the destination table.
If the source or destination dataset is in a project other than your default
project, add the project ID to the dataset names in the following format: project_id:dataset.
(Optional) Supply the --location flag and set the value to your location.
bq --location=location cp -a -f -n project_id:dataset.source_table project_id:dataset.destination_table
Replace the following:
location: the name of your location. The --location flag is
optional. For example, if you are using BigQuery in the
Tokyo region, you can set the flag's value to asia-northeast1. You can
set a default value for the location using the .bigqueryrc file.
project_id: your project ID.
dataset: the name of the source or destination
dataset.
source_table: the table you're copying.
destination_table: the name of the table in the
destination dataset.
Examples:
To copy the mydataset.mytable table to the mydataset2.mytable2 table,
enter the following command. Both datasets are in your default project.
bq cp mydataset.mytable mydataset2.mytable2
To copy the mydataset.mytable table and to overwrite a destination table
with the same name, enter the following command. The source dataset is in
your default project. The destination dataset is in the myotherproject project. The -f shortcut is used to overwrite the destination table
without a prompt.
bq cp -f mydataset.mytable myotherproject:myotherdataset.mytable
To copy the mydataset.mytable table and to return an error if the
destination dataset contains a table with the same name, enter the following
command. The source dataset is in your default project. The destination
dataset is in the myotherproject project. The -n shortcut is used to
prevent overwriting a table with the same name.
bq cp -n mydataset.mytable myotherproject:myotherdataset.mytable
To copy the mydataset.mytable table and to append the data to a
destination table with the same name, enter the following command. The
source dataset is in your default project. The destination dataset is in the myotherproject project. The -a shortcut is used to append to the
destination table.
bq cp -a mydataset.mytable myotherproject:myotherdataset.mytable
API
You can copy an existing table through the API by calling the bigquery.jobs.insert method and configuring a copy job. Specify your location in
the location property in the jobReference section of the job resource.
You must specify the following values in your job configuration:
Where sourceTable provides information about the table to be
copied, destinationTable provides information about the new
table, createDisposition specifies whether to create the
table if it doesn't exist, and writeDisposition specifies
whether to overwrite or append to an existing table.
using Google.Apis.Bigquery.v2.Data;
using Google.Cloud.BigQuery.V2;
using System;

public class BigQueryCopyTable
{
    public void CopyTable(
        string projectId = "your-project-id",
        string destinationDatasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        TableReference sourceTableRef = new TableReference()
        {
            TableId = "shakespeare",
            DatasetId = "samples",
            ProjectId = "bigquery-public-data"
        };
        TableReference destinationTableRef = client.GetTableReference(
            destinationDatasetId, "destination_table");
        BigQueryJob job = client.CreateCopyJob(sourceTableRef, destinationTableRef)
            .PollUntilCompleted() // Wait for the job to complete.
            .ThrowOnAnyError();

        // Retrieve destination table
        BigQueryTable destinationTable = client.GetTable(destinationTableRef);
        Console.WriteLine(
            $"Copied {destinationTable.Resource.NumRows} rows from table "
            + $"{sourceTableRef.DatasetId}.{sourceTableRef.TableId} "
            + $"to {destinationTable.FullyQualifiedId}."
        );
    }
}
import (
	"context"
	"fmt"

	"cloud.google.com/go/bigquery"
)

// copyTable demonstrates copying a table from a source to a destination, and
// allowing the copy to overwrite existing data by using truncation.
func copyTable(projectID, datasetID, srcID, dstID string) error {
	// projectID := "my-project-id"
	// datasetID := "mydataset"
	// srcID := "sourcetable"
	// dstID := "destinationtable"
	ctx := context.Background()
	client, err := bigquery.NewClient(ctx, projectID)
	if err != nil {
		return fmt.Errorf("bigquery.NewClient: %v", err)
	}
	defer client.Close()

	dataset := client.Dataset(datasetID)
	copier := dataset.Table(dstID).CopierFrom(dataset.Table(srcID))
	copier.WriteDisposition = bigquery.WriteTruncate
	job, err := copier.Run(ctx)
	if err != nil {
		return err
	}
	status, err := job.Wait(ctx)
	if err != nil {
		return err
	}
	if err := status.Err(); err != nil {
		return err
	}
	return nil
}
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;

public class CopyTable {

  public static void runCopyTable() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DESTINATION_DATASET_NAME";
    String destinationTableId = "MY_DESTINATION_TABLE_NAME";
    String sourceDatasetName = "MY_SOURCE_DATASET_NAME";
    String sourceTableId = "MY_SOURCE_TABLE_NAME";
    copyTable(sourceDatasetName, sourceTableId, destinationDatasetName, destinationTableId);
  }

  public static void copyTable(
      String sourceDatasetName,
      String sourceTableId,
      String destinationDatasetName,
      String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId sourceTable = TableId.of(sourceDatasetName, sourceTableId);
      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(destinationTable, sourceTable).build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy table due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTable() {
  // Copies src_dataset:src_table to dest_dataset:dest_table.

  /**
   * TODO(developer): Uncomment the following lines before running the sample
   */
  // const srcDatasetId = "my_src_dataset";
  // const srcTableId = "my_src_table";
  // const destDatasetId = "my_dest_dataset";
  // const destTableId = "my_dest_table";

  // Copy the table contents into another table
  const [job] = await bigquery
    .dataset(srcDatasetId)
    .table(srcTableId)
    .copy(bigquery.dataset(destDatasetId).table(destTableId));

  console.log(`Job ${job.id} completed.`);

  // Check the job's status for errors
  const errors = job.status.errors;
  if (errors && errors.length > 0) {
    throw errors;
  }
}
use Google\Cloud\BigQuery\BigQueryClient;
use Google\Cloud\Core\ExponentialBackoff;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $sourceTableId = 'The BigQuery table ID to copy from';
// $destinationTableId = 'The BigQuery table ID to copy to';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$sourceTable = $dataset->table($sourceTableId);
$destinationTable = $dataset->table($destinationTableId);
$copyConfig = $sourceTable->copy($destinationTable);
$job = $sourceTable->runJob($copyConfig);

// poll the job until it is complete
$backoff = new ExponentialBackoff(10);
$backoff->execute(function () use ($job) {
    print('Waiting for job to complete' . PHP_EOL);
    $job->reload();
    if (!$job->isComplete()) {
        throw new Exception('Job has not yet completed', 500);
    }
});
// check if the job has errors
if (isset($job->info()['status']['errorResult'])) {
    $error = $job->info()['status']['errorResult']['message'];
    printf('Error running job: %s' . PHP_EOL, $error);
} else {
    print('Table copied successfully' . PHP_EOL);
}
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set source_table_id to the ID of the original table.
# source_table_id = "your-project.source_dataset.source_table"

# TODO(developer): Set destination_table_id to the ID of the destination table.
# destination_table_id = "your-project.destination_dataset.destination_table"

job = client.copy_table(source_table_id, destination_table_id)
job.result()  # Wait for the job to complete.

print("A copy of the table created.")
Copy multiple source tables
You can copy multiple source tables to a destination table in the following
ways:
Using the bq command-line tool's bq cp command.
Calling the jobs.insert method, configuring a copy job, and specifying the sourceTables property.
Using the client libraries.
All source tables must have identical schemas and tags,
and only one destination table is allowed.
Source tables must be specified as a comma-separated list. You can't use
wildcards when you copy multiple source tables.
To copy multiple source tables, select one of the following choices:
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Issue the bq cp command and include multiple source tables as a
comma-separated list. Optional flags can be used to control the write
disposition of the destination table:
-a or --append_table appends the data from the source tables to an
existing table in the destination dataset.
-f or --force overwrites an existing destination table in the
destination dataset and doesn't prompt you for confirmation.
-n or --no_clobber returns the following error message if the table
exists in the destination dataset: Table 'project_id:dataset.table'
already exists, skipping. If -n is not specified, the default
behavior is to prompt you to choose whether to replace the destination
table.
--destination_kms_key is the customer-managed Cloud Key Management Service key used to
encrypt the destination table.
If the source or destination dataset is in a project other than your default
project, add the project ID to the dataset names in the following format: project_id:dataset.
(Optional) Supply the --location flag and set the value to your location.
bq --location=location cp -a -f -n project_id:dataset.source_table,project_id:dataset.source_table project_id:dataset.destination_table
Replace the following:
location: the name of your location. The --location flag is
optional. For example, if you are using BigQuery in the
Tokyo region, you can set the flag's value to asia-northeast1. You can
set a default value for the location using the .bigqueryrc file.
project_id: your project ID.
dataset: the name of the source or destination
dataset.
source_table: the table that you're copying.
destination_table: the name of the table in the
destination dataset.
Examples:
To copy the mydataset.mytable table and the mydataset.mytable2 table to the mydataset2.tablecopy table, enter the following command. All datasets are
in your default project.
bq cp mydataset.mytable,mydataset.mytable2 mydataset2.tablecopy
To copy the mydataset.mytable table and the mydataset.mytable2 table to the myotherdataset.mytable table and to overwrite a destination table with the
same name, enter the following command. The destination dataset is in the myotherproject project, not your default project. The -f shortcut is
used to overwrite the destination table without a prompt.
bq cp -f mydataset.mytable,mydataset.mytable2 myotherproject:myotherdataset.mytable
To copy the myproject:mydataset.mytable table and the myproject:mydataset.mytable2 table and to return an error if the
destination dataset contains a table with the same name, enter the following
command. The destination dataset is in the myotherproject project. The -n shortcut is used to prevent overwriting a table with the same name.
bq cp -n myproject:mydataset.mytable,myproject:mydataset.mytable2 myotherproject:myotherdataset.mytable
To copy the mydataset.mytable table and the mydataset.mytable2 table and
to append the data to a destination table with the same name, enter the
following command. The source dataset is in your default project. The
destination dataset is in the myotherproject project. The -a shortcut is
used to append to the destination table.
bq cp -a mydataset.mytable,mydataset.mytable2 myotherproject:myotherdataset.mytable
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.CopyJobConfiguration;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.TableId;
import java.util.Arrays;

public class CopyMultipleTables {

  public static void runCopyMultipleTables() {
    // TODO(developer): Replace these variables before running the sample.
    String destinationDatasetName = "MY_DATASET_NAME";
    String destinationTableId = "MY_TABLE_NAME";
    copyMultipleTables(destinationDatasetName, destinationTableId);
  }

  public static void copyMultipleTables(String destinationDatasetName, String destinationTableId) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      TableId destinationTable = TableId.of(destinationDatasetName, destinationTableId);

      // For more information on CopyJobConfiguration see:
      // https://googleapis.dev/java/google-cloud-clients/latest/com/google/cloud/bigquery/JobConfiguration.html
      CopyJobConfiguration configuration =
          CopyJobConfiguration.newBuilder(
                  destinationTable,
                  Arrays.asList(
                      TableId.of(destinationDatasetName, "table1"),
                      TableId.of(destinationDatasetName, "table2")))
              .build();

      // For more information on Job see:
      // https://googleapis.dev/java/google-cloud-clients/latest/index.html?com/google/cloud/bigquery/package-summary.html
      Job job = bigquery.create(JobInfo.of(configuration));

      // Blocks until this job completes its execution, either failing or succeeding.
      Job completedJob = job.waitFor();
      if (completedJob == null) {
        System.out.println("Job not executed since it no longer exists.");
        return;
      } else if (completedJob.getStatus().getError() != null) {
        System.out.println(
            "BigQuery was unable to copy tables due to an error: \n" + job.getStatus().getError());
        return;
      }
      System.out.println("Table copied successfully.");
    } catch (BigQueryException | InterruptedException e) {
      System.out.println("Table copying job was interrupted. \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function copyTableMultipleSource() {
  // Copy multiple source tables to a given destination.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // sourceTable = 'my_table';
  // destinationTable = 'testing';

  // Create a client
  const dataset = bigquery.dataset(datasetId);

  const metadata = {
    createDisposition: 'CREATE_NEVER',
    writeDisposition: 'WRITE_TRUNCATE',
  };

  // Create table references
  const table = dataset.table(sourceTable);
  const yourTable = dataset.table(destinationTable);

  // Copy table
  const [apiResponse] = await table.copy(yourTable, metadata);
  console.log(apiResponse.configuration.copy);
}
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dest_table_id to the ID of the destination table.
# dest_table_id = "your-project.your_dataset.your_table_name"

# TODO(developer): Set table_ids to the list of the IDs of the original tables.
# table_ids = ["your-project.your_dataset.your_table_name", ...]

job = client.copy_table(table_ids, dest_table_id)  # Make an API request.
job.result()  # Wait for the job to complete.

print("The tables {} have been appended to {}".format(table_ids, dest_table_id))
Copy tables across regions
You can copy a table, table snapshot, or table clone from one BigQuery
region or multi-region to another. This includes any
tables that have customer-managed Cloud KMS keys (CMEK) applied.
Copying a table across regions incurs additional data transfer charges according
to BigQuery pricing.
Additional charges are incurred even if you cancel the cross-region table copy
job before it has completed.
To copy a table across regions, select one of the following options:
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, aCloud Shellsession starts and displays a command-line prompt. Cloud Shell is a shell environment
with the Google Cloud CLI
already installed and with values already set for
your current project. It can take a few seconds for the session to initialize.
Issue the bq cp command with fully qualified source and destination table names:
bq cp SOURCE_PROJECT:SOURCE_DATASET.SOURCE_TABLE DESTINATION_PROJECT:DESTINATION_DATASET.DESTINATION_TABLE
Replace the following:
SOURCE_PROJECT: the source project ID. If the source dataset is in a project other than your default
project, add the project ID to the source dataset name.
DESTINATION_PROJECT: the destination project ID. If the destination dataset is in a project other than your default
project, add the project ID to the destination dataset name.
SOURCE_DATASET: the name of the source dataset.
DESTINATION_DATASET: the name of the destination dataset.
SOURCE_TABLE: the table that you are copying.
DESTINATION_TABLE: the name of the table in the destination dataset.
The following example command copies the mydataset_us.mytable table from the us multi-region to
the mydataset_eu.mytable2 table in the eu multi-region. Both datasets are in the default project.
bq cp mydataset_us.mytable mydataset_eu.mytable2
To copy a table across regions into a CMEK-enabled destination dataset, you must enable CMEK on the table with a key from the table's region. The CMEK on the table doesn't have to be the same CMEK in use by the destination dataset. You can then copy the CMEK-enabled table to the destination dataset by using the bq cp command.
Conversely, to copy a CMEK-enabled table across regions into a destination dataset, you can enable CMEK on the destination dataset with a key from the destination dataset's region. You can also use the --destination_kms_key flag in the bq cp command.
Limitations
Copying a table across regions is subject to the following limitations:
You can't copy a table using the Google Cloud console or the TABLE COPY
DDL statement.
You can't copy a table if there are any policy tags on the source table.
You can't copy a table if the source table is larger than 20 physical TiB. See get information about tables for the source table physical size. Additionally, copying source tables that are larger than 1
physical TiB across regions might require multiple retries to succeed.
You can't copy IAM policies associated with the tables. You can apply the same policies to the destination after the copy is completed.
If the copy operation overwrites an existing table, tags on the existing table are removed.
You can't copy multiple source tables into a single destination table.
You can't copy tables in append mode.
Time travel information is not copied to the destination region.
When you copy a table clone or snapshot to a new region, a full copy of the
table is created. This incurs additional storage costs.
View current quota usage
You can view your current usage of query, load, extract, or copy jobs by running
an INFORMATION_SCHEMA query to view metadata about the jobs run over a
specified time period. You can compare your current usage against the quota
limit to determine your quota usage for a particular type of job. The following
example query uses the INFORMATION_SCHEMA.JOBS view to list the number of
query, load, extract, and copy jobs by project:
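The following is an illustrative sketch rather than the original example; it assumes the region-us qualifier and the job_type and creation_time columns of the INFORMATION_SCHEMA.JOBS view, and counts the jobs of each type created in your project over the last day:
-- Count jobs of each type created in the last 24 hours.
SELECT
  job_type,
  COUNT(*) AS job_count
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
GROUP BY
  job_type;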
When you delete a table, any data in the table is also deleted. To automatically
delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
Deleting a table also deletes any permissions associated with this table. When
you recreate a deleted table, you must also manually reconfigure any access permissions previously associated with it.
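Besides the bq rm command and the API and client library samples that follow, you can also delete a table with a DDL DROP TABLE statement; the following is a minimal sketch with an illustrative table name:
-- Permanently delete the table and its data.
DROP TABLE mydataset.mytable;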
Required roles
To get the permissions that you need to delete a table,
ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor)
IAM role on the dataset.
For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to delete a table.
To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to delete a table:
bigquery.tables.delete
bigquery.tables.get
At the bottom of the Google Cloud console, a Cloud Shell session starts and
displays a command-line prompt. Cloud Shell is a shell environment with the
Google Cloud CLI already installed and with values already set for your
current project. It can take a few seconds for the session to initialize.
Use the bq rm command with the --table flag (or -t shortcut) to delete
a table. When you use the bq command-line tool to remove a table, you must confirm the
action. You can use the --force flag (or -f shortcut) to skip
confirmation.
If the table is in a dataset in a project other than your default
project, add the project ID to the dataset name in the following format: project_id:dataset.
bq rm \
-f \
-t \
project_id:dataset.table
Replace the following:
project_id: your project ID
dataset: the name of the dataset that contains the
table
table: the name of the table that you're deleting
Examples:
To delete the mytable table from the mydataset dataset, enter the
following command. The mydataset dataset is in your default project.
bq rm -t mydataset.mytable
To delete the mytable table from the mydataset dataset, enter the
following command. The mydataset dataset is in the myotherproject project, not your default project.
bq rm -t myotherproject:mydataset.mytable
To delete the mytable table from the mydataset dataset, enter the
following command. The mydataset dataset is in your default project. The
command uses the -f shortcut to bypass confirmation.
bq rm -f -t mydataset.mytable
API
Call the tables.delete API method and specify the table to delete using the tableId parameter.
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.TableId;

public class DeleteTable {

  public static void runDeleteTable() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    deleteTable(datasetName, tableName);
  }

  public static void deleteTable(String datasetName, String tableName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      boolean success = bigquery.delete(TableId.of(datasetName, tableName));
      if (success) {
        System.out.println("Table deleted successfully");
      } else {
        System.out.println("Table was not found");
      }
    } catch (BigQueryException e) {
      System.out.println("Table was not deleted. \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function deleteTable() {
  // Deletes "my_table" from "my_dataset".

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // const tableId = "my_table";

  // Delete the table
  await bigquery.dataset(datasetId).table(tableId).delete();

  console.log(`Table ${tableId} deleted.`);
}
use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';
// $tableId = 'The BigQuery table ID';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$table = $dataset->table($tableId);
$table->delete();
printf('Deleted table %s.%s' . PHP_EOL, $datasetId, $tableId);
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to fetch.
# table_id = 'your-project.your_dataset.your_table'

# If the table does not exist, delete_table raises
# google.api_core.exceptions.NotFound unless not_found_ok is True.
client.delete_table(table_id, not_found_ok=True)  # Make an API request.
print("Deleted table '{}'.".format(table_id))
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eBigQuery tables can be managed by updating properties like descriptions, expiration times, schemas, labels, and default rounding modes, and these updates can be performed using various methods including the Google Cloud console, SQL DDL, \u003ccode\u003ebq\u003c/code\u003e command-line tool, API, and client libraries.\u003c/p\u003e\n"],["\u003cp\u003eTables can be copied using the Google Cloud console, \u003ccode\u003ebq cp\u003c/code\u003e command, \u003ccode\u003eCREATE TABLE COPY\u003c/code\u003e SQL statement, API, or client libraries, but these operations have restrictions, including the inability to stop a copy operation, and cross-region copies have additional limitations.\u003c/p\u003e\n"],["\u003cp\u003eDeleting BigQuery tables can be done through the console, SQL \u003ccode\u003eDROP TABLE\u003c/code\u003e command, \u003ccode\u003ebq rm\u003c/code\u003e command, API \u003ccode\u003etables.delete\u003c/code\u003e call, or client libraries, and it's crucial to understand that this action permanently removes data and permissions unless the table is restored.\u003c/p\u003e\n"],["\u003cp\u003eDeleted tables can be restored within a specific time window using time travel capabilities via \u003ccode\u003ebq\u003c/code\u003e command or client libraries, however, table snapshots offer a longer-term retention solution, and no tags are copied over from the original table.\u003c/p\u003e\n"],["\u003cp\u003eManaging table properties and performing actions like copying, renaming, and deleting tables require specific IAM roles and permissions, such as \u003ccode\u003ebigquery.tables.update\u003c/code\u003e, \u003ccode\u003ebigquery.tables.getData\u003c/code\u003e, and \u003ccode\u003ebigquery.tables.delete\u003c/code\u003e, with \u003ccode\u003eroles/bigquery.dataEditor\u003c/code\u003e being a frequently used role.\u003c/p\u003e\n"]]],[],null,[]]