For information about how to restore (or undelete) a deleted table, see Restore deleted tables.
For more information about creating and using tables, including getting table information, listing tables, and controlling access to table data, see Creating and using tables.
Before you begin
Grant Identity and Access Management (IAM) roles that give users the necessary permissions
to perform each task in this document. The permissions required to perform a
task (if any) are listed in the "Required permissions" section of the task.
To get the permissions that you need to update table properties, ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor) IAM role on a table. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to update table properties. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to update table properties:
bigquery.tables.update
bigquery.tables.get
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;

public class UpdateTableDescription {

  public static void main(String[] args) {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String tableName = "MY_TABLE_NAME";
    String newDescription = "this is the new table description";
    updateTableDescription(datasetName, tableName, newDescription);
  }

  public static void updateTableDescription(
      String datasetName, String tableName, String newDescription) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Table table = bigquery.getTable(datasetName, tableName);
      bigquery.update(table.toBuilder().setDescription(newDescription).build());
      System.out.println("Table description updated successfully to " + newDescription);
    } catch (BigQueryException e) {
      System.out.println("Table description was not updated \n" + e.toString());
    }
  }
}
# from google.cloud import bigquery
# client = bigquery.Client()
# project = client.project
# dataset_ref = bigquery.DatasetReference(project, dataset_id)
# table_ref = dataset_ref.table('my_table')
# table = client.get_table(table_ref)  # API request

assert table.description == "Original description."
table.description = "Updated description."

table = client.update_table(table, ["description"])  # API request

assert table.description == "Updated description."
Gemini
You can generate a table description with Gemini in
BigQuery by using data insights. Data insights is an automated
way to explore, understand, and curate your data.
For more information about data insights, including setup steps, required
IAM roles, and best practices to improve the accuracy of the
generated insights, see Generate data insights in BigQuery.
In the Google Cloud console, go to the BigQuery page.
In the Explorer pane, expand your project and dataset, then select the table.
In the details panel, click the Schema tab.
Click Generate.
Gemini generates a table description and insights about
the table. It takes a few minutes for the information to be
populated. You can view the generated insights on the table's Insights tab.
To edit and save the generated table description, do the following:
Click View column descriptions.
The current table description and the generated description are
displayed.
In the Table description section, click Save to details.
To replace the current description with the generated description, click Copy suggested description.
Edit the table description as necessary, and then click Save to details.
The table description is updated immediately.
To close the Preview descriptions panel, click Close.
Update a table's expiration time
You can set a default table expiration time at the dataset level, or you can set
a table's expiration time when the table is created. A table's expiration time
is often referred to as "time to live" or TTL.
When a table expires, it is deleted along with all of the data it contains.
If necessary, you can undelete the expired table within the time travel window specified for the dataset. For more information, see Restore deleted tables.
If you set the expiration when the table is created, the dataset's default table
expiration is ignored. If you do not set a default table expiration at the
dataset level, and you do not set a table expiration when the table is created,
the table never expires and you must delete the table manually.
At any point after the table is created, you can update the table's expiration
time in the following ways:
Using the Google Cloud console.
Using a data definition language (DDL) ALTER TABLE statement.
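For example, the following ALTER TABLE statement sets a table's expiration to five days from the current time by using the expiration_timestamp table option. This is a sketch of the DDL approach; mydataset.mytable and the five-day interval are placeholders:

```sql
-- Set mydataset.mytable to expire five days from now.
ALTER TABLE mydataset.mytable
SET OPTIONS (
  expiration_timestamp = TIMESTAMP_ADD(CURRENT_TIMESTAMP(), INTERVAL 5 DAY)
);
```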
When you add a NUMERIC or BIGNUMERIC field to a table and do not specify a rounding mode, the rounding mode is automatically set to the table's default rounding mode. Changing a table's default rounding mode doesn't alter the rounding mode of existing fields.
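As a sketch, a table's default rounding mode can be changed with an ALTER TABLE statement; mydataset.mytable and the ROUND_HALF_EVEN value are placeholders:

```sql
-- Change the default rounding mode used by NUMERIC and BIGNUMERIC
-- fields added later; existing fields keep their rounding mode.
ALTER TABLE mydataset.mytable
SET OPTIONS (
  default_rounding_mode = 'ROUND_HALF_EVEN'
);
```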
Update a table's schema definition
For more information about updating a table's schema definition, see Modifying table schemas.
Rename a table
You can rename a table after it has been created by using the ALTER TABLE RENAME TO statement.
The following example renames mytable to mynewtable:

ALTER TABLE mydataset.mytable RENAME TO mynewtable;
The ALTER TABLE RENAME TO statement recreates the table in the destination dataset with the creation timestamp of the original table. If you have configured dataset-level table expiration, the renamed table might be immediately deleted if its original creation timestamp falls outside of the expiration window.
Limitations on renaming tables
If you want to rename a table that has data streaming into it, you must stop
the streaming, commit any pending streams, and wait
for BigQuery to indicate that streaming
is not in use.
While a table can usually be renamed 5 hours after the last streaming
operation, it might take longer.
Existing table ACLs and row access policies are preserved, but table ACL and
row access policy updates made during the table rename are not preserved.
You can't concurrently rename a table and run a DML statement on that table.
Call the jobs.insert API method and configure a copy job.
Use the client libraries.
Limitations on copying tables
Table copy jobs are subject to the following limitations:
You can't stop a table copy operation after you start it. A table copy
operation runs asynchronously and doesn't stop even when you cancel the job.
You are also charged for data transfer for a cross-region table copy and for
storage in the destination region.
When you copy a table, the name of the destination table must adhere to the same naming conventions as when you create a table.
Table copies are subject to BigQuery limits on copy jobs.
The Google Cloud console supports copying only one table at a time. You
can't overwrite an existing table in the destination dataset. The table must
have a unique name in the destination dataset.
Copying multiple source tables into a destination table is not supported by
the Google Cloud console.
When copying multiple source tables to a destination table using the API,
bq command-line tool, or the client libraries, all source tables must have identical
schemas, including any partitioning or clustering.
Certain table schema updates, such as dropping or renaming
columns, can cause tables to have apparently identical schemas but different
internal representations. This might cause a table copy job to fail with the
error Maximum limit on diverging physical schemas reached. In this case, you can use the CREATE TABLE LIKE statement to ensure that your source table's schema matches the destination table's schema exactly.
The time that BigQuery takes to copy tables might vary
significantly across different runs because the underlying storage is managed
dynamically.
You can't copy and append a source table to a destination table that has more columns than the source table when the additional columns have default values. Instead, you can run INSERT destination_table SELECT * FROM source_table to copy over the data.
If the copy operation overwrites an existing table, then the table-level
access for the existing table is maintained. Tags from
the source table aren't copied to the overwritten table, while tags on the
existing table are retained. However, when you copy tables across regions,
tags on the existing table are removed.
If the copy operation creates a new table, then the table-level access for the
new table is determined by the access policies of the dataset in which the new
table is created. Additionally, tags are copied from
the source table to the new table.
When you copy multiple source tables to a destination table, all source tables
must have identical tags.
Required roles
To perform the tasks in this document, you need the following permissions.
Roles to copy tables and partitions
To get the permissions that you need to copy tables and partitions, ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor) IAM role on the source and destination datasets. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to copy tables and partitions. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to copy tables and partitions:
bigquery.tables.getData on the source and destination datasets
bigquery.tables.get on the source and destination datasets
To get the permission that you need to run a copy job, ask your administrator to grant you the Job User (roles/bigquery.jobUser) IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the bigquery.jobs.create permission, which is required to run a copy job.
You can copy a single table in the following ways:
Using the Google Cloud console.
Using the bq command-line tool's bq cp command.
Using a data definition language (DDL) CREATE TABLE COPY statement.
Calling the jobs.insert API method, configuring a copy job, and specifying the sourceTable property.
Using the client libraries.
The Google Cloud console and the CREATE TABLE COPY statement support only one source table and one destination table in a copy job. To copy multiple source tables to a destination table, you must use the bq command-line tool or the API.
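For example, the following CREATE TABLE COPY statement copies one source table to one destination table. This is a sketch of the DDL approach; both table names are placeholders:

```sql
-- Copy mydataset.mytable to a new table in the same dataset.
CREATE TABLE mydataset.mytable_copy
COPY mydataset.mytable;
```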
You can copy a table, table snapshot, or table clone from one BigQuery region or multi-region to another. This includes any tables that have customer-managed Cloud KMS keys (CMEK) applied.
Copying a table across regions incurs additional data transfer charges according
to BigQuery pricing.
Additional charges are incurred even if you cancel the cross-region table copy
job before it has been completed.
To copy a table across regions, select one of the following options:
Copying a table across regions is subject to the following limitations:
You can't copy a table using the Google Cloud console or the TABLE COPY DDL statement.
You can't copy a table if there are any policy tags on the source table.
You can't copy a table if the source table is larger than 20 physical TiB. See get information about tables for the source table's physical size. Additionally, copying source tables that are larger than 1 physical TiB across regions might require multiple retries to succeed.
You can't copy IAM policies associated with the tables. You can apply the same policies to the destination after the copy is completed.
If the copy operation overwrites an existing table, tags on the existing table are removed.
You can't copy multiple source tables into a single destination table.
You can't copy tables in append mode. If you use write_empty mode, the destination table must not exist.
Time travel information is not copied to the destination region.
When you copy a table clone or snapshot to a new region, a full copy of the
table is created. This incurs additional storage costs.
Expiration time from the source table is copied to the destination table.
View current quota usage
You can view your current usage of query, load, extract, or copy jobs by running an INFORMATION_SCHEMA query to view metadata about the jobs run over a specified time period. You can compare your current usage against the quota limit to determine your quota usage for a particular type of job. The following example query uses the INFORMATION_SCHEMA.JOBS view to list the number of query, load, extract, and copy jobs by project:
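A query along these lines counts recent jobs by project and type. The region-us qualifier and the one-day window are assumptions; adjust them for your region and time period:

```sql
SELECT
  project_id,
  job_type,
  COUNT(*) AS job_count
FROM `region-us`.INFORMATION_SCHEMA.JOBS
WHERE
  creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
  AND job_type IN ('QUERY', 'LOAD', 'EXTRACT', 'COPY')
GROUP BY project_id, job_type;
```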
Maximum number of copy jobs per day per project quota errors
BigQuery returns this error when the number of copy jobs running
in a project has exceeded the daily limit.
To learn more about the limit for copy jobs per day, see Copy jobs.
Error message
Your project exceeded quota for copies per project
Diagnosis
If you'd like to gather more data about where the copy jobs are coming from,
you can try the following:
If your copy jobs are located in a single region or only a few regions, you can try querying the INFORMATION_SCHEMA.JOBS view for specific regions. For example:
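A sketch of such a region-scoped query; the region-eu qualifier and the one-day window are assumptions:

```sql
-- List recent copy jobs in the EU multi-region, newest first.
SELECT
  creation_time,
  job_id,
  user_email
FROM `region-eu`.INFORMATION_SCHEMA.JOBS
WHERE
  job_type = 'COPY'
  AND creation_time > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY)
ORDER BY creation_time DESC;
```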
If the goal of the frequent copy operations is to create a snapshot of data,
consider using table snapshots instead. Table snapshots are a cheaper and faster alternative to copying full tables.
You can request a quota increase by contacting support or sales. It might take several days to review and
process the request. We recommend stating the priority, use case, and the
project ID in the request.
Delete tables
You can delete a table in the following ways:
Using the Google Cloud console.
Using a data definition language (DDL) DROP TABLE statement.
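For example, the following DROP TABLE statement deletes a table; the IF EXISTS clause makes the statement succeed even if the table is already gone. mydataset.mytable is a placeholder:

```sql
-- Delete the table and all of the data it contains.
DROP TABLE IF EXISTS mydataset.mytable;
```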
When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset, or set the expiration time when you create the table.
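As a sketch, a dataset's default table expiration can be set with an ALTER SCHEMA statement; mydataset and the seven-day value are placeholders:

```sql
-- New tables in mydataset expire 7 days after creation
-- unless they set their own expiration time.
ALTER SCHEMA mydataset
SET OPTIONS (
  default_table_expiration_days = 7
);
```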
Deleting a table also deletes any permissions associated with this table. When you recreate a deleted table, you must also manually reconfigure any access permissions previously associated with it.
Required roles
To get the permissions that you need to delete a table, ask your administrator to grant you the Data Editor (roles/bigquery.dataEditor) IAM role on the dataset. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to delete a table. To see the exact permissions that are required, expand the Required permissions section:
Required permissions
The following permissions are required to delete a table:
bigquery.tables.delete
bigquery.tables.get
Tables can be created with an expiration time. Once this time is reached, BigQuery automatically deletes the table.
To view deleted tables, select one of the following options:
Last updated 2026-05-15 UTC.