bq
Issue the bq update command with the --description flag. If you are
updating a dataset in a project other than your default project, add the
project ID to the dataset name in the following format: project_id:dataset.
bq update \
--description "string" \
project_id:dataset
Replace the following:
string: the text that describes the dataset,
in quotes
project_id: your project ID
dataset: the name of the dataset that you're
updating
Examples:
Enter the following command to change the description of mydataset to
"Description of mydataset." mydataset is in your default project.
bq update --description "Description of mydataset" mydataset
Enter the following command to change the description of mydataset to
"Description of mydataset." The dataset is in myotherproject, not your
default project.
bq update \
--description "Description of mydataset" \
myotherproject:mydataset
API
Call datasets.patch and
update the description property in the dataset resource.
Because the datasets.update method replaces the entire dataset resource,
the datasets.patch method is preferred.
import("context""fmt""cloud.google.com/go/bigquery")// updateDatasetDescription demonstrates how the Description metadata of a dataset can// be read and modified.funcupdateDatasetDescription(projectID,datasetIDstring)error{// projectID := "my-project-id"// datasetID := "mydataset"ctx:=context.Background()client,err:=bigquery.NewClient(ctx,projectID)iferr!=nil{returnfmt.Errorf("bigquery.NewClient: %v",err)}deferclient.Close()ds:=client.Dataset(datasetID)meta,err:=ds.Metadata(ctx)iferr!=nil{returnerr}update:=bigquery.DatasetMetadataToUpdate{Description:"Updated Description.",}if_,err=ds.Update(ctx,update,meta.ETag);err!=nil{returnerr}returnnil}
Java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;

public class UpdateDatasetDescription {

  public static void runUpdateDatasetDescription() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String newDescription = "this is the new dataset description";
    updateDatasetDescription(datasetName, newDescription);
  }

  public static void updateDatasetDescription(String datasetName, String newDescription) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDescription(newDescription).build());
      System.out.println("Dataset description updated successfully to " + newDescription);
    } catch (BigQueryException e) {
      System.out.println("Dataset description was not updated \n" + e.toString());
    }
  }
}
Node.js
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateDatasetDescription() {
  // Updates a dataset's description.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = 'my_dataset';

  // Retrieve current dataset metadata
  const dataset = bigquery.dataset(datasetId);
  const [metadata] = await dataset.getMetadata();

  // Set new dataset description
  const description = 'New dataset description.';
  metadata.description = description;

  const [apiResponse] = await dataset.setMetadata(metadata);
  const newDescription = apiResponse.description;

  console.log(`${datasetId} description: ${newDescription}`);
}
Python
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to fetch.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.
dataset.description = "Updated description."
dataset = client.update_dataset(dataset, ["description"])  # Make an API request.

full_dataset_id = "{}.{}".format(dataset.project, dataset.dataset_id)
print(
    "Updated dataset '{}' with description '{}'.".format(
        full_dataset_id, dataset.description
    )
)
Update default table expiration times
You can update a dataset's default table expiration time by using the
Google Cloud console, the bq command-line tool, the datasets.patch API
method, or the client libraries.
You can set a default table expiration time at the dataset level, or you can set
a table's expiration time when the table is created. If you set the expiration
when the table is created, the dataset's default table expiration is ignored. If
you don't set a default table expiration at the dataset level, and you don't
set a table expiration when the table is created, the table never expires and
you must delete the table manually. When a table expires, it's deleted along with all of the data it
contains.
When you update a dataset's default table expiration setting:
If you change the value from Never to a defined expiration time, any tables
that already exist in the dataset won't expire unless the expiration time was
set on the table when it was created.
If you change the value for the default table expiration, any tables
that already exist expire according to the original table expiration setting.
Any new tables created in the dataset have the new table expiration setting
applied unless you specify a different table expiration on the table when it is
created.
The value for default table expiration is expressed differently depending
on where the value is set. Use the method that gives you the appropriate
level of granularity:
In the Google Cloud console, expiration is expressed in days.
In the bq command-line tool, expiration is expressed in seconds.
In the API, expiration is expressed in milliseconds.
To update the default expiration time for a dataset:
Console
In the Explorer panel, expand your project and select a dataset.
Expand the more_vert Actions option and click Open.
In the details panel, click the pencil icon next to Dataset info to edit the expiration.
In the Dataset info dialog, in the Default table expiration section, enter a value for Number of days after table creation.
bq
To update the default expiration time for newly created tables in a dataset,
enter the bq update command with the --default_table_expiration flag.
If you are updating a dataset in a project other than your default project,
add the project ID to the dataset name in the following format: project_id:dataset.
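bq update \
--default_table_expiration integer \
project_id:dataset
Replace the following: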
integer: the default lifetime, in seconds, for
newly created tables. The minimum value is 3600 seconds (one hour). The
expiration time evaluates to the current UTC time plus the integer value.
Specify 0 to remove the existing expiration time. Any table created in
the dataset is deleted integer seconds after
its creation time. This value is applied if you do not set a table
expiration when the table is created.
project_id: your project ID.
dataset: the name of the dataset that you're
updating.
Examples:
Enter the following command to set the default table expiration for
new tables created in mydataset to two hours (7200 seconds) from the
current time. The dataset is in your default project.
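bq update --default_table_expiration 7200 mydataset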
Enter the following command to set the default table expiration for
new tables created in mydataset to two hours (7200 seconds) from the
current time. The dataset is in myotherproject, not your default project.
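bq update --default_table_expiration 7200 myotherproject:mydataset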
API
Call datasets.patch and
update the defaultTableExpirationMs property in the dataset resource.
The expiration is expressed in milliseconds in the API. Because the datasets.update method replaces the entire dataset resource, the datasets.patch method is preferred.
import("context""fmt""time""cloud.google.com/go/bigquery")// updateDatasetDefaultExpiration demonstrats setting the default expiration of a dataset// to a specific retention period.funcupdateDatasetDefaultExpiration(projectID,datasetIDstring)error{// projectID := "my-project-id"// datasetID := "mydataset"ctx:=context.Background()client,err:=bigquery.NewClient(ctx,projectID)iferr!=nil{returnfmt.Errorf("bigquery.NewClient: %v",err)}deferclient.Close()ds:=client.Dataset(datasetID)meta,err:=ds.Metadata(ctx)iferr!=nil{returnerr}update:=bigquery.DatasetMetadataToUpdate{DefaultTableExpiration:24*time.Hour,}if_,err:=client.Dataset(datasetID).Update(ctx,update,meta.ETag);err!=nil{returnerr}returnnil}
Java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;
import java.util.concurrent.TimeUnit;

public class UpdateDatasetExpiration {

  public static void runUpdateDatasetExpiration() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    updateDatasetExpiration(datasetName);
  }

  public static void updateDatasetExpiration(String datasetName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      // Update dataset expiration to one day
      Long newExpiration = TimeUnit.MILLISECONDS.convert(1, TimeUnit.DAYS);

      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDefaultTableLifetime(newExpiration).build());
      System.out.println("Dataset default table expiration updated successfully to " + newExpiration);
    } catch (BigQueryException e) {
      System.out.println("Dataset expiration was not updated \n" + e.toString());
    }
  }
}
Node.js
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateDatasetExpiration() {
  // Updates the lifetime of all tables in the dataset, in milliseconds.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = 'my_dataset';

  // Retrieve current dataset metadata
  const dataset = bigquery.dataset(datasetId);
  const [metadata] = await dataset.getMetadata();

  // Set new dataset metadata
  const expirationTime = 24 * 60 * 60 * 1000;
  metadata.defaultTableExpirationMs = expirationTime.toString();

  const [apiResponse] = await dataset.setMetadata(metadata);
  const newExpirationTime = apiResponse.defaultTableExpirationMs;

  console.log(`${datasetId} expiration: ${newExpirationTime}`);
}
Python
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to fetch.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.
dataset.default_table_expiration_ms = 24 * 60 * 60 * 1000  # In milliseconds.

dataset = client.update_dataset(
    dataset, ["default_table_expiration_ms"]
)  # Make an API request.

full_dataset_id = "{}.{}".format(dataset.project, dataset.dataset_id)
print(
    "Updated dataset {} with new expiration {}".format(
        full_dataset_id, dataset.default_table_expiration_ms
    )
)
Update default partition expiration times
You can update a dataset's default partition expiration by using a SQL DDL
statement, the bq command-line tool, or the datasets.patch API method.
Setting or updating a dataset's default partition expiration isn't currently
supported by the Google Cloud console.
You can set a default partition expiration time at the dataset level that
affects all newly created partitioned tables, or you can set a partition expiration time for individual tables when the partitioned tables are created. If you set
the default partition expiration at the dataset level, and you set the default
table expiration at the dataset level, new partitioned tables will only have a
partition expiration. If both options are set, the default partition expiration
overrides the default table expiration.
If you set the partition expiration time when the partitioned table is created,
that value overrides the dataset-level default partition expiration if it
exists.
If you do not set a default partition expiration at the dataset level, and you
do not set a partition expiration when the table is created, the
partitions never expire and you must delete the partitions
manually.
When you set a default partition expiration on a dataset, the expiration applies
to all partitions in all partitioned tables created in the dataset. When you set
the partition expiration on a table, the expiration applies to all
partitions created in the specified table. Currently, you cannot apply different
expiration times to different partitions in the same table.
When you update a dataset's default partition expiration setting:
If you change the value from never to a defined expiration time, any
partitions that already exist in partitioned tables in the dataset will not
expire unless the partition expiration time was set on the table when it was
created.
If you change the value for the default partition expiration, any
partitions in existing partitioned tables expire according to the original
default partition expiration. Any new partitioned tables created in the dataset
have the new default partition expiration setting applied unless you specify a
different partition expiration on the table when it is created.
The value for default partition expiration is expressed differently depending
on where the value is set. Use the method that gives you the appropriate
level of granularity:
In the bq command-line tool, expiration is expressed in seconds.
In the API, expiration is expressed in milliseconds.
To update the default partition expiration time for a dataset:
Console
Updating a dataset's default partition expiration is not currently supported
by the Google Cloud console.
SQL
To update the default partition expiration time, use the ALTER SCHEMA SET OPTIONS statement to set the default_partition_expiration_days option.
The following example updates the default partition expiration for a
dataset named mydataset:
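-- Example value: 3.75 days (90 hours); adjust to the retention you need.
ALTER SCHEMA mydataset
SET OPTIONS(
    default_partition_expiration_days = 3.75);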
In the Google Cloud console, go to the BigQuery page.
bq
To update the default expiration time for a dataset, enter the bq update command with the --default_partition_expiration flag. If you are updating
a dataset in a project other than your default project,
add the project ID to the dataset name in the following format: project_id:dataset.
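bq update \
--default_partition_expiration integer \
project_id:dataset
Replace the following: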
integer: the default lifetime, in seconds, for
partitions in newly created partitioned tables. This flag has no minimum
value. Specify 0 to remove the existing expiration time. Any partitions in
newly created partitioned tables are deleted integer seconds after the partition's UTC date. This
value is applied if you do not set a partition expiration on the table when
it is created.
project_id: your project ID.
dataset: the name of the dataset that you're
updating.
Examples:
Enter the following command to set the default partition expiration for
new partitioned tables created in mydataset to 26 hours (93,600 seconds).
The dataset is in your default project.
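bq update --default_partition_expiration 93600 mydataset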
Enter the following command to set the default partition expiration for
new partitioned tables created in mydataset to 26 hours (93,600 seconds).
The dataset is in myotherproject, not your default project.
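bq update --default_partition_expiration 93600 myotherproject:mydataset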
API
Call datasets.patch and
update the defaultPartitionExpirationMs property in the dataset resource.
The expiration is expressed in milliseconds. Because the datasets.update method replaces the entire dataset resource, the datasets.patch method is
preferred.
Update rounding modes
You can update a dataset's default rounding mode by using the ALTER SCHEMA SET OPTIONS DDL statement. This sets the default rounding mode for new tables created in the dataset. It
has no impact on new columns added to existing tables.
Setting the default rounding mode on a table in the dataset overrides this
option.
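A minimal example, assuming a hypothetical dataset named mydataset and the ROUND_HALF_EVEN mode:
ALTER SCHEMA mydataset
SET OPTIONS(
    default_rounding_mode = 'ROUND_HALF_EVEN');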
Update time travel windows
You can update a dataset's time travel window by using the Google Cloud console,
the ALTER SCHEMA SET OPTIONS statement, the bq update command, or by
calling the datasets.patch or datasets.update API
method. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset
resource.
Console
In the Explorer panel, expand your project and select a dataset.
Expand the more_vert Actions option and click Open.
In the Details panel, click mode_edit Edit details.
Expand Advanced options, then select the Time travel window to use.
Click Save.
SQL
Use the ALTER SCHEMA SET OPTIONS statement with the max_time_travel_hours option to specify the time travel
window when altering a dataset. The max_time_travel_hours value must
be an integer expressed in multiples of 24 (48, 72, 96, 120, 144, 168)
between 48 (2 days) and 168 (7 days).
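For example, the following statement sets the time travel window for a hypothetical dataset named mydataset to 168 hours (7 days):
ALTER SCHEMA mydataset
SET OPTIONS(
    max_time_travel_hours = 168);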
In the Google Cloud console, go to the BigQuery page.
bq
Use the bq update command with the --max_time_travel_hours flag to specify the time travel
window when altering a dataset. The --max_time_travel_hours value must
be an integer expressed in multiples of 24 (48, 72, 96, 120, 144, 168)
between 48 (2 days) and 168 (7 days).
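bq update \
--dataset=true \
--max_time_travel_hours=HOURS \
DATASET_NAME
Replace the following: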
DATASET_NAME: the name of the dataset that
you're updating
HOURS: the time travel window's duration,
in hours
API
Call the datasets.patch or datasets.update method with a defined dataset resource in which you
have specified a value for the maxTimeTravelHours field. The maxTimeTravelHours value must be an integer expressed in multiples of 24
(48, 72, 96, 120, 144, 168) between 48 (2 days) and 168 (7 days).
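The client libraries can also set this field. The following Python sketch is illustrative rather than an official sample; it assumes a recent google-cloud-bigquery release that exposes the Dataset.max_time_travel_hours property:
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to update.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.

# Assumption: max_time_travel_hours is exposed by recent library versions.
# The value must be a multiple of 24 between 48 and 168.
dataset.max_time_travel_hours = 168

dataset = client.update_dataset(
    dataset, ["max_time_travel_hours"]
)  # Make an API request, sending only the changed field.

print(
    "Updated dataset {}.{} with a time travel window of {} hours.".format(
        dataset.project, dataset.dataset_id, dataset.max_time_travel_hours
    )
)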
Update storage billing models
You can alter the storage billing model for a dataset. Set the storage_billing_model value to PHYSICAL to use
physical bytes when calculating storage charges, or to LOGICAL to use
logical bytes. LOGICAL is the default.
When you change a dataset's billing model, it takes 24 hours for the
change to take effect.
After you change a dataset's storage billing model, you must wait 14 days
before you can change the storage billing model again.
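The storage billing model can also be set with the ALTER SCHEMA SET OPTIONS statement. A minimal sketch, assuming a dataset named mydataset:
ALTER SCHEMA mydataset
SET OPTIONS(
    -- Use 'LOGICAL' to switch back to logical storage billing.
    storage_billing_model = 'PHYSICAL');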
Console
In the Explorer panel, expand your project and select a dataset.
Expand the more_vert Actions option and click Open.
In the Details panel, click mode_edit Edit details.
Expand Advanced options, then select Enable physical storage
billing model to use physical storage billing, or deselect it to
use logical storage billing.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-03 UTC."],[[["\u003cp\u003eBigQuery dataset properties such as access controls, billing model, default expiration times for tables and partitions, rounding mode, description, labels, and time travel windows can be updated after dataset creation.\u003c/p\u003e\n"],["\u003cp\u003eUpdating dataset descriptions can be done via the Google Cloud console, \u003ccode\u003ebq\u003c/code\u003e command-line tool, \u003ccode\u003edatasets.patch\u003c/code\u003e API method, or client libraries, using methods like editing details, \u003ccode\u003eALTER SCHEMA SET OPTIONS\u003c/code\u003e, or updating the \u003ccode\u003edescription\u003c/code\u003e property.\u003c/p\u003e\n"],["\u003cp\u003eThe default table expiration time can be updated to manage data retention, and existing tables will not be affected by the new setting unless their expiration was set at creation, while new tables will inherit the updated default expiration unless specified otherwise.\u003c/p\u003e\n"],["\u003cp\u003eDefault partition expiration can be set to manage the lifecycle of partitions in new partitioned tables, overriding the table expiration if set, and it can be updated using the \u003ccode\u003ebq\u003c/code\u003e command-line tool, \u003ccode\u003edatasets.patch\u003c/code\u003e API method, client libraries, or via \u003ccode\u003eALTER SCHEMA SET OPTIONS\u003c/code\u003e statement, but it is not currently supported in the Google Cloud console.\u003c/p\u003e\n"],["\u003cp\u003eThe billing model for dataset storage can be changed to either \u003ccode\u003ePHYSICAL\u003c/code\u003e or \u003ccode\u003eLOGICAL\u003c/code\u003e, but there is a waiting period of 24 hours for the change to be implemented, and you must wait 14 days before changing it again.\u003c/p\u003e\n"]]],[],null,[]]