import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Dataset;

public class UpdateDatasetDescription {

  public static void runUpdateDatasetDescription() {
    // TODO(developer): Replace these variables before running the sample.
    String datasetName = "MY_DATASET_NAME";
    String newDescription = "this is the new dataset description";
    updateDatasetDescription(datasetName, newDescription);
  }

  public static void updateDatasetDescription(String datasetName, String newDescription) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      Dataset dataset = bigquery.getDataset(datasetName);
      bigquery.update(dataset.toBuilder().setDescription(newDescription).build());
      System.out.println("Dataset description updated successfully to " + newDescription);
    } catch (BigQueryException e) {
      System.out.println("Dataset description was not updated \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function updateDatasetDescription() {
  // Updates a dataset's description.

  // TODO(developer): Set datasetId to the ID of the dataset to update.
  // const datasetId = 'my_dataset';

  // Retrieve current dataset metadata
  const dataset = bigquery.dataset(datasetId);
  const [metadata] = await dataset.getMetadata();

  // Set new dataset description
  const description = 'New dataset description.';
  metadata.description = description;
  const [apiResponse] = await dataset.setMetadata(metadata);
  const newDescription = apiResponse.description;

  console.log(`${datasetId} description: ${newDescription}`);
}
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset to fetch.
# dataset_id = 'your-project.your_dataset'

dataset = client.get_dataset(dataset_id)  # Make an API request.

dataset.description = "Updated description."
dataset = client.update_dataset(dataset, ["description"])  # Make an API request.

full_dataset_id = "{}.{}".format(dataset.project, dataset.dataset_id)
print(
    "Updated dataset '{}' with description '{}'.".format(
        full_dataset_id, dataset.description
    )
)
Update default table expiration times
You can update a dataset's default table expiration time by using the
Google Cloud console, the bq command-line tool, or the BigQuery API.
You can set a default table expiration time at the dataset level, or you can set
a table's expiration time when the table is created. If you set the expiration
when the table is created, the dataset's default table expiration is ignored. If
you don't set a default table expiration at the dataset level, and you don't
set a table expiration when the table is created, the table never expires and
you must delete the table manually. When a table expires, it's deleted along with all of the data it
contains.
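The precedence rules above can be condensed into a small helper. This is an illustrative sketch, not part of the BigQuery API; the function name and millisecond values are hypothetical:

```python
from typing import Optional


def effective_table_expiration_ms(
    table_expiration_ms: Optional[int],
    dataset_default_expiration_ms: Optional[int],
) -> Optional[int]:
    """Resolve which expiration applies to a table.

    An expiration set on the table at creation time overrides the dataset
    default. If neither is set, the table never expires (None) and must
    be deleted manually.
    """
    if table_expiration_ms is not None:
        return table_expiration_ms
    return dataset_default_expiration_ms


# A table created with its own expiration ignores the dataset default:
print(effective_table_expiration_ms(3600000, 7200000))  # 3600000
```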
When you update a dataset's default table expiration setting:
If you change the value from Never to a defined expiration time, any tables
that already exist in the dataset won't expire unless the expiration time was
set on the table when it was created.
If you change the value for the default table expiration, any tables
that already exist expire according to the original table expiration setting.
Any new tables created in the dataset have the new table expiration setting
applied unless you specify a different table expiration on the table when it is
created.
The value for default table expiration is expressed differently depending
on where the value is set. Use the method that gives you the appropriate
level of granularity:
In the Google Cloud console, expiration is expressed in days.
In the bq command-line tool, expiration is expressed in seconds.
In the API, expiration is expressed in milliseconds.
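As a concrete example, the same seven-day default expiration is written differently per interface; a quick sanity check of the conversions:

```python
# One value, three representations: the Google Cloud console takes days,
# the bq command-line tool takes seconds, and the API takes milliseconds.
expiration_days = 7
expiration_seconds = expiration_days * 24 * 60 * 60  # for bq
expiration_ms = expiration_seconds * 1000            # for the API

print(expiration_seconds)  # 604800
print(expiration_ms)       # 604800000
```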
To update the default expiration time for a dataset:
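With the google-cloud-bigquery Python client, the update can patch just this one field. A minimal sketch, assuming a placeholder dataset ID; running the function requires the client library and application default credentials:

```python
def update_default_table_expiration(dataset_id: str, expiration_ms: int) -> None:
    """Set a dataset's default table expiration, in milliseconds."""
    # Imported inside the function so the sketch stays self-contained;
    # calling it requires google-cloud-bigquery and valid credentials.
    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset(dataset_id)  # API request.
    dataset.default_table_expiration_ms = expiration_ms
    # Only the listed field is sent in the update request.
    client.update_dataset(dataset, ["default_table_expiration_ms"])


# The API expects milliseconds; for example, a two-hour default:
two_hours_ms = 2 * 60 * 60 * 1000
```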
Update default partition expiration times
Setting or updating a dataset's default partition expiration isn't
supported by the Google Cloud console.
You can set a default partition expiration time at the dataset level that
affects all newly created partitioned tables, or you can set a partition expiration time for individual tables when the partitioned tables are created. If you set
the default partition expiration at the dataset level, and you set the default
table expiration at the dataset level, new partitioned tables will only have a
partition expiration. If both options are set, the default partition expiration
overrides the default table expiration.
If you set the partition expiration time when the partitioned table is created,
that value overrides the dataset-level default partition expiration if it
exists.
If you do not set a default partition expiration at the dataset level, and you
do not set a partition expiration when the table is created, the
partitions never expire and you must delete the partitions manually.
When you set a default partition expiration on a dataset, the expiration applies
to all partitions in all partitioned tables created in the dataset. When you set
the partition expiration on a table, the expiration applies to all
partitions created in the specified table. You cannot apply different
expiration times to different partitions in the same table.
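The interaction between the two dataset-level defaults can be sketched as one hypothetical resolver (names and return shape are illustrative, not a BigQuery API): when both defaults are set, a new partitioned table gets only the partition expiration.

```python
from typing import Optional, Tuple


def new_partitioned_table_expirations(
    default_table_expiration_ms: Optional[int],
    default_partition_expiration_ms: Optional[int],
) -> Tuple[Optional[int], Optional[int]]:
    """Return (table_expiration, partition_expiration) applied to a new
    partitioned table created in the dataset, per the rules above.
    """
    if default_partition_expiration_ms is not None:
        # The default partition expiration overrides the default
        # table expiration for new partitioned tables.
        return (None, default_partition_expiration_ms)
    return (default_table_expiration_ms, None)
```

For instance, with a one-hour default table expiration and a one-day default partition expiration, a new partitioned table gets only the one-day partition expiration.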
When you update a dataset's default partition expiration setting:
If you change the value from never to a defined expiration time, any
partitions that already exist in partitioned tables in the dataset will not
expire unless the partition expiration time was set on the table when it was
created.
If you are changing the value for the default partition expiration, any
partitions in existing partitioned tables expire according to the original
default partition expiration. Any new partitioned tables created in the dataset
have the new default partition expiration setting applied unless you specify a
different partition expiration on the table when it is created.
The value for default partition expiration is expressed differently depending
on where the value is set. Use the method that gives you the appropriate
level of granularity:
In the bq command-line tool, expiration is expressed in seconds.
In the API, expiration is expressed in milliseconds.
To update the default partition expiration time for a dataset:
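A sketch using the Python client's default_partition_expiration_ms property, assuming a placeholder dataset ID; running the function requires the client library and credentials:

```python
def update_default_partition_expiration(dataset_id: str, expiration_ms: int) -> None:
    """Set a dataset's default partition expiration, in milliseconds."""
    # Calling this requires google-cloud-bigquery and valid credentials.
    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset(dataset_id)  # API request.
    dataset.default_partition_expiration_ms = expiration_ms
    client.update_dataset(dataset, ["default_partition_expiration_ms"])


# bq takes seconds and the API milliseconds; e.g. a five-day expiration:
five_days_seconds = 5 * 24 * 60 * 60      # for bq
five_days_ms = five_days_seconds * 1000   # for the API
```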
Update rounding modes
Updating a dataset's default rounding mode sets the rounding mode for new
tables created in the dataset. It has no impact on new columns added to
existing tables.
Setting the default rounding mode on a table in the dataset overrides this
option.
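In the REST API, this setting corresponds to the dataset resource's defaultRoundingMode field. A minimal sketch of a datasets.patch request body; the chosen mode here is an example, not a recommendation:

```python
import json

# Body for a datasets.patch request that changes only the default
# rounding mode applied to new tables in the dataset.
patch_body = {"defaultRoundingMode": "ROUND_HALF_EVEN"}

print(json.dumps(patch_body))
```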
Update time travel windows
You can update a dataset's time travel window in the following ways:
Calling the datasets.patch or datasets.update API
method. The update method replaces the entire dataset resource, whereas the patch method only replaces fields that are provided in the submitted dataset
resource.
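Through the API, the window is set with the dataset's maxTimeTravelHours field, which must be a multiple of 24 between 48 and 168 hours (2 to 7 days). A small sketch that builds a datasets.patch body with that validation; the helper name is hypothetical:

```python
def time_travel_patch_body(hours: int) -> dict:
    """Build a datasets.patch body setting the time travel window.

    BigQuery accepts 48 to 168 hours (2 to 7 days), in multiples of 24.
    The JSON API transmits the int64 field as a string.
    """
    if not 48 <= hours <= 168 or hours % 24 != 0:
        raise ValueError("window must be 48-168 hours, a multiple of 24")
    return {"maxTimeTravelHours": str(hours)}


print(time_travel_patch_body(48))
```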
Update storage billing models
You can alter the storage billing model for a dataset. Set the storage_billing_model value to PHYSICAL to use
physical bytes when calculating storage charges, or to LOGICAL to use
logical bytes. LOGICAL is the default.
When you change a dataset's billing model, it takes 24 hours for the
change to take effect.
Once you change a dataset's storage billing model, you must wait 14 days
before you can change the storage billing model again.
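With the Python client, this is the dataset's storage_billing_model property. A hedged sketch, assuming a placeholder dataset ID; running the function requires the client library and credentials:

```python
def set_storage_billing_model(dataset_id: str, model: str) -> None:
    """Switch a dataset between LOGICAL and PHYSICAL storage billing."""
    assert model in ("LOGICAL", "PHYSICAL")
    # Calling this requires google-cloud-bigquery and valid credentials.
    from google.cloud import bigquery

    client = bigquery.Client()
    dataset = client.get_dataset(dataset_id)  # API request.
    dataset.storage_billing_model = model
    client.update_dataset(dataset, ["storage_billing_model"])


# LOGICAL is the default; a change takes up to 24 hours to apply, and
# another change is locked out for 14 days.
DEFAULT_MODEL = "LOGICAL"
```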
Last updated 2026-05-15 UTC.