Estimate and control costs
This page describes best practices for estimating and controlling costs in BigQuery.
The primary costs in BigQuery are compute, used for query processing,
and storage, for data that is stored in BigQuery.
BigQuery offers two pricing models for query processing: on-demand and capacity-based pricing. Each model offers different
best practices for cost control. For data stored in BigQuery, costs
depend on the storage billing model configured for each dataset.
Understand compute pricing for BigQuery
There are subtle differences in compute pricing for BigQuery that
affect capacity planning and cost control.
Pricing models
For on-demand compute in BigQuery, you incur charges per TiB of data
processed by your queries.
Alternatively, for capacity compute in BigQuery, you incur
charges for the compute resources (slots) that are
used to process the query. To use this model, you configure reservations for slots.
Reservations have the following features:
They are allocated in pools of slots, and they let you manage capacity and
isolate workloads in ways that make sense for your organization.
They must reside in one administration project and are subject to quotas and limits.
The capacity pricing model offers several editions,
which all offer a pay-as-you-go option that's charged in slot hours.
Enterprise and Enterprise Plus editions also provide optional
one- or three-year slot commitments that can save money over the pay-as-you-go rate.
You can also set autoscaling reservations using the pay-as-you-go option.
When you use the on-demand pricing model, the only way to restrict costs is to
configure project-level or user-level daily quotas. However, these quotas
enforce a hard cap that prevents users from running queries beyond the quota
limit. To set quotas, see Create custom query quotas.
When you use the capacity pricing model with slot reservations, you specify the
maximum number of slots that are available to a reservation. You can also
purchase slot commitments that provide discounted prices for a committed period
of time.
You can use editions fully on demand by setting the baseline of the reservation
to 0 and the maximum to a setting that meets your workload needs.
BigQuery automatically scales up to the number of slots
needed for your workload, never exceeding the maximum that you set. For more
information, see Workload management using reservations.
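As an illustration, the following bq command sketches how you might create such a reservation with a baseline of 0 slots and a maximum set by autoscaling; the project, location, reservation name, and slot counts are placeholders, and the exact flag names can vary by bq CLI version:

bq mk \
  --project_id=my-admin-project \
  --location=US \
  --reservation \
  --edition=ENTERPRISE \
  --slots=0 \
  --autoscale_max_slots=100 \
  my_reservation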
The following sections outline additional best practices that you can use
to further control your query costs.
Create custom query quotas
Best practice: Use custom daily query quotas to limit the amount of data
processed per day.
You can manage costs by setting a custom quota that specifies a limit on the amount of data processed per day per project
or per user. Users are not able to run queries once the quota is reached.
When you enter a query in the Google Cloud console, the query validator
verifies the query syntax and provides an estimate of the number of bytes read.
You can use this estimate to calculate query cost in the pricing calculator.
If your query is not valid, then the query validator displays an error
message. For example:
Not found: Table myProject:myDataset.myTable was not found in location US
If your query is valid, then the query validator provides an estimate of the
number of bytes required to process the query. For example:
If the query is valid, then a check mark automatically appears along with the amount of data that the query will process. If the query is invalid, then an exclamation point appears along with an error message.
bq
Enter a query like the following using the --dry_run flag.
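A minimal dry-run sketch, reusing the public usa_names table from the samples that follow (the query itself is only illustrative):

bq query \
  --use_legacy_sql=false \
  --dry_run \
  'SELECT name, COUNT(*) AS name_count
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   WHERE state = "WA"
   GROUP BY name'

The command reports the estimated number of bytes the query would process without running it.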
import("context""fmt""io""cloud.google.com/go/bigquery")// queryDryRun demonstrates issuing a dry run query to validate query structure and// provide an estimate of the bytes scanned.funcqueryDryRun(wio.Writer,projectIDstring)error{// projectID := "my-project-id"ctx:=context.Background()client,err:=bigquery.NewClient(ctx,projectID)iferr!=nil{returnfmt.Errorf("bigquery.NewClient: %v",err)}deferclient.Close()q:=client.Query(`SELECTname,COUNT(*) as name_countFROM `+"`bigquery-public-data.usa_names.usa_1910_2013`"+`WHERE state = 'WA'GROUP BY name`)q.DryRun=true// Location must match that of the dataset(s) referenced in the query.q.Location="US"job,err:=q.Run(ctx)iferr!=nil{returnerr}// Dry run is not asynchronous, so get the latest status and statistics.status:=job.LastStatus()iferr:=status.Err();err!=nil{returnerr}fmt.Fprintf(w,"This query will process %d bytes\n",status.Statistics.TotalBytesProcessed)returnnil}
Java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Job;
import com.google.cloud.bigquery.JobInfo;
import com.google.cloud.bigquery.JobStatistics;
import com.google.cloud.bigquery.QueryJobConfiguration;

// Sample to run dry query on the table
public class QueryDryRun {

  public static void runQueryDryRun() {
    String query =
        "SELECT name, COUNT(*) as name_count "
            + "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
            + "WHERE state = 'WA' "
            + "GROUP BY name";
    queryDryRun(query);
  }

  public static void queryDryRun(String query) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

      QueryJobConfiguration queryConfig =
          QueryJobConfiguration.newBuilder(query).setDryRun(true).setUseQueryCache(false).build();

      Job job = bigquery.create(JobInfo.of(queryConfig));
      JobStatistics.QueryStatistics statistics = job.getStatistics();

      System.out.println(
          "Query dry run performed successfully." + statistics.getTotalBytesProcessed());
    } catch (BigQueryException e) {
      System.out.println("Query not performed \n" + e.toString());
    }
  }
}
Node.js
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function queryDryRun() {
  // Runs a dry query of the U.S. given names dataset for the state of Texas.
  const query = `SELECT name
    FROM \`bigquery-public-data.usa_names.usa_1910_2013\`
    WHERE state = 'TX'
    LIMIT 100`;

  // For all options, see https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/query
  const options = {
    query: query,
    // Location must match that of the dataset(s) referenced in the query.
    location: 'US',
    dryRun: true,
  };

  // Run the query as a job
  const [job] = await bigquery.createQueryJob(options);

  // Print the status and statistics
  console.log('Status:');
  console.log(job.metadata.status);
  console.log('\nJob Statistics:');
  console.log(job.metadata.statistics);
}
PHP
use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $query = 'SELECT id, view_count FROM `bigquery-public-data.stackoverflow.posts_questions`';

// Construct a BigQuery client object.
$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);

// Set job configs
$jobConfig = $bigQuery->query($query);
$jobConfig->useQueryCache(false);
$jobConfig->dryRun(true);

// Extract query results
$queryJob = $bigQuery->startJob($jobConfig);
$info = $queryJob->info();
printf('This query will process %s bytes' . PHP_EOL, $info['statistics']['totalBytesProcessed']);
Python
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)

# Start the query, passing in the extra configuration.
query_job = client.query(
    (
        "SELECT name, COUNT(*) as name_count "
        "FROM `bigquery-public-data.usa_names.usa_1910_2013` "
        "WHERE state = 'WA' "
        "GROUP BY name"
    ),
    job_config=job_config,
)  # Make an API request.

# A dry run query completes immediately.
print("This query will process {} bytes.".format(query_job.total_bytes_processed))
Estimate query costs
When using the on-demand pricing model,
you can estimate the cost of running a
query by calculating the number of bytes processed.
On-demand query size calculation
To calculate the number of bytes processed by the various types of queries,
see the on-demand query size calculation sections in the BigQuery pricing documentation.
Best practice: Use the maximum bytes billed setting to limit query costs
when using the on-demand pricing model.
You can limit the number of bytes billed for a query by using the maximum bytes
billed setting. When you set maximum bytes billed, the number of bytes that the
query reads is estimated before query execution. If the number of
estimated bytes exceeds the limit, then the query fails without incurring a
charge.
For clustered tables, the estimation of the number of bytes billed for a query
is an upper bound, and can be higher than the actual number of bytes billed
after running the query. So in some cases, if you set the maximum bytes billed,
a query on a clustered table can fail, even though the actual bytes billed
wouldn't exceed the maximum bytes billed setting.
If a query fails because of the maximum bytes billed setting, an error similar
to the following is returned:
Error: Query exceeded limit for bytes billed: 1000000. 10485760 or higher
required.
To set the maximum bytes billed:
Console
In the Query editor, click More > Query settings > Advanced options.
In the Maximum bytes billed field, enter an integer.
Click Save.
bq
Use the bq query command with the --maximum_bytes_billed flag.
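For example, the following sketch caps a query at roughly 1 GB billed; the limit and query are illustrative:

bq query \
  --use_legacy_sql=false \
  --maximum_bytes_billed=1000000000 \
  'SELECT name
   FROM `bigquery-public-data.usa_names.usa_1910_2013`
   WHERE state = "WA"'

If the estimated bytes exceed the limit, the query fails and no bytes are billed.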
Best practice: For non-clustered tables, don't use a LIMIT clause as a
method of cost control.
For non-clustered tables, applying a LIMIT clause to a query doesn't affect
the amount of data that is read. You are billed for reading all bytes in the
entire table as indicated by the query, even though the query returns only a
subset. With a clustered table, a LIMIT clause can reduce the number of bytes
scanned, because scanning stops when enough blocks are scanned to get the
result. You are billed for only the bytes that are scanned.
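You can verify this with dry runs: for a table that isn't clustered, the estimates with and without a LIMIT clause should be the same. The public table below is used only as an illustration:

bq query --use_legacy_sql=false --dry_run \
  'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013`'

bq query --use_legacy_sql=false --dry_run \
  'SELECT name FROM `bigquery-public-data.usa_names.usa_1910_2013` LIMIT 10'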
Materialize query results in stages
Best practice: If possible, materialize your query results in stages.
If you create a large, multi-stage query, each time you run it,
BigQuery reads all the data that is required by the query. You are
billed for all the data that is read each time the query is run.
Instead, break your query into stages where each stage materializes the query
results by writing them to a destination table.
Querying the smaller destination table reduces the amount of data that is read
and lowers costs. The cost of storing the materialized results is much less than
the cost of processing large amounts of data.
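A minimal sketch of this pattern, assuming hypothetical mydataset.raw_events and mydataset.daily_events tables: the first statement materializes an intermediate result, and later queries read the smaller table instead of rescanning the source:

bq query --use_legacy_sql=false \
  'CREATE OR REPLACE TABLE mydataset.daily_events AS
   SELECT event_date, user_id, COUNT(*) AS event_count
   FROM mydataset.raw_events
   GROUP BY event_date, user_id'

bq query --use_legacy_sql=false \
  'SELECT event_date, SUM(event_count) AS total_events
   FROM mydataset.daily_events
   GROUP BY event_date'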
Control workload costs
This section describes best practices for controlling costs within a workload. A workload is a set of related queries. For example, a workload can be a data
transformation pipeline that runs daily, a set of dashboards run by a group of
business analysts, or several ad-hoc queries run by a set of data scientists.
Use the Google Cloud pricing calculator
Best practice: Use the Google Cloud pricing calculator to create an overall monthly cost estimate for BigQuery
based on projected usage. You can then compare this estimate to your actual
costs to identify areas for optimization.
For the on-demand pricing model:
Choose the location where your queries will run.
For Amount of data queried, enter the estimated bytes read from your dry run or
the query validator.
Enter your estimations of storage usage for Active storage, Long-term storage, Streaming inserts, and Streaming reads.
You only need to estimate either physical storage or logical storage, depending on the dataset storage billing model.
The estimate appears in the Cost details panel. For more information about the estimated cost, click Open detailed view. You can also download and share the cost estimate.
For the capacity-based pricing model (editions):
Choose the Maximum slots, Baseline slots, optional Commitment, and Estimated utilization of autoscaling.
Choose the location where the data is stored.
Enter your estimations of storage usage for Active storage, Long-term storage, Streaming inserts, and Streaming reads.
You only need to estimate either physical storage or logical storage, depending on the dataset storage billing model.
The estimate appears in the Cost details panel. For more information about the estimated cost, click Open detailed view. You can also download and share the cost estimate.
Use the slot estimator
Best practice: Use the slot estimator to estimate the number of slots required for your workloads.
The BigQuery slot estimator helps you to manage slot capacity based on historical performance metrics.
In addition, customers using the on-demand pricing model can view sizing
recommendations for commitments and autoscaling reservations with similar performance
when moving to capacity-based pricing.
Cancel unnecessary long-running jobs
To free capacity, check on long-running jobs to make sure that they should
continue running. If not, cancel them.
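For example, a running job can be cancelled from the bq command-line tool; the job ID and location below are placeholders:

bq --location=US cancel bqjob_123x456_789y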
View costs using a dashboard
Best practice: Create a dashboard to analyze your Cloud Billing data so you can
monitor and make adjustments to your BigQuery usage.
Best practice: Use Cloud Billing budgets to monitor your BigQuery charges in one place.
Cloud Billing budgets let you track your actual costs against your planned
costs. After you've set a budget amount, you set budget alert threshold rules
that are used to trigger email notifications. Budget alert emails help you stay
informed about how your BigQuery spend is tracking against your
budget.
Control storage costs
When you load data into BigQuery storage, the data is subject to
BigQuery storage pricing.
For older data, you can automatically take advantage of BigQuery
long-term storage pricing.
If you have a table that is not modified for 90 consecutive days, the price of
storage for that table automatically drops by 50 percent. If you have a
partitioned table, each partition is considered separately for eligibility for
long-term pricing, subject to the same rules as non-partitioned tables.
Configure the storage billing model
Best practice: Optimize the storage billing model based on your usage
patterns.
BigQuery supports storage billing using logical (uncompressed)
or physical (compressed) bytes, or a combination of both. The storage billing model configured for each dataset determines your storage pricing, but it does not
impact query performance.
You can use the INFORMATION_SCHEMA views to determine the storage billing model
that works best based on your usage patterns.
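As a hedged sketch of that kind of analysis, the following query compares logical and physical bytes per dataset using the TABLE_STORAGE view; the region qualifier is an assumption that you would adjust for your location, and you would apply your own regional prices to the results:

bq query --use_legacy_sql=false \
  'SELECT
     table_schema AS dataset_name,
     SUM(total_logical_bytes) / POW(1024, 3) AS logical_gib,
     SUM(total_physical_bytes) / POW(1024, 3) AS physical_gib
   FROM `region-us`.INFORMATION_SCHEMA.TABLE_STORAGE
   GROUP BY dataset_name
   ORDER BY physical_gib DESC'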
Avoid overwriting tables
Best practice: When you are using the physical storage billing model, avoid
repeatedly overwriting tables.
When you overwrite a table, for example by using the --replace parameter
in batch load jobs or by using the TRUNCATE TABLE SQL statement, the replaced data is kept for the duration of the time travel and fail-safe windows.
If you overwrite a table frequently, you will incur additional storage charges.
Instead, you can incrementally load data into a table by using the WRITE_APPEND parameter in load jobs, the MERGE SQL statement, or the Storage Write API.
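For example, an append-only load from the bq command-line tool might look like the following sketch; the dataset, table, and file path are placeholders, and bq load appends by default:

bq load \
  --source_format=CSV \
  --noreplace \
  mydataset.mytable \
  'gs://my-bucket/new_rows.csv'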
Reduce the time travel window
Best practice: Based on your requirements, you can lower the time travel window.
Reducing the time travel window from the default
value of seven days reduces the retention period for data deleted from or changed in a
table. You are billed for time travel storage only when using the physical (compressed) storage billing model.
The time travel window is set at the dataset level. You can also set the
default time travel window for new datasets using configuration settings.
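As an illustration, assuming a dataset named mydataset, the bq command-line tool can set the time travel window in hours; 48 hours is the minimum, and the value must be a multiple of 24:

bq update --dataset --max_time_travel_hours=48 myproject:mydataset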
Use table expiration for destination tables
Best practice: If you are writing large query results to a destination
table, use the default table expiration time to remove the data when it's no
longer needed.
Keeping large result sets in BigQuery storage has a cost. If you
don't need permanent access to the results, use the default table expiration to automatically delete the data for you.
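For example, assuming a dataset named mydataset that holds scratch results, the following sketch sets a default table expiration of seven days (604800 seconds) so that new tables in the dataset are deleted automatically:

bq update --dataset --default_table_expiration=604800 myproject:mydataset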
Archive data to Cloud Storage
Best practice: Consider archiving data in Cloud Storage.
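If you decide to archive, a minimal export sketch with the bq command-line tool follows; the table, bucket, and format are placeholders, and you should weigh this against long-term storage pricing first:

bq extract \
  --destination_format=PARQUET \
  mydataset.old_table \
  'gs://my-archive-bucket/old_table/*.parquet'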
Troubleshooting BigQuery cost discrepancies and unexpected charges
Follow these steps to troubleshoot unexpected BigQuery charges or cost discrepancies:
To understand where the charges for BigQuery are coming from in the Cloud Billing report, start by grouping charges by SKU so that you can see the usage and charges for the corresponding BigQuery services.
Then, review the pricing for the corresponding SKUs on the SKU documentation page or the Pricing page in the Cloud Billing UI to understand which feature each SKU covers, for example, BigQuery Storage Read API, long-term storage, on-demand pricing, or Standard edition.
After identifying the corresponding SKUs, use the INFORMATION_SCHEMA views to identify the specific resources associated with these charges, for example:
If you are charged for on-demand analysis, look into the INFORMATION_SCHEMA.JOBS view examples to determine the jobs driving costs and the users who launched them.
Take into account that a Daily time period in the Cloud Billing report starts at midnight US and Canadian Pacific Time (UTC-8) and observes daylight saving time shifts in the United States. Adjust your calculations and data aggregations to match the same timeframes.
Filter by project if there are multiple projects attached to the billing account and you want to review charges coming from a specific project.
Make sure to select the correct region when performing investigations.
Unexpected charges related to queries, reservations and commitments
Troubleshooting unexpected charges related to job execution depends on the origin of these charges:
If you see an increase in on-demand analysis costs, this can be related to an increase in the number of jobs that were launched or a change in the amount of data that those jobs process. Investigate this by querying the INFORMATION_SCHEMA.JOBS view, as shown in the sketch after this list.
If there is an increase in charges for committed slots, investigate this by querying INFORMATION_SCHEMA.CAPACITY_COMMITMENT_CHANGES to see if new commitments have been purchased or modified.
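A hedged sketch of such an investigation follows; it aggregates on-demand bytes billed by user over the last 30 days using the JOBS_BY_PROJECT view. The region qualifier is an assumption to adjust for your location, and SCRIPT jobs are excluded to avoid double counting, as noted later on this page:

bq query --use_legacy_sql=false \
  'SELECT
     user_email,
     SUM(total_bytes_billed) / POW(1024, 4) AS tib_billed
   FROM `region-us`.INFORMATION_SCHEMA.JOBS_BY_PROJECT
   WHERE creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
     AND job_type = "QUERY"
     AND statement_type != "SCRIPT"
   GROUP BY user_email
   ORDER BY tib_billed DESC'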
Slot-hours billed are larger than slot-hours calculated from the INFORMATION_SCHEMA.JOBS view
When using an autoscaling reservation, billing is calculated according to the number of scaled slots, not the number of slots used. BigQuery autoscales in multiples of 50 slots, so you are billed for the nearest multiple of 50 even if fewer slots are actually used.
The autoscaler has a one-minute minimum period before scaling down, which means that at least one minute is charged even if a query used the slots for less time, for example, for only 10 seconds out of the minute. The correct way to estimate charges for an autoscaling reservation is documented on the Slots Autoscaling page. For more information about using autoscaling efficiently, see the autoscaling best practices.
A similar scenario applies to non-autoscaling reservations: billing is calculated according to the number of slots provisioned, not the number of slots used. If you want to estimate charges for a non-autoscaling reservation, you can query the RESERVATIONS_TIMELINE view directly.
Billing is less than the total bytes billed calculated through INFORMATION_SCHEMA.JOBS for a project running on-demand queries
There can be multiple reasons for the actual billing to be less than the calculated bytes processed:
Each project is provided with 1 TB of free tier querying per month for no extra charge.
SCRIPT type jobs were not excluded from the calculation, which could lead to some values being counted twice.
Different types of savings, such as negotiated discounts, promotional credits, and others, might have been applied to your Cloud Billing account. Check the Savings section of the Cloud Billing report. The free tier 1 TB of querying per month is also included there.
Billing is larger than the bytes processed calculated through INFORMATION_SCHEMA.JOBS for a project running on-demand queries
If the billing amount is larger than the value you calculated by querying the INFORMATION_SCHEMA.JOBS view, there might be certain conditions that caused this:
Queries over row-level security tables
Queries over tables with row-level security don't produce a value for total_bytes_billed in the INFORMATION_SCHEMA.JOBS view. Therefore, the billing calculated using total_bytes_billed from the INFORMATION_SCHEMA.JOBS view will be less than the billed value. See the Row Level Security best practices page for more details about why this information is not visible.
Performing ML operations in BigQuery
BigQuery ML pricing for on-demand queries depends on the type of model being created. Some of these model operations are charged at a higher rate than non-ML queries. Therefore, if you just add up all of the total_bytes_billed values for the project and apply the standard on-demand per-TiB rate, the result won't be a correct pricing aggregation; you need to account for the per-TiB pricing difference.
Incorrect pricing amounts
Confirm that the correct per-TiB pricing values are used in the calculations, and make sure to choose the correct region, because prices are location-dependent. See the Pricing documentation.
In general, follow the recommended way of calculating on-demand job usage for billing, as described in the public documentation.
Billed for BigQuery Reservations API usage even though the API is disabled and no reservations or commitments are used
Inspect the SKU to better understand which services are charged. If the billed SKU is the BigQuery Governance SKU, these charges come from Dataplex Universal Catalog.
Some Dataplex Universal Catalog functionalities trigger job execution using BigQuery. These charges are now processed under the corresponding BigQuery Reservations API SKU. See the Dataplex Universal Catalog Pricing documentation for more details.
Project is assigned to a reservation, but still seeing BigQuery Analysis on-demand costs
Unexpected charges for pay-as-you-go (PAYG) slots for the BigQuery Standard Edition
In the Cloud Billing report, apply a filter with the label goog-bq-feature-type and the value BQ_STUDIO_NOTEBOOK. The usage you see is metered as pay-as-you-go slots under the BigQuery Standard Edition; these are charges for using the BigQuery Studio notebook. Read more about BigQuery Studio notebook pricing.
BigQuery Reservations API charges appearing after the Reservation API is disabled
Disabling the BigQuery Reservation API won't stop commitment charges. To stop commitment charges, you need to delete the commitment. Set the renewal plan to NONE, and the commitment will be automatically deleted when it expires.
Unexpected storage charges
Scenarios that could lead to storage charge increases:
Deletion of table(s) or dataset(s) resulted in higher BigQuery storage costs
The BigQuery time travel feature retains deleted data for the duration of the configured time travel window and an additional 7 days for fail-safe recovery. During this retention window, the deleted data in datasets that use the physical storage billing model contributes to the active physical storage cost, even though the tables no longer appear in INFORMATION_SCHEMA.TABLE_STORAGE or in the console. If the table data was in long-term storage, deletion causes this data to be moved to active physical storage. This causes the corresponding cost to rise, because active physical bytes are charged approximately 2 times more than long-term physical bytes, according to the BigQuery storage pricing page. The recommended approach to minimize costs caused by data deletion for physical storage billing model datasets is to reduce the time travel window to 2 days.
Storage costs reduced with no modifications to the data
In BigQuery, users pay for active and long-term storage. Active storage charges include any table or table partition that has been modified in the last 90 days, whereas long-term storage charges include tables and partitions that haven't been modified for 90 consecutive days. An overall storage cost reduction can be observed when data transitions to long-term storage, which is around 50% cheaper than active storage. Read about storage pricing for more details.
INFORMATION_SCHEMA storage calculations don't match billing
Use the INFORMATION_SCHEMA.TABLE_STORAGE_USAGE_TIMELINE view instead of INFORMATION_SCHEMA.TABLE_STORAGE; TABLE_STORAGE_USAGE_TIMELINE provides more accurate and granular data for correctly calculating storage costs.
The queries run on INFORMATION_SCHEMA views don't include taxes, adjustments, and rounding errors, so take these into account when comparing the data. Read more about reports in the Cloud Billing documentation.
Data presented in the INFORMATION_SCHEMA views is in UTC, whereas billing report data is reported in the US and Canadian Pacific Time (UTC-8).
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eBigQuery primarily charges for compute (query processing) and storage, offering on-demand and capacity-based pricing models for compute, and different storage billing models.\u003c/p\u003e\n"],["\u003cp\u003eTo control query costs, it's crucial to use best practices for optimizing query computation and storage, setting custom daily query quotas, and previewing queries to estimate their cost before running them.\u003c/p\u003e\n"],["\u003cp\u003eFor on-demand pricing, setting maximum bytes billed limits the cost of a query, and it's important to avoid using \u003ccode\u003eLIMIT\u003c/code\u003e clauses in non-clustered tables as it does not reduce the amount of data read.\u003c/p\u003e\n"],["\u003cp\u003eLeveraging BigQuery reservations, commitments, and the slot estimator helps manage and control costs effectively, while long-term storage pricing can reduce the cost of older data.\u003c/p\u003e\n"],["\u003cp\u003eMonitoring costs can be achieved through the use of the Google Cloud pricing calculator, Cloud Billing budgets and alerts, as well as creating dashboards to visualize Cloud Billing data.\u003c/p\u003e\n"]]],[],null,[]]