Optionally, read Introduction to tables to understand table limitations, quotas, and pricing.
Table naming
When you create a table in BigQuery, the table name must
be unique per dataset. The table name can:
Contain characters with a total of up to 1,024 UTF-8 bytes.
Contain Unicode characters in category L (letter), M (mark), N (number),
Pc (connector, including underscore), Pd (dash), Zs (space). For more
information, see General Category.
The following are all examples of valid table names: table 01, ग्राहक, 00_お客様, étudiant-01.
Some table names and table name prefixes are reserved. If
you receive an error saying that your table name or prefix is
reserved, then select a different name and try again.
If you include multiple dot operators (.) in a sequence, the duplicate
operators are implicitly stripped.
For example, this: project_name....dataset_name..table_name
Becomes this: project_name.dataset_name.table_name
Create tables
You can create a table in BigQuery in the following ways:
Manually, by using the Google Cloud console or the bq command-line tool's bq mk command.
Programmatically by calling the tables.insert API method.
By using the client libraries.
From query results.
By defining a table that references an external data source.
When you load data.
By using a CREATE TABLE data definition language (DDL) statement.
Required permissions
To create a table, you need the following IAM permissions:
bigquery.tables.create
bigquery.tables.updateData
bigquery.jobs.create
Additionally, you might require the bigquery.tables.getData permission to
access the data that you write to the table.
Each of the following predefined IAM roles includes the
permissions that you need in order to create a table:
Console
In the Explorer pane, expand your project, and then select a dataset.
In the Dataset info section, click Create table.
In the Create table panel, specify the following details:
In the Source section, select Empty table in the Create table from list.
In the Destination section, specify the following details:
For Dataset, select the dataset in which you want to create the table.
In the Table field, enter the name of the table that you want to create.
Verify that the Table type field is set to Native table.
In the Schema section, enter the schema definition.
You can enter schema information manually by using one of
the following methods:
Option 1: Click Edit as text and paste the schema in the form of a
JSON array. When you use a JSON array, you generate the schema using the
same process as creating a JSON schema file.
You can view the schema of an existing table in JSON format by entering the following
command:
Optional: In the Advanced options section, if you want to use a
customer-managed encryption key, then select the Use a customer-managed
encryption key (CMEK) option. By default, BigQuery encrypts customer content stored at rest by using a Google-owned and Google-managed key.
Click Create table.
SQL
The following example creates a table named newtable that expires on
January 1, 2023:
In the Google Cloud console, go to the BigQuery page.
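In the query editor, enter a statement along the lines of the following sketch. The dataset name mydataset and the column definitions are placeholder assumptions; replace them with your own values. The expiration_timestamp option sets the expiration date given in the example description.

CREATE TABLE mydataset.newtable (
  id INT64,
  name STRING
)
OPTIONS (
  expiration_timestamp = TIMESTAMP '2023-01-01 00:00:00 UTC'
);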
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment
with the Google Cloud CLI
already installed and with values already set for
your current project. It can take a few seconds for the session to initialize.
Use the bq mk command with the --table or -t flag. You can supply table
schema information inline or with a JSON schema file. For a full list of
parameters, see the bq mk --table reference.
Some optional parameters include:
--expiration
--description
--time_partitioning_field
--time_partitioning_type
--range_partitioning
--clustering_fields
--destination_kms_key
--label
--time_partitioning_field, --time_partitioning_type, --range_partitioning, --clustering_fields, and --destination_kms_key are not demonstrated here. Refer to the following links for more information
on these optional parameters:
For more information about --time_partitioning_field, --time_partitioning_type, and --range_partitioning, see partitioned tables.
For more information about --clustering_fields, see clustered tables.
If you are creating a table in a project other than your default project,
add the project ID to the dataset in the following format: project_id:dataset.
To create an empty table in an existing dataset with a schema definition,
enter the following:
integer is the default lifetime (in seconds) for the table. The
minimum value is 3600 seconds (one hour). The expiration time
evaluates to the current UTC time plus the integer value. If you set the
expiration time when you create a table, the dataset's default table
expiration setting is ignored.
description is a description of the table in quotes.
key_1:value_1 and key_2:value_2 are key-value pairs
that specify labels.
key_3:value_3 is a key-value pair
that specifies tags. Add multiple tags under the
same flag with commas between key:value pairs.
project_id is your project ID.
dataset is a dataset in your project.
table is the name of the table you're creating.
schema is an inline schema definition in the format field:data_type,field:data_type or the path to the JSON schema
file on your local machine.
When you specify the schema on the command line, you cannot include a RECORD (STRUCT)
type, you cannot include a column description, and you cannot specify the
column mode. All modes default to NULLABLE. To include descriptions,
modes, and RECORD types, supply a JSON schema file instead.
Examples:
Enter the following command to create a table using an inline schema
definition. This command creates a table named mytable in mydataset in
your default project. The table expiration is set to 3600 seconds (1 hour),
the description is set to This is my table, and the label is set to organization:development. The command uses the -t shortcut instead of --table. The schema is specified inline as: qtr:STRING,sales:FLOAT,year:STRING.

bq mk \
 -t \
 --expiration 3600 \
 --description "This is my table" \
 --label organization:development \
 mydataset.mytable \
 qtr:STRING,sales:FLOAT,year:STRING
Enter the following command to create a table using a JSON schema file. This
command creates a table named mytable in mydataset in your default
project. The table expiration is set to 3600 seconds (1 hour), the
description is set to This is my table, and the label is set to organization:development. The path to the schema file is /tmp/myschema.json.

bq mk \
 --table \
 --expiration 3600 \
 --description "This is my table" \
 --label organization:development \
 mydataset.mytable \
 /tmp/myschema.json
Enter the following command to create a table using a JSON schema file.
This command creates a table named mytable in mydataset in myotherproject. The table expiration is set to 3600 seconds (1 hour), the
description is set to This is my table, and the label is set to organization:development. The path to the schema file is /tmp/myschema.json.

bq mk \
 --table \
 --expiration 3600 \
 --description "This is my table" \
 --label organization:development \
 myotherproject:mydataset.mytable \
 /tmp/myschema.json
The following example creates a table named mytable, then uses the google_bigquery_table_iam_policy resource to grant
access to it. Take this step only if you want to grant access
to the table to principals who don't have access to the dataset in which
the table resides.
Set the default Google Cloud project
where you want to apply your Terraform configurations.
You only need to run this command once per project, and you can run it in any directory.
export GOOGLE_CLOUD_PROJECT=PROJECT_ID
Environment variables are overridden if you set explicit values in the Terraform
configuration file.
Prepare the directory
Each Terraform configuration file must have its own directory (also
called a root module).
In Cloud Shell, create a directory and a new
file within that directory. The filename must have the .tf extension, for example main.tf. In this
tutorial, the file is referred to as main.tf.

mkdir DIRECTORY && cd DIRECTORY && touch main.tf
If you are following a tutorial, you can copy the sample code in each section or step.
Copy the sample code into the newly created main.tf.
Optionally, copy the code from GitHub. This is recommended
when the Terraform snippet is part of an end-to-end solution.
Review and modify the sample parameters to apply to your environment.
Save your changes.
Initialize Terraform. You only need to do this once per directory.
terraform init
Optionally, to use the latest Google provider version, include the -upgrade option:
terraform init -upgrade
Apply the changes
Review the configuration and verify that the resources that Terraform is going to create or
update match your expectations:
terraform plan
Make corrections to the configuration as necessary.
Apply the Terraform configuration by running the following command and entering yes at the prompt:
terraform apply
Wait until Terraform displays the "Apply complete!" message.
Open your Google Cloud project to view
the results. In the Google Cloud console, navigate to your resources in the UI to make sure
that Terraform has created or updated them.
using Google.Cloud.BigQuery.V2;

public class BigQueryCreateTable
{
    public BigQueryTable CreateTable(
        string projectId = "your-project-id",
        string datasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        var dataset = client.GetDataset(datasetId);
        // Create schema for new table.
        var schema = new TableSchemaBuilder
        {
            { "full_name", BigQueryDbType.String },
            { "age", BigQueryDbType.Int64 }
        }.Build();
        // Create the table
        return dataset.CreateTable(tableId: "your_table_id", schema: schema);
    }
}
import("context""fmt""time""cloud.google.com/go/bigquery")// createTableExplicitSchema demonstrates creating a new BigQuery table and specifying a schema.funccreateTableExplicitSchema(projectID,datasetID,tableIDstring)error{// projectID := "my-project-id"// datasetID := "mydatasetid"// tableID := "mytableid"ctx:=context.Background()client,err:=bigquery.NewClient(ctx,projectID)iferr!=nil{returnfmt.Errorf("bigquery.NewClient: %v",err)}deferclient.Close()sampleSchema:=bigquery.Schema{{Name:"full_name",Type:bigquery.StringFieldType},{Name:"age",Type:bigquery.IntegerFieldType},}metaData:=&bigquery.TableMetadata{Schema:sampleSchema,ExpirationTime:time.Now().AddDate(1,0,0),// Table will be automatically deleted in 1 year.}tableRef:=client.Dataset(datasetID).Table(tableID)iferr:=tableRef.Create(ctx,metaData);err!=nil{returnerr}returnnil}
importcom.google.cloud.bigquery.BigQuery;importcom.google.cloud.bigquery.BigQueryException;importcom.google.cloud.bigquery.BigQueryOptions;importcom.google.cloud.bigquery.Field;importcom.google.cloud.bigquery.Schema;importcom.google.cloud.bigquery.StandardSQLTypeName;importcom.google.cloud.bigquery.StandardTableDefinition;importcom.google.cloud.bigquery.TableDefinition;importcom.google.cloud.bigquery.TableId;importcom.google.cloud.bigquery.TableInfo;publicclassCreateTable{publicstaticvoidrunCreateTable(){// TODO(developer): Replace these variables before running the sample.StringdatasetName="MY_DATASET_NAME";StringtableName="MY_TABLE_NAME";Schemaschema=Schema.of(Field.of("stringField",StandardSQLTypeName.STRING),Field.of("booleanField",StandardSQLTypeName.BOOL));createTable(datasetName,tableName,schema);}publicstaticvoidcreateTable(StringdatasetName,StringtableName,Schemaschema){try{// Initialize client that will be used to send requests. This client only needs to be created// once, and can be reused for multiple requests.BigQuerybigquery=BigQueryOptions.getDefaultInstance().getService();TableIdtableId=TableId.of(datasetName,tableName);TableDefinitiontableDefinition=StandardTableDefinition.of(schema);TableInfotableInfo=TableInfo.newBuilder(tableId,tableDefinition).build();bigquery.create(tableInfo);System.out.println("Table created successfully");}catch(BigQueryExceptione){System.out.println("Table was not created. \n"+e.toString());}}}
// Import the Google Cloud client library and create a client
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function createTable() {
  // Creates a new table named "my_table" in "my_dataset".

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = "my_dataset";
  // const tableId = "my_table";
  // const schema = 'Name:string, Age:integer, Weight:float, IsMagic:boolean';

  // For all options, see https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
  const options = {
    schema: schema,
    location: 'US',
  };

  // Create a new table in the dataset
  const [table] = await bigquery
    .dataset(datasetId)
    .createTable(tableId, options);

  console.log(`Table ${table.id} created.`);
}
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the table to create.
# table_id = "your-project.your_dataset.your_table_name"

schema = [
    bigquery.SchemaField("full_name", "STRING", mode="REQUIRED"),
    bigquery.SchemaField("age", "INTEGER", mode="REQUIRED"),
]

table = bigquery.Table(table_id, schema=schema)
table = client.create_table(table)  # Make an API request.
print(
    "Created table {}.{}.{}".format(table.project, table.dataset_id, table.table_id)
)
importcom.google.cloud.bigquery.BigQuery;importcom.google.cloud.bigquery.BigQueryException;importcom.google.cloud.bigquery.BigQueryOptions;importcom.google.cloud.bigquery.Schema;importcom.google.cloud.bigquery.StandardTableDefinition;importcom.google.cloud.bigquery.TableDefinition;importcom.google.cloud.bigquery.TableId;importcom.google.cloud.bigquery.TableInfo;// Sample to create a table without schemapublicclassCreateTableWithoutSchema{publicstaticvoidmain(String[]args){// TODO(developer): Replace these variables before running the sample.StringdatasetName="MY_DATASET_NAME";StringtableName="MY_TABLE_NAME";createTableWithoutSchema(datasetName,tableName);}publicstaticvoidcreateTableWithoutSchema(StringdatasetName,StringtableName){try{// Initialize client that will be used to send requests. This client only needs to be created// once, and can be reused for multiple requests.BigQuerybigquery=BigQueryOptions.getDefaultInstance().getService();TableIdtableId=TableId.of(datasetName,tableName);TableDefinitiontableDefinition=StandardTableDefinition.of(Schema.of());TableInfotableInfo=TableInfo.newBuilder(tableId,tableDefinition).build();bigquery.create(tableInfo);System.out.println("Table created successfully");}catch(BigQueryExceptione){System.out.println("Table was not created. \n"+e.toString());}}}
Create a table from a query result
To create a table from a query result, write the results to a destination table.
Console
Open the BigQuery page in the Google Cloud console.
Click Run. This creates a query job that writes the
query results to the table you specified.
Alternatively, if you forget to specify a destination table before running
your query, you can copy the cached results table to a permanent table by
clicking the Save Results button above the editor.
SQL
The following example uses the CREATE TABLE statement to create the trips table from data in the public bikeshare_trips table:
In the Google Cloud console, go to the BigQuery page.
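In the query editor, a statement along the following lines produces the described result. This is a minimal sketch: the destination dataset mydataset, the selected columns, and the austin_bikeshare path for the public source table are assumptions, so adjust them to match the table you are querying.

CREATE TABLE mydataset.trips AS
SELECT
  bike_id,
  start_time,
  duration_minutes
FROM
  `bigquery-public-data`.austin_bikeshare.bikeshare_trips;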
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment
with the Google Cloud CLI
already installed and with values already set for
your current project. It can take a few seconds for the session to initialize.
Enter the bq query command and specify the --destination_table flag to
create a permanent table based on the query results. Specify the use_legacy_sql=false flag to use GoogleSQL syntax. To write the query
results to a table that is not in your default project, add the project ID
to the dataset name in the following format: project_id:dataset.
Optional: Supply the --location flag and set the value to your location.
To control the write disposition for an existing destination table, specify
one of the following optional flags:
--append_table: If the destination table exists, the query results are
appended to it.
--replace: If the destination table exists, it is overwritten with the
query results.
location is the name of the location used to
process the query. The --location flag is optional. For example, if you
are using BigQuery in the Tokyo region, you can set the flag's
value to asia-northeast1. You can set a default value for the location by
using the .bigqueryrc file.
project_id is your project ID.
dataset is the name of the dataset that contains
the table to which you are writing the query results.
table is the name of the table to which you're
writing the query results.
query is a query in GoogleSQL syntax.
If no write disposition flag is specified, the default behavior is to
write the results to the table only if it is empty. If the table exists
and it is not empty, the following error is returned: BigQuery error in query operation: Error processing job project_id:bqjob_123abc456789_00000e1234f_1: Already
Exists: Table project_id:dataset.table.
Examples:
Enter the following command to write query results to a destination table
named mytable in mydataset. The dataset is in your default project.
Since no write disposition flag is specified in the command, the table must
be new or empty. Otherwise, an Already exists error is returned. The query
retrieves data from the USA Name Data public dataset.
Enter the following command to use query results to overwrite a destination
table named mytable in mydataset. The dataset is in your default
project. The command uses the --replace flag to overwrite the destination
table.
Enter the following command to append query results to a destination table
named mytable in mydataset. The dataset is in my-other-project, not
your default project. The command uses the --append_table flag to append
the query results to the destination table.
The output for each of these examples looks like the following. For
readability, some output is truncated.
Waiting on bqjob_r123abc456_000001234567_1 ... (2s) Current status: DONE
+---------+--------+
| name | number |
+---------+--------+
| Robert | 10021 |
| John | 9636 |
| Robert | 9297 |
| ... |
+---------+--------+
API
To save query results to a permanent table, call the jobs.insert method,
configure a query job, and include a value for the destinationTable property. To control the write disposition for an existing destination
table, configure the writeDisposition property.
To control the processing location for the query job, specify the location property in the jobReference section of the job resource.
import("context""fmt""io""cloud.google.com/go/bigquery""google.golang.org/api/iterator")// queryWithDestination demonstrates saving the results of a query to a specific table by setting the destination// via the API properties.funcqueryWithDestination(wio.Writer,projectID,destDatasetID,destTableIDstring)error{// projectID := "my-project-id"// datasetID := "mydataset"// tableID := "mytable"ctx:=context.Background()client,err:=bigquery.NewClient(ctx,projectID)iferr!=nil{returnfmt.Errorf("bigquery.NewClient: %v",err)}deferclient.Close()q:=client.Query("SELECT 17 as my_col")q.Location="US"// Location must match the dataset(s) referenced in query.q.QueryConfig.Dst=client.Dataset(destDatasetID).Table(destTableID)// Run the query and print results when the query job is completed.job,err:=q.Run(ctx)iferr!=nil{returnerr}status,err:=job.Wait(ctx)iferr!=nil{returnerr}iferr:=status.Err();err!=nil{returnerr}it,err:=job.Read(ctx)for{varrow[]bigquery.Valueerr:=it.Next(&row)iferr==iterator.Done{break}iferr!=nil{returnerr}fmt.Fprintln(w,row)}returnnil}
importcom.google.cloud.bigquery.BigQuery;importcom.google.cloud.bigquery.BigQueryException;importcom.google.cloud.bigquery.BigQueryOptions;importcom.google.cloud.bigquery.QueryJobConfiguration;importcom.google.cloud.bigquery.TableId;publicclassSaveQueryToTable{publicstaticvoidrunSaveQueryToTable(){// TODO(developer): Replace these variables before running the sample.Stringquery="SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus;";StringdestinationTable="MY_TABLE";StringdestinationDataset="MY_DATASET";saveQueryToTable(destinationDataset,destinationTable,query);}publicstaticvoidsaveQueryToTable(StringdestinationDataset,StringdestinationTableId,Stringquery){try{// Initialize client that will be used to send requests. This client only needs to be created// once, and can be reused for multiple requests.BigQuerybigquery=BigQueryOptions.getDefaultInstance().getService();// Identify the destination tableTableIddestinationTable=TableId.of(destinationDataset,destinationTableId);// Build the query jobQueryJobConfigurationqueryConfig=QueryJobConfiguration.newBuilder(query).setDestinationTable(destinationTable).build();// Execute the query.bigquery.query(queryConfig);// The results are now saved in the destination table.System.out.println("Saved query ran successfully");}catch(BigQueryException|InterruptedExceptione){System.out.println("Saved query did not run \n"+e.toString());}}}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function queryDestinationTable() {
  // Queries the U.S. given names dataset for the state of Texas
  // and saves results to permanent table.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = 'my_dataset';
  // const tableId = 'my_table';

  // Create destination table reference
  const dataset = bigquery.dataset(datasetId);
  const destinationTable = dataset.table(tableId);

  const query = `SELECT name
    FROM \`bigquery-public-data.usa_names.usa_1910_2013\`
    WHERE state = 'TX'
    LIMIT 100`;

  // For all options, see https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
  const options = {
    query: query,
    // Location must match that of the dataset(s) referenced in the query.
    location: 'US',
    destination: destinationTable,
  };

  // Run the query as a job
  const [job] = await bigquery.createQueryJob(options);
  console.log(`Job ${job.id} started.`);
  console.log(`Query results loaded to table ${destinationTable.id}`);
}
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the destination table.
# table_id = "your-project.your_dataset.your_table_name"

job_config = bigquery.QueryJobConfig(destination=table_id)

sql = """
    SELECT corpus
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus;
"""

# Start the query, passing in the extra configuration.
query_job = client.query(sql, job_config=job_config)  # Make an API request.
query_job.result()  # Wait for the job to complete.

print("Query results loaded to the table {}".format(table_id))
Create a table that references an external data source
An external data source is a data source that you can query directly from
BigQuery, even though the data is not stored in
BigQuery storage. For example, you might have data in a
different Google Cloud database, in files in Cloud Storage, or in a
different cloud product altogether that you would like to analyze in
BigQuery, but that you aren't prepared to migrate.
When you load data into BigQuery, you can load data into a new
table or partition, you can append data to an existing table or partition, or
you can overwrite a table or partition. You don't need to create an empty
table before loading data into it. You can create the new table and load your
data at the same time.
When you load data into BigQuery, you can supply the table
or partition schema, or for supported data formats, you can use schema auto-detection.
To configure access to tables and views, you can grant an
IAM role to an entity at the following levels, listed in
order of range of resources allowed (largest to smallest):
Access with any resource protected by IAM is additive. For
example, if an entity does not have access at a high level, such as the project level,
you could grant the entity access at the dataset level, and then the entity will
have access to the tables and views in the dataset. Similarly, if the entity
does not have access at the high level or the dataset level, you could grant the
entity access at the table or view level.
Granting IAM roles at a higher level in the Google Cloud
resource hierarchy, such as the project, folder, or organization level, gives the entity access to a
broad set of resources. For example, granting a role to an entity at the project
level gives that entity permissions that apply to all datasets throughout the
project.
Granting a role at the dataset level specifies the operations an entity is
allowed to perform on tables and views in that specific dataset, even if the
entity does not have access at a higher level. For information on configuring
dataset-level access controls, see Controlling access to datasets.
Granting a role at the table or view level specifies the operations an entity is
allowed to perform on specific tables and views, even if the entity does not
have access at a higher level. For information on configuring table-level access
controls, see Controlling access to tables and views.
You can also create IAM custom roles.
If you create a custom role, the permissions you grant depend on the specific
operations you want the entity to be able to perform.
You can't set a "deny" permission on any resource protected by
IAM.
At a minimum, to get information about tables, you must be granted bigquery.tables.get permissions. The following predefined IAM
roles include bigquery.tables.get permissions:
bigquery.metadataViewer
bigquery.dataViewer
bigquery.dataOwner
bigquery.dataEditor
bigquery.admin
In addition, if a user has bigquery.datasets.create permissions, when that
user creates a dataset, they are granted bigquery.dataOwner access to it. bigquery.dataOwner access gives the user the ability to retrieve table
metadata.
For more information on IAM roles and permissions in
BigQuery, see Access control.
Get table information
To get information about tables:
Console
In the navigation panel, in the Resources section, expand your
project, and then select a dataset.
Click the dataset name to expand it. The tables and views in the dataset
appear.
Click the table name.
In the Details panel, click Details to display the table's
description and table information.
Optionally, switch to the Schema tab to view the table's schema
definition.
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment
with the Google Cloud CLI
already installed and with values already set for
your current project. It can take a few seconds for the session to initialize.
Issue the bq show command to display all table information. Use the --schema flag to display only table schema information. The --format flag can be used to control the output.
If you are getting information about a table in a project other than
your default project, add the project ID to the dataset in the following
format: project_id:dataset.
import("context""fmt""io""cloud.google.com/go/bigquery")// printTableInfo demonstrates fetching metadata from a table and printing some basic information// to an io.Writer.funcprintTableInfo(wio.Writer,projectID,datasetID,tableIDstring)error{// projectID := "my-project-id"// datasetID := "mydataset"// tableID := "mytable"ctx:=context.Background()client,err:=bigquery.NewClient(ctx,projectID)iferr!=nil{returnfmt.Errorf("bigquery.NewClient: %v",err)}deferclient.Close()meta,err:=client.Dataset(datasetID).Table(tableID).Metadata(ctx)iferr!=nil{returnerr}// Print basic information about the table.fmt.Fprintf(w,"Schema has %d top-level fields\n",len(meta.Schema))fmt.Fprintf(w,"Description: %s\n",meta.Description)fmt.Fprintf(w,"Rows in managed storage: %d\n",meta.NumRows)returnnil}
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Table;
import com.google.cloud.bigquery.TableId;

public class GetTable {

  public static void runGetTable() {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "bigquery_public_data";
    String datasetName = "samples";
    String tableName = "shakespeare";
    getTable(projectId, datasetName, tableName);
  }

  public static void getTable(String projectId, String datasetName, String tableName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      TableId tableId = TableId.of(projectId, datasetName, tableName);
      Table table = bigquery.getTable(tableId);
      System.out.println("Table info: " + table.getDescription());
    } catch (BigQueryException e) {
      System.out.println("Table not retrieved. \n" + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function getTable() {
  // Retrieves table named "my_table" in "my_dataset".

  /**
   * TODO(developer): Uncomment the following lines before running the sample
   */
  // const datasetId = "my_dataset";
  // const tableId = "my_table";

  // Retrieve table reference
  const dataset = bigquery.dataset(datasetId);
  const [table] = await dataset.table(tableId).get();

  console.log('Table:');
  console.log(table.metadata.tableReference);
}
getTable();
use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
//$projectId = 'The Google project ID';
//$datasetId = 'The BigQuery dataset ID';
//$tableId = 'The BigQuery table ID';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$table = $dataset->table($tableId);
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set table_id to the ID of the model to fetch.
# table_id = 'your-project.your_dataset.your_table'

table = client.get_table(table_id)  # Make an API request.

# View table properties
print(
    "Got table '{}.{}.{}'.".format(table.project, table.dataset_id, table.table_id)
)
print("Table schema: {}".format(table.schema))
print("Table description: {}".format(table.description))
print("Table has {} rows".format(table.num_rows))
Get table information using INFORMATION_SCHEMA
INFORMATION_SCHEMA is a series of views that provide access to metadata
about datasets, routines, tables, views, jobs, reservations, and streaming data.
You can query the following views to get table information:
Use the INFORMATION_SCHEMA.TABLES and INFORMATION_SCHEMA.TABLE_OPTIONS views to retrieve metadata about tables and views in a project.
Use the INFORMATION_SCHEMA.COLUMNS and INFORMATION_SCHEMA.COLUMN_FIELD_PATHS views to retrieve metadata about the
columns (fields) in a table.
Use the INFORMATION_SCHEMA.TABLE_STORAGE views to retrieve metadata
about current and historical storage usage by a table.
The TABLES and TABLE_OPTIONS views also contain high-level
information about views. For detailed information, query the INFORMATION_SCHEMA.VIEWS view
instead.
TABLES view
When you query the INFORMATION_SCHEMA.TABLES view, the query results contain
one row for each table or view in a dataset. For detailed information about
views, query the INFORMATION_SCHEMA.VIEWS view instead.
The INFORMATION_SCHEMA.TABLES view has the following schema:
Column name
Data type
Value
table_catalog
STRING
The project ID of the project that contains the dataset.
table_schema
STRING
The name of the dataset that contains the table or view. Also referred
to as the datasetId.
table_name
STRING
The name of the table or view. Also referred to as the tableId.
base_table_catalog
STRING
For table clones and table snapshots,
the base table's project. Applicable only to
tables with table_type set to CLONE or SNAPSHOT.
base_table_schema
STRING
For table clones and table snapshots,
the base table's dataset. Applicable only to tables with table_type set to CLONE or SNAPSHOT.
base_table_name
STRING
For table clones and table snapshots,
the base table's name. Applicable only to tables with table_type set to CLONE or SNAPSHOT.
snapshot_time_ms
TIMESTAMP
For table clones and table snapshots,
the time when the clone or snapshot operation was run on the base table to create this table. If time travel was used, then this
field contains the time travel timestamp. Otherwise, the snapshot_time_ms field is the same as the creation_time field. Applicable only to
tables with table_type set to CLONE or SNAPSHOT.
replication_status
STRING
For materialized view replicas,
the status of the replication from the base materialized view to the
materialized view replica; one of the following:
REPLICATION_STATUS_UNSPECIFIED
ACTIVE: Replication is active with no errors
SOURCE_DELETED: The source materialized view has
been deleted
PERMISSION_DENIED: The source materialized view
hasn't been authorized on the dataset that contains the source Amazon S3
BigLake tables used in the query that created the
materialized view.
UNSUPPORTED_CONFIGURATION: There is an issue with
the replica's prerequisites other than source materialized view authorization.
replication_error
STRING
If replication_status indicates a replication issue for a materialized view replica, replication_error provides further details about the issue.
For tables that use change data capture (CDC), the time when row
modifications were last applied. For more information, see Monitor table upsert operation progress.
Examples
Example 1:
The following example retrieves table metadata for all of the tables in the
dataset named mydataset. The metadata that's
returned is for all types of tables in mydataset in your default project.
mydataset contains the following tables:
mytable1: a standard BigQuery table
myview1: a BigQuery view
To run the query against a project other than your default project, add the
project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view;
for example, `myproject`.mydataset.INFORMATION_SCHEMA.TABLES.
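A minimal sketch of such a query is shown below; the selected columns are a subset chosen for illustration.

SELECT
  table_name,
  table_type,
  creation_time
FROM
  mydataset.INFORMATION_SCHEMA.TABLES;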
The following example retrieves table metadata for all tables of type CLONE or SNAPSHOT from the INFORMATION_SCHEMA.TABLES view. The metadata returned
is for tables in mydataset in your default project.
To run the query against a project other than your default project, add the
project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view;
for example, `myproject`.mydataset.INFORMATION_SCHEMA.TABLES.
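A query along these lines returns the described metadata; the column list is illustrative and uses the base table columns documented in the schema above.

SELECT
  table_name,
  table_type,
  base_table_schema,
  base_table_name
FROM
  mydataset.INFORMATION_SCHEMA.TABLES
WHERE
  table_type IN ('CLONE', 'SNAPSHOT');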
The following example retrieves table_name and ddl columns from the INFORMATION_SCHEMA.TABLES view for the population_by_zip_2010 table in the census_bureau_usa dataset. This dataset is part of the BigQuery public dataset program.
Because the table you're querying is in another project, you add the project ID to the dataset in
the following format: `project_id`.dataset.INFORMATION_SCHEMA.view.
In this example, the value is `bigquery-public-data`.census_bureau_usa.INFORMATION_SCHEMA.TABLES.
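A sketch of the query described above, which produces output similar to the result that follows:

SELECT
  table_name,
  ddl
FROM
  `bigquery-public-data`.census_bureau_usa.INFORMATION_SCHEMA.TABLES
WHERE
  table_name = 'population_by_zip_2010';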
+------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| table_name | ddl |
+------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| population_by_zip_2010 | CREATE TABLE `bigquery-public-data.census_bureau_usa.population_by_zip_2010` |
| | ( |
| | geo_id STRING OPTIONS(description="Geo code"), |
| | zipcode STRING NOT NULL OPTIONS(description="Five digit ZIP Code Tabulation Area Census Code"), |
| | population INT64 OPTIONS(description="The total count of the population for this segment."), |
| | minimum_age INT64 OPTIONS(description="The minimum age in the age range. If null, this indicates the row as a total for male, female, or overall population."), |
| | maximum_age INT64 OPTIONS(description="The maximum age in the age range. If null, this indicates the row as having no maximum (such as 85 and over) or the row is a total of the male, female, or overall population."), |
| | gender STRING OPTIONS(description="male or female. If empty, the row is a total population summary.") |
| | ) |
| | OPTIONS( |
| | labels=[("freebqcovid", "")] |
| | ); |
+------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
TABLE_OPTIONS view
When you query the INFORMATION_SCHEMA.TABLE_OPTIONS view, the query results
contain one row for each option, for each table or view in a dataset. For
detailed information about
views, query the INFORMATION_SCHEMA.VIEWS view instead.
The INFORMATION_SCHEMA.TABLE_OPTIONS view has the following schema:
Column name
Data type
Value
TABLE_CATALOG
STRING
The project ID of the project that contains the dataset
TABLE_SCHEMA
STRING
The name of the dataset that contains the table or view, also referred
to as the datasetId
TABLE_NAME
STRING
The name of the table or view, also referred to as the tableId
Whether automatic refresh is enabled for a materialized view
expiration_timestamp
TIMESTAMP
The time when this table expires
friendly_name
STRING
The table's descriptive name
kms_key_name
STRING
The name of the Cloud KMS key used to encrypt the table
labels
ARRAY<STRUCT<STRING, STRING>>
An array of STRUCTs that represent the labels on the
table
partition_expiration_days
FLOAT64
The default lifetime, in days, of all partitions in a partitioned
table
refresh_interval_minutes
FLOAT64
How frequently a materialized view is refreshed
require_partition_filter
BOOL
Whether queries over the table require a partition filter
tags
ARRAY<STRUCT<STRING, STRING>>
Tags attached to a table in a namespaced <key, value> syntax.
For more information, see Tags and
conditional access.
For external tables, the following options are possible:
Options
allow_jagged_rows
BOOL
If true, allow rows that are missing trailing optional
columns.
Applies to CSV data.
allow_quoted_newlines
BOOL
If true, allow quoted data sections that contain newline
characters in the file.
Applies to CSV data.
bigtable_options
STRING
Only required when creating a Bigtable external
table.
Specifies the schema of the Bigtable external table
in JSON format.
For a list of Bigtable table definition options, see BigtableOptions in the REST API reference.
column_name_character_map
STRING
Defines the scope of supported column name characters and the
handling behavior of unsupported characters. The default setting is STRICT, which means unsupported characters cause
BigQuery to throw errors. V1 and V2 replace any unsupported characters
with underscores.
Supported values include:
STRICT. Enables flexible column names.
This is the default value. Load jobs with unsupported characters in column
names fail with an error message. To configure the replacement of
unsupported characters with underscores so that the load job succeeds,
specify the default_column_name_character_map configuration setting.
V1. Column names can only contain standard column name characters.
Unsupported characters are replaced with underscores.
This is the default behavior for tables created before the
introduction of column_name_character_map.
If true, use schema inference specifically for
Parquet LIST logical type.
Applies to Parquet data.
enable_logical_types
BOOL
If true, convert Avro logical types into their
corresponding SQL types. For more information, see Logical types.
Applies to Avro data.
encoding
STRING
The character encoding of the data. Supported values include: UTF8 (or UTF-8), ISO_8859_1 (or ISO-8859-1).
Applies to CSV data.
enum_as_string
BOOL
If true, infer Parquet ENUM logical type as STRING
instead of BYTES by default.
Applies to Parquet data.
expiration_timestamp
TIMESTAMP
The time when this table expires. If not specified, the table does
not expire.
Example:"2025-01-01 00:00:00 UTC".
field_delimiter
STRING
The separator for fields in a CSV file.
Applies to CSV data.
format
STRING
The format of the external data.
Supported values for CREATE EXTERNAL TABLE include: AVRO, CLOUD_BIGTABLE, CSV, DATASTORE_BACKUP, DELTA_LAKE (preview), GOOGLE_SHEETS, NEWLINE_DELIMITED_JSON (or JSON), ORC, PARQUET.
The value JSON is equivalent to NEWLINE_DELIMITED_JSON.
hive_partition_uri_prefix
STRING
A common prefix for all source URIs before the partition key encoding
begins. Applies only to hive-partitioned external tables.
Applies to Avro, CSV, JSON, Parquet, and ORC data.
Example:"gs://bucket/path".
file_set_spec_type
STRING
Specifies how to interpret source URIs for load jobs and external tables.
Supported values include:
FILE_SYSTEM_MATCH. Expands source URIs by listing files from the object store. This is the default behavior if FileSetSpecType is not set.
NEW_LINE_DELIMITED_MANIFEST. Indicates that the provided URIs are newline-delimited manifest files, with one URI per line. Wildcard URIs are not supported in the manifest files, and all referenced data files must be in the same bucket as the manifest file.
For example, if you have a source URI of "gs://bucket/path/file" and the file_set_spec_type is FILE_SYSTEM_MATCH, then the file is used directly as a data file. If the file_set_spec_type is NEW_LINE_DELIMITED_MANIFEST, then each line in the file is interpreted as a URI that points to a data file.
ignore_unknown_values
BOOL
If true, ignore extra values that are not represented
in the table schema, without returning an error.
Applies to CSV and JSON data.
json_extension
STRING
For JSON data, indicates a particular JSON interchange format. If
not specified, BigQuery reads the data as generic JSON
records.
Specifies whether cached metadata is used by operations against the
table, and how fresh the cached metadata must be in order for
the operation to use it.
To disable metadata caching, specify 0. This is the default.
To enable metadata caching, specify an interval literal value between 30 minutes and 7 days. For example, specify INTERVAL 4 HOUR for a 4-hour staleness interval.
With this value, operations against the table use cached metadata if
it has been refreshed within the past 4 hours. If the cached metadata
is older than that, the operation falls back to retrieving metadata from
Cloud Storage instead.
null_marker
STRING
The string that represents NULL values in a CSV file.
Set the value of this option to SIMPLE when
creating an object table.
preserve_ascii_control_characters
BOOL
If true, then the embedded ASCII control characters,
which are the first 32 characters in the ASCII table, ranging from
'\x00' to '\x1F', are preserved.
Applies to CSV data.
projection_fields
STRING
A list of entity properties to load.
Applies to Datastore data.
quote
STRING
The string used to quote data sections in a CSV file. If your data
contains quoted newline characters, also set the allow_quoted_newlines property to true.
Applies to CSV data.
reference_file_schema_uri
STRING
User provided reference file with the table schema.
If true, all queries over this table require a partition
filter that can be used to eliminate partitions when reading data.
Applies only to hive-partitioned external tables.
Applies to Avro, CSV, JSON, Parquet, and ORC data.
sheet_range
STRING
Range of a Google Sheets spreadsheet to query from.
Applies to Google Sheets data.
Example:"sheet1!A1:B20",
skip_leading_rows
INT64
The number of rows at the top of a file to skip when reading the
data.
Applies to CSV and Google Sheets data.
uris
For external tables, including object tables, that aren't
Bigtable tables:
ARRAY<STRING>
An array of fully qualified URIs for the external data locations.
Each URI can contain one
asterisk (*) wildcard character,
which must come after the bucket name. When you specify uris values that target multiple files, all of those
files must share a compatible schema.
The following example retrieves the default table expiration times for all
tables in mydataset in your default project (myproject) by querying the INFORMATION_SCHEMA.TABLE_OPTIONS view.
To run the query against a project other than your default project, add the
project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view;
for example, `myproject`.mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS.
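A minimal sketch of such a query, assuming the standard option_name and option_value columns of the TABLE_OPTIONS view:

SELECT
  table_name,
  option_name,
  option_value
FROM
  mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS
WHERE
  option_name = 'expiration_timestamp';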
The following example retrieves metadata about all tables in mydataset that
contain test data. The query uses the values in the description option to find
tables that contain "test" anywhere in the description. mydataset is in your
default project, myproject.
To run the query against a project other than your default project, add the
project ID to the dataset in the following format: `project_id`.dataset.INFORMATION_SCHEMA.view;
for example, `myproject`.mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS.
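A sketch of such a query, again relying on the option_name and option_value columns:

SELECT
  *
FROM
  mydataset.INFORMATION_SCHEMA.TABLE_OPTIONS
WHERE
  option_name = 'description'
  AND option_value LIKE '%test%';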
If aSTRINGorARRAY<STRING>is passed
in, the collation specification is returned if it exists; otherwiseNULLis returned
COLUMN_DEFAULT
STRING
The default value of the
column if it exists; otherwise, the value is NULL
ROUNDING_MODE
STRING
The mode of rounding that's used for values written to the field if its
type is a parameterized NUMERIC or BIGNUMERIC;
otherwise, the value is NULL
Examples
The following example retrieves metadata from the INFORMATION_SCHEMA.COLUMNS view for the population_by_zip_2010 table in the census_bureau_usa dataset. This dataset is part of the BigQuery public dataset program.
Because the table you're querying is in another project, the bigquery-public-data project, you add the project ID to the dataset in the
following format: `project_id`.dataset.INFORMATION_SCHEMA.view;
for example, `bigquery-public-data`.census_bureau_usa.INFORMATION_SCHEMA.COLUMNS.
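A minimal sketch of such a query; the selected columns are a common subset of the COLUMNS view chosen for illustration:

SELECT
  table_name,
  column_name,
  data_type,
  is_nullable
FROM
  `bigquery-public-data`.census_bureau_usa.INFORMATION_SCHEMA.COLUMNS
WHERE
  table_name = 'population_by_zip_2010';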
The following columns are excluded from the query results because they are
currently reserved for future use:
If aSTRING,ARRAY<STRING>, orSTRINGfield in aSTRUCTis passed in, the
collation specification is returned if it exists; otherwise,NULLis returned
ROUNDING_MODE
STRING
The mode of rounding that's used when applying precision and scale to
parameterized NUMERIC or BIGNUMERIC values;
otherwise, the value is NULL
Examples
The following example retrieves metadata from the INFORMATION_SCHEMA.COLUMN_FIELD_PATHS view for the commits table in the github_repos dataset.
This dataset is part of the BigQuery public dataset program.
Because the table you're querying is in another project, the bigquery-public-data project, you add the project ID to the dataset in the
following format: `project_id`.dataset.INFORMATION_SCHEMA.view;
for example, `bigquery-public-data`.github_repos.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS.
The commits table contains the following nested and nested-and-repeated
columns:
author: a nested RECORD column
committer: a nested RECORD column
trailer: a nested and repeated RECORD column
difference: a nested and repeated RECORD column
To view metadata about the author and difference columns, run the following query.
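A sketch of the query; field_path and description are columns of the COLUMN_FIELD_PATHS view:

SELECT
  table_name,
  column_name,
  field_path,
  data_type,
  description
FROM
  `bigquery-public-data`.github_repos.INFORMATION_SCHEMA.COLUMN_FIELD_PATHS
WHERE
  table_name = 'commits'
  AND (column_name = 'author' OR column_name = 'difference');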
The TABLE_STORAGE and TABLE_STORAGE_BY_ORGANIZATION views have the following
schema:
Column name
Data type
Value
PROJECT_ID
STRING
The project ID of the project that contains the dataset.
PROJECT_NUMBER
INT64
The project number of the project that contains the dataset.
TABLE_CATALOG
STRING
The project ID of the project that contains the dataset.
TABLE_SCHEMA
STRING
The name of the dataset that contains the table or materialized view,
also referred to as the datasetId.
TABLE_NAME
STRING
The name of the table or materialized view, also referred to as the tableId.
CREATION_TIME
TIMESTAMP
The creation time of the table.
TOTAL_ROWS
INT64
The total number of rows in the table or materialized view.
TOTAL_PARTITIONS
INT64
The number of partitions present in the table or materialized view.
Unpartitioned tables return 0.
TOTAL_LOGICAL_BYTES
INT64
Total number of logical (uncompressed) bytes in the table or
materialized view.
ACTIVE_LOGICAL_BYTES
INT64
Number of logical (uncompressed) bytes that are younger than 90 days.
LONG_TERM_LOGICAL_BYTES
INT64
Number of logical (uncompressed) bytes that are older than 90 days.
CURRENT_PHYSICAL_BYTES
INT64
Total number of physical bytes for the current storage of the
table across all partitions.
TOTAL_PHYSICAL_BYTES
INT64
Total number of physical (compressed) bytes used for storage,
including active, long-term, and time travel (deleted or changed data)
bytes. Fail-safe (deleted or changed data retained after the time-travel
window) bytes aren't included.
ACTIVE_PHYSICAL_BYTES
INT64
Number of physical (compressed) bytes younger than 90 days,
including time travel (deleted or changed data) bytes.
LONG_TERM_PHYSICAL_BYTES
INT64
Number of physical (compressed) bytes older than 90 days.
TIME_TRAVEL_PHYSICAL_BYTES
INT64
Number of physical (compressed) bytes used by time travel storage
(deleted or changed data).
STORAGE_LAST_MODIFIED_TIME
TIMESTAMP
The most recent time that data was written to the table.
DELETED
BOOLEAN
Indicates whether or not the table is deleted.
TABLE_TYPE
STRING
The type of table. For example, EXTERNAL or BASE TABLE.
FAIL_SAFE_PHYSICAL_BYTES
INT64
Number of physical (compressed) bytes used by the fail-safe storage
(deleted or changed data).
LAST_METADATA_INDEX_REFRESH_TIME
TIMESTAMP
The last metadata index refresh time of the table.
Examples
Example 1:
The following example shows you the total logical bytes billed for the
current project.
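A minimal sketch of such a query, using the TABLE_STORAGE view described above; replace REGION with your dataset region:

SELECT
  SUM(total_logical_bytes) AS total_logical_bytes
FROM
  `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE;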
The following example shows you how to forecast the price difference per
dataset between logical and physical billing models for the next 30 days.
This example assumes that future storage usage is constant over the next
30 days from the moment the query was run. Note that the forecast is limited to
base tables; it excludes all other types of tables within a dataset.
The prices used in the pricing variables for this query are for
the us-central1 region. If you want to run this query for a different region,
update the pricing variables appropriately. See Storage pricing for pricing information.
Open the BigQuery page in the Google Cloud console.
Enter the following GoogleSQL query in the Query editor box. INFORMATION_SCHEMA requires GoogleSQL syntax. GoogleSQL
is the default syntax in the Google Cloud console.
DECLARE active_logical_gib_price FLOAT64 DEFAULT 0.02;
DECLARE long_term_logical_gib_price FLOAT64 DEFAULT 0.01;
DECLARE active_physical_gib_price FLOAT64 DEFAULT 0.04;
DECLARE long_term_physical_gib_price FLOAT64 DEFAULT 0.02;

WITH storage_sizes AS (
  SELECT
    table_schema AS dataset_name,
    -- Logical
    SUM(IF(deleted = false, active_logical_bytes, 0)) / power(1024, 3) AS active_logical_gib,
    SUM(IF(deleted = false, long_term_logical_bytes, 0)) / power(1024, 3) AS long_term_logical_gib,
    -- Physical
    SUM(active_physical_bytes) / power(1024, 3) AS active_physical_gib,
    SUM(active_physical_bytes - time_travel_physical_bytes) / power(1024, 3) AS active_no_tt_physical_gib,
    SUM(long_term_physical_bytes) / power(1024, 3) AS long_term_physical_gib,
    -- Restorable previously deleted physical
    SUM(time_travel_physical_bytes) / power(1024, 3) AS time_travel_physical_gib,
    SUM(fail_safe_physical_bytes) / power(1024, 3) AS fail_safe_physical_gib,
  FROM
    `region-REGION`.INFORMATION_SCHEMA.TABLE_STORAGE_BY_PROJECT
  WHERE
    total_physical_bytes + fail_safe_physical_bytes > 0
    -- Base the forecast on base tables only for highest precision results
    AND table_type = 'BASE TABLE'
  GROUP BY 1
)
SELECT
  dataset_name,
  -- Logical
  ROUND(active_logical_gib, 2) AS active_logical_gib,
  ROUND(long_term_logical_gib, 2) AS long_term_logical_gib,
  -- Physical
  ROUND(active_physical_gib, 2) AS active_physical_gib,
  ROUND(long_term_physical_gib, 2) AS long_term_physical_gib,
  ROUND(time_travel_physical_gib, 2) AS time_travel_physical_gib,
  ROUND(fail_safe_physical_gib, 2) AS fail_safe_physical_gib,
  -- Compression ratio
  ROUND(SAFE_DIVIDE(active_logical_gib, active_no_tt_physical_gib), 2) AS active_compression_ratio,
  ROUND(SAFE_DIVIDE(long_term_logical_gib, long_term_physical_gib), 2) AS long_term_compression_ratio,
  -- Forecast costs logical
  ROUND(active_logical_gib * active_logical_gib_price, 2) AS forecast_active_logical_cost,
  ROUND(long_term_logical_gib * long_term_logical_gib_price, 2) AS forecast_long_term_logical_cost,
  -- Forecast costs physical
  ROUND((active_no_tt_physical_gib + time_travel_physical_gib + fail_safe_physical_gib) * active_physical_gib_price, 2) AS forecast_active_physical_cost,
  ROUND(long_term_physical_gib * long_term_physical_gib_price, 2) AS forecast_long_term_physical_cost,
  -- Forecast costs total
  ROUND(((active_logical_gib * active_logical_gib_price) + (long_term_logical_gib * long_term_logical_gib_price)) -
    (((active_no_tt_physical_gib + time_travel_physical_gib + fail_safe_physical_gib) * active_physical_gib_price) +
    (long_term_physical_gib * long_term_physical_gib_price)), 2) AS forecast_total_cost_difference
FROM
  storage_sizes
ORDER BY
  (forecast_active_logical_cost + forecast_active_physical_cost) DESC;
At a minimum, to list tables in a dataset, you must be granted bigquery.tables.list permissions. The following predefined IAM
roles include bigquery.tables.list permissions:
bigquery.user
bigquery.metadataViewer
bigquery.dataViewer
bigquery.dataEditor
bigquery.dataOwner
bigquery.admin
For more information on IAM roles and permissions in
BigQuery, see Access control.
List tables
To list the tables in a dataset:
Console
In the Google Cloud console, in the navigation pane, click your dataset
to expand it. This displays the tables and views in the dataset.
Scroll through the list to see the tables in the dataset. Tables and
views are identified by different icons.
bq
In the Google Cloud console, activate Cloud Shell.
At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment
with the Google Cloud CLI
already installed and with values already set for
your current project. It can take a few seconds for the session to initialize.
Issue the bq ls command. The --format flag can be used to control the
output. If you are listing tables in a project other than your default
project, add the project ID to the dataset in the following format: project_id:dataset.
Additional flags include:
--max_results or -n: An integer indicating the maximum number of
results. The default value is 50.
using Google.Cloud.BigQuery.V2;
using System;
using System.Collections.Generic;
using System.Linq;

public class BigQueryListTables
{
    public void ListTables(
        string projectId = "your-project-id",
        string datasetId = "your_dataset_id"
    )
    {
        BigQueryClient client = BigQueryClient.Create(projectId);
        // Retrieve list of tables in the dataset
        List<BigQueryTable> tables = client.ListTables(datasetId).ToList();
        // Display the results
        if (tables.Count > 0)
        {
            Console.WriteLine($"Tables in dataset {datasetId}:");
            foreach (var table in tables)
            {
                Console.WriteLine($"\t{table.Reference.TableId}");
            }
        }
        else
        {
            Console.WriteLine($"{datasetId} does not contain any tables.");
        }
    }
}
import("context""fmt""io""cloud.google.com/go/bigquery""google.golang.org/api/iterator")// listTables demonstrates iterating through the collection of tables in a given dataset.funclistTables(wio.Writer,projectID,datasetIDstring)error{// projectID := "my-project-id"// datasetID := "mydataset"ctx:=context.Background()client,err:=bigquery.NewClient(ctx,projectID)iferr!=nil{returnfmt.Errorf("bigquery.NewClient: %v",err)}deferclient.Close()ts:=client.Dataset(datasetID).Tables(ctx)for{t,err:=ts.Next()iferr==iterator.Done{break}iferr!=nil{returnerr}fmt.Fprintf(w,"Table: %q\n",t.TableID)}returnnil}
import com.google.api.gax.paging.Page;
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQuery.TableListOption;
import com.google.cloud.bigquery.BigQueryException;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.DatasetId;
import com.google.cloud.bigquery.Table;

public class ListTables {

  public static void runListTables() {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "bigquery-public-data";
    String datasetName = "samples";
    listTables(projectId, datasetName);
  }

  public static void listTables(String projectId, String datasetName) {
    try {
      // Initialize client that will be used to send requests. This client only needs to be created
      // once, and can be reused for multiple requests.
      BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
      DatasetId datasetId = DatasetId.of(projectId, datasetName);
      Page<Table> tables = bigquery.listTables(datasetId, TableListOption.pageSize(100));
      tables.iterateAll().forEach(table -> System.out.print(table.getTableId().getTable() + "\n"));
      System.out.println("Tables listed successfully.");
    } catch (BigQueryException e) {
      System.out.println("Tables were not listed. Error occurred: " + e.toString());
    }
  }
}
// Import the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

async function listTables() {
  // Lists tables in 'my_dataset'.

  /**
   * TODO(developer): Uncomment the following lines before running the sample.
   */
  // const datasetId = 'my_dataset';

  // List all tables in the dataset
  const [tables] = await bigquery.dataset(datasetId).getTables();

  console.log('Tables:');
  tables.forEach(table => console.log(table.id));
}
use Google\Cloud\BigQuery\BigQueryClient;

/** Uncomment and populate these variables in your code */
// $projectId = 'The Google project ID';
// $datasetId = 'The BigQuery dataset ID';

$bigQuery = new BigQueryClient([
    'projectId' => $projectId,
]);
$dataset = $bigQuery->dataset($datasetId);
$tables = $dataset->tables();
foreach ($tables as $table) {
    print($table->id() . PHP_EOL);
}
from google.cloud import bigquery

# Construct a BigQuery client object.
client = bigquery.Client()

# TODO(developer): Set dataset_id to the ID of the dataset that contains
# the tables you are listing.
# dataset_id = 'your-project.your_dataset'

tables = client.list_tables(dataset_id)  # Make an API request.

print("Tables contained in '{}':".format(dataset_id))
for table in tables:
    print("{}.{}.{}".format(table.project, table.dataset_id, table.table_id))
require"google/cloud/bigquery"deflist_tablesdataset_id="your_dataset_id"bigquery=Google::Cloud::Bigquery.newdataset=bigquery.datasetdataset_idputs"Tables in dataset#{dataset_id}:"dataset.tables.eachdo|table|puts"\t#{table.table_id}"endend
If you're new to Google Cloud, create an account to evaluate how
BigQuery performs in real-world
scenarios. New customers also get $300 in free credits to run, test, and
deploy workloads.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2024-11-13 UTC."],[],[]]