Create and use tables

This document describes how to create and use standard (built-in) tables in BigQuery. For information about creating other table types, see the documentation for the specific table type.

After creating a table, you can:

  • Control access to your table data
  • Get information about your tables
  • List the tables in a dataset
  • Get table metadata

For more information about managing tables, including updating table properties, copying a table, and deleting a table, see Managing tables.

Before you begin

Before creating a table in BigQuery, first create a dataset to contain it and confirm that you have the required permissions described later in this document.

Table naming

When you create a table in BigQuery, the table name must be unique per dataset. The table name can:

  • Contain characters with a total of up to 1,024 UTF-8 bytes.
  • Contain Unicode characters in category L (letter), M (mark), N (number), Pc (connector, including underscore), Pd (dash), Zs (space). For more information, see General Category .

The following are all examples of valid table names: table 01, ग्राहक, 00_お客様, étudiant-01.

Caveats:

  • Table names are case-sensitive by default. mytable and MyTable can coexist in the same dataset, unless they are part of a dataset with case-sensitivity turned off.
  • Some table names and table name prefixes are reserved. If you receive an error saying that your table name or prefix is reserved, then select a different name and try again.
  • If you include multiple dot operators (.) in a sequence, the duplicate operators are implicitly stripped.

    For example, this: project_name....dataset_name..table_name

    Becomes this: project_name.dataset_name.table_name

Create tables

You can create a table in BigQuery in the following ways:

  • Manually, by using the Google Cloud console or the bq command-line tool's bq mk command.
  • Programmatically by calling the tables.insert API method.
  • By using the client libraries.
  • From query results.
  • By defining a table that references an external data source.
  • When you load data.
  • By using a CREATE TABLE data definition language (DDL) statement.

Required permissions

To create a table, you need the following IAM permissions:

  • bigquery.tables.create
  • bigquery.tables.updateData
  • bigquery.jobs.create

Additionally, you might require the bigquery.tables.getData permission to access the data that you write to the table.

Each of the following predefined IAM roles includes the permissions that you need in order to create a table:

  • roles/bigquery.dataEditor
  • roles/bigquery.dataOwner
  • roles/bigquery.admin (includes the bigquery.jobs.create permission)
  • roles/bigquery.user (includes the bigquery.jobs.create permission)
  • roles/bigquery.jobUser (includes the bigquery.jobs.create permission)

Additionally, if you have the bigquery.datasets.create permission, you can create and update tables in the datasets that you create.

For more information on IAM roles and permissions in BigQuery, see Predefined roles and permissions .

Create an empty table with a schema definition

You can create an empty table with a schema definition in the following ways:

  • Enter the schema using the Google Cloud console.
  • Provide the schema inline using the bq command-line tool.
  • Submit a JSON schema file using the bq command-line tool.
  • Provide the schema in a table resource when calling the API's tables.insert method.

For more information about specifying a table schema, see Specifying a schema .

After the table is created, you can load data into it or populate it by writing query results to it.

To create an empty table with a schema definition:

Console

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the Explorer pane, expand your project, and then select a dataset.
  3. In the Dataset info section, click Create table .
  4. In the Create table panel, specify the following details:
    1. In the Source section, select Empty table in the Create table from list.
    2. In the Destination section, specify the following details:
      1. For Dataset , select the dataset in which you want to create the table.
      2. In the Table field, enter the name of the table that you want to create.
      3. Verify that the Table type field is set to Native table .
    3. In the Schema section, enter the schema definition. You can enter schema information manually by using one of the following methods:
      • Option 1: Click Edit as text and paste the schema in the form of a JSON array. When you use a JSON array, you generate the schema using the same process as creating a JSON schema file. You can view the schema of an existing table in JSON format by entering the following command:

         bq show --format=prettyjson dataset.table

      • Option 2: Click Add field and enter the table schema. Specify each field's Name , Type , and Mode .
    4. Optional: Specify Partition and cluster settings . For more information, see Creating partitioned tables and Creating and using clustered tables .
    5. Optional: In the Advanced options section, if you want to use a customer-managed encryption key, then select the Use a customer-managed encryption key (CMEK) option. By default, BigQuery encrypts customer content stored at rest by using a Google-owned and Google-managed key.
    6. Click Create table .

SQL

The following example creates a table named newtable that expires on January 1, 2023:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

     CREATE TABLE mydataset.newtable (
       x INT64 OPTIONS (description = 'An optional INTEGER field'),
       y STRUCT <
         a ARRAY <STRING> OPTIONS (description = 'A repeated STRING field'),
         b BOOL
       >
     ) OPTIONS (
         expiration_timestamp = TIMESTAMP '2023-01-01 00:00:00 UTC',
         description = 'a table that expires in 2023',
         labels = [('org_unit', 'development')]);

  3. Click Run.

For more information about how to run queries, see Run an interactive query .

bq

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Use the bq mk command with the --table or -t flag. You can supply table schema information inline or with a JSON schema file. For a full list of parameters, see the bq mk --table reference . Some optional parameters include:

    • --expiration
    • --description
    • --time_partitioning_field
    • --time_partitioning_type
    • --range_partitioning
    • --clustering_fields
    • --destination_kms_key
    • --label

    --time_partitioning_field, --time_partitioning_type, --range_partitioning, --clustering_fields, and --destination_kms_key are not demonstrated here. For more information about these optional parameters, see their reference documentation.

    If you are creating a table in a project other than your default project, add the project ID to the dataset in the following format: project_id:dataset .

    To create an empty table in an existing dataset with a schema definition, enter the following:

     bq mk \
     --table \
     --expiration=integer \
     --description=description \
     --label=key_1:value_1 \
     --label=key_2:value_2 \
     --add_tags=key_3:value_3[,...] \
     project_id:dataset.table \
     schema


    Replace the following:

    • integer is the default lifetime (in seconds) for the table. The minimum value is 3600 seconds (one hour). The expiration time evaluates to the current UTC time plus the integer value. If you set the expiration time when you create a table, the dataset's default table expiration setting is ignored.
    • description is a description of the table in quotes.
    • key_1:value_1 and key_2:value_2 are key-value pairs that specify labels.
    • key_3:value_3 is a key-value pair that specifies a tag. Add multiple tags under the same flag with commas between key:value pairs.
    • project_id is your project ID.
    • dataset is a dataset in your project.
    • table is the name of the table you're creating.
    • schema is an inline schema definition in the format field:data_type,field:data_type or the path to the JSON schema file on your local machine.

    When you specify the schema on the command line, you cannot include a RECORD ( STRUCT ) type, you cannot include a column description, and you cannot specify the column mode. All modes default to NULLABLE . To include descriptions, modes, and RECORD types, supply a JSON schema file instead.
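
    For reference, a JSON schema file that includes a column description, explicit modes, and a RECORD type might look like the following sketch (the field names shown are illustrative, not taken from the examples in this section):

     [
       {
         "name": "full_name",
         "type": "STRING",
         "mode": "REQUIRED",
         "description": "The person's full name"
       },
       {
         "name": "address",
         "type": "RECORD",
         "mode": "NULLABLE",
         "fields": [
           {"name": "city", "type": "STRING", "mode": "NULLABLE"},
           {"name": "zip", "type": "STRING", "mode": "NULLABLE"}
         ]
       }
     ]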

    Examples:

    Enter the following command to create a table using an inline schema definition. This command creates a table named mytable in mydataset in your default project. The table expiration is set to 3600 seconds (1 hour), the description is set to This is my table , and the label is set to organization:development . The command uses the -t shortcut instead of --table . The schema is specified inline as: qtr:STRING,sales:FLOAT,year:STRING .

     bq mk \
     -t \
     --expiration 3600 \
     --description "This is my table" \
     --label organization:development \
     mydataset.mytable \
     qtr:STRING,sales:FLOAT,year:STRING

    Enter the following command to create a table using a JSON schema file. This command creates a table named mytable in mydataset in your default project. The table expiration is set to 3600 seconds (1 hour), the description is set to This is my table , and the label is set to organization:development . The path to the schema file is /tmp/myschema.json .

     bq mk \
     --table \
     --expiration 3600 \
     --description "This is my table" \
     --label organization:development \
     mydataset.mytable \
     /tmp/myschema.json

    Enter the following command to create a table using a JSON schema file. This command creates a table named mytable in mydataset in myotherproject. The table expiration is set to 3600 seconds (1 hour), the description is set to This is my table, and the label is set to organization:development. The path to the schema file is /tmp/myschema.json.

     bq mk \
     --table \
     --expiration 3600 \
     --description "This is my table" \
     --label organization:development \
     myotherproject:mydataset.mytable \
     /tmp/myschema.json

    After the table is created, you can update the table's expiration, description, and labels. You can also modify the schema definition. For example, a later bq update command along these lines changes the description and expiration, as shown in the sketch that follows.
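
    A minimal sketch (the flag values shown are illustrative):

     bq update \
     --description "Updated description" \
     --expiration 7200 \
     mydataset.mytable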

Terraform

Use the google_bigquery_table resource.

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

Create a table

The following example creates a table named mytable :

 resource "google_bigquery_dataset" "default" {
  dataset_id                      = "mydataset"
  default_partition_expiration_ms = 2592000000  # 30 days
  default_table_expiration_ms     = 31536000000 # 365 days
  description                     = "dataset description"
  location                        = "US"
  max_time_travel_hours           = 96 # 4 days

  labels = {
    billing_group = "accounting",
    pii           = "sensitive"
  }
}

resource "google_bigquery_table" "default" {
  dataset_id          = google_bigquery_dataset.default.dataset_id
  table_id            = "mytable"
  deletion_protection = false # set to "true" in production

  schema = <<EOF
[
  {
    "name": "ID",
    "type": "INT64",
    "mode": "NULLABLE",
    "description": "Item ID"
  },
  {
    "name": "Item",
    "type": "STRING",
    "mode": "NULLABLE"
  }
]
EOF

} 

Create a table and grant access to it

The following example creates a table named mytable , then uses the google_bigquery_table_iam_policy resource to grant access to it. Take this step only if you want to grant access to the table to principals who don't have access to the dataset in which the table resides.

 resource "google_bigquery_dataset" "default" {
  dataset_id                      = "mydataset"
  default_partition_expiration_ms = 2592000000  # 30 days
  default_table_expiration_ms     = 31536000000 # 365 days
  description                     = "dataset description"
  location                        = "US"
  max_time_travel_hours           = 96 # 4 days

  labels = {
    billing_group = "accounting",
    pii           = "sensitive"
  }
}

resource "google_bigquery_table" "default" {
  dataset_id          = google_bigquery_dataset.default.dataset_id
  table_id            = "mytable"
  deletion_protection = false # set to "true" in production

  schema = <<EOF
[
  {
    "name": "ID",
    "type": "INT64",
    "mode": "NULLABLE",
    "description": "Item ID"
  },
  {
    "name": "Item",
    "type": "STRING",
    "mode": "NULLABLE"
  }
]
EOF

}

data "google_iam_policy" "default" {
  binding {
    role = "roles/bigquery.dataOwner"
    members = [
      "user:raha@altostrat.com",
    ]
  }
}

resource "google_bigquery_table_iam_policy" "policy" {
  dataset_id  = google_bigquery_table.default.dataset_id
  table_id    = google_bigquery_table.default.table_id
  policy_data = data.google_iam_policy.default.policy_data
} 

Create a table with a customer-managed encryption key

The following example creates a table named mytable , and also uses the google_kms_crypto_key and google_kms_key_ring resources to specify a Cloud Key Management Service key for the table. You must enable the Cloud Key Management Service API before running this example.

 resource "google_bigquery_dataset" "default" {
  dataset_id                      = "mydataset"
  default_partition_expiration_ms = 2592000000  # 30 days
  default_table_expiration_ms     = 31536000000 # 365 days
  description                     = "dataset description"
  location                        = "US"
  max_time_travel_hours           = 96 # 4 days

  labels = {
    billing_group = "accounting",
    pii           = "sensitive"
  }
}

resource "google_bigquery_table" "default" {
  dataset_id          = google_bigquery_dataset.default.dataset_id
  table_id            = "mytable"
  deletion_protection = false # set to "true" in production

  schema = <<EOF
[
  {
    "name": "ID",
    "type": "INT64",
    "mode": "NULLABLE",
    "description": "Item ID"
  },
  {
    "name": "Item",
    "type": "STRING",
    "mode": "NULLABLE"
  }
]
EOF

  encryption_configuration {
    kms_key_name = google_kms_crypto_key.crypto_key.id
  }

  depends_on = [google_project_iam_member.service_account_access]
}

resource "google_kms_crypto_key" "crypto_key" {
  name     = "example-key"
  key_ring = google_kms_key_ring.key_ring.id
}

resource "random_id" "default" {
  byte_length = 8
}

resource "google_kms_key_ring" "key_ring" {
  name     = "${random_id.default.hex}-example-keyring"
  location = "us"
}

# Enable the BigQuery service account to encrypt/decrypt Cloud KMS keys
data "google_project" "project" {
}

resource "google_project_iam_member" "service_account_access" {
  project = data.google_project.project.project_id
  role    = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member  = "serviceAccount:bq-${data.google_project.project.number}@bigquery-encryption.iam.gserviceaccount.com"
} 

To apply your Terraform configuration in a Google Cloud project, complete the steps in the following sections.

Prepare Cloud Shell

  1. Launch Cloud Shell .
  2. Set the default Google Cloud project where you want to apply your Terraform configurations.

    You only need to run this command once per project, and you can run it in any directory.

    export GOOGLE_CLOUD_PROJECT=PROJECT_ID

    Environment variables are overridden if you set explicit values in the Terraform configuration file.

Prepare the directory

Each Terraform configuration file must have its own directory (also called a root module ).

  1. In Cloud Shell, create a directory and a new file within that directory. The filename must have the .tf extension—for example, main.tf. In this tutorial, the file is referred to as main.tf.
    mkdir DIRECTORY && cd DIRECTORY && touch main.tf
  2. If you are following a tutorial, you can copy the sample code in each section or step.

    Copy the sample code into the newly created main.tf .

    Optionally, copy the code from GitHub. This is recommended when the Terraform snippet is part of an end-to-end solution.

  3. Review and modify the sample parameters to apply to your environment.
  4. Save your changes.
  5. Initialize Terraform. You only need to do this once per directory.
    terraform init

    Optionally, to use the latest Google provider version, include the -upgrade option:

    terraform init -upgrade

Apply the changes

  1. Review the configuration and verify that the resources that Terraform is going to create or update match your expectations:
    terraform plan

    Make corrections to the configuration as necessary.

  2. Apply the Terraform configuration by running the following command and entering yes at the prompt:
    terraform apply

    Wait until Terraform displays the "Apply complete!" message.

  3. Open your Google Cloud project to view the results. In the Google Cloud console, navigate to your resources in the UI to make sure that Terraform has created or updated them.

API

Call the tables.insert method with a defined table resource .
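
The request body is a table resource. As a minimal sketch (the project, dataset, and table IDs are illustrative), a body that defines a two-column table might look like the following:

 {
   "tableReference": {
     "projectId": "your-project-id",
     "datasetId": "your_dataset",
     "tableId": "your_table"
   },
   "schema": {
     "fields": [
       {"name": "full_name", "type": "STRING", "mode": "REQUIRED"},
       {"name": "age", "type": "INTEGER", "mode": "NULLABLE"}
     ]
   }
 }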

C#

Before trying this sample, follow the C# setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery C# API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 using Google.Cloud.BigQuery.V2;

 public class BigQueryCreateTable
 {
     public BigQueryTable CreateTable(
         string projectId = "your-project-id",
         string datasetId = "your_dataset_id"
     )
     {
         BigQueryClient client = BigQueryClient.Create(projectId);
         var dataset = client.GetDataset(datasetId);
         // Create schema for new table.
         var schema = new TableSchemaBuilder
         {
             { "full_name", BigQueryDbType.String },
             { "age", BigQueryDbType.Int64 }
         }.Build();
         // Create the table
         return dataset.CreateTable(tableId: "your_table_id", schema: schema);
     }
 }

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Go API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 import (
 	"context"
 	"fmt"
 	"time"

 	"cloud.google.com/go/bigquery"
 )

 // createTableExplicitSchema demonstrates creating a new BigQuery table and specifying a schema.
 func createTableExplicitSchema(projectID, datasetID, tableID string) error {
 	// projectID := "my-project-id"
 	// datasetID := "mydatasetid"
 	// tableID := "mytableid"
 	ctx := context.Background()
 	client, err := bigquery.NewClient(ctx, projectID)
 	if err != nil {
 		return fmt.Errorf("bigquery.NewClient: %v", err)
 	}
 	defer client.Close()

 	sampleSchema := bigquery.Schema{
 		{Name: "full_name", Type: bigquery.StringFieldType},
 		{Name: "age", Type: bigquery.IntegerFieldType},
 	}

 	metaData := &bigquery.TableMetadata{
 		Schema:         sampleSchema,
 		ExpirationTime: time.Now().AddDate(1, 0, 0), // Table will be automatically deleted in 1 year.
 	}
 	tableRef := client.Dataset(datasetID).Table(tableID)
 	if err := tableRef.Create(ctx, metaData); err != nil {
 		return err
 	}
 	return nil
 }

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Java API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 import com.google.cloud.bigquery.BigQuery;
 import com.google.cloud.bigquery.BigQueryException;
 import com.google.cloud.bigquery.BigQueryOptions;
 import com.google.cloud.bigquery.Field;
 import com.google.cloud.bigquery.Schema;
 import com.google.cloud.bigquery.StandardSQLTypeName;
 import com.google.cloud.bigquery.StandardTableDefinition;
 import com.google.cloud.bigquery.TableDefinition;
 import com.google.cloud.bigquery.TableId;
 import com.google.cloud.bigquery.TableInfo;

 public class CreateTable {

   public static void runCreateTable() {
     // TODO(developer): Replace these variables before running the sample.
     String datasetName = "MY_DATASET_NAME";
     String tableName = "MY_TABLE_NAME";
     Schema schema =
         Schema.of(
             Field.of("stringField", StandardSQLTypeName.STRING),
             Field.of("booleanField", StandardSQLTypeName.BOOL));
     createTable(datasetName, tableName, schema);
   }

   public static void createTable(String datasetName, String tableName, Schema schema) {
     try {
       // Initialize client that will be used to send requests. This client only needs to be created
       // once, and can be reused for multiple requests.
       BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

       TableId tableId = TableId.of(datasetName, tableName);
       TableDefinition tableDefinition = StandardTableDefinition.of(schema);
       TableInfo tableInfo = TableInfo.newBuilder(tableId, tableDefinition).build();

       bigquery.create(tableInfo);
       System.out.println("Table created successfully");
     } catch (BigQueryException e) {
       System.out.println("Table was not created. \n" + e.toString());
     }
   }
 }

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Node.js API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 // Import the Google Cloud client library and create a client
 const {BigQuery} = require('@google-cloud/bigquery');
 const bigquery = new BigQuery();

 async function createTable() {
   // Creates a new table named "my_table" in "my_dataset".

   /**
    * TODO(developer): Uncomment the following lines before running the sample.
    */
   // const datasetId = "my_dataset";
   // const tableId = "my_table";
   // const schema = 'Name:string, Age:integer, Weight:float, IsMagic:boolean';

   // For all options, see https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
   const options = {
     schema: schema,
     location: 'US',
   };

   // Create a new table in the dataset
   const [table] = await bigquery
     .dataset(datasetId)
     .createTable(tableId, options);

   console.log(`Table ${table.id} created.`);
 }

PHP

Before trying this sample, follow the PHP setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery PHP API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 use Google\Cloud\BigQuery\BigQueryClient;

 /** Uncomment and populate these variables in your code */
 // $projectId = 'The Google project ID';
 // $datasetId = 'The BigQuery dataset ID';
 // $tableId = 'The BigQuery table ID';
 // $fields = [
 //    [
 //        'name' => 'field1',
 //        'type' => 'string',
 //        'mode' => 'required'
 //    ],
 //    [
 //        'name' => 'field2',
 //        'type' => 'integer'
 //    ],
 //];

 $bigQuery = new BigQueryClient([
     'projectId' => $projectId,
 ]);
 $dataset = $bigQuery->dataset($datasetId);
 $schema = ['fields' => $fields];
 $table = $dataset->createTable($tableId, ['schema' => $schema]);
 printf('Created table %s' . PHP_EOL, $tableId);

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Python API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 from google.cloud import bigquery

 # Construct a BigQuery client object.
 client = bigquery.Client()

 # TODO(developer): Set table_id to the ID of the table to create.
 # table_id = "your-project.your_dataset.your_table_name"

 schema = [
     bigquery.SchemaField("full_name", "STRING", mode="REQUIRED"),
     bigquery.SchemaField("age", "INTEGER", mode="REQUIRED"),
 ]

 table = bigquery.Table(table_id, schema=schema)
 table = client.create_table(table)  # Make an API request.
 print(
     "Created table {}.{}.{}".format(table.project, table.dataset_id, table.table_id)
 )

Ruby

Before trying this sample, follow the Ruby setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Ruby API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 require "google/cloud/bigquery"

 def create_table
   dataset_id = "my_dataset"
   bigquery   = Google::Cloud::Bigquery.new
   dataset    = bigquery.dataset dataset_id

   table_id = "my_table"
   table    = dataset.create_table table_id do |updater|
     updater.string  "full_name", mode: :required
     updater.integer "age",       mode: :required
   end

   puts "Created table: #{table_id}"
 end

Create an empty table without a schema definition

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Java API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 import com.google.cloud.bigquery.BigQuery;
 import com.google.cloud.bigquery.BigQueryException;
 import com.google.cloud.bigquery.BigQueryOptions;
 import com.google.cloud.bigquery.Schema;
 import com.google.cloud.bigquery.StandardTableDefinition;
 import com.google.cloud.bigquery.TableDefinition;
 import com.google.cloud.bigquery.TableId;
 import com.google.cloud.bigquery.TableInfo;

 // Sample to create a table without schema
 public class CreateTableWithoutSchema {

   public static void main(String[] args) {
     // TODO(developer): Replace these variables before running the sample.
     String datasetName = "MY_DATASET_NAME";
     String tableName = "MY_TABLE_NAME";
     createTableWithoutSchema(datasetName, tableName);
   }

   public static void createTableWithoutSchema(String datasetName, String tableName) {
     try {
       // Initialize client that will be used to send requests. This client only needs to be created
       // once, and can be reused for multiple requests.
       BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

       TableId tableId = TableId.of(datasetName, tableName);
       TableDefinition tableDefinition = StandardTableDefinition.of(Schema.of());
       TableInfo tableInfo = TableInfo.newBuilder(tableId, tableDefinition).build();

       bigquery.create(tableInfo);
       System.out.println("Table created successfully");
     } catch (BigQueryException e) {
       System.out.println("Table was not created. \n" + e.toString());
     }
   }
 }

Create a table from a query result

To create a table from a query result, write the results to a destination table.

Console

  1. Open the BigQuery page in the Google Cloud console.

    Go to the BigQuery page

  2. In the Explorer panel, expand your project and select a dataset.

  3. Enter a valid SQL query.

  4. Click More and then select Query settings.


  5. Select the Set a destination table for query results option.


  6. In the Destination section, select the Dataset in which you want to create the table, and then choose a Table Id.

  7. In the Destination table write preference section, choose one of the following:

    • Write if empty — Writes the query results to the table only if the table is empty.
    • Append to table — Appends the query results to an existing table.
    • Overwrite table — Overwrites an existing table with the same name using the query results.
  8. Optional: For Data location, choose your location.

  9. To update the query settings, click Save.

  10. Click Run. This creates a query job that writes the query results to the table you specified.

Alternatively, if you forget to specify a destination table before running your query, you can copy the cached results table to a permanent table by clicking the Save Results button above the editor.

SQL

The following example uses the CREATE TABLE statement to create the trips table from data in the public bikeshare_trips table:

  1. In the Google Cloud console, go to the BigQuery page.

    Go to BigQuery

  2. In the query editor, enter the following statement:

     CREATE TABLE mydataset.trips AS (
       SELECT
         bike_id,
         start_time,
         duration_minutes
       FROM
         bigquery-public-data.austin_bikeshare.bikeshare_trips
     );

  3. Click Run.

For more information about how to run queries, see Run an interactive query .

For more information, see Creating a new table from an existing table .

bq

  1. In the Google Cloud console, activate Cloud Shell.

    Activate Cloud Shell

    At the bottom of the Google Cloud console, a Cloud Shell session starts and displays a command-line prompt. Cloud Shell is a shell environment with the Google Cloud CLI already installed and with values already set for your current project. It can take a few seconds for the session to initialize.

  2. Enter the bq query command and specify the --destination_table flag to create a permanent table based on the query results. Specify the use_legacy_sql=false flag to use GoogleSQL syntax. To write the query results to a table that is not in your default project, add the project ID to the dataset name in the following format: project_id:dataset.

    Optional: Supply the --location flag and set the value to your location .

    To control the write disposition for an existing destination table, specify one of the following optional flags:

    • --append_table : If the destination table exists, the query results are appended to it.
    • --replace : If the destination table exists, it is overwritten with the query results.

      bq --location=location query \
      --destination_table project_id:dataset.table \
      --use_legacy_sql=false 'query'


      Replace the following:

    • location is the name of the location used to process the query. The --location flag is optional. For example, if you are using BigQuery in the Tokyo region, you can set the flag's value to asia-northeast1 . You can set a default value for the location by using the .bigqueryrc file .

    • project_id is your project ID.

    • dataset is the name of the dataset that contains the table to which you are writing the query results.

    • table is the name of the table to which you're writing the query results.

    • query is a query in GoogleSQL syntax.

      If no write disposition flag is specified, the default behavior is to write the results to the table only if it is empty. If the table exists and it is not empty, the following error is returned: BigQuery error in query operation: Error processing job project_id :bqjob_123abc456789_00000e1234f_1: Already Exists: Table project_id:dataset.table .

      Examples:

      Enter the following command to write query results to a destination table named mytable in mydataset . The dataset is in your default project. Since no write disposition flag is specified in the command, the table must be new or empty. Otherwise, an Already exists error is returned. The query retrieves data from the USA Name Data public dataset .

       bq query \
       --destination_table mydataset.mytable \
       --use_legacy_sql=false \
       'SELECT
         name,
         number
       FROM
         `bigquery-public-data`.usa_names.usa_1910_current
       WHERE
         gender = "M"
       ORDER BY
         number DESC'


      Enter the following command to use query results to overwrite a destination table named mytable in mydataset . The dataset is in your default project. The command uses the --replace flag to overwrite the destination table.

       bq query \
       --destination_table mydataset.mytable \
       --replace \
       --use_legacy_sql=false \
       'SELECT
         name,
         number
       FROM
         `bigquery-public-data`.usa_names.usa_1910_current
       WHERE
         gender = "M"
       ORDER BY
         number DESC'


      Enter the following command to append query results to a destination table named mytable in mydataset . The dataset is in my-other-project , not your default project. The command uses the --append_table flag to append the query results to the destination table.

       bq query \
       --append_table \
       --use_legacy_sql=false \
       --destination_table my-other-project:mydataset.mytable \
       'SELECT
         name,
         number
       FROM
         `bigquery-public-data`.usa_names.usa_1910_current
       WHERE
         gender = "M"
       ORDER BY
         number DESC'


      The output for each of these examples looks like the following. For readability, some output is truncated.

      Waiting on bqjob_r123abc456_000001234567_1 ... (2s) Current status: DONE
      +---------+--------+
      |  name   | number |
      +---------+--------+
      | Robert  |  10021 |
      | John    |   9636 |
      | Robert  |   9297 |
      | ...              |
      +---------+--------+

API

To save query results to a permanent table, call the jobs.insert method, configure a query job, and include a value for the destinationTable property. To control the write disposition for an existing destination table, configure the writeDisposition property.

To control the processing location for the query job, specify the location property in the jobReference section of the job resource .
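
As a sketch (the project, dataset, and table IDs shown are illustrative), a query job configuration that sets a destination table, a write disposition, and a processing location might look like the following:

 {
   "jobReference": {
     "projectId": "your-project-id",
     "location": "US"
   },
   "configuration": {
     "query": {
       "query": "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus;",
       "useLegacySql": false,
       "destinationTable": {
         "projectId": "your-project-id",
         "datasetId": "your_dataset",
         "tableId": "your_table"
       },
       "writeDisposition": "WRITE_TRUNCATE"
     }
   }
 }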

Go

Before trying this sample, follow the Go setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Go API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 import (
 	"context"
 	"fmt"
 	"io"

 	"cloud.google.com/go/bigquery"
 	"google.golang.org/api/iterator"
 )

 // queryWithDestination demonstrates saving the results of a query to a specific table by setting the destination
 // via the API properties.
 func queryWithDestination(w io.Writer, projectID, destDatasetID, destTableID string) error {
 	// projectID := "my-project-id"
 	// datasetID := "mydataset"
 	// tableID := "mytable"
 	ctx := context.Background()
 	client, err := bigquery.NewClient(ctx, projectID)
 	if err != nil {
 		return fmt.Errorf("bigquery.NewClient: %v", err)
 	}
 	defer client.Close()

 	q := client.Query("SELECT 17 as my_col")
 	q.Location = "US" // Location must match the dataset(s) referenced in query.
 	q.QueryConfig.Dst = client.Dataset(destDatasetID).Table(destTableID)
 	// Run the query and print results when the query job is completed.
 	job, err := q.Run(ctx)
 	if err != nil {
 		return err
 	}
 	status, err := job.Wait(ctx)
 	if err != nil {
 		return err
 	}
 	if err := status.Err(); err != nil {
 		return err
 	}
 	it, err := job.Read(ctx)
 	for {
 		var row []bigquery.Value
 		err := it.Next(&row)
 		if err == iterator.Done {
 			break
 		}
 		if err != nil {
 			return err
 		}
 		fmt.Fprintln(w, row)
 	}
 	return nil
 }

Java

Before trying this sample, follow the Java setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Java API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

To save query results to a permanent table, set the destination table to the desired TableId in a QueryJobConfiguration .

 import com.google.cloud.bigquery.BigQuery;
 import com.google.cloud.bigquery.BigQueryException;
 import com.google.cloud.bigquery.BigQueryOptions;
 import com.google.cloud.bigquery.QueryJobConfiguration;
 import com.google.cloud.bigquery.TableId;

 public class SaveQueryToTable {

   public static void runSaveQueryToTable() {
     // TODO(developer): Replace these variables before running the sample.
     String query = "SELECT corpus FROM `bigquery-public-data.samples.shakespeare` GROUP BY corpus;";
     String destinationTable = "MY_TABLE";
     String destinationDataset = "MY_DATASET";

     saveQueryToTable(destinationDataset, destinationTable, query);
   }

   public static void saveQueryToTable(
       String destinationDataset, String destinationTableId, String query) {
     try {
       // Initialize client that will be used to send requests. This client only needs to be created
       // once, and can be reused for multiple requests.
       BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

       // Identify the destination table
       TableId destinationTable = TableId.of(destinationDataset, destinationTableId);

       // Build the query job
       QueryJobConfiguration queryConfig =
           QueryJobConfiguration.newBuilder(query).setDestinationTable(destinationTable).build();

       // Execute the query.
       bigquery.query(queryConfig);

       // The results are now saved in the destination table.
       System.out.println("Saved query ran successfully");
     } catch (BigQueryException | InterruptedException e) {
       System.out.println("Saved query did not run \n" + e.toString());
     }
   }
 }

Node.js

Before trying this sample, follow the Node.js setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Node.js API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

 // Import the Google Cloud client library
 const {BigQuery} = require('@google-cloud/bigquery');
 const bigquery = new BigQuery();

 async function queryDestinationTable() {
   // Queries the U.S. given names dataset for the state of Texas
   // and saves results to permanent table.

   /**
    * TODO(developer): Uncomment the following lines before running the sample.
    */
   // const datasetId = 'my_dataset';
   // const tableId = 'my_table';

   // Create destination table reference
   const dataset = bigquery.dataset(datasetId);
   const destinationTable = dataset.table(tableId);

   const query = `SELECT name
     FROM \`bigquery-public-data.usa_names.usa_1910_2013\`
     WHERE state = 'TX'
     LIMIT 100`;

   // For all options, see https://cloud.google.com/bigquery/docs/reference/v2/tables#resource
   const options = {
     query: query,
     // Location must match that of the dataset(s) referenced in the query.
     location: 'US',
     destination: destinationTable,
   };

   // Run the query as a job
   const [job] = await bigquery.createQueryJob(options);
   console.log(`Job ${job.id} started.`);
   console.log(`Query results loaded to table ${destinationTable.id}`);
 }

Python

Before trying this sample, follow the Python setup instructions in the BigQuery quickstart using client libraries . For more information, see the BigQuery Python API reference documentation .

To authenticate to BigQuery, set up Application Default Credentials. For more information, see Set up authentication for client libraries .

To save query results to a permanent table, create a QueryJobConfig and set the destination to the desired TableReference. Pass the job configuration to the query method.
 from google.cloud import bigquery

 # Construct a BigQuery client object.
 client = bigquery.Client()

 # TODO(developer): Set table_id to the ID of the destination table.
 # table_id = "your-project.your_dataset.your_table_name"

 job_config = bigquery.QueryJobConfig(destination=table_id)

 sql = """
     SELECT corpus
     FROM `bigquery-public-data.samples.shakespeare`
     GROUP BY corpus;
 """

 # Start the query, passing in the extra configuration.
 query_job = client.query(sql, job_config=job_config)  # Make an API request.
 query_job.result()  # Wait for the job to complete.

 print("Query results loaded to the table {}".format(table_id))
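
To control the write disposition from Python as well, QueryJobConfig accepts a write_disposition argument. A minimal sketch, assuming the same illustrative table_id as above:

 from google.cloud import bigquery

 client = bigquery.Client()

 # table_id is illustrative; set it to your destination table.
 # table_id = "your-project.your_dataset.your_table_name"

 # WRITE_TRUNCATE overwrites the destination table if it already exists;
 # WRITE_APPEND would append to it instead.
 job_config = bigquery.QueryJobConfig(
     destination=table_id,
     write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
 )
 query_job = client.query("SELECT 17 AS my_col", job_config=job_config)
 query_job.result()  # Wait for the job to complete.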