BigQuery - Package cloud.google.com/go/bigquery (v1.23.0)

Package bigquery provides a client for the BigQuery service.

The following assumes a basic familiarity with BigQuery concepts. See https://cloud.google.com/bigquery/docs .

See https://godoc.org/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Creating a Client

To start working with this package, create a client:

  ctx := context.Background()
  client, err := bigquery.NewClient(ctx, projectID)
  if err != nil {
      // TODO: Handle error.
  }

Querying

To query existing tables, create a Query and call its Read method:

  q := client.Query(`
      SELECT year, SUM(number) as num
      FROM ` + "`bigquery-public-data.usa_names.usa_1910_2013`" + `
      WHERE name = "William"
      GROUP BY year
      ORDER BY year
  `)
  it, err := q.Read(ctx)
  if err != nil {
      // TODO: Handle error.
  }

Then iterate through the resulting rows. You can store a row using anything that implements the ValueLoader interface, or with a slice or map of bigquery.Value. A slice is simplest:

  for {
      var values []bigquery.Value
      err := it.Next(&values)
      if err == iterator.Done {
          break
      }
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(values)
  }

You can also use a struct whose exported fields match the query:

  type Count struct {
      Year int
      Num  int
  }
  for {
      var c Count
      err := it.Next(&c)
      if err == iterator.Done {
          break
      }
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(c)
  }

You can also start the query running and get the results later. Create the query as above, but call Run instead of Read. This returns a Job, which represents an asynchronous operation.

  job, err := q.Run(ctx)
  if err != nil {
      // TODO: Handle error.
  }

Get the job's ID, a printable string. You can save this string to retrieve the results at a later time, even in another process.

  jobID := job.ID()
  fmt.Printf("The job ID is %s\n", jobID)

To retrieve the job's results from the ID, first look up the Job:

  job, err = client.JobFromID(ctx, jobID)
  if err != nil {
      // TODO: Handle error.
  }

Use the Job.Read method to obtain an iterator, and loop over the rows. Query.Read is just a convenience method that combines Query.Run and Job.Read.

  it, err = job.Read(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  // Proceed with iteration as above.

Datasets and Tables

You can refer to datasets in the client's project with the Dataset method, and in other projects with the DatasetInProject method:

  myDataset := client.Dataset("my_dataset")
  yourDataset := client.DatasetInProject("your-project-id", "your_dataset")

These methods create references to datasets, not the datasets themselves. You can have a dataset reference even if the dataset doesn't exist yet. Use Dataset.Create to create a dataset from a reference:

  if err := myDataset.Create(ctx, nil); err != nil {
      // TODO: Handle error.
  }

You can refer to tables with Dataset.Table. Like bigquery.Dataset, bigquery.Table is a reference to an object in BigQuery that may or may not exist.

  table := myDataset.Table("my_table")

You can create, delete and update the metadata of tables with methods on Table. For instance, you could create a temporary table with:

  err = myDataset.Table("temp").Create(ctx, &bigquery.TableMetadata{
      ExpirationTime: time.Now().Add(1 * time.Hour)})
  if err != nil {
      // TODO: Handle error.
  }

We'll see how to create a table with a schema in the next section.

Schemas

There are two ways to construct schemas with this package. You can build a schema by hand, like so:

  schema1 := bigquery.Schema{
      {Name: "Name", Required: true, Type: bigquery.StringFieldType},
      {Name: "Grades", Repeated: true, Type: bigquery.IntegerFieldType},
      {Name: "Optional", Required: false, Type: bigquery.IntegerFieldType},
  }

Or you can infer the schema from a struct:

  type student struct {
      Name     string
      Grades   []int
      Optional bigquery.NullInt64
  }
  schema2, err := bigquery.InferSchema(student{})
  if err != nil {
      // TODO: Handle error.
  }
  // schema1 and schema2 are identical.

Struct inference supports tags like those of the encoding/json package, so you can change names, ignore fields, or mark a field as nullable (non-required). Fields declared as one of the Null types (NullInt64, NullFloat64, NullString, NullBool, NullTimestamp, NullDate, NullTime, NullDateTime, and NullGeography) are automatically inferred as nullable, so the "nullable" tag is only needed for []byte, *big.Rat and pointer-to-struct fields.

  type student2 struct {
      Name     string `bigquery:"full_name"`
      Grades   []int
      Secret   string `bigquery:"-"`
      Optional []byte `bigquery:",nullable"`
  }
  schema3, err := bigquery.InferSchema(student2{})
  if err != nil {
      // TODO: Handle error.
  }
  // schema3 has required fields "full_name" and "Grades", and nullable BYTES field "Optional".

Having constructed a schema, you can create a table with it like so:

  if err := table.Create(ctx, &bigquery.TableMetadata{Schema: schema1}); err != nil {
      // TODO: Handle error.
  }

Copying

You can copy one or more tables to another table. Begin by constructing a Copier describing the copy. Then set any desired copy options, and finally call Run to get a Job:

  copier := myDataset.Table("dest").CopierFrom(myDataset.Table("src"))
  copier.WriteDisposition = bigquery.WriteTruncate
  job, err = copier.Run(ctx)
  if err != nil {
      // TODO: Handle error.
  }

You can chain the call to Run if you don't want to set options:

  job, err = myDataset.Table("dest").CopierFrom(myDataset.Table("src")).Run(ctx)
  if err != nil {
      // TODO: Handle error.
  }

You can wait for your job to complete:

  status, err := job.Wait(ctx)
  if err != nil {
      // TODO: Handle error.
  }

Job.Wait polls with exponential backoff. You can also poll yourself, if you wish:

  for {
      status, err := job.Status(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status.Done() {
          if status.Err() != nil {
              log.Fatalf("Job failed with error %v", status.Err())
          }
          break
      }
      time.Sleep(pollInterval)
  }

Loading and Uploading

There are two ways to populate a table with this package: load the data from a Google Cloud Storage object, or upload rows directly from your program.

For loading, first create a GCSReference, configuring it if desired. Then make a Loader, optionally configure it as well, and call its Run method.

  gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
  gcsRef.AllowJaggedRows = true
  loader := myDataset.Table("dest").LoaderFrom(gcsRef)
  loader.CreateDisposition = bigquery.CreateNever
  job, err = loader.Run(ctx)
  // Poll the job for completion if desired, as above.

To upload, first define a type that implements the ValueSaver interface, which has a single method named Save. Then create an Inserter, and call its Put method with a slice of values.

  u := table.Inserter()
  // Item implements the ValueSaver interface.
  items := []*Item{
      {Name: "n1", Size: 32.6, Count: 7},
      {Name: "n2", Size: 4, Count: 2},
      {Name: "n3", Size: 101.5, Count: 1},
  }
  if err := u.Put(ctx, items); err != nil {
      // TODO: Handle error.
  }

You can also upload a struct that doesn't implement ValueSaver. Use the StructSaver type to specify the schema and insert ID by hand, or just supply the struct or struct pointer directly and the schema will be inferred:

  type Item2 struct {
      Name  string
      Size  float64
      Count int
  }
  // Item2 does not implement the ValueSaver interface; its schema is inferred.
  items2 := []*Item2{
      {Name: "n1", Size: 32.6, Count: 7},
      {Name: "n2", Size: 4, Count: 2},
      {Name: "n3", Size: 101.5, Count: 1},
  }
  if err := u.Put(ctx, items2); err != nil {
      // TODO: Handle error.
  }

BigQuery allows for higher throughput when omitting insertion IDs. To enable this, specify the sentinel NoDedupeID value for the insertion ID when implementing a ValueSaver.
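For instance, here is a minimal sketch of a Save implementation for the Item type from the earlier examples (the field names are assumed from those examples):

  // Save implements ValueSaver. Returning NoDedupeID as the insert ID
  // opts this row out of best-effort deduplication.
  func (i *Item) Save() (map[string]bigquery.Value, string, error) {
      return map[string]bigquery.Value{
          "Name":  i.Name,
          "Size":  i.Size,
          "Count": i.Count,
      }, bigquery.NoDedupeID, nil
  }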

Extracting

If you've been following so far, extracting data from a BigQuery table into a Google Cloud Storage object will feel familiar. First create an Extractor, then optionally configure it, and lastly call its Run method.

 extractor 
  
 := 
  
 table 
 . 
 ExtractorTo 
 ( 
 gcsRef 
 ) 
 extractor 
 . 
 DisableHeader 
  
 = 
  
 true 
 job 
 , 
  
 err 
  
 = 
  
 extractor 
 . 
 Run 
 ( 
 ctx 
 ) 
 // Poll the job for completion if desired, as above. 

Errors

Errors returned by this client are often of the type googleapi.Error: https://godoc.org/google.golang.org/api/googleapi#Error

These errors can be introspected for more information by type asserting to the richer *googleapi.Error type. For example:

  
  if e, ok := err.(*googleapi.Error); ok {
      if e.Code == 409 {
          // ...
      }
  }

In some cases, your client may receive unstructured googleapi.Error error responses. In such cases, it is likely that you have exceeded BigQuery request limits, documented at: https://cloud.google.com/bigquery/quotas

Constants

NumericPrecisionDigits, NumericScaleDigits, BigNumericPrecisionDigits, BigNumericScaleDigits

  const (
      // NumericPrecisionDigits is the maximum number of digits in a NUMERIC value.
      NumericPrecisionDigits = 38

      // NumericScaleDigits is the maximum number of digits after the decimal point in a NUMERIC value.
      NumericScaleDigits = 9

      // BigNumericPrecisionDigits is the maximum number of full digits in a BIGNUMERIC value.
      BigNumericPrecisionDigits = 76

      // BigNumericScaleDigits is the maximum number of fractional digits in a BIGNUMERIC value.
      BigNumericScaleDigits = 38
  )

DetectProjectID

  const DetectProjectID = "*detect-project-id*"

DetectProjectID is a sentinel value that instructs NewClient to detect the project ID. It is given in place of the projectID argument. NewClient will use the project ID from the given credentials or the default credentials ( https://developers.google.com/accounts/docs/application-default-credentials ) if no credentials were provided. When providing credentials, not all options will allow NewClient to extract the project ID. Specifically a JWT does not have the project ID encoded.
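For example, a sketch that relies on the project ID carried by the application default credentials:

  // DetectProjectID tells NewClient to take the project ID from the credentials.
  client, err := bigquery.NewClient(ctx, bigquery.DetectProjectID)
  if err != nil {
      // TODO: Handle error.
  }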

NoDedupeID

  const NoDedupeID = "NoDedupeID"

NoDedupeID indicates a streaming insert row wants to opt out of best-effort deduplication. It is EXPERIMENTAL and subject to change or removal without notice.

Scope

  const (
      // Scope is the Oauth2 scope for the service.
      // For relevant BigQuery scopes, see:
      // https://developers.google.com/identity/protocols/googlescopes#bigqueryv2
      Scope = "https://www.googleapis.com/auth/bigquery"
  )

Variables

NeverExpire

  var NeverExpire = time.Time{}.Add(-1)

NeverExpire is a sentinel value used to remove a table's expiration time.

Functions

func BigNumericString

  func BigNumericString(r *big.Rat) string

BigNumericString returns a string representing a *big.Rat in a format compatible with BigQuery SQL. It returns a floating point literal with 38 digits after the decimal point.

func CivilDateTimeString

  func CivilDateTimeString(dt civil.DateTime) string

CivilDateTimeString returns a string representing a civil.DateTime in a format compatible with BigQuery SQL. It separates the date and time with a space, and formats the time with CivilTimeString.

Use CivilDateTimeString when using civil.DateTime in DML, for example in INSERT statements.

func CivilTimeString

  func CivilTimeString(t civil.Time) string

CivilTimeString returns a string representing a civil.Time in a format compatible with BigQuery SQL. It rounds the time to the nearest microsecond and returns a string with six digits of sub-second precision.

Use CivilTimeString when using civil.Time in DML, for example in INSERT statements.
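For example, a sketch of an INSERT built with CivilTimeString, using the cloud.google.com/go/civil package (the dataset, table, and column names are hypothetical):

  t := civil.Time{Hour: 12, Minute: 30, Second: 0}
  // CivilTimeString renders t with microsecond precision, suitable for a TIME literal.
  q := client.Query("INSERT my_dataset.my_table (event_time) VALUES (TIME '" +
      bigquery.CivilTimeString(t) + "')")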

func NumericString

  func NumericString(r *big.Rat) string

NumericString returns a string representing a *big.Rat in a format compatible with BigQuery SQL. It returns a floating-point literal with 9 digits after the decimal point.
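For example (a sketch; NumericString rounds to NUMERIC's scale of 9 digits):

  r := big.NewRat(1, 3)
  fmt.Println(bigquery.NumericString(r)) // prints 0.333333333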

func Seed

  func Seed(s int64)

Seed seeds this package's random number generator, used for generating job and insert IDs. Use Seed to obtain repeatable, deterministic behavior from bigquery clients. Seed should be called before any clients are created.
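For example, a sketch that fixes the generator before any client exists:

  // Seed must be called before any clients are created.
  bigquery.Seed(42)
  client, err := bigquery.NewClient(ctx, "project-id")
  if err != nil {
      // TODO: Handle error.
  }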

AccessEntry

  type AccessEntry struct {
      Role       AccessRole // The role of the entity
      EntityType EntityType // The type of entity
      Entity     string     // The entity (individual or group) granted access
      View       *Table     // The view granted access (EntityType must be ViewEntity)
      Routine    *Routine   // The routine granted access (only UDF currently supported)
  }

An AccessEntry describes the permissions that an entity has on a dataset.

AccessRole

  type AccessRole string

AccessRole is the level of access to grant to a dataset.

OwnerRole, ReaderRole, WriterRole

  const (
      // OwnerRole is the OWNER AccessRole.
      OwnerRole AccessRole = "OWNER"
      // ReaderRole is the READER AccessRole.
      ReaderRole AccessRole = "READER"
      // WriterRole is the WRITER AccessRole.
      WriterRole AccessRole = "WRITER"
  )

BigtableColumn

  type BigtableColumn struct {
      // Qualifier of the column. Columns in the parent column family that have this
      // exact qualifier are exposed as . field. The column field name is the
      // same as the column qualifier.
      Qualifier string

      // If the qualifier is not a valid BigQuery field identifier i.e. does not match
      // [a-zA-Z][a-zA-Z0-9_]*, a valid identifier must be provided as the column field
      // name and is used as field name in queries.
      FieldName string

      // If true, only the latest version of values are exposed for this column.
      // See BigtableColumnFamily.OnlyReadLatest.
      OnlyReadLatest bool

      // The encoding of the values when the type is not STRING.
      // See BigtableColumnFamily.Encoding
      Encoding string

      // The type to convert the value in cells of this column.
      // See BigtableColumnFamily.Type
      Type string
  }

BigtableColumn describes how BigQuery should access a Bigtable column.

BigtableColumnFamily

  type BigtableColumnFamily struct {
      // Identifier of the column family.
      FamilyID string

      // Lists of columns that should be exposed as individual fields as opposed to a
      // list of (column name, value) pairs. All columns whose qualifier matches a
      // qualifier in this list can be accessed as .. Other columns can be accessed as
      // a list through .Column field.
      Columns []*BigtableColumn

      // The encoding of the values when the type is not STRING. Acceptable encoding values are:
      // - TEXT - indicates values are alphanumeric text strings.
      // - BINARY - indicates values are encoded using HBase Bytes.toBytes family of functions.
      // This can be overridden for a specific column by listing that column in 'columns' and
      // specifying an encoding for it.
      Encoding string

      // If true, only the latest version of values are exposed for all columns in this
      // column family. This can be overridden for a specific column by listing that
      // column in 'columns' and specifying a different setting for that column.
      OnlyReadLatest bool

      // The type to convert the value in cells of this
      // column family. The values are expected to be encoded using HBase
      // Bytes.toBytes function when using the BINARY encoding value.
      // Following BigQuery types are allowed (case-sensitive):
      // BYTES STRING INTEGER FLOAT BOOLEAN.
      // The default type is BYTES. This can be overridden for a specific column by
      // listing that column in 'columns' and specifying a type for it.
      Type string
  }

BigtableColumnFamily describes how BigQuery should access a Bigtable column family.

BigtableOptions

  type BigtableOptions struct {
      // A list of column families to expose in the table schema along with their
      // types. If omitted, all column families are present in the table schema and
      // their values are read as BYTES.
      ColumnFamilies []*BigtableColumnFamily

      // If true, then the column families that are not specified in columnFamilies
      // list are not exposed in the table schema. Otherwise, they are read with BYTES
      // type values. The default is false.
      IgnoreUnspecifiedColumnFamilies bool

      // If true, then the rowkey column families will be read and converted to string.
      // Otherwise they are read with BYTES type values and users need to manually cast
      // them with CAST if necessary. The default is false.
      ReadRowkeyAsString bool
  }

BigtableOptions are additional options for Bigtable external data sources.

CSVOptions

  type CSVOptions struct {
      // AllowJaggedRows causes missing trailing optional columns to be tolerated
      // when reading CSV data. Missing values are treated as nulls.
      AllowJaggedRows bool

      // AllowQuotedNewlines sets whether quoted data sections containing
      // newlines are allowed when reading CSV data.
      AllowQuotedNewlines bool

      // Encoding is the character encoding of data to be read.
      Encoding Encoding

      // FieldDelimiter is the separator for fields in a CSV file, used when
      // reading or exporting data. The default is ",".
      FieldDelimiter string

      // Quote is the value used to quote data sections in a CSV file. The
      // default quotation character is the double quote ("), which is used if
      // both Quote and ForceZeroQuote are unset.
      // To specify that no character should be interpreted as a quotation
      // character, set ForceZeroQuote to true.
      // Only used when reading data.
      Quote          string
      ForceZeroQuote bool

      // The number of rows at the top of a CSV file that BigQuery will skip when
      // reading data.
      SkipLeadingRows int64
  }

CSVOptions are additional options for CSV external data sources.

Client

  type Client struct {
      // Location, if set, will be used as the default location for all subsequent
      // dataset creation and job operations. A location specified directly in one of
      // those operations will override this value.
      Location string
      // contains filtered or unexported fields
  }

Client may be used to perform BigQuery operations.

func NewClient

  func NewClient(ctx context.Context, projectID string, opts ...option.ClientOption) (*Client, error)

NewClient constructs a new Client which can perform BigQuery operations. Operations performed via the client are billed to the specified GCP project.

If the project ID is set to DetectProjectID, NewClient will attempt to detect the project ID from credentials.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      _ = client // TODO: Use client.
  }

func (*Client) Close

  func (c *Client) Close() error

Close closes any resources held by the client. Close should be called when the client is no longer needed. It need not be called at program exit.
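A common pattern is to close the client with defer once it has been created successfully; a sketch:

  client, err := bigquery.NewClient(ctx, "project-id")
  if err != nil {
      // TODO: Handle error.
  }
  defer client.Close()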

func (*Client) Dataset

  func (c *Client) Dataset(id string) *Dataset

Dataset creates a handle to a BigQuery dataset in the client's project.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ds := client.Dataset("my_dataset")
      fmt.Println(ds)
  }

func (*Client) DatasetInProject

  func (c *Client) DatasetInProject(projectID, datasetID string) *Dataset

DatasetInProject creates a handle to a BigQuery dataset in the specified project.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ds := client.DatasetInProject("their-project-id", "their-dataset")
      fmt.Println(ds)
  }

func (*Client) Datasets

  func (c *Client) Datasets(ctx context.Context) *DatasetIterator

Datasets returns an iterator over the datasets in a project. The Client's project is used by default, but that can be changed by setting ProjectID on the returned iterator before calling Next.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := client.Datasets(ctx)
      _ = it // TODO: iterate using Next or iterator.Pager.
  }

func (*Client) DatasetsInProject (deprecated)

  func (c *Client) DatasetsInProject(ctx context.Context, projectID string) *DatasetIterator

DatasetsInProject returns an iterator over the datasets in the provided project.

Deprecated: call Client.Datasets, then set ProjectID on the returned iterator.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := client.DatasetsInProject(ctx, "their-project-id")
      _ = it // TODO: iterate using Next or iterator.Pager.
  }

func (*Client) JobFromID

  func (c *Client) JobFromID(ctx context.Context, id string) (*Job, error)

JobFromID creates a Job which refers to an existing BigQuery job. The job need not have been created by this package. For example, the job may have been created in the BigQuery console.

For jobs whose location is other than "US" or "EU", set Client.Location or use JobFromIDLocation.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func getJobID() string { return "" }

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      jobID := getJobID() // Get a job ID using Job.ID, the console or elsewhere.
      job, err := client.JobFromID(ctx, jobID)
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(job.LastStatus()) // Display the job's status.
  }

func (*Client) JobFromIDLocation

  func (c *Client) JobFromIDLocation(ctx context.Context, id, location string) (j *Job, err error)

JobFromIDLocation creates a Job which refers to an existing BigQuery job. The job need not have been created by this package (for example, it may have been created in the BigQuery console), but it must exist in the specified location.
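For example, a sketch that looks up a job in a specific region (the job ID and location are hypothetical):

  job, err := client.JobFromIDLocation(ctx, jobID, "asia-northeast1")
  if err != nil {
      // TODO: Handle error.
  }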

func (*Client) Jobs

  func (c *Client) Jobs(ctx context.Context) *JobIterator

Jobs lists jobs within a project.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := client.Jobs(ctx)
      it.State = bigquery.Running // list only running jobs.
      _ = it                      // TODO: iterate using Next or iterator.Pager.
  }

func (*Client) Project

  func (c *Client) Project() string

Project returns the project ID or number for this instance of the client, which may have either been explicitly specified or autodetected.

func (*Client) Query

  func (c *Client) Query(q string) *Query

Query creates a query with string q. The returned Query may optionally be further configured before its Run method is called.

Examples

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      q := client.Query("select name, num from t1")
      q.DefaultProjectID = "project-id"
      // TODO: set other options on the Query.
      // TODO: Call Query.Run or Query.Read.
  }
encryptionKey

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      q := client.Query("select name, num from t1")
      // TODO: Replace this key with a key you have created in Cloud KMS.
      keyName := "projects/P/locations/L/keyRings/R/cryptoKeys/K"
      q.DestinationEncryptionConfig = &bigquery.EncryptionConfig{KMSKeyName: keyName}
      // TODO: set other options on the Query.
      // TODO: Call Query.Run or Query.Read.
  }
parameters

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      q := client.Query("select num from t1 where name = @user")
      q.Parameters = []bigquery.QueryParameter{
          {Name: "user", Value: "Elizabeth"},
      }
      // TODO: set other options on the Query.
      // TODO: Call Query.Run or Query.Read.
  }

Clustering

  type Clustering struct {
      Fields []string
  }

Clustering governs the organization of data within a managed table. For more information, see https://cloud.google.com/bigquery/docs/clustered-tables
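For example, a sketch that creates a clustered table (the schema and field names are hypothetical; the clustering fields must be top-level columns of the schema):

  t := client.Dataset("my_dataset").Table("clustered_table")
  err := t.Create(ctx, &bigquery.TableMetadata{
      Schema:     schema, // a bigquery.Schema containing "origin" and "destination"
      Clustering: &bigquery.Clustering{Fields: []string{"origin", "destination"}},
  })
  if err != nil {
      // TODO: Handle error.
  }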

Compression

  type Compression string

Compression is the type of compression to apply when writing data to Google Cloud Storage.

None, Gzip, Deflate, Snappy

  const (
      // None specifies no compression.
      None Compression = "NONE"
      // Gzip specifies gzip compression.
      Gzip Compression = "GZIP"
      // Deflate specifies DEFLATE compression for Avro files.
      Deflate Compression = "DEFLATE"
      // Snappy specifies SNAPPY compression for Avro files.
      Snappy Compression = "SNAPPY"
  )

ConnectionProperty

  type ConnectionProperty struct {
      // Name of the connection property to set.
      Key string

      // Value of the connection property.
      Value string
  }

ConnectionProperty represents a single key and value pair that can be sent alongside a query request.

Copier

  type Copier struct {
      JobIDConfig
      CopyConfig
      // contains filtered or unexported fields
  }

A Copier copies data into a BigQuery table from one or more BigQuery tables.

func (*Copier) Run

  func (c *Copier) Run(ctx context.Context) (*Job, error)

Run initiates a copy job.

CopyConfig

  type CopyConfig struct {
      // Srcs are the tables from which data will be copied.
      Srcs []*Table

      // Dst is the table into which the data will be copied.
      Dst *Table

      // CreateDisposition specifies the circumstances under which the destination table will be created.
      // The default is CreateIfNeeded.
      CreateDisposition TableCreateDisposition

      // WriteDisposition specifies how existing data in the destination table is treated.
      // The default is WriteEmpty.
      WriteDisposition TableWriteDisposition

      // The labels associated with this job.
      Labels map[string]string

      // Custom encryption configuration (e.g., Cloud KMS keys).
      DestinationEncryptionConfig *EncryptionConfig

      // One of the supported operation types when executing a table copy job. By default this
      // copies tables, but can also be set to perform snapshot or restore operations.
      OperationType TableCopyOperationType
  }

CopyConfig holds the configuration for a copy job.

DMLStatistics

  type DMLStatistics struct {
      // Rows added by the statement.
      InsertedRowCount int64

      // Rows removed by the statement.
      DeletedRowCount int64

      // Rows changed by the statement.
      UpdatedRowCount int64
  }

DMLStatistics contains counts of row mutations triggered by a DML query statement.
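As a sketch, DML counts can be read from a completed query job's statistics; this assumes the query statistics details expose a DMLStats field, as in recent versions of this package:

  status, err := job.Wait(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  // Assumption: query job statistics expose DML counts via QueryStatistics.DMLStats.
  if qs, ok := status.Statistics.Details.(*bigquery.QueryStatistics); ok && qs.DMLStats != nil {
      fmt.Println("rows inserted:", qs.DMLStats.InsertedRowCount)
  }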

DataFormat

  type DataFormat string

DataFormat describes the format of BigQuery table data.

CSV, Avro, JSON, DatastoreBackup, GoogleSheets, Bigtable, Parquet, ORC, TFSavedModel, XGBoostBooster

  const (
      CSV             DataFormat = "CSV"
      Avro            DataFormat = "AVRO"
      JSON            DataFormat = "NEWLINE_DELIMITED_JSON"
      DatastoreBackup DataFormat = "DATASTORE_BACKUP"
      GoogleSheets    DataFormat = "GOOGLE_SHEETS"
      Bigtable        DataFormat = "BIGTABLE"
      Parquet         DataFormat = "PARQUET"
      ORC             DataFormat = "ORC"
      // For BQ ML Models, TensorFlow Saved Model format.
      TFSavedModel DataFormat = "ML_TF_SAVED_MODEL"
      // For BQ ML Models, xgBoost Booster format.
      XGBoostBooster DataFormat = "ML_XGBOOST_BOOSTER"
  )

Constants describing the format of BigQuery table data.

Dataset

  type Dataset struct {
      ProjectID string
      DatasetID string
      // contains filtered or unexported fields
  }

Dataset is a reference to a BigQuery dataset.

func (*Dataset) Create

  func (d *Dataset) Create(ctx context.Context, md *DatasetMetadata) (err error)

Create creates a dataset in the BigQuery service. An error will be returned if the dataset already exists. Pass in a DatasetMetadata value to configure the dataset.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ds := client.Dataset("my_dataset")
      if err := ds.Create(ctx, &bigquery.DatasetMetadata{Location: "EU"}); err != nil {
          // TODO: Handle error.
      }
  }

func (*Dataset) Delete

  func (d *Dataset) Delete(ctx context.Context) (err error)

Delete deletes the dataset. Delete will fail if the dataset is not empty.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      if err := client.Dataset("my_dataset").Delete(ctx); err != nil {
          // TODO: Handle error.
      }
  }

func (*Dataset) DeleteWithContents

  func (d *Dataset) DeleteWithContents(ctx context.Context) (err error)

DeleteWithContents deletes the dataset, as well as contained resources.
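For example:

  if err := client.Dataset("my_dataset").DeleteWithContents(ctx); err != nil {
      // TODO: Handle error.
  }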

func (*Dataset) Metadata

  func (d *Dataset) Metadata(ctx context.Context) (md *DatasetMetadata, err error)

Metadata fetches the metadata for the dataset.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      md, err := client.Dataset("my_dataset").Metadata(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(md)
  }

func (*Dataset) Model

  func (d *Dataset) Model(modelID string) *Model

Model creates a handle to a BigQuery model in the dataset. To determine if a model exists, call Model.Metadata. If the model does not already exist, you can create it via execution of a CREATE MODEL query.
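For example, a sketch that creates a model by running a CREATE MODEL query (the dataset, model, and training-data names, and the model options, are hypothetical):

  q := client.Query("CREATE MODEL `my_dataset.my_model` " +
      "OPTIONS (model_type='linear_reg', input_label_cols=['label']) AS " +
      "SELECT * FROM `my_dataset.training_data`")
  job, err := q.Run(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  // Wait for the job, then use client.Dataset("my_dataset").Model("my_model").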

func (*Dataset) Models

  func (d *Dataset) Models(ctx context.Context) *ModelIterator

Models returns an iterator over the models in the Dataset.
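For example, a sketch that lists the models in a dataset:

  it := client.Dataset("my_dataset").Models(ctx)
  for {
      m, err := it.Next()
      if err == iterator.Done {
          break
      }
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(m.ModelID)
  }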

func (*Dataset) Routine

  func (d *Dataset) Routine(routineID string) *Routine

Routine creates a handle to a BigQuery routine in the dataset. To determine if a routine exists, call Routine.Metadata.

func (*Dataset) Routines

  func (d *Dataset) Routines(ctx context.Context) *RoutineIterator

Routines returns an iterator over the routines in the Dataset.

func (*Dataset) Table

  func (d *Dataset) Table(tableID string) *Table

Table creates a handle to a BigQuery table in the dataset. To determine if a table exists, call Table.Metadata. If the table does not already exist, use Table.Create to create it.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      // Table creates a reference to the table. It does not create the actual
      // table in BigQuery; to do so, use Table.Create.
      t := client.Dataset("my_dataset").Table("my_table")
      fmt.Println(t)
  }

func (*Dataset) Tables

  func (d *Dataset) Tables(ctx context.Context) *TableIterator

Tables returns an iterator over the tables in the Dataset.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := client.Dataset("my_dataset").Tables(ctx)
      _ = it // TODO: iterate using Next or iterator.Pager.
  }

func (*Dataset) Update

  func (d *Dataset) Update(ctx context.Context, dm DatasetMetadataToUpdate, etag string) (md *DatasetMetadata, err error)

Update modifies specific Dataset metadata fields. To perform a read-modify-write that protects against intervening reads, set the etag argument to the DatasetMetadata.ETag field from the read. Pass the empty string for etag for a "blind write" that will always succeed.

Examples

blindWrite

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      md, err := client.Dataset("my_dataset").Update(ctx, bigquery.DatasetMetadataToUpdate{Name: "blind"}, "")
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(md)
  }
readModifyWrite

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ds := client.Dataset("my_dataset")
      md, err := ds.Metadata(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      md2, err := ds.Update(ctx, bigquery.DatasetMetadataToUpdate{Name: "new " + md.Name}, md.ETag)
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(md2)
  }

DatasetIterator

  type DatasetIterator struct {
      // ListHidden causes hidden datasets to be listed when set to true.
      // Set before the first call to Next.
      ListHidden bool

      // Filter restricts the datasets returned by label. The filter syntax is described in
      // https://cloud.google.com/bigquery/docs/labeling-datasets#filtering_datasets_using_labels
      // Set before the first call to Next.
      Filter string

      // The project ID of the listed datasets.
      // Set before the first call to Next.
      ProjectID string
      // contains filtered or unexported fields
  }

DatasetIterator iterates over the datasets in a project.

func (*DatasetIterator) Next

  func (it *DatasetIterator) Next() (*Dataset, error)

Next returns the next Dataset. Its second return value is iterator.Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
      "google.golang.org/api/iterator"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := client.Datasets(ctx)
      for {
          ds, err := it.Next()
          if err == iterator.Done {
              break
          }
          if err != nil {
              // TODO: Handle error.
          }
          fmt.Println(ds)
      }
  }

func (*DatasetIterator) PageInfo

  func (it *DatasetIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

DatasetMetadata

  type DatasetMetadata struct {
      // These fields can be set when creating a dataset.
      Name                    string            // The user-friendly name for this dataset.
      Description             string            // The user-friendly description of this dataset.
      Location                string            // The geo location of the dataset.
      DefaultTableExpiration  time.Duration     // The default expiration time for new tables.
      Labels                  map[string]string // User-provided labels.
      Access                  []*AccessEntry    // Access permissions.
      DefaultEncryptionConfig *EncryptionConfig

      // These fields are read-only.
      CreationTime     time.Time
      LastModifiedTime time.Time // When the dataset or any of its tables were modified.
      FullID           string    // The full dataset ID in the form projectID:datasetID.

      // ETag is the ETag obtained when reading metadata. Pass it to Dataset.Update to
      // ensure that the metadata hasn't changed since it was read.
      ETag string
  }

DatasetMetadata contains information about a BigQuery dataset.

DatasetMetadataToUpdate

  type DatasetMetadataToUpdate struct {
      Description optional.String // The user-friendly description of this dataset.
      Name        optional.String // The user-friendly name for this dataset.

      // DefaultTableExpiration is the default expiration time for new tables.
      // If set to time.Duration(0), new tables never expire.
      DefaultTableExpiration optional.Duration

      // DefaultEncryptionConfig defines CMEK settings for new resources created
      // in the dataset.
      DefaultEncryptionConfig *EncryptionConfig

      // The entire access list. It is not possible to replace individual entries.
      Access []*AccessEntry
      // contains filtered or unexported fields
  }

DatasetMetadataToUpdate is used when updating a dataset's metadata. Only non-nil fields will be updated.

func (*DatasetMetadataToUpdate) DeleteLabel

  func (u *DatasetMetadataToUpdate) DeleteLabel(name string)

DeleteLabel causes a label to be deleted on a call to Update.

func (*DatasetMetadataToUpdate) SetLabel

  func (u *DatasetMetadataToUpdate) SetLabel(name, value string)

SetLabel causes a label to be added or modified on a call to Update.
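For example, a sketch that updates labels on a dataset (the label names are hypothetical):

  var dm bigquery.DatasetMetadataToUpdate
  dm.SetLabel("env", "dev")  // add or modify the "env" label
  dm.DeleteLabel("obsolete") // remove the "obsolete" label
  if _, err := client.Dataset("my_dataset").Update(ctx, dm, ""); err != nil {
      // TODO: Handle error.
  }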

DecimalTargetType

  type DecimalTargetType string

DecimalTargetType is used to express preference ordering for converting values from external formats.

NumericTargetType, BigNumericTargetType, StringTargetType

  var (
      // NumericTargetType indicates the preferred type is NUMERIC when supported.
      NumericTargetType DecimalTargetType = "NUMERIC"

      // BigNumericTargetType indicates the preferred type is BIGNUMERIC when supported.
      BigNumericTargetType DecimalTargetType = "BIGNUMERIC"

      // StringTargetType indicates the preferred type is STRING when supported.
      StringTargetType DecimalTargetType = "STRING"
  )

Encoding

  type Encoding string

Encoding specifies the character encoding of data to be loaded into BigQuery. See https://cloud.google.com/bigquery/docs/reference/v2/jobs#configuration.load.encoding for more details about how this is used.

UTF_8, ISO_8859_1

  const (
      // UTF_8 specifies the UTF-8 encoding type.
      UTF_8 Encoding = "UTF-8"
      // ISO_8859_1 specifies the ISO-8859-1 encoding type.
      ISO_8859_1 Encoding = "ISO-8859-1"
  )

EncryptionConfig

  type EncryptionConfig struct {
      // Describes the Cloud KMS encryption key that will be used to protect
      // destination BigQuery table. The BigQuery Service Account associated with your
      // project requires access to this encryption key.
      KMSKeyName string
  }

EncryptionConfig configures customer-managed encryption on tables and ML models.

EntityType

  type EntityType int

EntityType is the type of entity in an AccessEntry.

DomainEntity, GroupEmailEntity, UserEmailEntity, SpecialGroupEntity, ViewEntity, IAMMemberEntity, RoutineEntity

  const (
      // DomainEntity is a domain (e.g. "example.com").
      DomainEntity EntityType = iota + 1

      // GroupEmailEntity is an email address of a Google Group.
      GroupEmailEntity

      // UserEmailEntity is an email address of an individual user.
      UserEmailEntity

      // SpecialGroupEntity is a special group: one of projectOwners, projectReaders, projectWriters or
      // allAuthenticatedUsers.
      SpecialGroupEntity

      // ViewEntity is a BigQuery logical view.
      ViewEntity

      // IAMMemberEntity represents entities present in IAM but not represented using
      // the other entity types.
      IAMMemberEntity

      // RoutineEntity is a BigQuery routine, referencing a User Defined Function (UDF).
      RoutineEntity
  )

Error

  type Error struct {
      // Mirrors bq.ErrorProto, but drops DebugInfo
      Location, Message, Reason string
  }

An Error contains detailed information about a failed bigquery operation. Detailed description of possible Reasons can be found here: https://cloud.google.com/bigquery/troubleshooting-errors .
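When a streaming insert partially fails, the error returned by Inserter.Put can be inspected for per-row details; a sketch (this assumes the PutMultiError type this package returns for row-level insert failures):

  if err := u.Put(ctx, items); err != nil {
      if pme, ok := err.(bigquery.PutMultiError); ok {
          for _, rowErr := range pme {
              // Each entry reports the failed row and its bigquery.Error values.
              fmt.Println(rowErr.RowIndex, rowErr.Errors)
          }
      }
  }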

func (Error) Error

  func (e Error) Error() string

ExplainQueryStage

  type ExplainQueryStage struct {
      // CompletedParallelInputs: Number of parallel input segments completed.
      CompletedParallelInputs int64

      // ComputeAvg: Duration the average shard spent on CPU-bound tasks.
      ComputeAvg time.Duration

      // ComputeMax: Duration the slowest shard spent on CPU-bound tasks.
      ComputeMax time.Duration

      // Relative amount of the total time the average shard spent on CPU-bound tasks.
      ComputeRatioAvg float64

      // Relative amount of the total time the slowest shard spent on CPU-bound tasks.
      ComputeRatioMax float64

      // EndTime: Stage end time.
      EndTime time.Time

      // Unique ID for stage within plan.
      ID int64

      // InputStages: IDs for stages that are inputs to this stage.
      InputStages []int64

      // Human-readable name for stage.
      Name string

      // ParallelInputs: Number of parallel input segments to be processed.
      ParallelInputs int64

      // ReadAvg: Duration the average shard spent reading input.
      ReadAvg time.Duration

      // ReadMax: Duration the slowest shard spent reading input.
      ReadMax time.Duration

      // Relative amount of the total time the average shard spent reading input.
      ReadRatioAvg float64

      // Relative amount of the total time the slowest shard spent reading input.
      ReadRatioMax float64

      // Number of records read into the stage.
      RecordsRead int64

      // Number of records written by the stage.
      RecordsWritten int64

      // ShuffleOutputBytes: Total number of bytes written to shuffle.
      ShuffleOutputBytes int64

      // ShuffleOutputBytesSpilled: Total number of bytes written to shuffle
      // and spilled to disk.
      ShuffleOutputBytesSpilled int64

      // StartTime: Stage start time.
      StartTime time.Time

      // Current status for the stage.
      Status string

      // List of operations within the stage in dependency order (approximately
      // chronological).
      Steps []*ExplainQueryStep

      // WaitAvg: Duration the average shard spent waiting to be scheduled.
      WaitAvg time.Duration

      // WaitMax: Duration the slowest shard spent waiting to be scheduled.
      WaitMax time.Duration

      // Relative amount of the total time the average shard spent waiting to be scheduled.
      WaitRatioAvg float64

      // Relative amount of the total time the slowest shard spent waiting to be scheduled.
      WaitRatioMax float64

      // WriteAvg: Duration the average shard spent on writing output.
      WriteAvg time.Duration

      // WriteMax: Duration the slowest shard spent on writing output.
      WriteMax time.Duration

      // Relative amount of the total time the average shard spent on writing output.
      WriteRatioAvg float64

      // Relative amount of the total time the slowest shard spent on writing output.
      WriteRatioMax float64
  }

ExplainQueryStage describes one stage of a query.

ExplainQueryStep

  type ExplainQueryStep struct {
      // Machine-readable operation type.
      Kind string

      // Human-readable stage descriptions.
      Substeps []string
  }

ExplainQueryStep describes one step of a query stage.

ExternalData

  type ExternalData interface {
      // contains filtered or unexported methods
  }

ExternalData is a table which is stored outside of BigQuery. It is implemented by *ExternalDataConfig. GCSReference also implements it, for backwards compatibility.

ExternalDataConfig

  type ExternalDataConfig struct {
      // The format of the data. Required.
      SourceFormat DataFormat

      // The fully-qualified URIs that point to your
      // data in Google Cloud. Required.
      //
      // For Google Cloud Storage URIs, each URI can contain one '*' wildcard character
      // and it must come after the 'bucket' name. Size limits related to load jobs
      // apply to external data sources.
      //
      // For Google Cloud Bigtable URIs, exactly one URI can be specified and it has to be
      // a fully specified and valid HTTPS URL for a Google Cloud Bigtable table.
      //
      // For Google Cloud Datastore backups, exactly one URI can be specified. Also,
      // the '*' wildcard character is not allowed.
      SourceURIs []string

      // The schema of the data. Required for CSV and JSON; disallowed for the
      // other formats.
      Schema Schema

      // Try to detect schema and format options automatically.
      // Any option specified explicitly will be honored.
      AutoDetect bool

      // The compression type of the data.
      Compression Compression

      // IgnoreUnknownValues causes values not matching the schema to be
      // tolerated. Unknown values are ignored. For CSV this ignores extra values
      // at the end of a line. For JSON this ignores named values that do not
      // match any column name. If this field is not set, records containing
      // unknown values are treated as bad records. The MaxBadRecords field can
      // be used to customize how bad records are handled.
      IgnoreUnknownValues bool

      // MaxBadRecords is the maximum number of bad records that will be ignored
      // when reading data.
      MaxBadRecords int64

      // Additional options for CSV, GoogleSheets, Bigtable, and Parquet formats.
      Options ExternalDataConfigOptions

      // HivePartitioningOptions allows use of Hive partitioning based on the
      // layout of objects in Google Cloud Storage.
      HivePartitioningOptions *HivePartitioningOptions

      // DecimalTargetTypes allows selection of how decimal values are converted when
      // processed in bigquery, subject to the value type having sufficient precision/scale
      // to support the values. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is
      // selected if it is present in the list and if it supports the necessary precision and scale.
      //
      // StringTargetType supports all precision and scale values.
      DecimalTargetTypes []DecimalTargetType
  }

ExternalDataConfig describes data external to BigQuery that can be used in queries and to create external tables.

ExternalDataConfigOptions

  type ExternalDataConfigOptions interface {
      // contains filtered or unexported methods
  }
 

ExternalDataConfigOptions are additional options for external data configurations. This interface is implemented by CSVOptions, GoogleSheetsOptions and BigtableOptions.

ExtractConfig

  type ExtractConfig struct {
      // Src is the table from which data will be extracted.
      // Only one of Src or SrcModel should be specified.
      Src *Table

      // SrcModel is the ML model from which the data will be extracted.
      // Only one of Src or SrcModel should be specified.
      SrcModel *Model

      // Dst is the destination into which the data will be extracted.
      Dst *GCSReference

      // DisableHeader disables the printing of a header row in exported data.
      DisableHeader bool

      // The labels associated with this job.
      Labels map[string]string

      // For Avro-based extracts, controls whether logical type annotations are generated.
      //
      // Example: With this enabled, writing a BigQuery TIMESTAMP column will result in
      // an integer column annotated with the appropriate timestamp-micros/millis annotation
      // in the resulting Avro files.
      UseAvroLogicalTypes bool
  }
 

ExtractConfig holds the configuration for an extract job.

ExtractStatistics

  type ExtractStatistics struct {
      // The number of files per destination URI or URI pattern specified in the
      // extract configuration. These values will be in the same order as the
      // URIs specified in the 'destinationUris' field.
      DestinationURIFileCounts []int64
  }
 

ExtractStatistics contains statistics about an extract job.

Extractor

  type Extractor struct {
      JobIDConfig
      ExtractConfig
      // contains filtered or unexported fields
  }
 

An Extractor extracts data from a BigQuery table into Google Cloud Storage.

func (*Extractor) Run

  func (e *Extractor) Run(ctx context.Context) (j *Job, err error)
 

Run initiates an extract job.
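
As an illustration (a sketch, not part of the generated reference), here is a full extract flow: export a table to Cloud Storage and wait for the job to finish. The dataset, table, and bucket names are placeholders.

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      // The '*' in the hypothetical destination URI lets BigQuery shard
      // large exports across multiple files.
      gcsRef := bigquery.NewGCSReference("gs://my-bucket/export-*.csv")
      extractor := client.Dataset("my_dataset").Table("my_table").ExtractorTo(gcsRef)
      extractor.DisableHeader = true // Extractor embeds ExtractConfig
      job, err := extractor.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      status, err := job.Wait(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status.Err() != nil {
          // TODO: Handle error.
      }
  }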

FieldSchema

  type FieldSchema struct {
      // The field name.
      // Must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_),
      // and must start with a letter or underscore.
      // The maximum length is 128 characters.
      Name string

      // A description of the field. The maximum length is 16,384 characters.
      Description string

      // Whether the field may contain multiple values.
      Repeated bool

      // Whether the field is required. Ignored if Repeated is true.
      Required bool

      // The field data type. If Type is Record, then this field contains a nested schema,
      // which is described by Schema.
      Type FieldType

      // Annotations for enforcing column-level security constraints.
      PolicyTags *PolicyTagList

      // Describes the nested schema if Type is set to Record.
      Schema Schema

      // Maximum length of the field for STRING or BYTES type.
      //
      // It is invalid to set value for types other than STRING or BYTES.
      //
      // For STRING type, this represents the maximum UTF-8 length of strings
      // allowed in the field. For BYTES type, this represents the maximum
      // number of bytes in the field.
      MaxLength int64

      // Precision can be used to constrain the maximum number of
      // total digits allowed for NUMERIC or BIGNUMERIC types.
      //
      // It is invalid to set values for Precision for types other than
      // NUMERIC or BIGNUMERIC.
      //
      // For NUMERIC type, acceptable values for Precision must
      // be: 1 ≤ (Precision - Scale) ≤ 29. Values for Scale
      // must be: 0 ≤ Scale ≤ 9.
      //
      // For BIGNUMERIC type, acceptable values for Precision must
      // be: 1 ≤ (Precision - Scale) ≤ 38. Values for Scale
      // must be: 0 ≤ Scale ≤ 38.
      Precision int64

      // Scale can be used to constrain the maximum number of digits
      // in the fractional part of a NUMERIC or BIGNUMERIC type.
      //
      // If the Scale value is set, the Precision value must be set as well.
      //
      // It is invalid to set values for Scale for types other than
      // NUMERIC or BIGNUMERIC.
      //
      // See the Precision field for additional guidance about valid values.
      Scale int64
  }
 

FieldSchema describes a single field.
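
For illustration, a minimal sketch of a hand-built schema using FieldSchema values (the field names are arbitrary). Since Schema is a slice of *FieldSchema, the inner &FieldSchema can be elided:

  package main

  import (
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      // "name" is a required string, "tags" is repeated, and "attributes"
      // is a nested record described by its own Schema.
      schema := bigquery.Schema{
          {Name: "name", Type: bigquery.StringFieldType, Required: true},
          {Name: "tags", Type: bigquery.StringFieldType, Repeated: true},
          {Name: "attributes", Type: bigquery.RecordFieldType, Schema: bigquery.Schema{
              {Name: "key", Type: bigquery.StringFieldType},
              {Name: "value", Type: bigquery.StringFieldType},
          }},
      }
      fmt.Println(len(schema))
  }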

FieldType

  type FieldType string
 
 

FieldType is the type of field.

StringFieldType, BytesFieldType, IntegerFieldType, FloatFieldType, BooleanFieldType, TimestampFieldType, RecordFieldType, DateFieldType, TimeFieldType, DateTimeFieldType, NumericFieldType, GeographyFieldType, BigNumericFieldType

  const (
      // StringFieldType is a string field type.
      StringFieldType FieldType = "STRING"
      // BytesFieldType is a bytes field type.
      BytesFieldType FieldType = "BYTES"
      // IntegerFieldType is an integer field type.
      IntegerFieldType FieldType = "INTEGER"
      // FloatFieldType is a float field type.
      FloatFieldType FieldType = "FLOAT"
      // BooleanFieldType is a boolean field type.
      BooleanFieldType FieldType = "BOOLEAN"
      // TimestampFieldType is a timestamp field type.
      TimestampFieldType FieldType = "TIMESTAMP"
      // RecordFieldType is a record field type. It is typically used to create columns with repeated or nested data.
      RecordFieldType FieldType = "RECORD"
      // DateFieldType is a date field type.
      DateFieldType FieldType = "DATE"
      // TimeFieldType is a time field type.
      TimeFieldType FieldType = "TIME"
      // DateTimeFieldType is a datetime field type.
      DateTimeFieldType FieldType = "DATETIME"
      // NumericFieldType is a numeric field type. Numeric types include integer types, floating point types and the
      // NUMERIC data type.
      NumericFieldType FieldType = "NUMERIC"
      // GeographyFieldType is a string field type. Geography types represent a set of points
      // on the Earth's surface, represented in Well Known Text (WKT) format.
      GeographyFieldType FieldType = "GEOGRAPHY"
      // BigNumericFieldType is a numeric field type that supports values of larger precision
      // and scale than the NumericFieldType.
      BigNumericFieldType FieldType = "BIGNUMERIC"
  )
 

FileConfig

  type FileConfig struct {
      // SourceFormat is the format of the data to be read.
      // Allowed values are: Avro, CSV, DatastoreBackup, JSON, ORC, and Parquet. The default is CSV.
      SourceFormat DataFormat

      // Indicates if we should automatically infer the options and
      // schema for CSV and JSON sources.
      AutoDetect bool

      // MaxBadRecords is the maximum number of bad records that will be ignored
      // when reading data.
      MaxBadRecords int64

      // IgnoreUnknownValues causes values not matching the schema to be
      // tolerated. Unknown values are ignored. For CSV this ignores extra values
      // at the end of a line. For JSON this ignores named values that do not
      // match any column name. If this field is not set, records containing
      // unknown values are treated as bad records. The MaxBadRecords field can
      // be used to customize how bad records are handled.
      IgnoreUnknownValues bool

      // Schema describes the data. It is required when reading CSV or JSON data,
      // unless the data is being loaded into a table that already exists.
      Schema Schema

      // Additional options for CSV files.
      CSVOptions

      // Additional options for Parquet files.
      ParquetOptions *ParquetOptions
  }
 

FileConfig contains configuration options that pertain to files, typically text files that require interpretation to be used as a BigQuery table. A file may live in Google Cloud Storage (see GCSReference), or it may be loaded into a table via the Table.LoaderFromReader.

GCSReference

  type GCSReference struct {
      // URIs refer to Google Cloud Storage objects.
      URIs []string

      FileConfig

      // DestinationFormat is the format to use when writing exported files.
      // Allowed values are: CSV, Avro, JSON. The default is CSV.
      // CSV is not supported for tables with nested or repeated fields.
      DestinationFormat DataFormat

      // Compression specifies the type of compression to apply when writing data
      // to Google Cloud Storage, or using this GCSReference as an ExternalData
      // source with CSV or JSON SourceFormat. Default is None.
      //
      // Avro files allow additional compression types: DEFLATE and SNAPPY.
      Compression Compression
  }
 

GCSReference is a reference to one or more Google Cloud Storage objects, which together constitute an input or output to a BigQuery operation.

func NewGCSReference

  func NewGCSReference(uri string) *GCSReference
 
 

NewGCSReference constructs a reference to one or more Google Cloud Storage objects, which together constitute a data source or destination. In the simple case, a single URI in the form gs://bucket/object may refer to a single GCS object. Data may also be split into multiple files, if multiple URIs or URIs containing wildcards are provided. Each URI may contain one '*' wildcard character, which (if present) must come after the bucket name. For more information about the treatment of wildcards and multiple URIs, see https://cloud.google.com/bigquery/exporting-data-from-bigquery#exportingmultiple

Example

  package main

  import (
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
      fmt.Println(gcsRef)
  }
 

GoogleSheetsOptions

  type GoogleSheetsOptions struct {
      // The number of rows at the top of a sheet that BigQuery will skip when
      // reading data.
      SkipLeadingRows int64

      // Optionally specifies a more specific range of cells to include.
      // Typical format: sheet_name!top_left_cell_id:bottom_right_cell_id
      //
      // Example: sheet1!A1:B20
      Range string
  }
 

GoogleSheetsOptions are additional options for GoogleSheets external data sources.
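
A sketch of wiring these options into an ExternalDataConfig. The spreadsheet URL is a placeholder, and querying Sheets-backed tables additionally requires Drive-scoped credentials (not shown here):

  package main

  import (
      "cloud.google.com/go/bigquery"
  )

  func main() {
      edc := &bigquery.ExternalDataConfig{
          SourceFormat: bigquery.GoogleSheets,
          SourceURIs:   []string{"https://docs.google.com/spreadsheets/d/SPREADSHEET_ID"},
          AutoDetect:   true,
          Options: &bigquery.GoogleSheetsOptions{
              SkipLeadingRows: 1,              // skip the header row
              Range:           "sheet1!A1:B20", // restrict to a cell range
          },
      }
      _ = edc // e.g. assign to QueryConfig.TableDefinitions or TableMetadata.ExternalDataConfig
  }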

HivePartitioningMode

  type HivePartitioningMode string
 
 

HivePartitioningMode is used in conjunction with HivePartitioningOptions.

AutoHivePartitioningMode, StringHivePartitioningMode, CustomHivePartitioningMode

  const (
      // AutoHivePartitioningMode automatically infers partitioning key and types.
      AutoHivePartitioningMode HivePartitioningMode = "AUTO"
      // StringHivePartitioningMode automatically infers partitioning keys and treats values as string.
      StringHivePartitioningMode HivePartitioningMode = "STRINGS"
      // CustomHivePartitioningMode allows custom definition of the external partitioning.
      CustomHivePartitioningMode HivePartitioningMode = "CUSTOM"
  )
 

HivePartitioningOptions

  type HivePartitioningOptions struct {
      // Mode defines which hive partitioning mode to use when reading data.
      Mode HivePartitioningMode

      // When hive partition detection is requested, a common prefix for
      // all source uris should be supplied. The prefix must end immediately
      // before the partition key encoding begins.
      //
      // For example, consider files following this data layout.
      //   gs://bucket/path_to_table/dt=2019-01-01/country=BR/id=7/file.avro
      //   gs://bucket/path_to_table/dt=2018-12-31/country=CA/id=3/file.avro
      //
      // When hive partitioning is requested with either AUTO or STRINGS
      // detection, the common prefix can be either of
      // gs://bucket/path_to_table or gs://bucket/path_to_table/ (trailing
      // slash does not matter).
      SourceURIPrefix string

      // If set to true, queries against this external table require
      // a partition filter to be present that can perform partition
      // elimination. Hive-partitioned load jobs with this field
      // set to true will fail.
      RequirePartitionFilter bool
  }
 

HivePartitioningOptions defines the behavior of Hive partitioning when working with external data.
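
For illustration, a sketch of an external data configuration using AUTO partition key detection over a hypothetical bucket layout like the one in the comments above:

  package main

  import (
      "cloud.google.com/go/bigquery"
  )

  func main() {
      edc := &bigquery.ExternalDataConfig{
          SourceFormat: bigquery.Avro,
          SourceURIs:   []string{"gs://my-bucket/path_to_table/*"},
          HivePartitioningOptions: &bigquery.HivePartitioningOptions{
              Mode: bigquery.AutoHivePartitioningMode,
              // The prefix ends immediately before the dt=.../country=... keys.
              SourceURIPrefix: "gs://my-bucket/path_to_table",
          },
      }
      _ = edc
  }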

Inserter

  type Inserter struct {
      // SkipInvalidRows causes rows containing invalid data to be silently
      // ignored. The default value is false, which causes the entire request to
      // fail if there is an attempt to insert an invalid row.
      SkipInvalidRows bool

      // IgnoreUnknownValues causes values not matching the schema to be ignored.
      // The default value is false, which causes records containing such values
      // to be treated as invalid records.
      IgnoreUnknownValues bool

      // A TableTemplateSuffix allows Inserters to create tables automatically.
      //
      // Experimental: this option is experimental and may be modified or removed in future versions,
      // regardless of any other documented package stability guarantees. In general,
      // the BigQuery team recommends the use of partitioned tables over sharding
      // tables by suffix.
      //
      // When you specify a suffix, the table you upload data to
      // will be used as a template for creating a new table, with the same schema,
      // called <table> + <suffix>.
      TableTemplateSuffix string
      // contains filtered or unexported fields
  }
 

An Inserter does streaming inserts into a BigQuery table. It is safe for concurrent use.

func (*Inserter) Put

  func (u *Inserter) Put(ctx context.Context, src interface{}) (err error)
 

Put uploads one or more rows to the BigQuery service.

If src is a ValueSaver, then its Save method is called to produce a row for uploading.

If src is a struct or pointer to a struct, then a schema is inferred from it and used to create a StructSaver. The InsertID of the StructSaver will be empty.

If src is a slice of ValueSavers, structs, or struct pointers, then each element of the slice is treated as above, and multiple rows are uploaded.

Put returns a PutMultiError if one or more rows failed to be uploaded. The PutMultiError contains a RowInsertionError for each failed row.

Put will retry on temporary errors (see https://cloud.google.com/bigquery/troubleshooting-errors ). This can result in duplicate rows if you do not use insert IDs. Also, if the error persists, the call will run indefinitely. Pass a context with a timeout to prevent hanging calls.
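
For example, a sketch of bounding Put's retries with a deadline (my_dataset and my_table are placeholders):

  package main

  import (
      "context"
      "time"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ins := client.Dataset("my_dataset").Table("my_table").Inserter()

      // Bound the retries: Put stops (with an error) when the context expires.
      ctx, cancel := context.WithTimeout(ctx, 30*time.Second)
      defer cancel()
      type row struct{ Name string }
      if err := ins.Put(ctx, []row{{Name: "n1"}}); err != nil {
          // TODO: Handle error.
      }
  }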

Examples

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  type Item struct {
      Name  string
      Size  float64
      Count int
  }

  // Save implements the ValueSaver interface.
  func (i *Item) Save() (map[string]bigquery.Value, string, error) {
      return map[string]bigquery.Value{
          "Name":  i.Name,
          "Size":  i.Size,
          "Count": i.Count,
      }, "", nil
  }

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ins := client.Dataset("my_dataset").Table("my_table").Inserter()
      // Item implements the ValueSaver interface.
      items := []*Item{
          {Name: "n1", Size: 32.6, Count: 7},
          {Name: "n2", Size: 4, Count: 2},
          {Name: "n3", Size: 101.5, Count: 1},
      }
      if err := ins.Put(ctx, items); err != nil {
          // TODO: Handle error.
      }
  }
 
struct
  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ins := client.Dataset("my_dataset").Table("my_table").Inserter()

      type score struct {
          Name string
          Num  int
      }
      scores := []score{
          {Name: "n1", Num: 12},
          {Name: "n2", Num: 31},
          {Name: "n3", Num: 7},
      }
      // Schema is inferred from the score type.
      if err := ins.Put(ctx, scores); err != nil {
          // TODO: Handle error.
      }
  }
 
structSaver
  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  var schema bigquery.Schema

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ins := client.Dataset("my_dataset").Table("my_table").Inserter()

      type score struct {
          Name string
          Num  int
      }
      // Assume schema holds the table's schema.
      savers := []*bigquery.StructSaver{
          {Struct: score{Name: "n1", Num: 12}, Schema: schema, InsertID: "id1"},
          {Struct: score{Name: "n2", Num: 31}, Schema: schema, InsertID: "id2"},
          {Struct: score{Name: "n3", Num: 7}, Schema: schema, InsertID: "id3"},
      }
      if err := ins.Put(ctx, savers); err != nil {
          // TODO: Handle error.
      }
  }
 
valuesSaver
  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  var schema bigquery.Schema

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ins := client.Dataset("my_dataset").Table("my_table").Inserter()

      var vss []*bigquery.ValuesSaver
      for i, name := range []string{"n1", "n2", "n3"} {
          // Assume schema holds the table's schema.
          vss = append(vss, &bigquery.ValuesSaver{
              Schema:   schema,
              InsertID: name,
              Row:      []bigquery.Value{name, int64(i)},
          })
      }
      if err := ins.Put(ctx, vss); err != nil {
          // TODO: Handle error.
      }
  }
 

Job

  type Job struct {
      // contains filtered or unexported fields
  }
 

A Job represents an operation which has been submitted to BigQuery for processing.

func (*Job) Cancel

  func (j *Job) Cancel(ctx context.Context) error
 
 

Cancel requests that a job be cancelled. This method returns without waiting for cancellation to take effect. To check whether the job has terminated, use Job.Status. Cancelled jobs may still incur costs.

func (*Job) Children

  func (j *Job) Children(ctx context.Context) *JobIterator
 
 

Children returns a job iterator for enumerating child jobs of the current job. Currently only scripts, a form of query job, will create child jobs.
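
A sketch of enumerating child jobs of a script job; "parent-job-id" is a placeholder:

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
      "google.golang.org/api/iterator"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      parent, err := client.JobFromID(ctx, "parent-job-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := parent.Children(ctx)
      for {
          child, err := it.Next()
          if err == iterator.Done {
              break
          }
          if err != nil {
              // TODO: Handle error.
          }
          fmt.Println(child.ID())
      }
  }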

func (*Job) Config

  func (j *Job) Config() (JobConfig, error)
 

Config returns the configuration information for j.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ds := client.Dataset("my_dataset")
      job, err := ds.Table("t1").CopierFrom(ds.Table("t2")).Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      jc, err := job.Config()
      if err != nil {
          // TODO: Handle error.
      }
      copyConfig := jc.(*bigquery.CopyConfig)
      fmt.Println(copyConfig.Dst, copyConfig.CreateDisposition)
  }
 

func (*Job) Delete

  func (j *Job) Delete(ctx context.Context) (err error)
 

Delete deletes the job.

func (*Job) Email

  func (j *Job) Email() string
 
 

Email returns the email of the job's creator.

func (*Job) ID

  func (j *Job) ID() string
 
 

ID returns the job's ID.

func (*Job) LastStatus

  func (j *Job) LastStatus() *JobStatus
 
 

LastStatus returns the most recently retrieved status of the job. The status is retrieved when a new job is created, or when JobFromID or Job.Status is called. Call Job.Status to get the most up-to-date information about a job.

func (*Job) Location

  func (j *Job) Location() string
 
 

Location returns the job's location.

func (*Job) ProjectID

  func (j *Job) ProjectID() string
 
 

ProjectID returns the job's associated project.

func (*Job) Read

  func (j *Job) Read(ctx context.Context) (ri *RowIterator, err error)
 

Read fetches the results of a query job. If j is not a query job, Read returns an error.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      q := client.Query("select name, num from t1")
      // Call Query.Run to get a Job, then call Read on the job.
      // Note: Query.Read is a shorthand for this.
      job, err := q.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      it, err := job.Read(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      _ = it // TODO: iterate using Next or iterator.Pager.
  }
 

func (*Job) Status

  func (j *Job) Status(ctx context.Context) (js *JobStatus, err error)
 

Status retrieves the current status of the job from BigQuery. It fails if the Status could not be determined.

func (*Job) Wait

  func (j *Job) Wait(ctx context.Context) (js *JobStatus, err error)
 

Wait blocks until the job or the context is done. It returns the final status of the job. If an error occurs while retrieving the status, Wait returns that error. But Wait returns nil if the status was retrieved successfully, even if status.Err() != nil. So callers must check both errors. See the example.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ds := client.Dataset("my_dataset")
      job, err := ds.Table("t1").CopierFrom(ds.Table("t2")).Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      status, err := job.Wait(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status.Err() != nil {
          // TODO: Handle error.
      }
  }
 

JobConfig

  type JobConfig interface {
      // contains filtered or unexported methods
  }
 

JobConfig contains configuration information for a job. It is implemented by *CopyConfig, *ExtractConfig, *LoadConfig and *QueryConfig.

JobIDConfig

  type JobIDConfig struct {
      // JobID is the ID to use for the job. If empty, a random job ID will be generated.
      JobID string

      // If AddJobIDSuffix is true, then a random string will be appended to JobID.
      AddJobIDSuffix bool

      // Location is the location for the job.
      Location string
  }
 

JobIDConfig describes how to create an ID for a job.
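
For illustration, a sketch of supplying a stable, prefix-based job ID through the JobIDConfig embedded in a Query:

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      q := client.Query("SELECT 17")
      // Query embeds JobIDConfig, so the job ID can be set before Run.
      // AddJobIDSuffix keeps the prefix stable while still producing a
      // unique ID per run.
      q.JobID = "my-app-daily-rollup"
      q.AddJobIDSuffix = true
      if _, err := q.Run(ctx); err != nil {
          // TODO: Handle error.
      }
  }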

JobIterator

  type JobIterator struct {
      ProjectID       string    // Project ID of the jobs to list. Default is the client's project.
      AllUsers        bool      // Whether to list jobs owned by all users in the project, or just the current caller.
      State           State     // List only jobs in the given state. Defaults to all states.
      MinCreationTime time.Time // List only jobs created after this time.
      MaxCreationTime time.Time // List only jobs created before this time.
      ParentJobID     string    // List only jobs that are children of a given scripting job.
      // contains filtered or unexported fields
  }
 

JobIterator iterates over jobs in a project.

func (*JobIterator) Next

  func (it *JobIterator) Next() (*Job, error)
 

Next returns the next Job. Its second return value is iterator.Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.
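
A sketch of listing a project's running jobs (Client.Jobs, used here to obtain the iterator, is documented elsewhere in this package):

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
      "google.golang.org/api/iterator"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := client.Jobs(ctx)
      it.State = bigquery.Running // list only running jobs
      for {
          job, err := it.Next()
          if err == iterator.Done {
              break
          }
          if err != nil {
              // TODO: Handle error.
          }
          fmt.Println(job.ID())
      }
  }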

func (*JobIterator) PageInfo

  func (it *JobIterator) PageInfo() *iterator.PageInfo
 
 

PageInfo is a getter for the JobIterator's PageInfo.

JobStatistics

  type JobStatistics struct {
      CreationTime        time.Time
      StartTime           time.Time
      EndTime             time.Time
      TotalBytesProcessed int64

      Details Statistics

      // NumChildJobs indicates the number of child jobs run as part of a script.
      NumChildJobs int64

      // ParentJobID indicates the origin job for jobs run as part of a script.
      ParentJobID string

      // ScriptStatistics includes information run as part of a child job within
      // a script.
      ScriptStatistics *ScriptStatistics

      // ReservationUsage attributes slot consumption to reservations.
      ReservationUsage []*ReservationUsage

      // TransactionInfo indicates the transaction ID associated with the job, if any.
      TransactionInfo *TransactionInfo

      // SessionInfo contains information about the session if this job is part of one.
      SessionInfo *SessionInfo
  }
 

JobStatistics contains statistics about a job.

JobStatus

  type JobStatus struct {
      State State

      // All errors encountered during the running of the job.
      // Not all Errors are fatal, so errors here do not necessarily mean that the job has completed or was unsuccessful.
      Errors []*Error

      // Statistics about the job.
      Statistics *JobStatistics
      // contains filtered or unexported fields
  }
 

JobStatus contains the current State of a job, and errors encountered while processing that job.

func (*JobStatus) Done

  func (s *JobStatus) Done() bool
 
 

Done reports whether the job has completed. After Done returns true, the Err method will return an error if the job completed unsuccessfully.

func (*JobStatus) Err

  func (s *JobStatus) Err() error
 
 

Err returns the error that caused the job to complete unsuccessfully (if any).

LoadConfig

  type LoadConfig struct {
      // Src is the source from which data will be loaded.
      Src LoadSource

      // Dst is the table into which the data will be loaded.
      Dst *Table

      // CreateDisposition specifies the circumstances under which the destination table will be created.
      // The default is CreateIfNeeded.
      CreateDisposition TableCreateDisposition

      // WriteDisposition specifies how existing data in the destination table is treated.
      // The default is WriteAppend.
      WriteDisposition TableWriteDisposition

      // The labels associated with this job.
      Labels map[string]string

      // If non-nil, the destination table is partitioned by time.
      TimePartitioning *TimePartitioning

      // If non-nil, the destination table is partitioned by integer range.
      RangePartitioning *RangePartitioning

      // Clustering specifies the data clustering configuration for the destination table.
      Clustering *Clustering

      // Custom encryption configuration (e.g., Cloud KMS keys).
      DestinationEncryptionConfig *EncryptionConfig

      // Allows the schema of the destination table to be updated as a side effect of
      // the load job.
      SchemaUpdateOptions []string

      // For Avro-based loads, controls whether logical type annotations are used.
      // See https://cloud.google.com/bigquery/docs/loading-data-cloud-storage-avro#logical_types
      // for additional information.
      UseAvroLogicalTypes bool

      // For ingestion from datastore backups, ProjectionFields governs which fields
      // are projected from the backup. The default behavior projects all fields.
      ProjectionFields []string

      // HivePartitioningOptions allows use of Hive partitioning based on the
      // layout of objects in Cloud Storage.
      HivePartitioningOptions *HivePartitioningOptions

      // DecimalTargetTypes allows selection of how decimal values are converted when
      // processed in bigquery, subject to the value type having sufficient precision/scale
      // to support the values. In the order of NUMERIC, BIGNUMERIC, and STRING, a type is
      // selected if it is present in the list and if it supports the necessary precision and scale.
      //
      // StringTargetType supports all precision and scale values.
      DecimalTargetTypes []DecimalTargetType
  }
 

LoadConfig holds the configuration for a load job.

LoadSource

  type LoadSource interface {
      // contains filtered or unexported methods
  }
 

A LoadSource represents a source of data that can be loaded into a BigQuery table.

This package defines two LoadSources: GCSReference, for Google Cloud Storage objects, and ReaderSource, for data read from an io.Reader.
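
For illustration, a sketch of the ReaderSource path: loading a local CSV file via NewReaderSource and Table.LoaderFrom. The file name is a placeholder.

  package main

  import (
      "context"
      "os"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      f, err := os.Open("data.csv")
      if err != nil {
          // TODO: Handle error.
      }
      src := bigquery.NewReaderSource(f)
      src.AutoDetect = true // ReaderSource embeds FileConfig
      loader := client.Dataset("my_dataset").Table("my_table").LoaderFrom(src)
      job, err := loader.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status, err := job.Wait(ctx); err != nil || status.Err() != nil {
          // TODO: Handle error.
      }
  }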

LoadStatistics

  type LoadStatistics struct {
      // The number of bytes of source data in a load job.
      InputFileBytes int64

      // The number of source files in a load job.
      InputFiles int64

      // Size of the loaded data in bytes. Note that while a load job is in the
      // running state, this value may change.
      OutputBytes int64

      // The number of rows imported in a load job. Note that while an import job is
      // in the running state, this value may change.
      OutputRows int64
  }
 

LoadStatistics contains statistics about a load job.

Loader

  type Loader struct {
      JobIDConfig
      LoadConfig
      // contains filtered or unexported fields
  }
 

A Loader loads data from Google Cloud Storage into a BigQuery table.

func (*Loader) Run

  func (l *Loader) Run(ctx context.Context) (j *Job, err error)
 

Run initiates a load job.
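
A sketch of a complete GCS-based load, including waiting on the job; the URI pattern and table names are placeholders:

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      gcsRef := bigquery.NewGCSReference("gs://my-bucket/data-*.csv")
      gcsRef.AutoDetect = true // GCSReference embeds FileConfig
      loader := client.Dataset("my_dataset").Table("my_table").LoaderFrom(gcsRef)
      loader.WriteDisposition = bigquery.WriteTruncate // replace existing rows
      job, err := loader.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      status, err := job.Wait(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status.Err() != nil {
          // TODO: Handle error.
      }
  }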

MaterializedViewDefinition

  type MaterializedViewDefinition struct {
      // EnableRefresh governs whether the derived view is updated to reflect
      // changes in the base table.
      EnableRefresh bool

      // LastRefreshTime reports the time, in millisecond precision, that the
      // materialized view was last updated.
      LastRefreshTime time.Time

      // Query contains the SQL query used to define the materialized view.
      Query string

      // RefreshInterval defines the maximum frequency, in millisecond precision,
      // at which this materialized view will be refreshed.
      RefreshInterval time.Duration
  }
 

MaterializedViewDefinition contains information for materialized views.

Model

  type Model struct {
      ProjectID string
      DatasetID string
      // ModelID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_).
      // The maximum length is 1,024 characters.
      ModelID string
      // contains filtered or unexported fields
  }
 

Model represents a reference to a BigQuery ML model. Within the API, models are used largely for communicating statistical information about a given model, as creation of models is only supported via BigQuery queries (e.g. CREATE MODEL .. AS ..).

For more information, see the BigQuery ML documentation: https://cloud.google.com/bigquery/docs/bigqueryml

func (*Model) Delete

  func (m *Model) Delete(ctx context.Context) (err error)
 

Delete deletes an ML model.

func (*Model) ExtractorTo

  func (m *Model) ExtractorTo(dst *GCSReference) *Extractor
 
 

ExtractorTo returns an Extractor which can persist a BigQuery Model into Google Cloud Storage. The returned Extractor may be further configured before its Run method is called.

func (*Model) FullyQualifiedName

  func (m *Model) FullyQualifiedName() string
 
 

FullyQualifiedName returns the ID of the model in projectID:datasetID.modelid format.

func (*Model) Metadata

  func (m *Model) Metadata(ctx context.Context) (mm *ModelMetadata, err error)
 

Metadata fetches the metadata for a model, which includes ML training statistics.

func (*Model) Update

Update updates mutable fields in an ML model.

ModelIterator

  type ModelIterator struct {
      // contains filtered or unexported fields
  }
 

A ModelIterator is an iterator over Models.

func (*ModelIterator) Next

  func (it *ModelIterator) Next() (*Model, error)
 

Next returns the next result. Its second return value is Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.

func (*ModelIterator) PageInfo

  func (it *ModelIterator) PageInfo() *iterator.PageInfo
 
 

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

ModelMetadata

  type ModelMetadata struct {
      // The user-friendly description of the model.
      Description string

      // The user-friendly name of the model.
      Name string

      // The type of the model. Possible values include:
      // "LINEAR_REGRESSION" - a linear regression model
      // "LOGISTIC_REGRESSION" - a logistic regression model
      // "KMEANS" - a k-means clustering model
      Type string

      // The creation time of the model.
      CreationTime time.Time

      // The last modified time of the model.
      LastModifiedTime time.Time

      // The expiration time of the model.
      ExpirationTime time.Time

      // The geographic location where the model resides. This value is
      // inherited from the encapsulating dataset.
      Location string

      // Custom encryption configuration (e.g., Cloud KMS keys).
      EncryptionConfig *EncryptionConfig

      Labels map[string]string

      // ETag is the ETag obtained when reading metadata. Pass it to Model.Update
      // to ensure that the metadata hasn't changed since it was read.
      ETag string
      // contains filtered or unexported fields
  }
 

ModelMetadata represents information about a BigQuery ML model.

func (*ModelMetadata) RawFeatureColumns

  func (mm *ModelMetadata) RawFeatureColumns() ([]*StandardSQLField, error)
 

RawFeatureColumns exposes the underlying feature columns used to train an ML model and uses types from "google.golang.org/api/bigquery/v2", which are subject to change without warning. It is EXPERIMENTAL and subject to change or removal without notice.

func (*ModelMetadata) RawLabelColumns

  func (mm *ModelMetadata) RawLabelColumns() ([]*StandardSQLField, error)
 

RawLabelColumns exposes the underlying label columns used to train an ML model and uses types from "google.golang.org/api/bigquery/v2", which are subject to change without warning. It is EXPERIMENTAL and subject to change or removal without notice.

func (*ModelMetadata) RawTrainingRuns

  func (mm *ModelMetadata) RawTrainingRuns() []*TrainingRun
 
 

RawTrainingRuns exposes the underlying training run stats for a model using types from "google.golang.org/api/bigquery/v2", which are subject to change without warning. It is EXPERIMENTAL and subject to change or removal without notice.

ModelMetadataToUpdate

  type ModelMetadataToUpdate struct {
      // The user-friendly description of this model.
      Description optional.String

      // The user-friendly name of this model.
      Name optional.String

      // The time when this model expires. To remove a model's expiration,
      // set ExpirationTime to NeverExpire. The zero value is ignored.
      ExpirationTime time.Time

      // The model's encryption configuration.
      EncryptionConfig *EncryptionConfig
      // contains filtered or unexported fields
  }
 

ModelMetadataToUpdate is used when updating an ML model's metadata. Only non-nil fields will be updated.
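
As a sketch only: assuming Model.Update follows the same shape as the other Update methods in this package (a metadata-to-update value plus an etag argument; this signature is an assumption, not confirmed above), a conditional metadata update might look like this. The model name is a placeholder.

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      m := client.Dataset("my_dataset").Model("my_model")
      md, err := m.Metadata(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      update := bigquery.ModelMetadataToUpdate{Description: "trained on 2021 data"}
      // Passing the ETag from the prior Metadata read makes the update
      // conditional on the metadata not having changed in the meantime.
      if _, err := m.Update(ctx, update, md.ETag); err != nil {
          // TODO: Handle error.
      }
  }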

func (*ModelMetadataToUpdate) DeleteLabel

  func (u *ModelMetadataToUpdate) DeleteLabel(name string)
 

DeleteLabel causes a label to be deleted on a call to Update.

func (*ModelMetadataToUpdate) SetLabel

  func (u *ModelMetadataToUpdate) SetLabel(name, value string)
 

SetLabel causes a label to be added or modified on a call to Update.

MultiError

  type MultiError []error
 
 

A MultiError contains multiple related errors.

func (MultiError) Error

  func (m MultiError) Error() string
 
 

NullBool

  type NullBool struct {
      Bool  bool
      Valid bool // Valid is true if Bool is not NULL.
  }

NullBool represents a BigQuery BOOL that may be NULL.

func (NullBool) MarshalJSON

  func (n NullBool) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullBool to JSON.

func (NullBool) String

  func (n NullBool) String() string

func (*NullBool) UnmarshalJSON

  func (n *NullBool) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullBool.

NullDate

  type NullDate struct {
      Date  civil.Date
      Valid bool // Valid is true if Date is not NULL.
  }

NullDate represents a BigQuery DATE that may be null.

func (NullDate) MarshalJSON

  func (n NullDate) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullDate to JSON.

func (NullDate) String

  func (n NullDate) String() string

func (*NullDate) UnmarshalJSON

  func (n *NullDate) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullDate.

NullDateTime

  type NullDateTime struct {
      DateTime civil.DateTime
      Valid    bool // Valid is true if DateTime is not NULL.
  }

NullDateTime represents a BigQuery DATETIME that may be null.

func (NullDateTime) MarshalJSON

  func (n NullDateTime) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullDateTime to JSON.

func (NullDateTime) String

  func (n NullDateTime) String() string

func (*NullDateTime) UnmarshalJSON

  func (n *NullDateTime) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullDateTime.

NullFloat64

  type NullFloat64 struct {
      Float64 float64
      Valid   bool // Valid is true if Float64 is not NULL.
  }

NullFloat64 represents a BigQuery FLOAT64 that may be NULL.

func (NullFloat64) MarshalJSON

  func (n NullFloat64) MarshalJSON() (b []byte, err error)

MarshalJSON converts the NullFloat64 to JSON.

func (NullFloat64) String

  func (n NullFloat64) String() string

func (*NullFloat64) UnmarshalJSON

  func (n *NullFloat64) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullFloat64.

NullGeography

  type NullGeography struct {
      GeographyVal string
      Valid        bool // Valid is true if GeographyVal is not NULL.
  }

NullGeography represents a BigQuery GEOGRAPHY string that may be NULL.

func (NullGeography) MarshalJSON

  func (n NullGeography) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullGeography to JSON.

func (NullGeography) String

  func (n NullGeography) String() string

func (*NullGeography) UnmarshalJSON

  func (n *NullGeography) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullGeography.

NullInt64

  type NullInt64 struct {
      Int64 int64
      Valid bool // Valid is true if Int64 is not NULL.
  }

NullInt64 represents a BigQuery INT64 that may be NULL.

func (NullInt64) MarshalJSON

  func (n NullInt64) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullInt64 to JSON.

func (NullInt64) String

  func (n NullInt64) String() string

func (*NullInt64) UnmarshalJSON

  func (n *NullInt64) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullInt64.

NullString

  type NullString struct {
      StringVal string
      Valid     bool // Valid is true if StringVal is not NULL.
  }

NullString represents a BigQuery STRING that may be NULL.

func (NullString) MarshalJSON

  func (n NullString) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullString to JSON.

func (NullString) String

  func (n NullString) String() string

func (*NullString) UnmarshalJSON

  func (n *NullString) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullString.

NullTime

  type NullTime struct {
      Time  civil.Time
      Valid bool // Valid is true if Time is not NULL.
  }

NullTime represents a BigQuery TIME that may be null.

func (NullTime) MarshalJSON

  func (n NullTime) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullTime to JSON.

func (NullTime) String

  func (n NullTime) String() string

func (*NullTime) UnmarshalJSON

  func (n *NullTime) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullTime.

NullTimestamp

  type NullTimestamp struct {
      Timestamp time.Time
      Valid     bool // Valid is true if Timestamp is not NULL.
  }

NullTimestamp represents a BigQuery TIMESTAMP that may be null.

func (NullTimestamp) MarshalJSON

  func (n NullTimestamp) MarshalJSON() ([]byte, error)

MarshalJSON converts the NullTimestamp to JSON.

func (NullTimestamp) String

  func (n NullTimestamp) String() string

func (*NullTimestamp) UnmarshalJSON

  func (n *NullTimestamp) UnmarshalJSON(b []byte) error
 

UnmarshalJSON converts JSON into a NullTimestamp.

ParquetOptions

  type ParquetOptions struct {
      // EnumAsString indicates whether to infer Parquet ENUM logical type as
      // STRING instead of BYTES by default.
      EnumAsString bool

      // EnableListInference indicates whether to use schema inference
      // specifically for Parquet LIST logical type.
      EnableListInference bool
  }
 

ParquetOptions are additional options for Parquet external data sources.

PolicyTagList

  type PolicyTagList struct {
      Names []string
  }
 

PolicyTagList represents the annotations on a schema column for enforcing column-level security. For more information, see https://cloud.google.com/bigquery/docs/column-level-security-intro

PutMultiError

  type PutMultiError []RowInsertionError
 

PutMultiError contains an error for each row which was not successfully inserted into a BigQuery table.
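
A sketch of recovering per-row details from a failed Put via a type assertion (dataset and table names are placeholders):

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ins := client.Dataset("my_dataset").Table("my_table").Inserter()
      type row struct{ Name string }
      if err := ins.Put(ctx, []row{{Name: "n1"}}); err != nil {
          // A type assertion recovers the RowInsertionError for each failed row.
          if pme, ok := err.(bigquery.PutMultiError); ok {
              for _, rie := range pme {
                  fmt.Println(rie.InsertID, rie.RowIndex, rie.Errors)
              }
          }
      }
  }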

func (PutMultiError) Error

  func (pme PutMultiError) Error() string
 
 

Query

  type Query struct {
      JobIDConfig
      QueryConfig
      // contains filtered or unexported fields
  }
 

A Query queries data from a BigQuery table. Use Client.Query to create a Query.

func (*Query) Read

  func (q *Query) Read(ctx context.Context) (it *RowIterator, err error)
 

Read submits a query for execution and returns the results via a RowIterator. If the request can be satisfied using the optimized query path, that path is used in place of the jobs.insert path; the optimized path does not expose a job object.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      q := client.Query("select name, num from t1")
      it, err := q.Read(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      _ = it // TODO: iterate using Next or iterator.Pager.
  }
 

func (*Query) Run

  func (q *Query) Run(ctx context.Context) (j *Job, err error)
 

Run initiates a query job.

QueryConfig

  type QueryConfig struct {
      // Dst is the table into which the results of the query will be written.
      // If this field is nil, a temporary table will be created.
      Dst *Table

      // The query to execute. See https://cloud.google.com/bigquery/query-reference for details.
      Q string

      // DefaultProjectID and DefaultDatasetID specify the dataset to use for unqualified table names in the query.
      // If DefaultProjectID is set, DefaultDatasetID must also be set.
      DefaultProjectID string
      DefaultDatasetID string

      // TableDefinitions describes data sources outside of BigQuery.
      // The map keys may be used as table names in the query string.
      //
      // When a QueryConfig is returned from Job.Config, the map values
      // are always of type *ExternalDataConfig.
      TableDefinitions map[string]ExternalData

      // CreateDisposition specifies the circumstances under which the destination table will be created.
      // The default is CreateIfNeeded.
      CreateDisposition TableCreateDisposition

      // WriteDisposition specifies how existing data in the destination table is treated.
      // The default is WriteEmpty.
      WriteDisposition TableWriteDisposition

      // DisableQueryCache prevents results being fetched from the query cache.
      // If this field is false, results are fetched from the cache if they are available.
      // The query cache is a best-effort cache that is flushed whenever tables in the query are modified.
      // Cached results are only available when TableID is unspecified in the query's destination Table.
      // For more information, see https://cloud.google.com/bigquery/querying-data#querycaching
      DisableQueryCache bool

      // DisableFlattenedResults prevents results being flattened.
      // If this field is false, results from nested and repeated fields are flattened.
      // DisableFlattenedResults implies AllowLargeResults.
      // For more information, see https://cloud.google.com/bigquery/docs/data#nested
      DisableFlattenedResults bool

      // AllowLargeResults allows the query to produce arbitrarily large result tables.
      // The destination must be a table.
      // When using this option, queries will take longer to execute, even if the result set is small.
      // For additional limitations, see https://cloud.google.com/bigquery/querying-data#largequeryresults
      AllowLargeResults bool

      // Priority specifies the priority with which to schedule the query.
      // The default priority is InteractivePriority.
      // For more information, see https://cloud.google.com/bigquery/querying-data#batchqueries
      Priority QueryPriority

      // MaxBillingTier sets the maximum billing tier for a Query.
      // Queries that have resource usage beyond this tier will fail (without
      // incurring a charge). If this field is zero, the project default will be used.
      MaxBillingTier int

      // MaxBytesBilled limits the number of bytes billed for
      // this job. Queries that would exceed this limit will fail (without incurring
      // a charge).
      // If this field is less than 1, the project default will be
      // used.
      MaxBytesBilled int64

      // UseStandardSQL causes the query to use standard SQL. The default.
      // Deprecated: use UseLegacySQL.
      UseStandardSQL bool

      // UseLegacySQL causes the query to use legacy SQL.
      UseLegacySQL bool

      // Parameters is a list of query parameters. The presence of parameters
      // implies the use of standard SQL.
      // If the query uses positional syntax ("?"), then no parameter may have a name.
      // If the query uses named syntax ("@p"), then all parameters must have names.
      // It is illegal to mix positional and named syntax.
      Parameters []QueryParameter

      // TimePartitioning specifies time-based partitioning
      // for the destination table.
      TimePartitioning *TimePartitioning

      // RangePartitioning specifies integer range-based partitioning
      // for the destination table.
      RangePartitioning *RangePartitioning

      // Clustering specifies the data clustering configuration for the destination table.
      Clustering *Clustering

      // The labels associated with this job.
      Labels map[string]string

      // If true, don't actually run this job. A valid query will return a mostly
      // empty response with some processing statistics, while an invalid query will
      // return the same error it would if it wasn't a dry run.
      //
      // Query.Read will fail with dry-run queries. Call Query.Run instead, and then
      // call LastStatus on the returned job to get statistics. Calling Status on a
      // dry-run job will fail.
      DryRun bool

      // Custom encryption configuration (e.g., Cloud KMS keys).
      DestinationEncryptionConfig *EncryptionConfig

      // Allows the schema of the destination table to be updated as a side effect of
      // the query job.
      SchemaUpdateOptions []string

      // CreateSession will trigger creation of a new session when true.
      CreateSession bool

      // ConnectionProperties are optional key-values settings.
      ConnectionProperties []*ConnectionProperty
  }
 

QueryConfig holds the configuration for a query job.
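
For illustration, a sketch of using DryRun to estimate a query's cost before running it for real, following the guidance in the DryRun comment above:

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      q := client.Query("SELECT name, num FROM t1")
      q.DryRun = true // Query embeds QueryConfig
      job, err := q.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      // Dry-run jobs complete immediately; read statistics from LastStatus,
      // not Status.
      fmt.Println(job.LastStatus().Statistics.TotalBytesProcessed)
  }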

QueryParameter

  type QueryParameter struct {
      // Name is used for named parameter mode.
      // It must match the name in the query case-insensitively.
      Name string

      // Value is the value of the parameter.
      //
      // When you create a QueryParameter to send to BigQuery, the following Go types
      // are supported, with their corresponding Bigquery types:
      // int, int8, int16, int32, int64, uint8, uint16, uint32: INT64
      //   Note that uint, uint64 and uintptr are not supported, because
      //   they may contain values that cannot fit into a 64-bit signed integer.
      // float32, float64: FLOAT64
      // bool: BOOL
      // string: STRING
      // []byte: BYTES
      // time.Time: TIMESTAMP
      // *big.Rat: NUMERIC
      // Arrays and slices of the above.
      // Structs of the above. Only the exported fields are used.
      //
      // For scalar values, you can supply the Null types within this library
      // to send the appropriate NULL values (e.g. NullInt64, NullString, etc).
      //
      // When a QueryParameter is returned inside a QueryConfig from a call to
      // Job.Config:
      // Integers are of type int64.
      // Floating-point values are of type float64.
      // Arrays are of type []interface{}, regardless of the array element type.
      // Structs are of type map[string]interface{}.
      //
      // When valid (non-null) Null types are sent, they come back as the Go types indicated
      // above. Null strings will report in query statistics as a valid empty
      // string.
      Value interface{}
  }
 

A QueryParameter is a parameter to a query.
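
For example, here is a minimal sketch of binding a named parameter; the parameter name is illustrative, and ctx and client are assumed to have been created as shown earlier:

  q := client.Query(
      "SELECT year, SUM(number) as num " +
          "FROM `bigquery-public-data.usa_names.usa_1910_2013` " +
          "WHERE name = @name GROUP BY year")
  q.Parameters = []bigquery.QueryParameter{
      {Name: "name", Value: "William"},
  }
  it, err := q.Read(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  _ = it // TODO: iterate using Next.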

QueryPriority

  type QueryPriority string
 
 

QueryPriority specifies a priority with which a query is to be executed.
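
For example, to run a query with batch priority, set the Priority field of the Query's embedded QueryConfig before calling Run or Read; ctx and client are assumed from earlier:

  q := client.Query("SELECT COUNT(*) FROM `bigquery-public-data.usa_names.usa_1910_2013`")
  q.Priority = bigquery.BatchPriority
  job, err := q.Run(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  _ = job // TODO: poll or wait on the job.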

BatchPriority, InteractivePriority

  const (
      // BatchPriority specifies that the query should be scheduled with the
      // batch priority.  BigQuery queues each batch query on your behalf, and
      // starts the query as soon as idle resources are available, usually within
      // a few minutes. If BigQuery hasn't started the query within 24 hours,
      // BigQuery changes the job priority to interactive. Batch queries don't
      // count towards your concurrent rate limit, which can make it easier to
      // start many queries at once.
      //
      // More information can be found at https://cloud.google.com/bigquery/docs/running-queries#batchqueries.
      BatchPriority QueryPriority = "BATCH"
      // InteractivePriority specifies that the query should be scheduled with
      // interactive priority, which means that the query is executed as soon as
      // possible. Interactive queries count towards your concurrent rate limit
      // and your daily limit. It is the default priority with which queries get
      // executed.
      //
      // More information can be found at https://cloud.google.com/bigquery/docs/running-queries#queries.
      InteractivePriority QueryPriority = "INTERACTIVE"
  )
 

QueryStatistics

  type QueryStatistics struct {
      // Billing tier for the job.
      BillingTier int64

      // Whether the query result was fetched from the query cache.
      CacheHit bool

      // The type of query statement, if valid.
      StatementType string

      // Total bytes billed for the job.
      TotalBytesBilled int64

      // Total bytes processed for the job.
      TotalBytesProcessed int64

      // For dry run queries, indicates how accurate the TotalBytesProcessed value is.
      // When indicated, values include:
      //   UNKNOWN: accuracy of the estimate is unknown.
      //   PRECISE: estimate is precise.
      //   LOWER_BOUND: estimate is lower bound of what the query would cost.
      //   UPPER_BOUND: estimate is upper bound of what the query would cost.
      TotalBytesProcessedAccuracy string

      // Describes execution plan for the query.
      QueryPlan []*ExplainQueryStage

      // The number of rows affected by a DML statement. Present only for DML
      // statements INSERT, UPDATE or DELETE.
      NumDMLAffectedRows int64

      // DMLStats provides statistics about the row mutations performed by
      // DML statements.
      DMLStats *DMLStatistics

      // Describes a timeline of job execution.
      Timeline []*QueryTimelineSample

      // ReferencedTables: [Output-only] Referenced tables for
      // the job. Queries that reference more than 50 tables will not have a
      // complete list.
      ReferencedTables []*Table

      // The schema of the results. Present only for successful dry run of
      // non-legacy SQL queries.
      Schema Schema

      // Slot-milliseconds consumed by this query job.
      SlotMillis int64

      // Standard SQL: list of undeclared query parameter names detected during a
      // dry run validation.
      UndeclaredQueryParameterNames []string

      // DDL target table.
      DDLTargetTable *Table

      // DDL Operation performed on the target table.  Used to report how the
      // query impacted the DDL target table.
      DDLOperationPerformed string

      // The DDL target routine, present only for CREATE/DROP FUNCTION/PROCEDURE queries.
      DDLTargetRoutine *Routine
  }
 

QueryStatistics contains statistics about a query job.
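
As an illustrative sketch, query statistics can be read from a completed job's status by type-asserting the Details field; a Query q, ctx, and client are assumed from earlier examples:

  job, err := q.Run(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  status, err := job.Wait(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  if qs, ok := status.Statistics.Details.(*bigquery.QueryStatistics); ok {
      fmt.Println(qs.TotalBytesProcessed, qs.CacheHit, qs.SlotMillis)
  }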

QueryTimelineSample

  type QueryTimelineSample struct {
      // Total number of units currently being processed by workers, represented as
      // largest value since last sample.
      ActiveUnits int64

      // Total parallel units of work completed by this query.
      CompletedUnits int64

      // Time elapsed since start of query execution.
      Elapsed time.Duration

      // Total parallel units of work remaining for the active stages.
      PendingUnits int64

      // Cumulative slot-milliseconds consumed by the query.
      SlotMillis int64
  }
 

QueryTimelineSample represents a sample of execution statistics at a point in time.

RangePartitioning

  type RangePartitioning struct {
      // The field by which the table is partitioned.
      // This field must be a top-level field, and must be typed as an
      // INTEGER/INT64.
      Field string

      // The details of how partitions are mapped onto the integer range.
      Range *RangePartitioningRange
  }
 

RangePartitioning indicates an integer-range based storage organization strategy.

RangePartitioningRange

  type RangePartitioningRange struct {
      // The start of the defined range of values, inclusive of the specified value.
      Start int64

      // The end of the defined range of values, exclusive of the specified value.
      End int64

      // The width of each interval range.
      Interval int64
  }
 

RangePartitioningRange defines the boundaries and width of partitioned values.
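
As a sketch of how these pieces fit together, the following creates a table partitioned into ranges of an integer column; the field name and bounds are illustrative, and ctx, client, and schema are assumed:

  t := client.Dataset("my_dataset").Table("my_range_partitioned_table")
  err := t.Create(ctx, &bigquery.TableMetadata{
      Schema: schema,
      RangePartitioning: &bigquery.RangePartitioning{
          Field: "customer_id",
          Range: &bigquery.RangePartitioningRange{
              Start:    0,
              End:      100000,
              Interval: 10,
          },
      },
  })
  if err != nil {
      // TODO: Handle error.
  }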

ReaderSource

  type ReaderSource struct {
      FileConfig
      // contains filtered or unexported fields
  }
 

A ReaderSource is a source for a load operation that gets data from an io.Reader.

When a ReaderSource is part of a LoadConfig obtained via Job.Config, its internal io.Reader will be nil, so it cannot be used for a subsequent load operation.

func NewReaderSource

  func NewReaderSource(r io.Reader) *ReaderSource
 
 

NewReaderSource creates a ReaderSource from an io.Reader. You may optionally configure properties on the ReaderSource that describe the data being read, before passing it to Table.LoaderFrom.

ReservationUsage

  type ReservationUsage struct {
      // SlotMillis reports the slot milliseconds utilized within the given reservation.
      SlotMillis int64

      // Name indicates the utilized reservation name, or "unreserved" for on-demand usage.
      Name string
  }
 

ReservationUsage contains information about a job's usage of a single reservation.

Routine

  type Routine struct {
      ProjectID string
      DatasetID string
      RoutineID string
      // contains filtered or unexported fields
  }
 

Routine represents a reference to a BigQuery routine. There are multiple types of routines including stored procedures and scalar user-defined functions (UDFs). For more information, see the BigQuery documentation at https://cloud.google.com/bigquery/docs/

func (*Routine) Create

  func (r *Routine) Create(ctx context.Context, rm *RoutineMetadata) (err error)
 

Create creates a Routine in the BigQuery service. Pass in a RoutineMetadata to define the routine.
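
For instance, a minimal sketch of creating a SQL scalar UDF; the dataset, routine ID, and function body are illustrative, and ctx and client are assumed:

  routine := client.Dataset("my_dataset").Routine("triple")
  err := routine.Create(ctx, &bigquery.RoutineMetadata{
      Type:     "SCALAR_FUNCTION",
      Language: "SQL",
      Body:     "x * 3",
      Arguments: []*bigquery.RoutineArgument{
          {Name: "x", DataType: &bigquery.StandardSQLDataType{TypeKind: "INT64"}},
      },
  })
  if err != nil {
      // TODO: Handle error.
  }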

func (*Routine) Delete

  func (r *Routine) Delete(ctx context.Context) (err error)
 

Delete removes a Routine from a dataset.

func (*Routine) FullyQualifiedName

  func (r *Routine) FullyQualifiedName() string
 
 

FullyQualifiedName returns an identifier for the routine in project.dataset.routine format.

func (*Routine) Metadata

  func (r *Routine) Metadata(ctx context.Context) (rm *RoutineMetadata, err error)
 

Metadata fetches the metadata for a given Routine.

func (*Routine) Update

  func (r *Routine) Update(ctx context.Context, upd *RoutineMetadataToUpdate, etag string) (rm *RoutineMetadata, err error)

Update modifies properties of a Routine using the API.

RoutineArgument

  type RoutineArgument struct {
      // The name of this argument.  Can be absent for function return argument.
      Name string

      // Kind indicates the kind of argument represented.
      // Possible values:
      //   ARGUMENT_KIND_UNSPECIFIED
      //   FIXED_TYPE - The argument is a variable with fully specified
      //     type, which can be a struct or an array, but not a table.
      //   ANY_TYPE - The argument is any type, including struct or array,
      //     but not a table.
      Kind string

      // Mode is optional, and indicates whether an argument is input or output.
      // Mode can only be set for procedures.
      //
      // Possible values:
      //   MODE_UNSPECIFIED
      //   IN - The argument is input-only.
      //   OUT - The argument is output-only.
      //   INOUT - The argument is both an input and an output.
      Mode string

      // DataType provides typing information.  Unnecessary for ANY_TYPE Kind
      // arguments.
      DataType *StandardSQLDataType
  }
 

RoutineArgument represents an argument supplied to a routine such as a UDF or stored procedure.

RoutineDeterminism

  type RoutineDeterminism string
 
 

RoutineDeterminism specifies the level of determinism that JavaScript user-defined functions (UDFs) exhibit.

Deterministic, NotDeterministic

  const (
      // Deterministic indicates that two calls with the same input to a UDF yield the same output.
      Deterministic RoutineDeterminism = "DETERMINISTIC"
      // NotDeterministic indicates that the output of the UDF is not guaranteed to be
      // the same each time for a given set of inputs.
      NotDeterministic RoutineDeterminism = "NOT_DETERMINISTIC"
  )
 

RoutineIterator

  type RoutineIterator struct {
      // contains filtered or unexported fields
  }
 

A RoutineIterator is an iterator over Routines.

func (*RoutineIterator) Next

  func (it *RoutineIterator) Next() (*Routine, error)
 

Next returns the next result. Its second return value is Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.
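
A minimal sketch of iterating over a dataset's routines; ctx and client are assumed, and Dataset.Routines returns a RoutineIterator:

  it := client.Dataset("my_dataset").Routines(ctx)
  for {
      r, err := it.Next()
      if err == iterator.Done {
          break
      }
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(r.RoutineID)
  }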

func (*RoutineIterator) PageInfo

  func (it *RoutineIterator) PageInfo() *iterator.PageInfo
 
 

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

RoutineMetadata

  type RoutineMetadata struct {
      ETag string

      // Type indicates the type of routine, such as SCALAR_FUNCTION, PROCEDURE,
      // or TABLE_VALUED_FUNCTION.
      Type string

      CreationTime time.Time

      Description string

      // DeterminismLevel is only applicable to JavaScript UDFs.
      DeterminismLevel RoutineDeterminism

      LastModifiedTime time.Time

      // Language of the routine, such as SQL or JAVASCRIPT.
      Language string

      // The list of arguments for the routine.
      Arguments []*RoutineArgument

      ReturnType *StandardSQLDataType

      // Set only if the routine type is TABLE_VALUED_FUNCTION.
      ReturnTableType *StandardSQLTableType

      // For JavaScript routines, this indicates the paths for imported libraries.
      ImportedLibraries []string

      // Body contains the routine's body.
      // For functions, Body is the expression in the AS clause.
      //
      // For SQL functions, it is the substring inside the parentheses of a CREATE
      // FUNCTION statement.
      //
      // For JAVASCRIPT functions, it is the evaluated string in the AS clause of
      // a CREATE FUNCTION statement.
      Body string
  }
 

RoutineMetadata represents details of a given BigQuery Routine.

RoutineMetadataToUpdate

  type RoutineMetadataToUpdate struct {
      Arguments         []*RoutineArgument
      Description       optional.String
      DeterminismLevel  optional.String
      Type              optional.String
      Language          optional.String
      Body              optional.String
      ImportedLibraries []string
      ReturnType        *StandardSQLDataType
      ReturnTableType   *StandardSQLTableType
  }
 

RoutineMetadataToUpdate governs updating a routine.

RowInsertionError

  type RowInsertionError struct {
      InsertID string // The InsertID associated with the affected row.
      RowIndex int    // The 0-based index of the affected row in the batch of rows being inserted.
      Errors   MultiError
  }
 

RowInsertionError contains all errors that occurred when attempting to insert a row.
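
As a sketch, these errors are typically observed by type-asserting the error returned from Inserter.Put to this package's PutMultiError, a slice of RowInsertionError; ins is an Inserter as returned by Table.Inserter, and rows is illustrative:

  // rows is assumed to be any value accepted by Put, e.g. a slice of structs.
  if err := ins.Put(ctx, rows); err != nil {
      if multiErr, ok := err.(bigquery.PutMultiError); ok {
          for _, rowErr := range multiErr {
              fmt.Println("row", rowErr.RowIndex, "failed:", rowErr.Errors)
          }
      } else {
          // TODO: Handle other errors.
      }
  }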

func (*RowInsertionError) Error

  func (e *RowInsertionError) Error() string
 
 

RowIterator

  type RowIterator struct {
      // StartIndex can be set before the first call to Next. If PageInfo().Token
      // is also set, StartIndex is ignored.
      StartIndex uint64

      // The schema of the table. Available after the first call to Next.
      Schema Schema

      // The total number of rows in the result. Available after the first call to Next.
      // May be zero just after rows were inserted.
      TotalRows uint64

      // contains filtered or unexported fields
  }
 

A RowIterator provides access to the result of a BigQuery lookup.

func (*RowIterator) Next

  func (it *RowIterator) Next(dst interface{}) error
 
 

Next loads the next row into dst. Its return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

dst may implement ValueLoader, or may be a *[]Value, *map[string]Value, or struct pointer.

If dst is a *[]Value, it will be set to new []Value whose i'th element will be populated with the i'th column of the row.

If dst is a *map[string]Value, a new map will be created if dst is nil. Then for each schema column name, the map key of that name will be set to the column's value. STRUCT types (RECORD types or nested schemas) become nested maps.

If dst is pointer to a struct, each column in the schema will be matched with an exported field of the struct that has the same name, ignoring case. Unmatched schema columns and struct fields will be ignored.

Each BigQuery column type corresponds to one or more Go types; a matching struct field must be of the correct type. The correspondences are:

  STRING      string
  BOOL        bool
  INTEGER     int, int8, int16, int32, int64, uint8, uint16, uint32
  FLOAT       float32, float64
  BYTES       []byte
  TIMESTAMP   time.Time
  DATE        civil.Date
  TIME        civil.Time
  DATETIME    civil.DateTime

A repeated field corresponds to a slice or array of the element type. A STRUCT type (RECORD or nested schema) corresponds to a nested struct or struct pointer. All calls to Next on the same iterator must use the same struct type.

It is an error to attempt to read a BigQuery NULL value into a struct field, unless the field is of type []byte or is one of the special Null types: NullInt64, NullFloat64, NullBool, NullString, NullTimestamp, NullDate, NullTime or NullDateTime. You can also use a *[]Value or *map[string]Value to read from a table with NULLs.

Examples

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
      "google.golang.org/api/iterator"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      q := client.Query("select name, num from t1")
      it, err := q.Read(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      for {
          var row []bigquery.Value
          err := it.Next(&row)
          if err == iterator.Done {
              break
          }
          if err != nil {
              // TODO: Handle error.
          }
          fmt.Println(row)
      }
  }
 
struct
  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
      "google.golang.org/api/iterator"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      type score struct {
          Name string
          Num  int
      }
      q := client.Query("select name, num from t1")
      it, err := q.Read(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      for {
          var s score
          err := it.Next(&s)
          if err == iterator.Done {
              break
          }
          if err != nil {
              // TODO: Handle error.
          }
          fmt.Println(s)
      }
  }
 

func (*RowIterator) PageInfo

  func (it *RowIterator) PageInfo() *iterator.PageInfo
 
 

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

func (*RowIterator) SourceJob

  func (ri *RowIterator) SourceJob() *Job
 
 

SourceJob returns an instance of a Job if the RowIterator is backed by a query, or nil otherwise.

Schema

  type Schema []*FieldSchema
 
 

Schema describes the fields in a table or query result.

func InferSchema

  func InferSchema(st interface{}) (Schema, error)
 

InferSchema tries to derive a BigQuery schema from the supplied struct value. Each exported struct field is mapped to a field in the schema.

The following BigQuery types are inferred from the corresponding Go types. (This is the same mapping as that used for RowIterator.Next.) Fields inferred from these types are marked required (non-nullable).

  STRING      string
  BOOL        bool
  INTEGER     int, int8, int16, int32, int64, uint8, uint16, uint32
  FLOAT       float32, float64
  BYTES       []byte
  TIMESTAMP   time.Time
  DATE        civil.Date
  TIME        civil.Time
  DATETIME    civil.DateTime
  NUMERIC     *big.Rat

The big.Rat type supports numbers of arbitrary size and precision. Values will be rounded to 9 digits after the decimal point before being transmitted to BigQuery. See https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#numeric-type for more on NUMERIC.

A Go slice or array type is inferred to be a BigQuery repeated field of the element type. The element type must be one of the above listed types.

Because Go lacks a unique native type for GEOGRAPHY, there is no schema inference to GEOGRAPHY at this time.

Nullable fields are inferred from the NullXXX types, declared in this package:

  STRING      NullString
  BOOL        NullBool
  INTEGER     NullInt64
  FLOAT       NullFloat64
  TIMESTAMP   NullTimestamp
  DATE        NullDate
  TIME        NullTime
  DATETIME    NullDateTime
  GEOGRAPHY   NullGeography

For a nullable BYTES field, use the type []byte and tag the field "nullable" (see below). For a nullable NUMERIC field, use the type *big.Rat and tag the field "nullable".

A struct field that is of struct type is inferred to be a required field of type RECORD with a schema inferred recursively. For backwards compatibility, a field of type pointer to struct is also inferred to be required. To get a nullable RECORD field, use the "nullable" tag (see below).

InferSchema returns an error if any of the examined fields is of type uint, uint64, uintptr, map, interface, complex64, complex128, func, or chan. Future versions may handle these cases without error.

Recursively defined structs are also disallowed.

Struct fields may be tagged in a way similar to the encoding/json package. A tag of the form bigquery:"name" uses "name" instead of the struct field name as the BigQuery field name. A tag of the form bigquery:"-" omits the field from the inferred schema. The "nullable" option marks the field as nullable (not required). It is only needed for []byte, *big.Rat and pointer-to-struct fields, and cannot appear on other fields. In this example, the Go name of the field is retained: bigquery:",nullable"

Examples

  package main

  import (
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      type Item struct {
          Name  string
          Size  float64
          Count int
      }
      schema, err := bigquery.InferSchema(Item{})
      if err != nil {
          fmt.Println(err)
          // TODO: Handle error.
      }
      for _, fs := range schema {
          fmt.Println(fs.Name, fs.Type)
      }
  }
 
tags
  package main

  import (
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      type Item struct {
          Name     string
          Size     float64
          Count    int    `bigquery:"number"`
          Secret   []byte `bigquery:"-"`
          Optional bigquery.NullBool
          OptBytes []byte `bigquery:",nullable"`
      }
      schema, err := bigquery.InferSchema(Item{})
      if err != nil {
          fmt.Println(err)
          // TODO: Handle error.
      }
      for _, fs := range schema {
          fmt.Println(fs.Name, fs.Type, fs.Required)
      }
  }
 

func SchemaFromJSON

  func SchemaFromJSON(schemaJSON []byte) (Schema, error)
 

SchemaFromJSON takes a JSON BigQuery table schema definition (as generated by https://github.com/GoogleCloudPlatform/protoc-gen-bq-schema ) and returns a fully-populated Schema.
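
A minimal sketch; the JSON schema below is illustrative:

  schemaJSON := []byte(`[
      {"name": "full_name", "type": "STRING", "mode": "REQUIRED"},
      {"name": "age", "type": "INTEGER", "mode": "NULLABLE"}
  ]`)
  schema, err := bigquery.SchemaFromJSON(schemaJSON)
  if err != nil {
      // TODO: Handle error.
  }
  _ = schema // TODO: use the schema, e.g. in a TableMetadata.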

func (Schema) Relax

  func (s Schema) Relax() Schema
 
 

Relax returns a version of the schema where no fields are marked as Required.
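
This is convenient when loading data that may omit some columns; a short sketch, reusing a schema value such as one returned by InferSchema:

  relaxed := schema.Relax() // every field is now NULLABLE rather than REQUIRED
  _ = relaxed               // TODO: use relaxed, e.g. as a Loader's schema.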

ScriptStackFrame

  type ScriptStackFrame struct {
      StartLine   int64
      StartColumn int64
      EndLine     int64
      EndColumn   int64

      // Name of the active procedure.  Empty if in a top-level script.
      ProcedureID string

      // Text of the current statement/expression.
      Text string
  }
 

ScriptStackFrame represents the location of the statement/expression being evaluated.

Line and column numbers are defined as follows:

  • Line and column numbers start with one. That is, line 1 column 1 denotes the start of the script.
  • When inside a stored procedure, all line/column numbers are relative to the procedure body, not the script in which the procedure was defined.
  • Start/end positions exclude leading/trailing comments and whitespace. The end position always ends with a ";", when present.
  • Multi-byte Unicode characters are treated as just one column.
  • If the original script (or procedure definition) contains TAB characters, a tab "snaps" the indentation forward to the nearest multiple of 8 characters, plus 1. For example, a TAB on column 1, 2, 3, 4, 5, 6, 7, or 8 will advance the next character to column 9. A TAB on column 9, 10, 11, 12, 13, 14, 15, or 16 will advance the next character to column 17.

ScriptStatistics

  type ScriptStatistics struct {
      EvaluationKind string
      StackFrames    []*ScriptStackFrame
  }
 

ScriptStatistics reports information about script-based query jobs.

SessionInfo

  type SessionInfo struct {
      SessionID string
  }
 

SessionInfo contains information about a session associated with a job.

SnapshotDefinition

  type SnapshotDefinition struct {
      // BaseTableReference describes the ID of the table that this snapshot
      // came from.
      BaseTableReference *Table

      // SnapshotTime indicates when the snapshot of the base table was taken.
      SnapshotTime time.Time
  }
 

SnapshotDefinition provides metadata related to the origin of a snapshot.

StandardSQLDataType

  type StandardSQLDataType struct {
      // ArrayElementType indicates the type of an array's elements, when the
      // TypeKind is ARRAY.
      ArrayElementType *StandardSQLDataType

      // StructType indicates the struct definition (fields), when the
      // TypeKind is STRUCT.
      StructType *StandardSQLStructType

      // The top-level type of this type definition.
      // Can be any standard SQL data type.  For more information about BigQuery
      // data types, see
      // https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types
      //
      // Additional information is available in the REST documentation:
      // https://cloud.google.com/bigquery/docs/reference/rest/v2/StandardSqlDataType
      TypeKind string
  }
 

StandardSQLDataType conveys type information using the Standard SQL type system.

StandardSQLField

  type StandardSQLField struct {
      // The name of this field.  Can be absent for struct fields.
      Name string

      // Data type for the field.
      Type *StandardSQLDataType
  }
 

StandardSQLField represents a field using the Standard SQL data type system.

StandardSQLStructType

  type StandardSQLStructType struct {
      Fields []*StandardSQLField
  }
 

StandardSQLStructType represents a structure type, which is a list of Standard SQL fields. For more information, see: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#struct-type

StandardSQLTableType

  type StandardSQLTableType struct {
      // The columns of the table.
      Columns []*StandardSQLField
  }
 

StandardSQLTableType models a table-like resource, which has a set of columns.

State

  type State int
 
 

State is one of a sequence of states that a Job progresses through as it is processed.

StateUnspecified, Pending, Running, Done

  const (
      // StateUnspecified is the default JobIterator state.
      StateUnspecified State = iota
      // Pending is a state that describes that the job is pending.
      Pending
      // Running is a state that describes that the job is running.
      Running
      // Done is a state that describes that the job is done.
      Done
  )
 

Statistics

  type Statistics interface {
      // contains filtered or unexported methods
  }
 

Statistics is one of ExtractStatistics, LoadStatistics or QueryStatistics.

StreamingBuffer

  type StreamingBuffer struct {
      // A lower-bound estimate of the number of bytes currently in the streaming
      // buffer.
      EstimatedBytes uint64

      // A lower-bound estimate of the number of rows currently in the streaming
      // buffer.
      EstimatedRows uint64

      // The time of the oldest entry in the streaming buffer.
      OldestEntryTime time.Time
  }
 

StreamingBuffer holds information about the streaming buffer.
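
A short sketch of inspecting the buffer via table metadata; ctx and client are assumed:

  md, err := client.Dataset("my_dataset").Table("my_table").Metadata(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  if md.StreamingBuffer != nil {
      fmt.Println("estimated rows in buffer:", md.StreamingBuffer.EstimatedRows)
  }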

StructSaver

  type StructSaver struct {
      // Schema determines what fields of the struct are uploaded. It should
      // match the table's schema.
      // Schema is optional for StructSavers that are passed to Uploader.Put.
      Schema Schema

      // InsertID governs the best-effort deduplication feature of
      // BigQuery streaming inserts.
      //
      // If the InsertID is empty, a random InsertID will be generated by
      // this library to facilitate deduplication.
      //
      // If the InsertID is set to the sentinel value NoDedupeID, an InsertID
      // is not sent.
      //
      // For all other non-empty values, BigQuery will use the provided
      // value for best-effort deduplication.
      InsertID string

      // Struct should be a struct or a pointer to a struct.
      Struct interface{}
  }
 

StructSaver implements ValueSaver for a struct. The struct is converted to a map of values by using the values of struct fields corresponding to schema fields. Additional and missing fields are ignored, as are nested struct pointers that are nil.
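
A minimal sketch of streaming rows with explicit insert IDs via StructSaver; ctx and client are assumed, and the score type and IDs are illustrative:

  type score struct {
      Name string
      Num  int
  }
  ins := client.Dataset("my_dataset").Table("my_table").Inserter()
  savers := []*bigquery.StructSaver{
      {Struct: score{Name: "n1", Num: 12}, InsertID: "id1"},
      {Struct: score{Name: "n2", Num: 31}, InsertID: "id2"},
  }
  if err := ins.Put(ctx, savers); err != nil {
      // TODO: Handle error.
  }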

func (*StructSaver) Save

  func (ss *StructSaver) Save() (row map[string]Value, insertID string, err error)
 

Save implements ValueSaver.

Table

  type Table struct {
      // ProjectID, DatasetID and TableID may be omitted if the Table is the destination for a query.
      // In this case the result will be stored in an ephemeral table.
      ProjectID string
      DatasetID string

      // TableID must contain only letters (a-z, A-Z), numbers (0-9), or underscores (_).
      // The maximum length is 1,024 characters.
      TableID string

      // contains filtered or unexported fields
  }
 

A Table is a reference to a BigQuery table.

func (*Table) CopierFrom

  func (t *Table) CopierFrom(srcs ...*Table) *Copier
 
 

CopierFrom returns a Copier which can be used to copy data into a BigQuery table from one or more BigQuery tables. The returned Copier may optionally be further configured before its Run method is called.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ds := client.Dataset("my_dataset")
      c := ds.Table("combined").CopierFrom(ds.Table("t1"), ds.Table("t2"))
      c.WriteDisposition = bigquery.WriteTruncate
      // TODO: set other options on the Copier.
      job, err := c.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      status, err := job.Wait(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status.Err() != nil {
          // TODO: Handle error.
      }
  }
 

func (*Table) Create

  func (t *Table) Create(ctx context.Context, tm *TableMetadata) (err error)
 

Create creates a table in the BigQuery service. Pass in a TableMetadata value to configure the table. If tm.View.Query is non-empty, the created table will be of type VIEW. If no ExpirationTime is specified, the table will never expire. After table creation, a view can be modified only if its table was initially created with a view.

Examples

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      t := client.Dataset("my_dataset").Table("new-table")
      if err := t.Create(ctx, nil); err != nil {
          // TODO: Handle error.
      }
  }
 
encryptionKey
  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      // Infer table schema from a Go type.
      schema, err := bigquery.InferSchema(Item{})
      if err != nil {
          // TODO: Handle error.
      }
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      t := client.Dataset("my_dataset").Table("new-table")
      // TODO: Replace this key with a key you have created in Cloud KMS.
      keyName := "projects/P/locations/L/keyRings/R/cryptoKeys/K"
      if err := t.Create(ctx, &bigquery.TableMetadata{
          Name:             "My New Table",
          Schema:           schema,
          EncryptionConfig: &bigquery.EncryptionConfig{KMSKeyName: keyName},
      }); err != nil {
          // TODO: Handle error.
      }
  }

  type Item struct {
      Name  string
      Size  float64
      Count int
  }

  // Save implements the ValueSaver interface.
  func (i *Item) Save() (map[string]bigquery.Value, string, error) {
      return map[string]bigquery.Value{
          "Name":  i.Name,
          "Size":  i.Size,
          "Count": i.Count,
      }, "", nil
  }
 
initialize
  package main

  import (
      "context"
      "time"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      // Infer table schema from a Go type.
      schema, err := bigquery.InferSchema(Item{})
      if err != nil {
          // TODO: Handle error.
      }
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      t := client.Dataset("my_dataset").Table("new-table")
      if err := t.Create(ctx, &bigquery.TableMetadata{
          Name:           "My New Table",
          Schema:         schema,
          ExpirationTime: time.Now().Add(24 * time.Hour),
      }); err != nil {
          // TODO: Handle error.
      }
  }

  type Item struct {
      Name  string
      Size  float64
      Count int
  }

  // Save implements the ValueSaver interface.
  func (i *Item) Save() (map[string]bigquery.Value, string, error) {
      return map[string]bigquery.Value{
          "Name":  i.Name,
          "Size":  i.Size,
          "Count": i.Count,
      }, "", nil
  }
 

func (*Table) Delete

  func (t *Table) Delete(ctx context.Context) (err error)
 

Delete deletes the table.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      if err := client.Dataset("my_dataset").Table("my_table").Delete(ctx); err != nil {
          // TODO: Handle error.
      }
  }
 

func (*Table) ExtractorTo

  func (t *Table) ExtractorTo(dst *GCSReference) *Extractor
 
 

ExtractorTo returns an Extractor which can be used to extract data from a BigQuery table into Google Cloud Storage. The returned Extractor may optionally be further configured before its Run method is called.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
      gcsRef.FieldDelimiter = ":"
      // TODO: set other options on the GCSReference.
      ds := client.Dataset("my_dataset")
      extractor := ds.Table("my_table").ExtractorTo(gcsRef)
      extractor.DisableHeader = true
      // TODO: set other options on the Extractor.
      job, err := extractor.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      status, err := job.Wait(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status.Err() != nil {
          // TODO: Handle error.
      }
  }
 

func (*Table) FullyQualifiedName

  func (t *Table) FullyQualifiedName() string
 
 

FullyQualifiedName returns the ID of the table in projectID:datasetID.tableID format.

func (*Table) IAM

  func (t *Table) IAM() *iam.Handle
 
 

IAM provides access to an iam.Handle that allows access to IAM functionality for the given BigQuery table. For more information, see https://pkg.go.dev/cloud.google.com/go/iam
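
A minimal sketch of reading a table's IAM policy through the handle; ctx and client are assumed, and iam refers to cloud.google.com/go/iam:

  handle := client.Dataset("my_dataset").Table("my_table").IAM()
  policy, err := handle.Policy(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  _ = policy // TODO: inspect or modify the policy, then handle.SetPolicy(ctx, policy).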

func (*Table) Inserter

  func (t *Table) Inserter() *Inserter
 
 

Inserter returns an Inserter that can be used to append rows to t. The returned Inserter may optionally be further configured before its Put method is called.

To stream rows into a date-partitioned table at a particular date, add the $yyyymmdd suffix to the table name when constructing the Table.

Examples

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ins := client.Dataset("my_dataset").Table("my_table").Inserter()
      _ = ins // TODO: Use ins.
  }
 
options
  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      ins := client.Dataset("my_dataset").Table("my_table").Inserter()
      ins.SkipInvalidRows = true
      ins.IgnoreUnknownValues = true
      _ = ins // TODO: Use ins.
  }
 

func (*Table) LoaderFrom

  func (t *Table) LoaderFrom(src LoadSource) *Loader
 
 

LoaderFrom returns a Loader which can be used to load data into a BigQuery table. The returned Loader may optionally be further configured before its Run method is called. See GCSReference and ReaderSource for additional configuration options that affect loading.

Examples

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      gcsRef := bigquery.NewGCSReference("gs://my-bucket/my-object")
      gcsRef.AllowJaggedRows = true
      gcsRef.MaxBadRecords = 5
      gcsRef.Schema = schema
      // TODO: set other options on the GCSReference.
      ds := client.Dataset("my_dataset")
      loader := ds.Table("my_table").LoaderFrom(gcsRef)
      loader.CreateDisposition = bigquery.CreateNever
      // TODO: set other options on the Loader.
      job, err := loader.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      status, err := job.Wait(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status.Err() != nil {
          // TODO: Handle error.
      }
  }

  var schema bigquery.Schema
 
 
reader
  package main

  import (
      "context"
      "os"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      f, err := os.Open("data.csv")
      if err != nil {
          // TODO: Handle error.
      }
      rs := bigquery.NewReaderSource(f)
      rs.AllowJaggedRows = true
      rs.MaxBadRecords = 5
      rs.Schema = schema
      // TODO: set other options on the ReaderSource.
      ds := client.Dataset("my_dataset")
      loader := ds.Table("my_table").LoaderFrom(rs)
      loader.CreateDisposition = bigquery.CreateNever
      // TODO: set other options on the Loader.
      job, err := loader.Run(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      status, err := job.Wait(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      if status.Err() != nil {
          // TODO: Handle error.
      }
  }

  var schema bigquery.Schema
 
 
func (*Table) Metadata

  func (t *Table) Metadata(ctx context.Context) (md *TableMetadata, err error)
 

Metadata fetches the metadata for the table.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      md, err := client.Dataset("my_dataset").Table("my_table").Metadata(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(md)
  }
 

func (*Table) Read

  func (t *Table) Read(ctx context.Context) *RowIterator
 
 

Read fetches the contents of the table.

Example

  package main

  import (
      "context"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := client.Dataset("my_dataset").Table("my_table").Read(ctx)
      _ = it // TODO: iterate using Next or iterator.Pager.
  }
 

func (*Table) Update

  func (t *Table) Update(ctx context.Context, tm TableMetadataToUpdate, etag string) (md *TableMetadata, err error)

Update modifies specific Table metadata fields.

Examples

blindWrite
  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      t := client.Dataset("my_dataset").Table("my_table")
      tm, err := t.Update(ctx, bigquery.TableMetadataToUpdate{
          Description: "my favorite table",
      }, "")
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(tm)
  }
 
readModifyWrite
  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      t := client.Dataset("my_dataset").Table("my_table")
      md, err := t.Metadata(ctx)
      if err != nil {
          // TODO: Handle error.
      }
      md2, err := t.Update(ctx, bigquery.TableMetadataToUpdate{Name: "new " + md.Name}, md.ETag)
      if err != nil {
          // TODO: Handle error.
      }
      fmt.Println(md2)
  }
 

func (*Table) Uploader (deprecated)

  func (t *Table) Uploader() *Inserter
 
 

Uploader calls Inserter. Deprecated: use Table.Inserter instead.

TableCopyOperationType

  type TableCopyOperationType string
 
 

TableCopyOperationType is used to indicate the type of operation performed by a BigQuery copy job.
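
For example, a sketch of creating a table snapshot by setting the operation type on a Copier; ctx, client, and the table names are assumptions:

  ds := client.Dataset("my_dataset")
  copier := ds.Table("snapshot_table").CopierFrom(ds.Table("base_table"))
  copier.OperationType = bigquery.SnapshotOperation
  job, err := copier.Run(ctx)
  if err != nil {
      // TODO: Handle error.
  }
  _ = job // TODO: wait on the job and check its status.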

CopyOperation, SnapshotOperation, RestoreOperation

  var (
      // CopyOperation indicates normal table to table copying.
      CopyOperation TableCopyOperationType = "COPY"
      // SnapshotOperation indicates creating a snapshot from a regular table.
      SnapshotOperation TableCopyOperationType = "SNAPSHOT"
      // RestoreOperation indicates creating/restoring a table from a snapshot.
      RestoreOperation TableCopyOperationType = "RESTORE"
  )
 

TableCreateDisposition

  type TableCreateDisposition string
 
 

TableCreateDisposition specifies the circumstances under which a destination table will be created. Default is CreateIfNeeded.

CreateIfNeeded, CreateNever

  const (
      // CreateIfNeeded will create the table if it does not already exist.
      // Tables are created atomically on successful completion of a job.
      CreateIfNeeded TableCreateDisposition = "CREATE_IF_NEEDED"
      // CreateNever ensures the table must already exist and will not be
      // automatically created.
      CreateNever TableCreateDisposition = "CREATE_NEVER"
  )
 

TableIterator

  type TableIterator struct {
      // contains filtered or unexported fields
  }
 

A TableIterator is an iterator over Tables.

func (*TableIterator) Next

  func (it *TableIterator) Next() (*Table, error)
 

Next returns the next result. Its second return value is Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.

Example

  package main

  import (
      "context"
      "fmt"

      "cloud.google.com/go/bigquery"
      "google.golang.org/api/iterator"
  )

  func main() {
      ctx := context.Background()
      client, err := bigquery.NewClient(ctx, "project-id")
      if err != nil {
          // TODO: Handle error.
      }
      it := client.Dataset("my_dataset").Tables(ctx)
      for {
          t, err := it.Next()
          if err == iterator.Done {
              break
          }
          if err != nil {
              // TODO: Handle error.
          }
          fmt.Println(t)
      }
  }
 

func (*TableIterator) PageInfo

  func (it *TableIterator) PageInfo() *iterator.PageInfo
 
 

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

TableMetadata

  type TableMetadata struct {
      // The user-friendly name for the table.
      Name string

      // Output-only location of the table, based on the encapsulating dataset.
      Location string

      // The user-friendly description of the table.
      Description string

      // The table schema. If provided on create, ViewQuery must be empty.
      Schema Schema

      // If non-nil, this table is a materialized view.
      MaterializedView *MaterializedViewDefinition

      // The query to use for a logical view. If provided on create, Schema must be nil.
      ViewQuery string

      // Use Legacy SQL for the view query.
      // At most one of UseLegacySQL and UseStandardSQL can be true.
      UseLegacySQL bool

      // Use Standard SQL for the view query. The default.
      // At most one of UseLegacySQL and UseStandardSQL can be true.
      // Deprecated: use UseLegacySQL.
      UseStandardSQL bool

      // If non-nil, the table is partitioned by time. Only one of
      // time partitioning or range partitioning can be specified.
      TimePartitioning *TimePartitioning

      // If non-nil, the table is partitioned by integer range.  Only one of
      // time partitioning or range partitioning can be specified.
      RangePartitioning *RangePartitioning

      // If set to true, queries that reference this table must specify a
      // partition filter (e.g. a WHERE clause) that can be used to eliminate
      // partitions. Used to prevent unintentional full data scans on large
      // partitioned tables.
      RequirePartitionFilter bool

      // Clustering specifies the data clustering configuration for the table.
      Clustering *Clustering

      // The time when this table expires. If set, this table will expire at the
      // specified time. Expired tables will be deleted and their storage
      // reclaimed. The zero value is ignored.
      ExpirationTime time.Time

      // User-provided labels.
      Labels map[string]string

      // Information about a table stored outside of BigQuery.
      ExternalDataConfig *ExternalDataConfig

      // Custom encryption configuration (e.g., Cloud KMS keys).
      EncryptionConfig *EncryptionConfig

      FullID string // An opaque ID uniquely identifying the table.

      Type TableType

      CreationTime     time.Time
      LastModifiedTime time.Time

      // The size of the table in bytes.
      // This does not include data that is being buffered during a streaming insert.
      NumBytes int64

      // The number of bytes in the table considered "long-term storage" for reduced
      // billing purposes.  See https://cloud.google.com/bigquery/pricing#long-term-storage
      // for more information.
      NumLongTermBytes int64

      // The number of rows of data in this table.
      // This does not include data that is being buffered during a streaming insert.
      NumRows uint64

      // SnapshotDefinition contains additional information about the provenance of a
      // given snapshot table.
      SnapshotDefinition *SnapshotDefinition

      // Contains information regarding this table's streaming buffer, if one is
      // present. This field will be nil if the table is not being streamed to or if
      // there is no data in the streaming buffer.
      StreamingBuffer *StreamingBuffer

      // ETag is the ETag obtained when reading metadata. Pass it to Table.Update to
      // ensure that the metadata hasn't changed since it was read.
      ETag string
  }
 

TableMetadata contains information about a BigQuery table.

TableMetadataToUpdate

  type TableMetadataToUpdate struct {
      // The user-friendly description of this table.
      Description optional.String

      // The user-friendly name for this table.
      Name optional.String

      // The table's schema.
      // When updating a schema, you can add columns but not remove them.
      Schema Schema

      // The table's clustering configuration.
      // For more information on how modifying clustering affects the table, see:
      // https://cloud.google.com/bigquery/docs/creating-clustered-tables#modifying-cluster-spec
      Clustering *Clustering

      // The table's encryption configuration.
      EncryptionConfig *EncryptionConfig

      // The time when this table expires. To remove a table's expiration,
      // set ExpirationTime to NeverExpire. The zero value is ignored.
      ExpirationTime time.Time

      // The query to use for a view.
      ViewQuery optional.String

      // Use Legacy SQL for the view query.
      UseLegacySQL optional.Bool

      // MaterializedView allows changes to the underlying materialized view
      // definition. When calling Update, ensure that all mutable fields of
      // MaterializedViewDefinition are populated.
      MaterializedView *MaterializedViewDefinition

      // TimePartitioning allows modification of certain aspects of partition
      // configuration such as partition expiration and whether partition
      // filtration is required at query time.  When calling Update, ensure
      // that all mutable fields of TimePartitioning are populated.
      TimePartitioning *TimePartitioning

      // RequirePartitionFilter governs whether the table enforces partition
      // elimination when referenced in a query.
      RequirePartitionFilter optional.Bool

      // contains filtered or unexported fields
  }
 

TableMetadataToUpdate is used when updating a table's metadata. Only non-nil fields will be updated.

func (*TableMetadataToUpdate) DeleteLabel

  func (u *TableMetadataToUpdate) DeleteLabel(name string)
 

DeleteLabel causes a label to be deleted on a call to Update.

func (*TableMetadataToUpdate) SetLabel

  func (u *TableMetadataToUpdate) SetLabel(name, value string)
 

SetLabel causes a label to be added or modified on a call to Update.
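
A short sketch combining both label operations in a single update; ctx, client, and the label names are assumptions:

  t := client.Dataset("my_dataset").Table("my_table")
  update := bigquery.TableMetadataToUpdate{}
  update.SetLabel("env", "dev")
  update.DeleteLabel("obsolete-label")
  md, err := t.Update(ctx, update, "")
  if err != nil {
      // TODO: Handle error.
  }
  _ = md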

TableType

  type TableType string
 
 

TableType is the type of table.

RegularTable, ViewTable, ExternalTable, MaterializedView, Snapshot

  const (
      // RegularTable is a regular table.
      RegularTable TableType = "TABLE"
      // ViewTable is a table type describing that the table is a logical view.
      // See more information at https://cloud.google.com/bigquery/docs/views.
      ViewTable TableType = "VIEW"
      // ExternalTable is a table type describing that the table is an external
      // table (also known as a federated data source). See more information at
      // https://cloud.google.com/bigquery/external-data-sources.
      ExternalTable TableType = "EXTERNAL"
      // MaterializedView represents a managed storage table that's derived from
      // a base table.
      MaterializedView TableType = "MATERIALIZED_VIEW"
      // Snapshot represents an immutable point in time snapshot of some other
      // table.
      Snapshot TableType = "SNAPSHOT"
  )
 

TableWriteDisposition

  type TableWriteDisposition string
 
 

TableWriteDisposition specifies how existing data in a destination table is treated. Default is WriteAppend.

WriteAppend, WriteTruncate, WriteEmpty

    const (
        // WriteAppend will append to any existing data in the destination table.
        // Data is appended atomically on successful completion of a job.
        WriteAppend TableWriteDisposition = "WRITE_APPEND"
        // WriteTruncate overrides the existing data in the destination table.
        // Data is overwritten atomically on successful completion of a job.
        WriteTruncate TableWriteDisposition = "WRITE_TRUNCATE"
        // WriteEmpty fails writes if the destination table already contains data.
        WriteEmpty TableWriteDisposition = "WRITE_EMPTY"
    )
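
For example, here is a hedged sketch of directing query results into a destination table and overwriting its contents; "my_dataset" and "results" are placeholder names:

    q := client.Query("SELECT 17 AS answer")
    q.Dst = client.Dataset("my_dataset").Table("results")
    q.WriteDisposition = bigquery.WriteTruncate // replace any existing data
    job, err := q.Run(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    _ = job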
 

TimePartitioning

    type TimePartitioning struct {
        // Defines the partition interval type. Supported values are "HOUR", "DAY", "MONTH", and "YEAR".
        // When the interval type is not specified, the default is DAY.
        Type TimePartitioningType

        // The amount of time to keep the storage for a partition.
        // If the duration is empty (0), the data in the partitions does not expire.
        Expiration time.Duration

        // If empty, the table is partitioned by the pseudo column '_PARTITIONTIME'; if set, the
        // table is partitioned by this field. The field must be a top-level TIMESTAMP or
        // DATE field. Its mode must be NULLABLE or REQUIRED.
        Field string

        // If set to true, queries that reference this table must specify a
        // partition filter (e.g. a WHERE clause) that can be used to eliminate
        // partitions. Used to prevent unintentional full data scans on large
        // partitioned tables.
        // DEPRECATED: use the top-level RequirePartitionFilter in TableMetadata.
        RequirePartitionFilter bool
    }
 

TimePartitioning describes the time-based date partitioning on a table. For more information see: https://cloud.google.com/bigquery/docs/creating-partitioned-tables .
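
As an illustration, here is a minimal sketch of creating a day-partitioned table with a 90-day partition expiration; the dataset name, table name, and schema are assumptions made for this example:

    err := client.Dataset("my_dataset").Table("events").Create(ctx, &bigquery.TableMetadata{
        Schema: bigquery.Schema{
            {Name: "ts", Type: bigquery.TimestampFieldType, Required: true},
            {Name: "payload", Type: bigquery.StringFieldType},
        },
        TimePartitioning: &bigquery.TimePartitioning{
            Type:       bigquery.DayPartitioningType,
            Field:      "ts", // partition by this column instead of _PARTITIONTIME
            Expiration: 90 * 24 * time.Hour,
        },
    })
    if err != nil {
        // TODO: Handle error.
    }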

TimePartitioningType

    type TimePartitioningType string

TimePartitioningType defines the interval used to partition managed data.

DayPartitioningType, HourPartitioningType, MonthPartitioningType, YearPartitioningType

    const (
        // DayPartitioningType uses a day-based interval for time partitioning.
        DayPartitioningType TimePartitioningType = "DAY"
        // HourPartitioningType uses an hour-based interval for time partitioning.
        HourPartitioningType TimePartitioningType = "HOUR"
        // MonthPartitioningType uses a month-based interval for time partitioning.
        MonthPartitioningType TimePartitioningType = "MONTH"
        // YearPartitioningType uses a year-based interval for time partitioning.
        YearPartitioningType TimePartitioningType = "YEAR"
    )
 

TrainingRun

    type TrainingRun bq.TrainingRun

TrainingRun represents information about a single training run for a BigQuery ML model. Experimental: This information may be modified or removed in future versions of this package.

TransactionInfo

    type TransactionInfo struct {
        // TransactionID is the system-generated identifier for the transaction.
        TransactionID string
    }

TransactionInfo contains information about a multi-statement transaction that may be associated with a job.

Uploader

    type Uploader = Inserter

Uploader is an obsolete name for Inserter.

Value

    type Value interface{}

Value stores the contents of a single cell from a BigQuery result.

ValueLoader

    type ValueLoader interface {
        Load(v []Value, s Schema) error
    }

ValueLoader stores a slice of Values representing a result row from a Read operation. See RowIterator.Next for more information.
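
For instance, a type can implement ValueLoader to control how row values are decoded during iteration. This sketch assumes a query returning non-NULL year and num integer columns, as in the querying examples; nameCount is a hypothetical type, not part of the package:

    // nameCount is illustrative only.
    type nameCount struct {
        Year int64
        Num  int64
    }

    // Load implements bigquery.ValueLoader. Values arrive in the query's
    // column order; BigQuery INT64 columns decode as Go int64.
    func (n *nameCount) Load(v []bigquery.Value, s bigquery.Schema) error {
        n.Year = v[0].(int64)
        n.Num = v[1].(int64)
        return nil
    }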

ValueSaver

    type ValueSaver interface {
        // Save returns a row to be inserted into a BigQuery table, represented
        // as a map from field name to Value.
        // The insertID governs the best-effort deduplication feature of
        // BigQuery streaming inserts.
        //
        // If the insertID is empty, a random insertID will be generated by
        // this library to facilitate deduplication.
        //
        // If the insertID is set to the sentinel value NoDedupeID, an insertID
        // is not sent.
        //
        // For all other non-empty values, BigQuery will use the provided
        // value for best-effort deduplication.
        Save() (row map[string]Value, insertID string, err error)
    }
 

A ValueSaver returns a row of data to be inserted into a table.
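
Here is a minimal sketch of implementing ValueSaver; the item type and its column names are assumptions for the example, not part of the package:

    // item is illustrative only.
    type item struct {
        Name  string
        Count int
    }

    // Save implements bigquery.ValueSaver. Returning an empty insertID
    // lets the library generate one for best-effort deduplication.
    func (i *item) Save() (map[string]bigquery.Value, string, error) {
        return map[string]bigquery.Value{
            "name":  i.Name,
            "count": i.Count,
        }, "", nil
    }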

ValuesSaver

    type ValuesSaver struct {
        Schema Schema

        // InsertID governs the best-effort deduplication feature of
        // BigQuery streaming inserts.
        //
        // If the InsertID is empty, a random insertID will be generated by
        // this library to facilitate deduplication.
        //
        // If the InsertID is set to the sentinel value NoDedupeID, an insertID
        // is not sent.
        //
        // For all other non-empty values, BigQuery will use the provided
        // value for best-effort deduplication.
        InsertID string

        Row []Value
    }
 

ValuesSaver implements ValueSaver for a slice of Values.

func (*ValuesSaver) Save

    func (vls *ValuesSaver) Save() (map[string]Value, string, error)

Save implements ValueSaver.
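
To see where this fits, here is a hedged sketch of streaming rows through an Inserter using ValuesSaver; the dataset name, table name, and schema are placeholders:

    schema := bigquery.Schema{
        {Name: "name", Type: bigquery.StringFieldType},
        {Name: "num", Type: bigquery.IntegerFieldType},
    }
    ins := client.Dataset("my_dataset").Table("counts").Inserter()
    rows := []*bigquery.ValuesSaver{
        // Leaving InsertID empty lets the library generate one per row.
        {Schema: schema, Row: []bigquery.Value{"William", 61}},
        {Schema: schema, Row: []bigquery.Value{"Emma", 47}},
    }
    if err := ins.Put(ctx, rows); err != nil {
        // TODO: Handle error.
    }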
