use Google\Cloud\BigQuery\BigQueryClient;
$bigQuery = new BigQueryClient();
$query = $bigQuery->query('SELECT commit FROM `bigquery-public-data.github_repos.commits` LIMIT 100');
destinationTable() sets the table where the query results should be stored. If not set, a
new table is created to store the results. This property must be set
for large results that exceed the maximum response size.
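Example (the dataset and table names below are placeholders):

$table = $bigQuery->dataset('my_dataset')
    ->table('my_table');
$query->destinationTable($table);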
maximumBillingTier() sets the billing tier limit for this job. Queries whose resource
usage exceeds this tier will fail (without incurring a charge). If
unspecified, this is set to your project default.
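Example (an illustrative tier):

$query->maximumBillingTier(1);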
maximumBytesBilled() sets a bytes-billed limit for this job. Queries that would bill
more bytes than this limit will fail (without incurring a charge). If
unspecified, this is set to your project default.
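Example (an illustrative limit of 3000 bytes):

$query->maximumBytesBilled(3000);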
parameters() sets the parameters to use on the query. When a non-associative array is
provided, positional parameters (?) are used. When an associative array is provided,
named parameters (@name) are used.
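For example (a sketch; the column and parameter values are illustrative):

// Positional parameters: values bind in order to each "?" placeholder.
$positional = $bigQuery->query(
    'SELECT subject FROM `bigquery-public-data.github_repos.commits` WHERE committer.name = ? LIMIT 10'
)->parameters(['John']);

// Named parameters: keys bind to matching "@name" placeholders.
$named = $bigQuery->query(
    'SELECT subject FROM `bigquery-public-data.github_repos.commits` WHERE committer.name = @name LIMIT 10'
)->parameters(['name' => 'John']);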
setParamTypes() sets the parameter types for positional parameters.
Note that this matters most when an empty array may be passed as a positional
parameter, since the data type of the array contents cannot be guessed from an
empty array. For example:
$queryStr = 'SELECT * FROM `bigquery-public-data.github_repos.commits` ' .
    'WHERE author.time_sec IN UNNEST (?) AND message IN UNNEST (?) AND committer.name = ? LIMIT 10';

$queryJobConfig = $bigQuery->query($queryStr)
    ->parameters([[], ["abc", "def"], "John"])
    ->setParamTypes(['INT64']);
In the example above, the first (empty) array is given a type of INT64,
while the second array has a type of STRING, inferred from its contents
(even though its type is not supplied).
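For named parameters, the types are keyed by parameter name. A sketch, assuming a
query string ($namedQueryStr) that uses @times, @messages, and @name placeholders:

$namedQueryStr = 'SELECT * FROM `bigquery-public-data.github_repos.commits` ' .
    'WHERE author.time_sec IN UNNEST (@times) AND message IN UNNEST (@messages) AND committer.name = @name LIMIT 10';

$queryJobConfig = $bigQuery->query($namedQueryStr)
    ->parameters(['times' => [], 'messages' => ["abc", "def"], 'name' => "John"])
    ->setParamTypes(['times' => 'INT64']);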
schemaUpdateOptions() sets options that allow the schema of the destination table to be
updated as a side effect of the query job. Schema update options are supported
in two cases: when writeDisposition is "WRITE_APPEND", and when
writeDisposition is "WRITE_TRUNCATE" and the destination table is a
partition of a table, specified by partition decorators. For normal
tables, "WRITE_TRUNCATE" will always overwrite the schema.
Acceptable schema update options include "ALLOW_FIELD_ADDITION" (allow adding a
nullable field to the schema) and "ALLOW_FIELD_RELAXATION" (allow relaxing
a required field in the original schema to nullable).
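Example (allowing a nullable field to be added to the destination schema):

$query->schemaUpdateOptions([
    'ALLOW_FIELD_ADDITION'
]);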
tableDefinitions() sets table definitions for querying an external data source outside of
BigQuery. Each definition describes the data format, location, and other properties of the
data source.
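Example (the Cloud Storage source URI below is a placeholder):

$query->tableDefinitions([
    'autodetect' => true,
    'sourceUris' => [
        'gs://my_bucket/table.json'
    ]
]);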
writeDisposition() sets the action that occurs if the destination table already exists. Each
action is atomic and only occurs if BigQuery is able to complete the job
successfully. Creation, truncation, and append actions occur as one atomic
update upon job completion.
Example:
$query->writeDisposition('WRITE_TRUNCATE');
Parameter
Name: writeDisposition (string)
Description: The write disposition. Acceptable values include "WRITE_TRUNCATE", "WRITE_APPEND", and "WRITE_EMPTY". Defaults to "WRITE_EMPTY".
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-08-28 UTC."],[],[],null,["# BigQuery Client - Class QueryJobConfiguration (1.34.0)\n\nVersion latestkeyboard_arrow_down\n\n- [1.34.0 (latest)](/php/docs/reference/cloud-bigquery/latest/QueryJobConfiguration)\n- [1.33.1](/php/docs/reference/cloud-bigquery/1.33.1/QueryJobConfiguration)\n- [1.32.0](/php/docs/reference/cloud-bigquery/1.32.0/QueryJobConfiguration)\n- [1.31.1](/php/docs/reference/cloud-bigquery/1.31.1/QueryJobConfiguration)\n- [1.30.3](/php/docs/reference/cloud-bigquery/1.30.3/QueryJobConfiguration)\n- [1.29.0](/php/docs/reference/cloud-bigquery/1.29.0/QueryJobConfiguration)\n- [1.28.3](/php/docs/reference/cloud-bigquery/1.28.3/QueryJobConfiguration)\n- [1.27.0](/php/docs/reference/cloud-bigquery/1.27.0/QueryJobConfiguration)\n- [1.26.1](/php/docs/reference/cloud-bigquery/1.26.1/QueryJobConfiguration)\n- [1.25.1](/php/docs/reference/cloud-bigquery/1.25.1/QueryJobConfiguration)\n- [1.24.2](/php/docs/reference/cloud-bigquery/1.24.2/QueryJobConfiguration)\n- [1.23.10](/php/docs/reference/cloud-bigquery/1.23.10/QueryJobConfiguration) \nReference documentation and code samples for the BigQuery Client class QueryJobConfiguration.\n\nRepresents a configuration for a query job. For more information on the\navailable settings please see the\n[Jobs configuration API documentation](https://cloud.google.com/bigquery/docs/reference/rest/v2/Job).\n\nExample: \n\n use Google\\Cloud\\BigQuery\\BigQueryClient;\n\n $bigQuery = new BigQueryClient();\n $query = $bigQuery-\u003equery('SELECT commit FROM `bigquery-public-data.github_repos.commits` LIMIT 100');\n\nNamespace\n---------\n\nGoogle \\\\ Cloud \\\\ BigQuery\n\nMethods\n-------\n\n### __construct\n\n### allowLargeResults\n\nSets whether or not the query can produce arbitrarily large result\ntables at a slight cost in performance.\n\nOnly applies to queries performed with legacy SQL dialect and requires a\n[QueryJobConfiguration::destinationTable()](/php/docs/reference/cloud-bigquery/latest/QueryJobConfiguration#_Google_Cloud_BigQuery_QueryJobConfiguration__destinationTable__) to\nbe set.\n\nExample: \n\n $query-\u003eallowLargeResults(true);\n\n### clustering\n\nSee also:\n\n- [Introduction to Clustered Tables](https://cloud.google.com/bigquery/docs/clustered-tables)\n\n### createDisposition\n\nSets whether the job is allowed to create new tables.\n\nExample: \n\n $query-\u003ecreateDisposition('CREATE_NEVER');\n\n### defaultDataset\n\nSets the default dataset to use for unqualified table names in the query.\n\nExample: \n\n $dataset = $bigQuery-\u003edataset('my_dataset');\n $query-\u003edefaultDataset($dataset);\n\n### destinationEncryptionConfiguration\n\nSets the custom encryption configuration (e.g., Cloud KMS keys).\n\nExample: \n\n $query-\u003edestinationEncryptionConfiguration([\n 'kmsKeyName' =\u003e 'my_key'\n ]);\n\n### destinationTable\n\nSets the table where the query results should be stored. If not set, a\nnew table will be created to store the results. 
This property must be set\nfor large results that exceed the maximum response size.\n\nExample: \n\n $table = $bigQuery-\u003edataset('my_dataset')\n -\u003etable('my_table');\n $query-\u003edestinationTable($table);\n\n### flattenResults\n\nSets whether or not to flatten all nested and repeated fields in the\nquery results.\n\nOnly applies to queries performed with legacy SQL dialect.\n[QueryJobConfiguration::allowLargeResults()](/php/docs/reference/cloud-bigquery/latest/QueryJobConfiguration#_Google_Cloud_BigQuery_QueryJobConfiguration__allowLargeResults__) must be true if this\nis set to false.\n\nExample: \n\n $query-\u003euseLegacySql(true)\n -\u003eflattenResults(true);\n\n### maximumBillingTier\n\nSets the billing tier limit for this job. Queries that have resource\nusage beyond this tier will fail (without incurring a charge). If\nunspecified, this will be set to your project default.\n\nExample: \n\n $query-\u003emaximumBillingTier(1);\n\n### maximumBytesBilled\n\nSets a bytes billed limit for this job. Queries that will have bytes\nbilled beyond this limit will fail (without incurring a charge). If\nunspecified, this will be set to your project default.\n\nExample: \n\n $query-\u003emaximumBytesBilled(3000);\n\n### parameters\n\nSets parameters to be used on the query. Only available for standard SQL\nqueries.\n\nFor examples of usage please see\n[BigQueryClient::runQuery()](/php/docs/reference/cloud-bigquery/latest/BigQueryClient#_Google_Cloud_BigQuery_BigQueryClient__runQuery__).\n\n### setParamTypes\n\nSets the parameter types for positional parameters.\n\nNote, that this is of high importance when an empty array can be passed as\na positional parameter, as we have no way of guessing the data type of the\narray contents. \n\n $queryStr = 'SELECT * FROM `bigquery-public-data.github_repos.commits` ' .\n 'WHERE author.time_sec IN UNNEST (?) AND message IN UNNEST (?) AND committer.name = ? LIMIT 10';\n\n $queryJobConfig = $bigQuery-\u003equery(\"\")\n -\u003eparameters([[], [\"abc\", \"def\"], \"John\"])\n -\u003esetParamTypes(['INT64']);\n\nIn the above example, the first array will have a type of INT64\nwhile the next one will have a type of\nSTRING (even though the second array type is not supplied).\n\nFor named params, we can simply call: \n\n $queryJobConfig = $bigQuery-\u003equery(\"\")\n -\u003eparameters(['times' =\u003e [], 'messages' =\u003e [\"abc\", \"def\"]])\n -\u003esetParamTypes(['times' =\u003e 'INT64']);\n\n### priority\n\nSets a priority for the query.\n\nExample: \n\n $query-\u003epriority('BATCH');\n\n### query\n\nSets the SQL query.\n\nExample: \n\n $query-\u003equery(\n 'SELECT commit FROM `bigquery-public-data.github_repos.commits` LIMIT 100'\n );\n\n### schemaUpdateOptions\n\nSets options to allow the schema of the destination table to be updated\nas a side effect of the query job. Schema update options are supported\nin two cases: when writeDisposition is `\"WRITE_APPEND\"`; when\nwriteDisposition is `\"WRITE_TRUNCATE\"` and the destination table is a\npartition of a table, specified by partition decorators. For normal\ntables, `\"WRITE_TRUNCATE\"` will always overwrite the schema.\n\nExample: \n\n $query-\u003eschemaUpdateOptions([\n 'ALLOW_FIELD_ADDITION'\n ]);\n\n### tableDefinitions\n\nSets table definitions for querying an external data source outside of\nBigQuery. 
Describes the data format, location and other properties of the\ndata source.\n\nExample: \n\n $query-\u003etableDefinitions([\n 'autodetect' =\u003e true,\n 'sourceUris' =\u003e [\n 'gs://my_bucket/table.json'\n ]\n ]);\n\n### timePartitioning\n\nSets time-based partitioning for the destination table.\n\nOnly one of timePartitioning and rangePartitioning should be specified.\n\nExample: \n\n $query-\u003etimePartitioning([\n 'type' =\u003e 'DAY'\n ]);\n\n### rangePartitioning\n\nSets range partitioning specification for the destination table.\n\nOnly one of timePartitioning and rangePartitioning should be specified.\n\nExample: \n\n $query-\u003erangePartitioning([\n 'field' =\u003e 'myInt',\n 'range' =\u003e [\n 'start' =\u003e '0',\n 'end' =\u003e '1000',\n 'interval' =\u003e '100'\n ]\n ]);\n\n### useLegacySql\n\nSets whether or not to use legacy SQL dialect. When not set, defaults to\nfalse in this client.\n\nExample: \n\n $query-\u003euseLegacySql(true);\n\n### useQueryCache\n\nSee also:\n\n- [Using cached results](https://cloud.google.com/bigquery/docs/cached-results)\n\n### userDefinedFunctionResources\n\nSets user-defined function resources used in the query.\n\nExample: \n\n $query-\u003euserDefinedFunctionResources([\n ['resourceUri' =\u003e 'gs://my_bucket/code_path']\n ]);\n\n### writeDisposition\n\nSets the action that occurs if the destination table already exists. Each\naction is atomic and only occurs if BigQuery is able to complete the job\nsuccessfully. Creation, truncation and append actions occur as one atomic\nupdate upon job completion.\n\nExample: \n\n $query-\u003ewriteDisposition('WRITE_TRUNCATE');"]]