Class QueryJob (3.18.0)

QueryJob(job_id, query, client, job_config=None)

Asynchronous job: query tables.

Parameters

Name
Description
job_id
str

The job's ID, within the project belonging to client.

query
str

SQL query string.

client
google.cloud.bigquery.client.Client

A client which holds credentials and project configuration for the dataset (which requires a project).

job_config
Optional[ google.cloud.bigquery.job.QueryJobConfig ]

Extra configuration options for the query job.

Properties

allow_large_results

billing_tier

Returns
Type
Description
Optional[int]
Billing tier used by the job, or None if job is not yet complete.

cache_hit

Return whether or not query results were served from cache.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobStatistics2.FIELDS.cache_hit

Returns
Type
Description
Optional[bool]
whether the query results were returned from cache, or None if job is not yet complete.

clustering_fields

configuration

The configuration for this query job.

connection_properties

See connection_properties .

.. versionadded:: 2.29.0

create_disposition

create_session

See create_session .

.. versionadded:: 2.29.0

created

Datetime at which the job was created.

Returns
Type
Description
Optional[datetime.datetime]
the creation time (None until set from the server).

ddl_operation_performed

ddl_target_routine

Optional[ google.cloud.bigquery.routine.RoutineReference ]: Return the DDL target routine, present for CREATE/DROP FUNCTION/PROCEDURE queries.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobStatistics2.FIELDS.ddl_target_routine

ddl_target_table

Optional[ google.cloud.bigquery.table.TableReference ]: Return the DDL target table, present for CREATE/DROP TABLE/VIEW queries.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobStatistics2.FIELDS.ddl_target_table

default_dataset

destination

See destination .

destination_encryption_configuration

google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom encryption configuration for the destination table.

Custom encryption configuration (e.g., Cloud KMS keys) or None if using default encryption.

See destination_encryption_configuration .

dry_run

See dry_run .

ended

Datetime at which the job finished.

Returns
Type
Description
Optional[datetime.datetime]
the end time (None until set from the server).

error_result

Error information about the job as a whole.

Returns
Type
Description
Optional[Mapping]
the error information (None until set from the server).

errors

Information about individual errors generated by the job.

Returns
Type
Description
Optional[List[Mapping]]
the error information (None until set from the server).

estimated_bytes_processed

Returns
Type
Description
Optional[int]
estimated number of bytes processed by the query, or None if job is not yet complete.

etag

ETag for the job resource.

Returns
Type
Description
Optional[str]
the ETag (None until set from the server).

flatten_results

job_id

str: ID of the job.

job_type

Type of job.

Returns
Type
Description
str
one of 'load', 'copy', 'extract', 'query'.

labels

Dict[str, str]: Labels for the job.

location

str: Location where the job runs.

maximum_billing_tier

maximum_bytes_billed

num_child_jobs

num_dml_affected_rows

Returns
Type
Description
Optional[int]
number of DML rows affected by the job, or None if job is not yet complete.

parent_job_id

Returns
Type
Description
Optional[str]
parent job id.

path

URL path for the job's APIs.

Returns
Type
Description
str
the path based on project and job ID.

priority

See priority .

project

Project bound to the job.

Returns
Type
Description
str
the project (derived from the client).

query

query_id

[Preview] ID of a completed query.

This ID is auto-generated and not guaranteed to be populated.

query_parameters

query_plan

Returns
Type
Description
List[google.cloud.bigquery.job.QueryPlanEntry]
mappings describing the query plan, or an empty list if the query has not yet completed.

range_partitioning

referenced_tables

Returns
Type
Description
List[Dict]
mappings describing the tables referenced by the query, or an empty list if the query has not yet completed.

reservation_usage

Job resource usage breakdown by reservation.

Returns
Type
Description
List[ReservationUsage]
Reservation usage stats. Can be empty if not set from the server.

schema

The schema of the results.

Present only for successful dry run of non-legacy SQL queries.

schema_update_options

script_statistics

Statistics for a child job of a script.

search_stats

Returns a SearchStats object.

self_link

URL for the job resource.

Returns
Type
Description
Optional[str]
the URL (None until set from the server).

session_info

[Preview] Information of the session if this job is part of one.

.. versionadded:: 2.29.0

slot_millis

Union[int, None]: Slot-milliseconds used by this query job.

started

Datetime at which the job was started.

Returns
Type
Description
Optional[datetime.datetime]
the start time (None until set from the server).

state

Status of the job.

Returns
Type
Description
Optional[str]
the state (None until set from the server).

statement_type

Returns
Type
Description
Optional[str]
type of statement used by the job, or None if job is not yet complete.

table_definitions

time_partitioning

timeline

List(TimelineEntry): Return the query execution timeline from job statistics.

total_bytes_billed

Returns
Type
Description
Optional[int]
Total bytes billed for the job, or None if job is not yet complete.

total_bytes_processed

Returns
Type
Description
Optional[int]
Total bytes processed by the job, or None if job is not yet complete.

transaction_info

Information of the multi-statement transaction if this job is part of one.

Since a scripting query job can execute multiple transactions, this property is only expected on child jobs. Use the list_jobs method with the parent_job parameter to iterate over child jobs.

.. versionadded:: 2.24.0

udf_resources

undeclared_query_parameters

Return undeclared query parameters from job statistics, if present.

See: https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#JobStatistics2.FIELDS.undeclared_query_parameters

Returns
Type
Description
Undeclared parameters, or an empty list if the query has not yet completed.

use_legacy_sql

use_query_cache

user_email

E-mail address of user who submitted the job.

Returns
Type
Description
Optional[str]
the e-mail address (None until set from the server).

write_disposition

Methods

add_done_callback

add_done_callback(fn)

Add a callback to be executed when the operation is complete.

If the operation is not already complete, this will start a helper thread to poll for the status of the operation in the background.

Parameter
Name
Description
fn
Callable[Future]

The callback to execute when the operation is complete.
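
QueryJob implements the google.api_core.future.Future interface, so callbacks follow the standard Future contract: the callback receives the completed future itself. A minimal sketch using the standard library's concurrent.futures as a stand-in for a real job (the lambda below is a hypothetical placeholder for client.query(...)):

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def on_done(future):
    # With a real QueryJob, future.result() returns the row iterator,
    # or raises if the query failed.
    results.append(future.result())

with ThreadPoolExecutor(max_workers=1) as pool:
    fut = pool.submit(lambda: 42)  # stand-in for client.query("SELECT ...")
    fut.add_done_callback(on_done)
# The pool shutdown waits for the task, so the callback has run by now.

print(results)  # [42]
```

The same on_done function would work unchanged on a QueryJob, since both objects expose the Future interface.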

cancel

cancel(client=None, retry: typing.Optional[google.api_core.retry.Retry] = <google.api_core.retry.retry_unary.Retry object>, timeout: typing.Optional[float] = None) -> bool

API call: cancel job via a POST request

See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/cancel

Parameters
Name
Description
retry
Optional[google.api_core.retry.Retry]

How to retry the RPC.

timeout
Optional[float]

The number of seconds to wait for the underlying HTTP transport before using retry

client
Optional[ google.cloud.bigquery.client.Client ]

the client to use. If not passed, falls back to the client stored on the current job.

Returns
Type
Description
bool
Boolean indicating that the cancel request was sent.

cancelled

cancelled()

Check if the job has been cancelled.

This always returns False. It's not possible to check if a job was cancelled in the API. This method is here to satisfy the interface for google.api_core.future.Future .

Returns
Type
Description
bool
False

done

done(retry: google.api_core.retry.Retry = <google.api_core.retry.retry_unary.Retry object>, timeout: typing.Optional[float] = None, reload: bool = True) -> bool

Checks if the job is complete.

Parameters
Name
Description
timeout
Optional[float]

The number of seconds to wait for the underlying HTTP transport before using retry .

reload
Optional[bool]

If True , make an API call to refresh the job state of unfinished jobs before checking. Default True .

retry
Optional[google.api_core.retry.Retry]

How to retry the RPC. If the job state is DONE , retrying is aborted early, as the job will not change anymore.

Returns
Type
Description
bool
True if the job is complete, False otherwise.
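
A sketch of the canonical poll loop built on done(). The StubJob class below is hypothetical, for illustration only; real code would obtain the job from client.query(...) and would usually just call job.result(), which blocks until completion:

```python
import time

# Stand-in for a real QueryJob: counts how often done() is polled and
# reports completion after a fixed number of polls.
class StubJob:
    def __init__(self, polls_needed):
        self.polls = 0
        self._needed = polls_needed

    def done(self):
        # QueryJob.done() refreshes job state with a GET request (unless
        # reload=False) and reports whether the job has finished.
        self.polls += 1
        return self.polls >= self._needed

job = StubJob(polls_needed=3)
while not job.done():
    time.sleep(0)  # a real loop would back off between polls

print(job.polls)  # 3
```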

exception

exception(timeout=<object object>)

Get the exception from the operation, blocking if necessary.

See the documentation for the result method for details on how this method operates, as both result and this method rely on the exact same polling logic. The only difference is that this method does not accept retry and polling arguments but relies on the default ones instead.

Parameter
Name
Description
timeout
int

How long to wait for the operation to complete.

Returns
Type
Description
Optional[google.api_core.GoogleAPICallError]
The operation's error.
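
Because exception() returns the error rather than raising it, failures can be inspected without a try/except. A sketch with standard-library Futures standing in for jobs (QueryJob implements the same interface):

```python
from concurrent.futures import ThreadPoolExecutor

def boom():
    # Stand-in for a query that fails server-side.
    raise ValueError("query failed")

with ThreadPoolExecutor(max_workers=1) as pool:
    ok = pool.submit(lambda: 1)  # stand-in for a successful job
    bad = pool.submit(boom)      # stand-in for a failed job

print(ok.exception())                  # None for a successful job
print(type(bad.exception()).__name__)  # the error object, returned not raised
```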

exists

exists(client=None, retry: google.api_core.retry.Retry = <google.api_core.retry.retry_unary.Retry object>, timeout: typing.Optional[float] = None) -> bool

API call: test for the existence of the job via a GET request

See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get

Parameters
Name
Description
timeout
Optional[float]

The number of seconds to wait for the underlying HTTP transport before using retry .

client
Optional[ google.cloud.bigquery.client.Client ]

the client to use. If not passed, falls back to the client stored on the current job.

retry
Optional[google.api_core.retry.Retry]

How to retry the RPC.

Returns
Type
Description
bool
Boolean indicating existence of the job.

from_api_repr

from_api_repr(resource: dict, client: Client) -> QueryJob

Factory: construct a job given its API representation

Parameters
Name
Description
resource
Dict

job representation returned from the API

client
google.cloud.bigquery.client.Client

Client which holds credentials and project configuration for the dataset.

Returns
Type
Description
google.cloud.bigquery.job.QueryJob
Job parsed from resource .

reload

reload(client=None, retry: google.api_core.retry.Retry = <google.api_core.retry.retry_unary.Retry object>, timeout: typing.Optional[float] = None)

API call: refresh job properties via a GET request.

See https://cloud.google.com/bigquery/docs/reference/rest/v2/jobs/get

Parameters
Name
Description
timeout
Optional[float]

The number of seconds to wait for the underlying HTTP transport before using retry .

client
Optional[ google.cloud.bigquery.client.Client ]

the client to use. If not passed, falls back to the client stored on the current job.

retry
Optional[google.api_core.retry.Retry]

How to retry the RPC.

result

result(page_size: typing.Optional[int] = None, max_results: typing.Optional[int] = None, retry: typing.Optional[google.api_core.retry.Retry] = <google.api_core.retry.retry_unary.Retry object>, timeout: typing.Optional[float] = None, start_index: typing.Optional[int] = None, job_retry: typing.Optional[google.api_core.retry.Retry] = <google.api_core.retry.retry_unary.Retry object>) -> typing.Union[RowIterator, google.cloud.bigquery.table._EmptyRowIterator]

Start the job, wait for it to complete, and return the result.

Parameters
Name
Description
page_size
Optional[int]

The maximum number of rows in each page of results from this request. Non-positive values are ignored.

max_results
Optional[int]

The maximum total number of rows from this request.

retry
Optional[google.api_core.retry.Retry]

How to retry the call that retrieves rows. This only applies to making RPC calls. It isn't used to retry failed jobs. This has a reasonable default that should only be overridden with care. If the job state is DONE , retrying is aborted early even if the results are not available, as this will not change anymore.

timeout
Optional[float]

The number of seconds to wait for the underlying HTTP transport before using retry . If multiple requests are made under the hood, timeout applies to each individual request.

start_index
Optional[int]

The zero-based index of the starting row to read.

job_retry
Optional[google.api_core.retry.Retry]

How to retry failed jobs. The default retries rate-limit-exceeded errors. Passing None disables job retry. Not all jobs can be retried. If a job_id was provided to the query that created this job, the returned job is not retryable, and an exception is raised if a non-None, non-default job_retry is also provided.

Exceptions
Type
Description
google.cloud.exceptions.GoogleAPICallError
If the job failed and retries aren't successful.
concurrent.futures.TimeoutError
If the job did not complete in the given timeout.
TypeError
If a non-None, non-default job_retry is provided and the job is not retryable.
Returns
Type
Description
Iterator of Row data. During each page, the iterator will have the total_rows attribute set, which counts the total number of rows in the result set (distinct from the number of rows in the current page: iterator.page.num_items). If the query is a special query that produces no results, e.g. a DDL query, an _EmptyRowIterator instance is returned.
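
Typical consumption of the iterator that result() returns can be sketched as follows. The StubRowIterator class is hypothetical and only mimics the two features used below (iteration and total_rows); real code would iterate the RowIterator from client.query("SELECT ...").result():

```python
# Minimal stand-in for the RowIterator returned by QueryJob.result().
class StubRowIterator:
    def __init__(self, rows):
        self._rows = rows
        self.total_rows = len(rows)  # total rows in the result set

    def __iter__(self):
        return iter(self._rows)

rows = StubRowIterator([{"name": "a", "count": 1},
                        {"name": "b", "count": 2}])

# Iterate rows exactly as you would a real RowIterator; real Row objects
# also support field access by name or index.
total = sum(row["count"] for row in rows)
print(rows.total_rows, total)  # 2 3
```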

running

running()

True if the operation is currently running.

set_exception

set_exception(exception)
 

Set the Future's exception.

set_result

set_result(result)
 

Set the Future's result.

to_api_repr

to_api_repr()

Generate a resource for _begin .

to_arrow

to_arrow(progress_bar_type: typing.Optional[str] = None, bqstorage_client: typing.Optional[bigquery_storage.BigQueryReadClient] = None, create_bqstorage_client: bool = True, max_results: typing.Optional[int] = None) -> pyarrow.Table

[Beta] Create a pyarrow.Table by loading all pages of a table or query.

Parameters
Name
Description
progress_bar_type
Optional[str]

If set, use the tqdm (https://tqdm.github.io/) library to display a progress bar while the data downloads. Install the tqdm package to use this feature. Possible values of progress_bar_type include: None (no progress bar), 'tqdm' (use the tqdm.tqdm function to print a progress bar to sys.stdout), 'tqdm_notebook' (use the tqdm.notebook.tqdm function to display a progress bar as a Jupyter notebook widget), and 'tqdm_gui' (use the tqdm.tqdm_gui function to display a progress bar as a graphical dialog box).

bqstorage_client
Optional[google.cloud.bigquery_storage_v1.BigQueryReadClient]

A BigQuery Storage API client. If supplied, use the faster BigQuery Storage API to fetch rows from BigQuery. This API is a billable API. This method requires google-cloud-bigquery-storage library. Reading from a specific partition or snapshot is not currently supported by this method.

create_bqstorage_client
Optional[bool]

If True (default), create a BigQuery Storage API client using the default API settings. The BigQuery Storage API is a faster way to fetch rows from BigQuery. See the bqstorage_client parameter for more information. This argument does nothing if bqstorage_client is supplied. .. versionadded:: 1.24.0

max_results
Optional[int]

Maximum number of rows to include in the result. No limit by default. .. versionadded:: 2.21.0

Exceptions
Type
Description
ValueError
If the pyarrow library cannot be imported. .. versionadded:: 1.17.0

to_dataframe

to_dataframe(bqstorage_client: typing.Optional[bigquery_storage.BigQueryReadClient] = None, dtypes: typing.Optional[typing.Dict[str, typing.Any]] = None, progress_bar_type: typing.Optional[str] = None, create_bqstorage_client: bool = True, max_results: typing.Optional[int] = None, geography_as_object: bool = False, bool_dtype: typing.Optional[typing.Any] = DefaultPandasDTypes.BOOL_DTYPE, int_dtype: typing.Optional[typing.Any] = DefaultPandasDTypes.INT_DTYPE, float_dtype: typing.Optional[typing.Any] = None, string_dtype: typing.Optional[typing.Any] = None, date_dtype: typing.Optional[typing.Any] = DefaultPandasDTypes.DATE_DTYPE, datetime_dtype: typing.Optional[typing.Any] = None, time_dtype: typing.Optional[typing.Any] = DefaultPandasDTypes.TIME_DTYPE, timestamp_dtype: typing.Optional[typing.Any] = None) -> pandas.DataFrame

Return a pandas DataFrame from a QueryJob

Parameters
Name
Description
bqstorage_client
Optional[google.cloud.bigquery_storage_v1.BigQueryReadClient]

A BigQuery Storage API client. If supplied, use the faster BigQuery Storage API to fetch rows from BigQuery. This API is a billable API. This method requires the fastavro and google-cloud-bigquery-storage libraries. Reading from a specific partition or snapshot is not currently supported by this method.

dtypes
Optional[Map[str, Union[str, pandas.Series.dtype]]]

A dictionary mapping column names to pandas dtypes. The provided dtype is used when constructing the series for the column specified. Otherwise, the default pandas behavior is used.

progress_bar_type
Optional[str]

If set, use the tqdm (https://tqdm.github.io/) library to display a progress bar while the data downloads. Install the tqdm package to use this feature. See to_dataframe for details. .. versionadded:: 1.11.0

create_bqstorage_client
Optional[bool]

If True (default), create a BigQuery Storage API client using the default API settings. The BigQuery Storage API is a faster way to fetch rows from BigQuery. See the bqstorage_client parameter for more information. This argument does nothing if bqstorage_client is supplied. .. versionadded:: 1.24.0

max_results
Optional[int]

Maximum number of rows to include in the result. No limit by default. .. versionadded:: 2.21.0

geography_as_object
Optional[bool]

If True , convert GEOGRAPHY data to shapely geometry objects. If False (default), don't cast geography data to shapely geometry objects. .. versionadded:: 2.24.0

bool_dtype
Optional[pandas.Series.dtype, None]

If set, indicate a pandas ExtensionDtype (e.g. pandas.BooleanDtype() ) to convert BigQuery Boolean type, instead of relying on the default pandas.BooleanDtype() . If you explicitly set the value to None , then the data type will be numpy.dtype("bool") . BigQuery Boolean type can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#boolean_type .. versionadded:: 3.8.0

int_dtype
Optional[pandas.Series.dtype, None]

If set, indicate a pandas ExtensionDtype (e.g. pandas.Int64Dtype() ) to convert BigQuery Integer types, instead of relying on the default pandas.Int64Dtype() . If you explicitly set the value to None , then the data type will be numpy.dtype("int64") . A list of BigQuery Integer types can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#integer_types .. versionadded:: 3.8.0

float_dtype
Optional[pandas.Series.dtype, None]

If set, indicate a pandas ExtensionDtype (e.g. pandas.Float32Dtype() ) to convert BigQuery Float type, instead of relying on the default numpy.dtype("float64") . If you explicitly set the value to None , then the data type will be numpy.dtype("float64") . BigQuery Float type can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#floating_point_types .. versionadded:: 3.8.0

string_dtype
Optional[pandas.Series.dtype, None]

If set, indicate a pandas ExtensionDtype (e.g. pandas.StringDtype() ) to convert BigQuery String type, instead of relying on the default numpy.dtype("object") . If you explicitly set the value to None , then the data type will be numpy.dtype("object") . BigQuery String type can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#string_type .. versionadded:: 3.8.0

date_dtype
Optional[pandas.Series.dtype, None]

If set, indicate a pandas ExtensionDtype (e.g. pandas.ArrowDtype(pyarrow.date32()) ) to convert BigQuery Date type, instead of relying on the default db_dtypes.DateDtype() . If you explicitly set the value to None , then the data type will be numpy.dtype("datetime64[ns]") or object if out of bound. BigQuery Date type can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#date_type .. versionadded:: 3.10.0

datetime_dtype
Optional[pandas.Series.dtype, None]

If set, indicate a pandas ExtensionDtype (e.g. pandas.ArrowDtype(pyarrow.timestamp("us")) ) to convert BigQuery Datetime type, instead of relying on the default numpy.dtype("datetime64[ns]") . If you explicitly set the value to None , then the data type will be numpy.dtype("datetime64[ns]") or object if out of bound. BigQuery Datetime type can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#datetime_type .. versionadded:: 3.10.0

time_dtype
Optional[pandas.Series.dtype, None]

If set, indicate a pandas ExtensionDtype (e.g. pandas.ArrowDtype(pyarrow.time64("us")) ) to convert BigQuery Time type, instead of relying on the default db_dtypes.TimeDtype() . If you explicitly set the value to None , then the data type will be numpy.dtype("object") . BigQuery Time type can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#time_type .. versionadded:: 3.10.0

timestamp_dtype
Optional[pandas.Series.dtype, None]

If set, indicate a pandas ExtensionDtype (e.g. pandas.ArrowDtype(pyarrow.timestamp("us", tz="UTC")) ) to convert BigQuery Timestamp type, instead of relying on the default numpy.dtype("datetime64[ns, UTC]") . If you explicitly set the value to None , then the data type will be numpy.dtype("datetime64[ns, UTC]") or object if out of bound. BigQuery Timestamp type can be found at: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#timestamp_type .. versionadded:: 3.10.0

Exceptions
Type
Description
ValueError
If the pandas library cannot be imported, or the bigquery_storage_v1 module is required but cannot be imported. Also if geography_as_object is True , but the shapely library cannot be imported.
Returns
Type
Description
pandas.DataFrame
A pandas.DataFrame populated with row data and column headers from the query results. The column headers are derived from the destination table's schema.

to_geodataframe

to_geodataframe(bqstorage_client: typing.Optional[bigquery_storage.BigQueryReadClient] = None, dtypes: typing.Optional[typing.Dict[str, typing.Any]] = None, progress_bar_type: typing.Optional[str] = None, create_bqstorage_client: bool = True, max_results: typing.Optional[int] = None, geography_column: typing.Optional[str] = None) -> geopandas.GeoDataFrame

Return a GeoPandas GeoDataFrame from a QueryJob

Parameters
Name
Description
bqstorage_client
Optional[google.cloud.bigquery_storage_v1.BigQueryReadClient]

A BigQuery Storage API client. If supplied, use the faster BigQuery Storage API to fetch rows from BigQuery. This API is a billable API. This method requires the fastavro and google-cloud-bigquery-storage libraries. Reading from a specific partition or snapshot is not currently supported by this method.

dtypes
Optional[Map[str, Union[str, pandas.Series.dtype]]]

A dictionary mapping column names to pandas dtypes. The provided dtype is used when constructing the series for the column specified. Otherwise, the default pandas behavior is used.

progress_bar_type
Optional[str]

If set, use the tqdm (https://tqdm.github.io/) library to display a progress bar while the data downloads. Install the tqdm package to use this feature. See to_dataframe for details. .. versionadded:: 1.11.0

create_bqstorage_client
Optional[bool]

If True (default), create a BigQuery Storage API client using the default API settings. The BigQuery Storage API is a faster way to fetch rows from BigQuery. See the bqstorage_client parameter for more information. This argument does nothing if bqstorage_client is supplied. .. versionadded:: 1.24.0

max_results
Optional[int]

Maximum number of rows to include in the result. No limit by default. .. versionadded:: 2.21.0

geography_column
Optional[str]

If there is more than one GEOGRAPHY column, identifies which one to use to construct the GeoPandas GeoDataFrame. This option can be omitted if there's only one GEOGRAPHY column.

Exceptions
Type
Description
ValueError
If the geopandas library cannot be imported, or the bigquery_storage_v1 module is required but cannot be imported. .. versionadded:: 2.24.0
Returns
Type
Description
geopandas.GeoDataFrame
A geopandas.GeoDataFrame populated with row data and column headers from the query results. The column headers are derived from the destination table's schema.