Dataset(dataset_ref)

Datasets are containers for tables.

See https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#resource-dataset

Parameter

dataset_ref: Union[google.cloud.bigquery.dataset.DatasetReference, str]
    A pointer to a dataset. If dataset_ref is a string, it must include both the project ID and the dataset ID, separated by a dot (.).
Properties
access_entries

List[google.cloud.bigquery.dataset.AccessEntry]: The dataset's access entries.

role augments the entity type and must be present unless the entity type is view or routine.

Raises: TypeError, ValueError
created

Union[datetime.datetime, None]: Datetime at which the dataset was created (None until set from the server).
dataset_id
str: Dataset ID.
default_encryption_configuration

google.cloud.bigquery.encryption_configuration.EncryptionConfiguration: Custom encryption configuration for all tables in the dataset.

Custom encryption configuration (e.g., Cloud KMS keys) or None if using default encryption.

See protecting data with Cloud KMS keys (https://cloud.google.com/bigquery/docs/customer-managed-encryption) in the BigQuery documentation.
default_partition_expiration_ms

Optional[int]: The default partition expiration for all partitioned tables in the dataset, in milliseconds.

Once this property is set, all newly created partitioned tables in the dataset will have a time_partitioning.expiration_ms property set to this value, and changing the value will only affect new tables, not existing ones. The storage in a partition will have an expiration time of its partition time plus this value.

Setting this property overrides the use of default_table_expiration_ms for partitioned tables: only one of default_table_expiration_ms and default_partition_expiration_ms will be used for any new partitioned table. If you provide an explicit time_partitioning.expiration_ms when creating or updating a partitioned table, that value takes precedence over the default partition expiration time indicated by this property.
default_rounding_mode

Union[str, None]: defaultRoundingMode of the dataset as set by the user (defaults to None).

Set the value to one of 'ROUND_HALF_AWAY_FROM_ZERO', 'ROUND_HALF_EVEN', or 'ROUNDING_MODE_UNSPECIFIED'.

See default rounding mode (https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#Dataset.FIELDS.default_rounding_mode) in the REST API docs and the updating the default rounding mode (https://cloud.google.com/bigquery/docs/updating-datasets#update_rounding_mode) guide.

Raises: ValueError
default_table_expiration_ms

Union[int, None]: Default expiration time for tables in the dataset (defaults to None).

Raises: ValueError
description

Optional[str]: Description of the dataset as set by the user (defaults to None).

Raises: ValueError
etag

Union[str, None]: ETag for the dataset resource (None until set from the server).
friendly_name

Union[str, None]: Title of the dataset as set by the user (defaults to None).

Raises: ValueError
full_dataset_id

Union[str, None]: ID for the dataset resource (None until set from the server).

In the format project_id:dataset_id.
is_case_insensitive

Optional[bool]: True if the dataset and its table names are case-insensitive, otherwise False. By default, this is False, which means the dataset and its table names are case-sensitive. This field does not affect routine references.

Raises: ValueError
labels

Dict[str, str]: Labels for the dataset.

This property always returns a dict. To change a dataset's labels, modify the dict, then call Client.update_dataset. To delete a label, set its value to None before updating.

Raises: ValueError
location

Union[str, None]: Location in which the dataset is hosted as set by the user (defaults to None).

Raises: ValueError
max_time_travel_hours

Optional[int]: Defines the time travel window in hours. The value can be from 48 to 168 hours (2 to 7 days), in multiples of 24 hours (48, 72, 96, 120, 144, 168). The default value is 168 hours if this is not set.
modified

Union[datetime.datetime, None]: Datetime at which the dataset was last modified (None until set from the server).
path
str: URL path for the dataset based on project and dataset ID.
project
str: Project ID of the project bound to the dataset.
reference
google.cloud.bigquery.dataset.DatasetReference : A reference to this dataset.
self_link

Union[str, None]: URL for the dataset resource (None until set from the server).
storage_billing_model

Union[str, None]: StorageBillingModel of the dataset as set by the user (defaults to None).

Set the value to one of 'LOGICAL', 'PHYSICAL', or 'STORAGE_BILLING_MODEL_UNSPECIFIED'. This change takes 24 hours to take effect, and you must wait 14 days before you can change the storage billing model again.

See storage billing model (https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#Dataset.FIELDS.storage_billing_model) in the REST API docs and the updating the storage billing model (https://cloud.google.com/bigquery/docs/updating-datasets#update_storage_billing_models) guide.

Raises: ValueError
Methods

from_api_repr

from_api_repr(resource: dict) -> google.cloud.bigquery.dataset.Dataset

Factory: construct a dataset given its API representation.

Parameter

resource: dict
    Dataset resource representation returned from the API.

Returns: google.cloud.bigquery.dataset.Dataset
from_string

from_string(full_dataset_id: str) -> google.cloud.bigquery.dataset.Dataset

Construct a dataset from a fully-qualified dataset ID.

Parameter

full_dataset_id: str
    A fully-qualified dataset ID in standard SQL format. Must include both the project ID and the dataset ID, separated by a dot (.).

Raises: ValueError if full_dataset_id is not a fully-qualified dataset ID in standard SQL format.

Returns: Dataset

Examples

>>> Dataset.from_string('my-project-id.some_dataset')
Dataset(DatasetReference('my-project-id', 'some_dataset'))
model

model(model_id)

Constructs a ModelReference.

Parameter

model_id: str
    The ID of the model.
routine

routine(routine_id)

Constructs a RoutineReference.

Parameter

routine_id: str
    The ID of the routine.
table

table(table_id: str) -> google.cloud.bigquery.table.TableReference

Constructs a TableReference.

Parameter

table_id: str
    The ID of the table.
to_api_repr

to_api_repr() -> dict

Construct the API resource representation of this dataset.

Returns: Dict[str, object]
__init__

__init__(dataset_ref) -> None

Initialize self. See help(type(self)) for accurate signature.