Buckets
Create / interact with Google Cloud Storage buckets.
class google.cloud.storage.bucket.Bucket(client, name=None, user_project=None)
Bases: google.cloud.storage._helpers._PropertyMixin
A class representing a Bucket on Cloud Storage.
-
Parameters
-
client (google.cloud.storage.client.Client) – A client which holds credentials and project configuration for the bucket (which requires a project). -
name (str) – The name of the bucket. Bucket names must start and end with a number or letter.
-
user_project (str) – (Optional) the project ID to be billed for API requests made via this instance.
-
STORAGE_CLASSES = ('STANDARD', 'NEARLINE', 'COLDLINE', 'MULTI_REGIONAL', 'REGIONAL', 'DURABLE_REDUCED_AVAILABILITY')
Allowed values for storage_class.
Default value is STANDARD_STORAGE_CLASS.
See https://cloud.google.com/storage/docs/json_api/v1/buckets#storageClass and https://cloud.google.com/storage/docs/storage-classes
property acl()
Create our ACL on demand.
add_lifecycle_delete_rule(**kw)
Add a “delete” rule to the lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
bucket = client.get_bucket("my-bucket")
bucket.add_lifecycle_delete_rule(age=2)
bucket.patch()
-
Params kw
arguments passed to LifecycleRuleConditions.
add_lifecycle_set_storage_class_rule(storage_class, **kw)
Add a “set storage class” rule to the lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
bucket = client.get_bucket("my-bucket")
bucket.add_lifecycle_set_storage_class_rule(
    "COLDLINE", matches_storage_class=["NEARLINE"]
)
bucket.patch()
-
Parameters
storage_class (str, one of STORAGE_CLASSES) – new storage class to assign to matching items.
-
Params kw
arguments passed to LifecycleRuleConditions.
blob(blob_name, chunk_size=None, encryption_key=None, kms_key_name=None, generation=None)
Factory constructor for blob object.
NOTE: This will not make an HTTP request; it simply instantiates a blob object owned by this bucket.
-
Parameters
-
blob_name( str ) – The name of the blob to be instantiated.
-
chunk_size( int ) – The size of a chunk of data whenever iterating (in bytes). This must be a multiple of 256 KB per the API specification.
-
encryption_key( bytes ) – Optional 32 byte encryption key for customer-supplied encryption.
-
kms_key_name( str ) – Optional resource name of KMS key used to encrypt blob’s content.
-
generation( long ) – Optional. If present, selects a specific revision of this object.
-
-
Return type
-
Returns
The blob object created.
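For example, a minimal sketch (the bucket and object names are placeholders) showing that the factory itself makes no request; the API call only happens when the blob handle is used:
client = storage.Client()
bucket = client.bucket("my-bucket")
blob = bucket.blob("path/to/data.txt")  # no HTTP request yet
blob.upload_from_string("hello world")  # the request happens here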
clear_lifecyle_rules()
Clear lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
property client()
The client bound to this bucket.
configure_website(main_page_suffix=None, not_found_page=None)
Configure website-related properties.
See https://cloud.google.com/storage/docs/hosting-static-website
NOTE: This (apparently) only works if your bucket name is a domain name (and to do that, you need to get approved somehow…).
If you want this bucket to host a website, just provide the name of an index page and a page to use when a blob isn’t found:
client = storage.Client()
bucket = client.get_bucket(bucket_name)
bucket.configure_website("index.html", "404.html")
You probably should also make the whole bucket public:
bucket.make_public(recursive=True, future=True)
This says: “Make the bucket public, and all the stuff already in the bucket, and anything else I add to the bucket. Just make it all public.”
-
Parameters
-
main_page_suffix (str) – The page to use as the main page of a directory.
-
not_found_page (str) – The file to use when a page isn’t found.
copy_blob(blob, destination_bucket, new_name=None, client=None, preserve_acl=True, source_generation=None)
Copy the given blob to the given bucket, optionally with a new name.
If user_project
is set, bills the API request to that project.
-
Parameters
-
blob (google.cloud.storage.blob.Blob) – The blob to be copied. -
destination_bucket (google.cloud.storage.bucket.Bucket) – The bucket into which the blob should be copied. -
new_name (str) – (Optional) The new name for the copied file.
-
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket. -
preserve_acl( bool ) – Optional. Copies ACL from old blob to new blob. Default: True.
-
source_generation( long ) – Optional. The generation of the blob to be copied.
-
-
Return type
-
Returns
The new Blob.
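A minimal sketch (bucket and object names are placeholders) of copying a blob into another bucket under a new name:
client = storage.Client()
source_bucket = client.get_bucket("my-bucket")
destination_bucket = client.get_bucket("my-other-bucket")
blob = source_bucket.blob("logs/2020-01-01.txt")
new_blob = source_bucket.copy_blob(blob, destination_bucket, new_name="archive/2020-01-01.txt")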
property cors()
Retrieve or set CORS policies configured for this bucket.
See http://www.w3.org/TR/cors/ and
https://cloud.google.com/storage/docs/json_api/v1/buckets
NOTE: The getter for this property returns a list which contains copies of the bucket’s CORS policy mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter. E.g.:
>>> policies = bucket.cors
>>> policies.append({'origin': '/foo', ...})
>>> policies[1]['maxAgeSeconds'] = 3600
>>> del policies[0]
>>> bucket.cors = policies
>>> bucket.update()
-
Setter
Set CORS policies for this bucket.
-
Getter
Gets the CORS policies for this bucket.
-
Return type
list of dictionaries
-
Returns
A sequence of mappings describing each CORS policy.
create(client=None, project=None, location=None, predefined_acl=None, predefined_default_object_acl=None)
Creates current bucket.
If the bucket already exists, will raise google.cloud.exceptions.Conflict
.
This implements “storage.buckets.insert”.
If user_project
is set, bills the API request to that project.
-
Parameters
-
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket. -
project( str ) – Optional. The project under which the bucket is to be created. If not passed, uses the project set on the client.
-
location( str ) – Optional. The location of the bucket. If not passed, the default location, US, will be used. See https://cloud.google.com/storage/docs/bucket-locations
-
predefined_acl( str ) – Optional. Name of predefined ACL to apply to bucket. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl
-
predefined_default_object_acl( str ) – Optional. Name of predefined ACL to apply to bucket’s objects. See: https://cloud.google.com/storage/docs/access-control/lists#predefined-acl
-
-
Raises
-
ValueError – if user_project is set. -
ValueError – if project is None and client’s project is also None.
-
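A minimal sketch (bucket name, storage class, and location are placeholders) of instantiating and then creating a bucket:
client = storage.Client()
bucket = storage.Bucket(client, name="my-new-bucket")
bucket.storage_class = "COLDLINE"
bucket.create(location="EU")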
property default_event_based_hold()
Are uploaded objects automatically placed under an event-based hold?
If True, uploaded objects will be placed under an event-based hold, to be released at a future time. When released, an object will begin the retention period determined by the retention policy of the object’s bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
If the property is not set locally, returns None
.
-
Return type
bool or
NoneType
property default_kms_key_name()
Retrieve / set default KMS encryption key for objects in the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
-
Setter
Set default KMS encryption key for items in this bucket.
-
Getter
Get default KMS encryption key for items in this bucket.
-
Return type
-
Returns
Default KMS encryption key, or None if not set.
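A minimal sketch (the key resource name is a placeholder) of assigning a default Cloud KMS key and persisting the change:
bucket = client.get_bucket("my-bucket")
bucket.default_kms_key_name = "projects/my-project/locations/us/keyRings/my-ring/cryptoKeys/my-key"
bucket.patch()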
property default_object_acl()
Create our defaultObjectACL on demand.
delete(force=False, client=None)
Delete this bucket.
The bucket must be empty in order to submit a delete request. If force=True is passed, this will first attempt to delete all the objects / blobs in the bucket (i.e. try to empty the bucket).
If the bucket doesn’t exist, this will raise google.cloud.exceptions.NotFound
. If the bucket is not empty
(and force=False
), will raise google.cloud.exceptions.Conflict
.
If force=True
and the bucket contains more than 256 objects / blobs
this will cowardly refuse to delete the objects (or the bucket). This
is to prevent accidental bucket deletion and to prevent extremely long
runtime of this method.
If user_project
is set, bills the API request to that project.
-
Parameters
-
Raises
ValueError if force is True and the bucket contains more than 256 objects / blobs.
delete_blob(blob_name, client=None, generation=None)
Deletes a blob from the current bucket.
If the blob isn’t found (backend 404), raises a google.cloud.exceptions.NotFound
.
For example:
from google.cloud.exceptions import NotFound
client = storage.Client()
bucket = client.get_bucket("my-bucket")
blobs = list(bucket.list_blobs())
assert len(blobs) > 0
# [<Blob: my-bucket, my-file.txt>]
bucket.delete_blob("my-file.txt")
try:
bucket.delete_blob("doesnt-exist")
except NotFound:
pass
If user_project
is set, bills the API request to that project.
-
Parameters
-
Raises
google.cloud.exceptions.NotFound (to suppress the exception, call delete_blobs, passing a no-op on_error callback), e.g.:
bucket.delete_blobs([blob], on_error=lambda blob: None)
delete_blobs(blobs, on_error=None, client=None)
Deletes a list of blobs from the current bucket.
Uses delete_blob()
to delete each individual blob.
If user_project
is set, bills the API request to that project.
-
Parameters
-
on_error (callable) – (Optional) Takes a single argument: blob. Called once for each blob raising NotFound; otherwise, the exception is propagated. -
client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
-
Raises
NotFound (if on_error is not passed).
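A minimal sketch (the prefix is a placeholder) of bulk-deleting blobs while ignoring any that have already disappeared:
bucket = client.get_bucket("my-bucket")
blobs_to_delete = list(client.list_blobs(bucket, prefix="tmp/"))
bucket.delete_blobs(blobs_to_delete, on_error=lambda blob: None)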
disable_logging()
Disable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#disabling
disable_website()
Disable the website configuration for this bucket.
This is really just a shortcut for setting the website-related
attributes to None
.
enable_logging(bucket_name, object_prefix='')
Enable access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs
-
Parameters
-
bucket_name (str) – name of bucket in which to store access logs.
-
object_prefix (str) – prefix for access log filenames.
property etag()
Retrieve the ETag for the bucket.
See https://tools.ietf.org/html/rfc2616#section-3.11 and
https://cloud.google.com/storage/docs/json_api/v1/buckets
-
Return type
str or
NoneType -
Returns
The bucket etag or None if the bucket’s resource has not been loaded from the server.
exists(client=None)
Determines whether or not this bucket exists.
If user_project
is set, bills the API request to that project.
-
Parameters
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket. -
Return type
-
Returns
True if the bucket exists in Cloud Storage.
classmethod from_string(uri, client=None)
Get a constructor for bucket object by URI.
-
Parameters
-
Return type
google.cloud.storage.bucket.Bucket -
Returns
The bucket object created.
Example
Get a constructor for bucket object by URI.
>>> from google.cloud import storage
>>> from google.cloud.storage.bucket import Bucket
>>> client = storage.Client()
>>> bucket = Bucket.from_string("gs://bucket", client)
generate_signed_url(expiration=None, api_access_endpoint='https://storage.googleapis.com', method='GET', headers=None, query_parameters=None, client=None, credentials=None, version=None)
Generates a signed URL for this bucket.
NOTE: If you are on Google Compute Engine, you can’t generate a signed URL using GCE service account. Follow Issue 50 for updates on this. If you’d like to be able to generate a signed URL from GCE, you can use a standard service account from a JSON file rather than a GCE service account.
If you have a bucket that you want to allow access to for a set amount of time, you can use this method to generate a URL that is only valid within a certain time period.
This is particularly useful if you don’t want publicly accessible buckets, but don’t want to require users to explicitly log in.
-
Parameters
-
expiration (Union[int, datetime.datetime, datetime.timedelta]) – Point in time when the signed URL should expire.
-
api_access_endpoint( str ) – Optional URI base.
-
method( str ) – The HTTP verb that will be used when requesting the URL.
-
headers( dict ) – (Optional) Additional HTTP headers to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers Requests using the signed URL must pass the specified header (name and value) with each request for the URL.
-
query_parameters( dict ) – (Optional) Additional query parameters to be included as part of the signed URLs. See: https://cloud.google.com/storage/docs/xml-api/reference-headers#query
-
client (Client or NoneType) – (Optional) The client to use. If not passed, falls back to the client stored on the blob’s bucket. -
credentials (google.auth.credentials.Credentials or NoneType) – The authorization credentials to attach to requests. These credentials identify this application to the service. If none are specified, the client will attempt to ascertain the credentials from the environment. -
version( str ) – (Optional) The version of signed credential to create. Must be one of ‘v2’ | ‘v4’.
-
-
Raises
ValueError when version is invalid. -
Raises
TypeError when expiration is not a valid type. -
Raises
AttributeError if credentials is not an instance of google.auth.credentials.Signing. -
Return type
-
Returns
A signed URL you can use to access the resource until expiration.
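For example, a minimal sketch (the bucket name is a placeholder) of creating a V4 signed URL that is valid for one hour:
import datetime
client = storage.Client()
bucket = client.get_bucket("my-bucket")
url = bucket.generate_signed_url(
    expiration=datetime.timedelta(hours=1), method="GET", version="v4"
)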
generate_upload_policy(conditions, expiration=None, client=None)
Create a signed upload policy for uploading objects.
This method generates and signs a policy document. You can use policy documents to allow visitors to a website to upload files to Google Cloud Storage without giving them direct write access.
For example:
bucket = client.bucket("my-bucket")
conditions = [["starts-with", "$key", ""], {"acl": "public-read"}]
policy = bucket.generate_upload_policy(conditions)
# Generate an upload form using the form fields.
policy_fields = "".join(
'<input type="hidden" name="{key}" value="{value}">'.format(
key=key, value=value
)
for key, value in policy.items()
)
upload_form = (
'<form action="http://{bucket_name}.storage.googleapis.com"'
' method="post" enctype="multipart/form-data">'
'<input type="text" name="key" value="my-test-key">'
'<input type="hidden" name="bucket" value="{bucket_name}">'
'<input type="hidden" name="acl" value="public-read">'
'<input name="file" type="file">'
'<input type="submit" value="Upload">'
"{policy_fields}"
"</form>"
).format(bucket_name=bucket.name, policy_fields=policy_fields)
print(upload_form)
-
Parameters
-
expiration( datetime ) – Optional expiration in UTC. If not specified, the policy will expire in 1 hour.
-
conditions( list ) – A list of conditions as described in the policy documents documentation.
-
client (Client) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.
-
-
Return type
-
Returns
A dictionary of (form field name, form field value) of form fields that should be added to your HTML upload form in order to attach the signature.
get_blob(blob_name, client=None, encryption_key=None, generation=None, **kwargs)
Get a blob object by name.
This will return None if the blob doesn’t exist:
client = storage.Client()
bucket = client.get_bucket("my-bucket")
assert isinstance(bucket.get_blob("/path/to/blob.txt"), Blob)
# <Blob: my-bucket, /path/to/blob.txt>
assert not bucket.get_blob("/does-not-exist.txt")
# None
If user_project
is set, bills the API request to that project.
-
Parameters
-
blob_name( str ) – The name of the blob to retrieve.
-
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket. -
encryption_key( bytes ) – Optional 32 byte encryption key for customer-supplied encryption. See https://cloud.google.com/storage/docs/encryption#customer-supplied .
-
generation( long ) – Optional. If present, selects a specific revision of this object.
-
kwargs – Keyword arguments to pass to the Blob constructor.
-
-
Return type
google.cloud.storage.blob.Blob or None -
Returns
The blob object if it exists, otherwise None.
get_iam_policy(client=None)
Retrieve the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/getIamPolicy
If user_project
is set, bills the API request to that project.
-
Parameters
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket. -
Return type
-
Returns
the policy instance, based on the resource returned from the getIamPolicy API request.
get_logging()
Return info about access logging for this bucket.
See https://cloud.google.com/storage/docs/access-logs#status
-
Return type
-
Returns
a dict w/ keys logBucket and logObjectPrefix (if logging is enabled), or None (if not).
property iam_configuration()
Retrieve IAM configuration for this bucket.
-
Return type
IAMConfiguration -
Returns
an instance for managing the bucket’s IAM configuration.
property id()
Retrieve the ID for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
-
Return type
str or
NoneType -
Returns
The ID of the bucket or None if the bucket’s resource has not been loaded from the server.
property labels()
Retrieve or set labels assigned to this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets#labels
NOTE: The getter for this property returns a dict which is a copy of the bucket’s labels. Mutating that dict has no effect unless you then re-assign the dict via the setter. E.g.:
>>> labels = bucket.labels
>>> labels['new_key'] = 'some-label'
>>> del labels['old_key']
>>> bucket.labels = labels
>>> bucket.update()
-
Setter
Set labels for this bucket.
-
Getter
Gets the labels for this bucket.
-
Return type
-
Returns
Name-value pairs (string->string) labelling the bucket.
property lifecycle_rules()
Retrieve or set lifecycle rules configured for this bucket.
See https://cloud.google.com/storage/docs/lifecycle and
https://cloud.google.com/storage/docs/json_api/v1/buckets
NOTE: The getter for this property returns a list which contains copies of the bucket’s lifecycle rules mappings. Mutating the list or one of its dicts has no effect unless you then re-assign the dict via the setter. E.g.:
>>> rules = list(bucket.lifecycle_rules)
>>> rules.append({'action': {'type': 'Delete'}, 'condition': {'age': 365}})
>>> del rules[0]
>>> bucket.lifecycle_rules = rules
>>> bucket.update()
-
Setter
Set lifecycle rules for this bucket.
-
Getter
Gets the lifecycle rules for this bucket.
-
Return type
generator( dict )
-
Returns
A sequence of mappings describing each lifecycle rule.
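A minimal sketch (ages and classes are placeholders) of replacing the bucket’s lifecycle rules via the setter, using the helper classes documented below:
from google.cloud.storage.bucket import LifecycleRuleDelete, LifecycleRuleSetStorageClass
bucket.lifecycle_rules = [
    LifecycleRuleSetStorageClass("COLDLINE", age=90),
    LifecycleRuleDelete(age=365),
]
bucket.patch()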
list_blobs(max_results=None, page_token=None, prefix=None, delimiter=None, versions=None, projection='noAcl', fields=None, client=None)
Return an iterator used to find blobs in the bucket.
NOTE: Direct use of this method is deprecated. Use Client.list_blobs
instead.
If user_project
is set, bills the API request to that project.
-
Parameters
-
max_results( int ) – (Optional) The maximum number of blobs in each page of results from this request. Non-positive values are ignored. Defaults to a sensible value set by the API.
-
page_token( str ) – (Optional) If present, return the next batch of blobs, using the value, which must correspond to the nextPageToken value returned in the previous response. Deprecated: use the pages property of the returned iterator instead of manually passing the token. -
prefix( str ) – (Optional) prefix used to filter blobs.
-
delimiter( str ) – (Optional) Delimiter, used with prefix to emulate hierarchy. -
versions( bool ) – (Optional) Whether object versions should be returned as separate blobs.
-
projection( str ) – (Optional) If used, must be ‘full’ or ‘noAcl’. Defaults to
'noAcl'. Specifies the set of properties to return. -
fields( str ) – (Optional) Selector specifying which fields to include in a partial response. Must be a list of fields. For example to get a partial response with just the next page token and the name and language of each blob returned:
'items(name,contentLanguage),nextPageToken'. See: https://cloud.google.com/storage/docs/json_api/v1/parameters#fields -
client (Client) – (Optional) The client to use. If not passed, falls back to the client stored on the current bucket.
-
-
Return type
-
Returns
Iterator of all Blob in this bucket matching the arguments.
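Per the note above, a minimal sketch (bucket name, prefix, and delimiter are placeholders) using Client.list_blobs rather than calling this method directly:
client = storage.Client()
bucket = client.get_bucket("my-bucket")
for blob in client.list_blobs(bucket, prefix="photos/", delimiter="/"):
    print(blob.name)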
list_notifications(client=None)
List Pub / Sub notifications for this bucket.
See: https://cloud.google.com/storage/docs/json_api/v1/notifications/list
If user_project
is set, bills the API request to that project.
-
Parameters
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket. -
Return type
list of
BucketNotification -
Returns
notification instances
property location()
Retrieve location configured for this bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets and https://cloud.google.com/storage/docs/bucket-locations
Returns None if the property has not been set before creation, or if the bucket’s resource has not been loaded from the server.
-
Return type
str or NoneType
property location_type()
Retrieve or set the location type for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
-
Setter
Set the location type for this bucket.
-
Getter
Gets the location type for this bucket.
-
Return type
str or
NoneType -
Returns
If set, one of MULTI_REGION_LOCATION_TYPE, REGION_LOCATION_TYPE, or DUAL_REGION_LOCATION_TYPE, else None.
lock_retention_policy(client=None)
Lock the bucket’s retention policy.
-
Raises
ValueError – if the bucket has no metageneration (i.e., new or never reloaded); if the bucket has no retention policy assigned; if the bucket’s retention policy is already locked.
make_private(recursive=False, future=False, client=None)
Update bucket’s ACL, revoking read access for anonymous users.
-
Parameters
-
recursive( bool ) – If True, this will make all blobs inside the bucket private as well.
-
future( bool ) – If True, this will make all objects created in the future private as well.
-
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.
-
-
Raises
ValueError – If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs() and call make_private() for each blob.
make_public(recursive=False, future=False, client=None)
Update bucket’s ACL, granting read access to anonymous users.
-
Parameters
-
recursive( bool ) – If True, this will make all blobs inside the bucket public as well.
-
future( bool ) – If True, this will make all objects created in the future public as well.
-
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.
-
-
Raises
ValueError – If recursive is True, and the bucket contains more than 256 blobs. This is to prevent extremely long runtime of this method. For such buckets, iterate over the blobs returned by list_blobs() and call make_public() for each blob.
property metageneration()
Retrieve the metageneration for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
-
Return type
int or
NoneType -
Returns
The metageneration of the bucket or None if the bucket’s resource has not been loaded from the server.
notification(topic_name, topic_project=None, custom_attributes=None, event_types=None, blob_name_prefix=None, payload_format='NONE')
Factory: create a notification resource for the bucket.
See: BucketNotification
for parameters.
-
Return type
BucketNotification
property owner()
Retrieve info about the owner of the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
-
Return type
dict or
NoneType -
Returns
Mapping of owner’s role/ID. Returns None if the bucket’s resource has not been loaded from the server.
patch(client=None)
Sends all changed properties in a PATCH request.
Updates the _properties
with the response from the backend.
If user_project
is set, bills the API request to that project.
-
Parameters
client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.
property path()
The URL path to this bucket.
static path_helper(bucket_name)
Relative URL path for a bucket.
-
Parameters
bucket_name( str ) – The bucket name in the path.
-
Return type
-
Returns
The relative URL path for
bucket_name.
property project_number()
Retrieve the number of the project to which the bucket is assigned.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
-
Return type
int or
NoneType -
Returns
The project number that owns the bucket or None if the bucket’s resource has not been loaded from the server.
reload(client=None)
Reload properties from Cloud Storage.
If user_project
is set, bills the API request to that project.
-
Parameters
client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.
rename_blob(blob, new_name, client=None)
Rename the given blob using copy and delete operations.
If user_project
is set, bills the API request to that project.
Effectively, copies blob to the same bucket with a new name, then deletes the blob.
WARNING: This method will first duplicate the data and then delete the old blob. This means that with very large objects renaming could be a very (temporarily) costly or a very slow operation.
-
Parameters
-
blob(
google.cloud.storage.blob.Blob) – The blob to be renamed. -
new_name( str ) – The new name for this blob.
-
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.
-
-
Return type
Blob -
Returns
The newly-renamed blob.
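A minimal sketch (object names are placeholders):
bucket = client.get_bucket("my-bucket")
blob = bucket.blob("old-name.txt")
new_blob = bucket.rename_blob(blob, "new-name.txt")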
property requester_pays()
Does the requester pay for API requests for this bucket?
See https://cloud.google.com/storage/docs/requester-pays for details.
-
Setter
Update whether requester pays for this bucket.
-
Getter
Query whether requester pays for this bucket.
-
Return type
-
Returns
True if requester pays for API requests for the bucket, else False.
property retention_period()
Retrieve or set the retention period for items in the bucket.
-
Return type
int or
NoneType -
Returns
number of seconds to retain items after upload or release from event-based lock, or None if the property is not set locally.
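A minimal sketch of setting a one-day retention period (the duration is a placeholder):
bucket = client.get_bucket("my-bucket")
bucket.retention_period = 24 * 60 * 60  # seconds
bucket.patch()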
property retention_policy_effective_time()
Retrieve the effective time of the bucket’s retention policy.
-
Return type
datetime.datetime or
NoneType -
Returns
point in time at which the bucket’s retention policy is effective, or None if the property is not set locally.
property retention_policy_locked()
Retrieve whether the bucket’s retention policy is locked.
-
Return type
-
Returns
True if the bucket’s policy is locked, or else False if the policy is not locked, or the property is not set locally.
property self_link()
Retrieve the URI for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
-
Return type
str or
NoneType -
Returns
The self link for the bucket or None if the bucket’s resource has not been loaded from the server.
set_iam_policy(policy, client=None)
Update the IAM policy for the bucket.
See https://cloud.google.com/storage/docs/json_api/v1/buckets/setIamPolicy
If user_project
is set, bills the API request to that project.
-
Parameters
-
policy(
google.api_core.iam.Policy) – policy instance used to update bucket’s IAM policy. -
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.
-
-
Return type
-
Returns
the policy instance, based on the resource returned from the setIamPolicy API request.
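A minimal sketch of a read-modify-write cycle, assuming a legacy (version 1) policy that behaves like a mapping of role to members (the member identity is a placeholder):
policy = bucket.get_iam_policy()
policy["roles/storage.objectViewer"].add("user:alice@example.com")
bucket.set_iam_policy(policy)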
property storage_class()
Retrieve or set the storage class for the bucket.
See https://cloud.google.com/storage/docs/storage-classes
-
Setter
Set the storage class for this bucket.
-
Getter
Gets the storage class for this bucket.
-
Return type
str or
NoneType -
Returns
If set, one of NEARLINE_STORAGE_CLASS, COLDLINE_STORAGE_CLASS, STANDARD_STORAGE_CLASS, MULTI_REGIONAL_LEGACY_STORAGE_CLASS, REGIONAL_LEGACY_STORAGE_CLASS, or DURABLE_REDUCED_AVAILABILITY_LEGACY_STORAGE_CLASS, else None.
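A minimal sketch of changing the bucket’s default storage class:
bucket = client.get_bucket("my-bucket")
bucket.storage_class = "NEARLINE"
bucket.patch()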
test_iam_permissions(permissions, client=None)
API call: test permissions
See https://cloud.google.com/storage/docs/json_api/v1/buckets/testIamPermissions
If user_project
is set, bills the API request to that project.
-
Parameters
-
permissions( list of string ) – the permissions to check
-
client (Client or NoneType) – Optional. The client to use. If not passed, falls back to the client stored on the current bucket.
-
-
Return type
list of string
-
Returns
the permissions returned by the testIamPermissions API request.
property time_created()
Retrieve the timestamp at which the bucket was created.
See https://cloud.google.com/storage/docs/json_api/v1/buckets
-
Return type
datetime.datetime or NoneType -
Returns
Datetime object parsed from RFC3339 valid timestamp, or None if the bucket’s resource has not been loaded from the server.
update(client=None)
Sends all properties in a PUT request.
Updates the _properties
with the response from the backend.
If user_project
is set, bills the API request to that project.
-
Parameters
client (Client or NoneType) – the client to use. If not passed, falls back to the client stored on the current object.
property user_project()
Project ID to be billed for API requests made via this bucket.
If unset, API requests are billed to the bucket owner.
-
Return type
property versioning_enabled()
Is versioning enabled for this bucket?
See https://cloud.google.com/storage/docs/object-versioning for details.
-
Setter
Update whether versioning is enabled for this bucket.
-
Getter
Query whether versioning is enabled for this bucket.
-
Return type
-
Returns
True if enabled, else False.
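A minimal sketch of turning on object versioning:
bucket = client.get_bucket("my-bucket")
bucket.versioning_enabled = True
bucket.patch()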
class google.cloud.storage.bucket.IAMConfiguration(bucket, bucket_policy_only_enabled=False, bucket_policy_only_locked_time=None)
Bases: dict
Map a bucket’s IAM configuration.
-
Params bucket
Bucket for which this instance is the policy.
-
Params bucket_policy_only_enabled
(optional) whether the IAM-only policy is enabled for the bucket.
-
Params bucket_policy_only_locked_time
(optional) When the bucket’s IAM-only policy was enabled. This value should normally only be set by the back-end API.
property bucket()
Bucket for which this instance is the policy.
-
Return type
Bucket -
Returns
the instance’s bucket.
property bucket_policy_only_enabled()
If set, access checks only use bucket-level IAM policies or above.
-
Return type
-
Returns
whether the bucket is configured to allow only IAM.
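A minimal sketch of enabling bucket-policy-only access, assuming the property is writable as in recent library versions:
bucket = client.get_bucket("my-bucket")
bucket.iam_configuration.bucket_policy_only_enabled = True
bucket.patch()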
property bucket_policy_only_locked_time()
Deadline for changing bucket_policy_only_enabled from true to false.
If the bucket’s bucket_policy_only_enabled is true, this property is the time after which that setting becomes immutable.
If the bucket’s bucket_policy_only_enabled is false, this property is None.
-
Return type
Union[datetime.datetime, None] -
Returns
(readonly) Time after which bucket_policy_only_enabled will be frozen as true.
clear()
copy()
classmethod from_api_repr(resource, bucket)
Factory: construct instance from resource.
-
Params bucket
Bucket for which this instance is the policy.
-
Parameters
resource( dict ) – mapping as returned from API call.
-
Return type
IAMConfiguration -
Returns
Instance created from resource.
fromkeys(value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[, d])
If key is not found, default is returned if given, otherwise KeyError is raised
popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update(**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
class google.cloud.storage.bucket.LifecycleRuleConditions(age=None, created_before=None, is_live=None, matches_storage_class=None, number_of_newer_versions=None, _factory=False)
Bases: dict
Map a single lifecycle rule for a bucket.
See: https://cloud.google.com/storage/docs/lifecycle
-
Parameters
-
age( int ) – (optional) apply rule action to items whose age, in days, exceeds this value.
-
created_before( datetime.date ) – (optional) apply rule action to items created before this date.
-
is_live( bool ) – (optional) if true, apply rule action to non-versioned items, or to items with no newer versions. If false, apply rule action to versioned items with at least one newer version.
-
matches_storage_class (list(str), one or more of Bucket.STORAGE_CLASSES) – (optional) apply rule action to items whose storage class matches this value. -
number_of_newer_versions( int ) – (optional) apply rule action to versioned items having N newer versions.
-
-
Raises
ValueError – if no arguments are passed.
property age()
Condition’s age value.
clear()
copy()
property created_before()
Condition’s created_before value.
classmethod from_api_repr(resource)
Factory: construct instance from resource.
-
Parameters
resource( dict ) – mapping as returned from API call.
-
Return type
LifecycleRuleConditions -
Returns
Instance created from resource.
fromkeys(value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
property is_live()
Condition’s ‘is_live’ value.
items()
keys()
property matches_storage_class()
Condition’s ‘matches_storage_class’ value.
property number_of_newer_versions()
Condition’s ‘number_of_newer_versions’ value.
pop(k[, d])
If key is not found, default is returned if given, otherwise KeyError is raised
popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update(**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
class google.cloud.storage.bucket.LifecycleRuleDelete(**kw)
Bases: dict
Map a lifecycle rule deleting matching items.
-
Params kw
arguments passed to
LifecycleRuleConditions.
clear()
copy()
classmethod from_api_repr(resource)
Factory: construct instance from resource.
-
Parameters
resource( dict ) – mapping as returned from API call.
-
Return type
LifecycleRuleDelete -
Returns
Instance created from resource.
fromkeys(value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[, d])
If key is not found, default is returned if given, otherwise KeyError is raised
popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update(**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]
values()
class google.cloud.storage.bucket.LifecycleRuleSetStorageClass(storage_class, **kw)
Bases: dict
Map a lifecycle rule updating the storage class of matching items.
-
Parameters
storage_class(str, one of
Bucket.STORAGE_CLASSES.) – new storage class to assign to matching items. -
Params kw
arguments passed to
LifecycleRuleConditions.
clear()
copy()
classmethod from_api_repr(resource)
Factory: construct instance from resource.
-
Parameters
resource( dict ) – mapping as returned from API call.
-
Return type
LifecycleRuleSetStorageClass -
Returns
Instance created from resource.
fromkeys(value=None, /)
Create a new dictionary with keys from iterable and values set to value.
get(key, default=None, /)
Return the value for key if key is in the dictionary, else default.
items()
keys()
pop(k[, d])
If key is not found, default is returned if given, otherwise KeyError is raised
popitem()
Remove and return a (key, value) pair as a 2-tuple.
Pairs are returned in LIFO (last-in, first-out) order. Raises KeyError if the dict is empty.
setdefault(key, default=None, /)
Insert key with a value of default if key is not in the dictionary.
Return the value for key if key is in the dictionary, else default.
update(**F)
If E is present and has a .keys() method, then does: for k in E: D[k] = E[k] If E is present and lacks a .keys() method, then does: for k, v in E: D[k] = v In either case, this is followed by: for k in F: D[k] = F[k]

