Concurrent media operations.
Functions
download_chunks_concurrently

download_chunks_concurrently(blob, filename, chunk_size=33554432, download_kwargs=None, deadline=None, worker_type="process", max_workers=8, *, crc32c_checksum=True)
Download a single file in chunks, concurrently.
In some environments, using this feature with multiple processes will result in faster downloads of large files.
Using this feature with multiple threads is unlikely to improve download performance under normal circumstances due to Python interpreter threading behavior. The default is therefore to use processes instead of threads.
Parameters:

blob: The blob to download.

filename (str): The destination filename or path.

chunk_size (int): The size in bytes of each chunk to send. The optimal chunk size for maximum throughput may vary depending on the exact network environment and size of the blob.

download_kwargs (dict): A dictionary of keyword arguments to pass to the download method. Refer to the documentation for blob.download_to_file() or blob.download_to_filename() for more information. The dict is passed directly into the download methods and is not validated by this function. The keyword arguments "start" and "end" are not supported and will cause a ValueError if present. The key "checksum" is also not supported in download_kwargs, but see the argument crc32c_checksum (which does not go in download_kwargs) below.

deadline (int): The number of seconds to wait for all threads to resolve. If the deadline is reached, all threads will be terminated regardless of their progress and concurrent.futures.TimeoutError will be raised. This can be left as the default of None (no deadline) for most use cases.

worker_type (str): The worker type to use; one of google.cloud.storage.transfer_manager.PROCESS or google.cloud.storage.transfer_manager.THREAD. Although the exact performance impact depends on the use case, in most situations the PROCESS worker type will use more system resources (both memory and CPU) and result in faster operations than THREAD workers. Because the subprocesses of the PROCESS worker type can't access memory from the main process, Client objects have to be serialized and then recreated in each subprocess. The serialization of the Client object for use in subprocesses is an approximation and may not capture every detail of the Client object, especially if the Client was modified after its initial creation or if Client._http was modified in any way. THREAD worker types are observed to be relatively efficient for operations with many small files, but not for operations with large files; PROCESS workers are recommended for large file operations.

max_workers (int): The maximum number of workers to create to handle the workload. With PROCESS workers, a larger number of workers will consume more system resources (memory and CPU) at once. How many workers is optimal depends heavily on the specific use case, and the default is a conservative number that should work well in most cases without consuming excessive resources.

crc32c_checksum (bool): Whether to compute a checksum for the resulting object, using the crc32c algorithm. Because the checksums for each chunk must be combined using a feature of crc32c that is not available for md5, md5 is not supported.
Raises: concurrent.futures.TimeoutError if the deadline is exceeded. A google.cloud.storage._media exception type is used here for consistency with other download methods despite the exception originating elsewhere.
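As a usage sketch (the bucket name, object name, and destination path below are hypothetical):

    from google.cloud import storage
    from google.cloud.storage import transfer_manager

    client = storage.Client()
    bucket = client.bucket("my-bucket")        # hypothetical bucket
    blob = bucket.blob("backups/archive.bin")  # hypothetical large object

    # Download the object in 32 MiB chunks using the default process pool.
    # With the default "process" worker type, run this from a script whose
    # entry point is guarded by `if __name__ == "__main__":` on platforms
    # that spawn subprocesses (e.g. Windows and macOS).
    transfer_manager.download_chunks_concurrently(
        blob,
        "/tmp/archive.bin",
        chunk_size=32 * 1024 * 1024,
        max_workers=8,
    )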
download_many

download_many(blob_file_pairs, download_kwargs=None, threads=None, deadline=None, raise_exception=False, worker_type="process", max_workers=8, *, skip_if_exists=False)
Download many blobs concurrently via a worker pool.
Raises: concurrent.futures.TimeoutError if the deadline is exceeded.

Returns: list.
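A minimal sketch of pairing blobs with destination filenames (all names below are hypothetical). Results come back as a list in input order; in this library, a failed operation surfaces as an exception object in that list unless raise_exception=True:

    from google.cloud import storage
    from google.cloud.storage import transfer_manager

    client = storage.Client()
    bucket = client.bucket("my-bucket")  # hypothetical bucket

    # Each entry pairs a blob with a destination file or filename.
    pairs = [
        (bucket.blob("reports/a.csv"), "/tmp/a.csv"),
        (bucket.blob("reports/b.csv"), "/tmp/b.csv"),
    ]
    results = transfer_manager.download_many(pairs, max_workers=8)
    for (blob, path), result in zip(pairs, results):
        print(blob.name, "->", path, ":", result)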
download_many_to_path

download_many_to_path(bucket, blob_names, destination_directory="", blob_name_prefix="", download_kwargs=None, threads=None, deadline=None, create_directories=True, raise_exception=False, worker_type="process", max_workers=8, *, skip_if_exists=False)
Download many files concurrently by their blob names.
The destination files are automatically created, with paths based on the source blob_names and the destination_directory.
The destination files are not automatically deleted if their downloads fail, so please check the return value of this function for any exceptions, or enable raise_exception=True, and process the files accordingly.

For example, if the blob_names include "icon.jpg", destination_directory is "/home/myuser/", and blob_name_prefix is "images/", then the blob named "images/icon.jpg" will be downloaded to a file named "/home/myuser/icon.jpg".
Raises: concurrent.futures.TimeoutError if the deadline is exceeded.

Returns: list.
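Mirroring the example above (the bucket and the second blob name are hypothetical additions for illustration):

    from google.cloud import storage
    from google.cloud.storage import transfer_manager

    client = storage.Client()
    bucket = client.bucket("my-bucket")  # hypothetical bucket

    # Downloads "images/icon.jpg" and "images/logo.png" to
    # "/home/myuser/icon.jpg" and "/home/myuser/logo.png".
    results = transfer_manager.download_many_to_path(
        bucket,
        ["icon.jpg", "logo.png"],       # names without the prefix
        destination_directory="/home/myuser/",
        blob_name_prefix="images/",     # prepended to each blob name
    )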
upload_chunks_concurrently

upload_chunks_concurrently(filename, blob, content_type=None, chunk_size=33554432, deadline=None, worker_type="process", max_workers=8, *, checksum="auto", timeout=60, retry=<google.api_core.retry.retry_unary.Retry object>)
Upload a single file in chunks, concurrently.
This function uses the XML MPU API to initialize an upload and upload a file in chunks, concurrently with a worker pool.
The XML MPU API is significantly different from other uploads; please review the documentation at https://cloud.google.com/storage/docs/multipart-uploads before using this feature.

The library will attempt to cancel uploads that fail due to an exception. If the upload fails in a way that precludes cancellation, such as a hardware failure, process termination, or power outage, then the incomplete upload may persist indefinitely. To mitigate this, set the AbortIncompleteMultipartUpload with a nonzero Age in bucket lifecycle rules, or refer to the XML API documentation linked above to learn more about how to list and delete individual uploads.
Using this feature with multiple threads is unlikely to improve upload performance under normal circumstances due to Python interpreter threading behavior. The default is therefore to use processes instead of threads.
ACL information cannot be sent with this function and should be set separately with ObjectACL methods.
Parameters:

filename (str): The path to the file to upload. File-like objects are not supported.

blob: The blob to which to upload.

content_type (str): (Optional) Type of content being uploaded.

chunk_size (int): The size in bytes of each chunk to send. The optimal chunk size for maximum throughput may vary depending on the exact network environment and size of the blob. The remote API has restrictions on the minimum and maximum size allowable; see: https://cloud.google.com/storage/quotas#requests

deadline (int): The number of seconds to wait for all threads to resolve. If the deadline is reached, all threads will be terminated regardless of their progress and concurrent.futures.TimeoutError will be raised. This can be left as the default of None (no deadline) for most use cases.

worker_type (str): The worker type to use; one of google.cloud.storage.transfer_manager.PROCESS or google.cloud.storage.transfer_manager.THREAD. Although the exact performance impact depends on the use case, in most situations the PROCESS worker type will use more system resources (both memory and CPU) and result in faster operations than THREAD workers. Because the subprocesses of the PROCESS worker type can't access memory from the main process, Client objects have to be serialized and then recreated in each subprocess. The serialization of the Client object for use in subprocesses is an approximation and may not capture every detail of the Client object, especially if the Client was modified after its initial creation or if Client._http was modified in any way. THREAD worker types are observed to be relatively efficient for operations with many small files, but not for operations with large files; PROCESS workers are recommended for large file operations.

max_workers (int): The maximum number of workers to create to handle the workload. With PROCESS workers, a larger number of workers will consume more system resources (memory and CPU) at once. How many workers is optimal depends heavily on the specific use case, and the default is a conservative number that should work well in most cases without consuming excessive resources.

checksum (str): (Optional) The checksum scheme to use: either "md5", "crc32c", "auto" or None. The default is "auto", which will try to detect if the C extension for crc32c is installed and fall back to md5 otherwise. Each individual part is checksummed. At present, the selected checksum rule is only applied to parts, and a separate checksum of the entire resulting blob is not computed. Please compute and compare the checksum of the file to the resulting blob separately if needed, using the "crc32c" algorithm as per the XML MPU documentation.

timeout (float or tuple): (Optional) The amount of time, in seconds, to wait for the server response. See: configuring_timeouts

retry (google.api_core.retry.Retry): (Optional) How to retry the RPC. A None value will disable retries. A google.api_core.retry.Retry value will enable retries, and the object will configure backoff and timeout options. Custom predicates (customizable error codes) are not supported for media operations such as this one. This function does not accept ConditionalRetryPolicy values because preconditions are not supported by the underlying API call. See the retry.py source code and docstrings in this package (google.cloud.storage.retry) for information on retry types and how to configure them.
Raises: concurrent.futures.TimeoutError if the deadline is exceeded.
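A usage sketch under the same assumptions as the download example (hypothetical names and paths):

    from google.cloud import storage
    from google.cloud.storage import transfer_manager

    client = storage.Client()
    bucket = client.bucket("my-bucket")        # hypothetical bucket
    blob = bucket.blob("backups/archive.bin")  # destination object

    # Upload the local file in 32 MiB chunks via the XML MPU API,
    # using the default process-based worker pool.
    transfer_manager.upload_chunks_concurrently(
        "/tmp/archive.bin",
        blob,
        chunk_size=32 * 1024 * 1024,
        max_workers=8,
    )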
upload_many

upload_many(file_blob_pairs, skip_if_exists=False, upload_kwargs=None, threads=None, deadline=None, raise_exception=False, worker_type="process", max_workers=8)
Upload many files concurrently via a worker pool.
Raises: concurrent.futures.TimeoutError if the deadline is exceeded.

Returns: list.
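A minimal sketch (hypothetical names); note that the pair order is the reverse of download_many, with the local file first:

    from google.cloud import storage
    from google.cloud.storage import transfer_manager

    client = storage.Client()
    bucket = client.bucket("my-bucket")  # hypothetical bucket

    # Each entry pairs a local file (or filename) with a destination blob.
    pairs = [
        ("/tmp/a.csv", bucket.blob("reports/a.csv")),
        ("/tmp/b.csv", bucket.blob("reports/b.csv")),
    ]
    results = transfer_manager.upload_many(pairs, skip_if_exists=True)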
upload_many_from_filenames

upload_many_from_filenames(bucket, filenames, source_directory="", blob_name_prefix="", skip_if_exists=False, blob_constructor_kwargs=None, upload_kwargs=None, threads=None, deadline=None, raise_exception=False, worker_type="process", max_workers=8, *, additional_blob_attributes=None)
Upload many files concurrently by their filenames.
The destination blobs are automatically created, with blob names based on the source filenames and the blob_name_prefix.
For example, if the filenames include "images/icon.jpg", source_directory is "/home/myuser/", and blob_name_prefix is "myfiles/", then the file at "/home/myuser/images/icon.jpg" will be uploaded to a blob named "myfiles/images/icon.jpg".
Raises: concurrent.futures.TimeoutError if the deadline is exceeded.

Returns: list.
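Mirroring the example above as a runnable sketch (the bucket name is hypothetical):

    from google.cloud import storage
    from google.cloud.storage import transfer_manager

    client = storage.Client()
    bucket = client.bucket("my-bucket")  # hypothetical bucket

    # Uploads "/home/myuser/images/icon.jpg" to a blob named
    # "myfiles/images/icon.jpg".
    results = transfer_manager.upload_many_from_filenames(
        bucket,
        ["images/icon.jpg"],
        source_directory="/home/myuser/",
        blob_name_prefix="myfiles/",
    )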