Index

- StorageTransferService (interface)
- AgentPool (message)
- AgentPool.BandwidthLimit (message)
- AgentPool.State (enum)
- AwsAccessKey (message)
- AwsS3CompatibleData (message)
- AwsS3Data (message)
- AzureBlobStorageData (message)
- AzureBlobStorageData.FederatedIdentityConfig (message)
- AzureCredentials (message)
- CreateAgentPoolRequest (message)
- CreateTransferJobRequest (message)
- DeleteAgentPoolRequest (message)
- DeleteTransferJobRequest (message)
- ErrorLogEntry (message)
- ErrorSummary (message)
- EventStream (message)
- GcsData (message)
- GetAgentPoolRequest (message)
- GetGoogleServiceAccountRequest (message)
- GetTransferJobRequest (message)
- GoogleServiceAccount (message)
- HdfsData (message)
- HttpData (message)
- ListAgentPoolsRequest (message)
- ListAgentPoolsResponse (message)
- ListTransferJobsRequest (message)
- ListTransferJobsResponse (message)
- LoggingConfig (message)
- LoggingConfig.LoggableAction (enum)
- LoggingConfig.LoggableActionState (enum)
- MetadataOptions (message)
- MetadataOptions.Acl (enum)
- MetadataOptions.GID (enum)
- MetadataOptions.KmsKey (enum)
- MetadataOptions.Mode (enum)
- MetadataOptions.StorageClass (enum)
- MetadataOptions.Symlink (enum)
- MetadataOptions.TemporaryHold (enum)
- MetadataOptions.TimeCreated (enum)
- MetadataOptions.UID (enum)
- NotificationConfig (message)
- NotificationConfig.EventType (enum)
- NotificationConfig.PayloadFormat (enum)
- ObjectConditions (message)
- PauseTransferOperationRequest (message)
- PosixFilesystem (message)
- ReplicationSpec (message)
- ResumeTransferOperationRequest (message)
- RunTransferJobRequest (message)
- S3CompatibleMetadata (message)
- S3CompatibleMetadata.AuthMethod (enum)
- S3CompatibleMetadata.ListApi (enum)
- S3CompatibleMetadata.NetworkProtocol (enum)
- S3CompatibleMetadata.RequestModel (enum)
- Schedule (message)
- TransferCounters (message)
- TransferJob (message)
- TransferJob.Status (enum)
- TransferManifest (message)
- TransferOperation (message)
- TransferOperation.Status (enum)
- TransferOptions (message)
- TransferOptions.OverwriteWhen (enum)
- TransferSpec (message)
- UpdateAgentPoolRequest (message)
- UpdateTransferJobRequest (message)
StorageTransferService

Storage Transfer Service and its protos. Transfers data between Google Cloud Storage buckets, or from a data source external to Google to a Cloud Storage bucket.

rpc CreateAgentPool(CreateAgentPoolRequest) returns (AgentPool)

Creates an agent pool resource.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform. For more information, see the Authentication Overview.

rpc CreateTransferJob(CreateTransferJobRequest) returns (TransferJob)

Creates a transfer job that runs periodically.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc DeleteAgentPool(DeleteAgentPoolRequest) returns (Empty)

Deletes an agent pool.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc DeleteTransferJob(DeleteTransferJobRequest) returns (Empty)

Deletes a transfer job. Deleting a transfer job sets its status to DELETED.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc GetAgentPool(GetAgentPoolRequest) returns (AgentPool)

Gets an agent pool.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc GetGoogleServiceAccount(GetGoogleServiceAccountRequest) returns (GoogleServiceAccount)

Returns the Google service account that is used by Storage Transfer Service to access buckets in the project where transfers run or in other projects. Each Google service account is associated with one Google Cloud project. Users should add this service account to the Google Cloud Storage bucket ACLs to grant access to Storage Transfer Service. This service account is created and owned by Storage Transfer Service and can only be used by Storage Transfer Service.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc GetTransferJob(GetTransferJobRequest) returns (TransferJob)

Gets a transfer job.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc ListAgentPools(ListAgentPoolsRequest) returns (ListAgentPoolsResponse)

Lists agent pools.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc ListTransferJobs(ListTransferJobsRequest) returns (ListTransferJobsResponse)

Lists transfer jobs.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc PauseTransferOperation(PauseTransferOperationRequest) returns (Empty)

Pauses a transfer operation.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc ResumeTransferOperation(ResumeTransferOperationRequest) returns (Empty)

Resumes a transfer operation that is paused.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc RunTransferJob(RunTransferJobRequest) returns (Operation)

Starts a new operation for the specified transfer job. A TransferJob has a maximum of one active TransferOperation. If this method is called while a TransferOperation is active, an error is returned.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc UpdateAgentPool(UpdateAgentPoolRequest) returns (AgentPool)

Updates an existing agent pool resource.

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.

rpc UpdateTransferJob(UpdateTransferJobRequest) returns (TransferJob)

Updates a transfer job. Updating a job's transfer spec does not affect transfer operations that are already running.

Note: the job's status field can be modified using this RPC (for example, to set a job's status to DELETED, DISABLED, or ENABLED).

Authorization requires the following OAuth scope: https://www.googleapis.com/auth/cloud-platform.
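The status-change behavior of UpdateTransferJob can be sketched as a request payload. This is a minimal sketch in plain Python dicts; the field names job_name, project_id, and transfer_job are assumptions based on the request messages elsewhere in this reference, while the status values (ENABLED, DISABLED, DELETED) come from the note above.

```python
# Sketch of an UpdateTransferJobRequest payload that disables a job.
# Field names (job_name, project_id, transfer_job) are assumptions based on
# the other request messages in this reference; the status values
# (ENABLED, DISABLED, DELETED) are documented above.
def make_status_update(job_name: str, project_id: str, status: str) -> dict:
    """Build a request body that sets a transfer job's status."""
    assert status in ("ENABLED", "DISABLED", "DELETED")
    return {
        "job_name": job_name,
        "project_id": project_id,
        "transfer_job": {"status": status},
    }

req = make_status_update("transferJobs/12345", "my-project", "DISABLED")
```

Per the note above, changing the transfer spec in the same call would not affect operations already in flight.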
AgentPool

Represents an agent pool.

| Fields | |
|---|---|
| name | Required. Specifies a unique string that identifies the agent pool. Format: |
| display_name | Specifies the client-specified AgentPool description. |
| state | Output only. Specifies the state of the AgentPool. |
| bandwidth_limit | Specifies the bandwidth limit details. If this field is unspecified, the default value is 'No Limit'. |

BandwidthLimit

Specifies a bandwidth limit for an agent pool.

| Fields | |
|---|---|
| limit_mbps | Bandwidth rate in megabytes per second, distributed across all the agents in the pool. |

State

The state of an AgentPool.

| Enums | |
|---|---|
| STATE_UNSPECIFIED | Default value. This value is unused. |
| CREATING | An initialization state. During this stage, resources are allocated for the AgentPool. |
| CREATED | The AgentPool is created and ready for use. In this state, agents can join the AgentPool and participate in the transfer jobs in that pool. |
| DELETING | AgentPool deletion has been initiated, and all the resources are scheduled to be cleaned up and freed. |
AwsAccessKey

AWS access key (see AWS Security Credentials).

For information on our data retention policy for user credentials, see User credentials.

| Fields | |
|---|---|
| access_key_id | Required. AWS access key ID. |
| secret_access_key | Required. AWS secret access key. This field is not returned in RPC responses. |
AwsS3CompatibleData

An AwsS3CompatibleData resource.

bucket_name (string)
Required. Specifies the name of the bucket.

path (string)
Specifies the root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.

endpoint (string)
Required. Specifies the endpoint of the storage service.

region (string)
Specifies the region to sign requests with. This can be left blank if requests should be signed with an empty region.

Union field data_provider. Specifies the metadata of the S3 compatible data provider. Each provider may contain some attributes that do not apply to all S3-compatible data providers. When not specified, S3CompatibleMetadata is used by default. data_provider can be only one of the following:
An AwsS3Data resource can be a data source, but not a data sink. In an AwsS3Data resource, an object's name is the S3 object's key name.
bucket_name
string
Required. S3 Bucket name (see Creating a bucket ).
aws_access_key
Input only. AWS access key used to sign the API requests to the AWS S3 bucket. Permissions on the bucket must be granted to the access ID of the AWS access key.
For information on our data retention policy for user credentials, see User credentials .
path
string
Root path to transfer objects.
Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'.
role_arn
string
The Amazon Resource Name (ARN) of the role to support temporary credentials via AssumeRoleWithWebIdentity
. For more information about ARNs, see IAM ARNs
.
When a role ARN is provided, Transfer Service fetches temporary credentials for the session using a AssumeRoleWithWebIdentity
call for the provided role using the GoogleServiceAccount
for this project.
cloudfront_domain
string
Optional. The CloudFront distribution domain name pointing to this bucket, to use when fetching.
See Transfer from S3 via CloudFront for more information.
Format: https://{id}.cloudfront.net
or any valid custom domain. Must begin with https://
.
credentials_secret
string
Optional. The Resource name of a secret in Secret Manager.
AWS credentials must be stored in Secret Manager in JSON format:
{ "access_key_id": "ACCESS_KEY_ID", "secret_access_key": "SECRET_ACCESS_KEY" }
GoogleServiceAccount
must be granted roles/secretmanager.secretAccessor
for the resource.
See Configure access to a source: Amazon S3 for more information.
If credentials_secret
is specified, do not specify role_arn
or aws_access_key
.
Format: projects/{project_number}/secrets/{secret_name}
Union field private_network
.
private_network
can be only one of the following:
managed_private_network
bool
Egress bytes over a Google-managed private network. This network is shared between other users of Storage Transfer Service.
AzureBlobStorageData

An AzureBlobStorageData resource can be a data source, but not a data sink. An AzureBlobStorageData resource represents one Azure container. The storage account determines the Azure endpoint. In an AzureBlobStorageData resource, a blob's name is the Azure Blob Storage blob's key name.

| Fields | |
|---|---|
| storage_account | Required. The name of the Azure Storage account. |
| azure_credentials | Required. Input only. Credentials used to authenticate API requests to Azure. For information on our data retention policy for user credentials, see User credentials. |
| container | Required. The container to transfer from the Azure Storage account. |
| path | Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. |
| credentials_secret | Optional. The resource name of a secret in Secret Manager. The Azure SAS token must be stored in Secret Manager in JSON format: { "sas_token" : "SAS_TOKEN" } See Configure access to a source: Microsoft Azure Blob Storage for more information. If credentials_secret is specified, do not specify azure_credentials. Format: projects/{project_number}/secrets/{secret_name} |
| federated_identity_config | Optional. Federated identity config of a user registered Azure application. If federated_identity_config is specified, do not specify azure_credentials or credentials_secret. |

FederatedIdentityConfig

The identity of an Azure application through which Storage Transfer Service can authenticate requests using Azure workload identity federation.

Storage Transfer Service can issue requests to Azure Storage through registered Azure applications, eliminating the need to pass credentials to Storage Transfer Service directly.

To configure federated identity, see Configure access to Microsoft Azure Storage.

| Fields | |
|---|---|
| client_id | Required. The client (application) ID of the application with federated credentials. |
| tenant_id | Required. The tenant (directory) ID of the application with federated credentials. |

AzureCredentials

Azure credentials.

For information on our data retention policy for user credentials, see User credentials.

| Fields | |
|---|---|
| sas_token | Required. Azure shared access signature (SAS). For more information about SAS, see Grant limited access to Azure Storage resources using shared access signatures (SAS). |
CreateAgentPoolRequest

Specifies the request passed to CreateAgentPool.

project_id (string)
Required. The ID of the Google Cloud project that owns the agent pool. Authorization requires the following IAM permission on the specified resource projectId:
- storagetransfer.agentpools.create

agent_pool
Required. The agent pool to create.

agent_pool_id (string)
Required. The ID of the agent pool to create. The agent_pool_id must meet the following requirements:
- Length of 128 characters or less.
- Not start with the string goog.
- Start with a lowercase ASCII character, followed by:
  - Zero or more: lowercase Latin alphabet characters, numerals, hyphens (-), periods (.), underscores (_), or tildes (~).
  - One or more numerals or lowercase ASCII characters.

As expressed by the regular expression: ^(?!goog)[a-z]([a-z0-9-._~]*[a-z0-9])?$
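The agent_pool_id constraints above can be checked directly with the documented regular expression. A minimal sketch; the function name is illustrative, and the length check is applied separately since the regex does not encode it.

```python
import re

# The documented agent_pool_id pattern, copied from the constraint above.
AGENT_POOL_ID_RE = re.compile(r"^(?!goog)[a-z]([a-z0-9-._~]*[a-z0-9])?$")

def is_valid_agent_pool_id(pool_id: str) -> bool:
    """Check an agent_pool_id against the documented regex and length limit."""
    return len(pool_id) <= 128 and AGENT_POOL_ID_RE.match(pool_id) is not None
```

For example, "transfer-pool-1" satisfies the pattern, while "goog-pool" is rejected by the negative lookahead and "Pool" is rejected because it does not start with a lowercase ASCII character.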
CreateTransferJobRequest

Request passed to CreateTransferJob.

transfer_job
Required. The job to create. Authorization requires the following IAM permission on the specified resource transferJob:
- storagetransfer.jobs.create

DeleteAgentPoolRequest

Specifies the request passed to DeleteAgentPool.

name (string)
Required. The name of the agent pool to delete. Authorization requires the following IAM permission on the specified resource name:
- storagetransfer.agentpools.delete

DeleteTransferJobRequest

Request passed to DeleteTransferJob.

job_name (string)
Required. The job to delete. Authorization requires the following IAM permission on the specified resource jobName:
- storagetransfer.jobs.delete

project_id (string)
Required. The ID of the Google Cloud project that owns the job.
ErrorLogEntry

An entry describing an error that has occurred.

| Fields | |
|---|---|
| url | Required. A URL that refers to the target (a data source, a data sink, or an object) with which the error is associated. |
| error_details[] | A list of messages that carry the error details. |

ErrorSummary

A summary of errors by error code, plus a count and sample error log entries.

| Fields | |
|---|---|
| error_code | Required. |
| error_count | Required. Count of this type of error. |
| error_log_entries[] | Error samples. At most 5 error log entries are recorded for a given error code for a single transfer operation. |

EventStream

Specifies the event-driven transfer options. Event-driven transfers listen to an event stream to transfer updated files.

| Fields | |
|---|---|
| name | Required. Specifies a unique name of the resource, such as an AWS SQS ARN in the form 'arn:aws:sqs:region:account_id:queue_name', or a Pub/Sub subscription resource name in the form 'projects/{project}/subscriptions/{sub}'. |
| event_stream_start_time | Specifies the date and time that Storage Transfer Service starts listening for events from this stream. If no start time is specified, or the start time is in the past, Storage Transfer Service starts listening immediately. |
| event_stream_expiration_time | Specifies the date and time at which Storage Transfer Service stops listening for events from this stream. After this time, any transfers in progress will complete, but no new transfers are initiated. |
GcsData

In a GcsData resource, an object's name is the Cloud Storage object's name, and its "last modification time" refers to the object's updated property, which changes when the content or the metadata of the object is updated.

bucket_name (string)
Required. Cloud Storage bucket name. Must meet Bucket Name Requirements.

path (string)
Root path to transfer objects. Must be an empty string or full path name that ends with a '/'. This field is treated as an object prefix. As such, it should generally not begin with a '/'. The root path value must meet Object Name Requirements.

managed_folder_transfer_enabled (bool)
Preview. Enables the transfer of managed folders between Cloud Storage buckets. Set this option on the gcs_data_source. If set to true:
- Managed folders in the source bucket are transferred to the destination bucket.
- Managed folders in the destination bucket are overwritten. Other OVERWRITE options are not supported.
GetAgentPoolRequest

Specifies the request passed to GetAgentPool.

name (string)
Required. The name of the agent pool to get. Authorization requires the following IAM permission on the specified resource name:
- storagetransfer.agentpools.get

GetGoogleServiceAccountRequest

Request passed to GetGoogleServiceAccount.

project_id (string)
Required. The ID of the Google Cloud project that the Google service account is associated with. Authorization requires the following IAM permission on the specified resource projectId:
- storagetransfer.projects.getServiceAccount

GetTransferJobRequest

Request passed to GetTransferJob.

job_name (string)
Required. The job to get. Authorization requires the following IAM permission on the specified resource jobName:
- storagetransfer.jobs.get

project_id (string)
Required. The ID of the Google Cloud project that owns the job.

GoogleServiceAccount

Google service account.

| Fields | |
|---|---|
| account_email | Email address of the service account. |
| subject_id | Unique identifier for the service account. |

HdfsData

An HdfsData resource specifies a path within an HDFS entity (e.g. a cluster). All cluster-specific settings, such as namenodes and ports, are configured on the transfer agents servicing requests, so HdfsData only contains the root path to the data in the transfer.

| Fields | |
|---|---|
| path | Root path to transfer files. |
HttpData

An HttpData resource specifies a list of objects on the web to be transferred over HTTP. The information of the objects to be transferred is contained in a file referenced by a URL. The first line in the file must be "TsvHttpData-1.0", which specifies the format of the file. Subsequent lines specify the information of the list of objects, one object per list entry. Each entry has the following tab-delimited fields:

- HTTP URL: the location of the object.
- Length: the size of the object in bytes.
- MD5: the base64-encoded MD5 hash of the object.

For an example of a valid TSV file, see Transferring data from URLs.

When transferring data based on a URL list, keep the following in mind:

- When an object located at http(s)://hostname:port/<URL-path> is transferred to a data sink, the name of the object at the data sink is <hostname>/<URL-path>.
- If the specified size of an object does not match the actual size of the object fetched, the object is not transferred.
- If the specified MD5 does not match the MD5 computed from the transferred bytes, the object transfer fails.
- Ensure that each URL you specify is publicly accessible. For example, in Cloud Storage you can share an object publicly and get a link to it.
- Storage Transfer Service obeys robots.txt rules and requires the source HTTP server to support Range requests and to return a Content-Length header in each response.
- ObjectConditions have no effect when filtering objects to transfer.
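The TsvHttpData-1.0 format above can be generated mechanically. A minimal sketch; the helper name and the example URL are illustrative, and the length and MD5 are derived from object content as the format requires.

```python
import base64
import hashlib

def make_url_list(objects: list) -> str:
    """Build a TsvHttpData-1.0 URL-list file from (url, content) pairs.

    Each entry line is: URL <tab> length-in-bytes <tab> base64-encoded MD5.
    """
    lines = ["TsvHttpData-1.0"]  # required first line, per the format above
    for url, content in objects:
        md5_b64 = base64.b64encode(hashlib.md5(content).digest()).decode("ascii")
        lines.append(f"{url}\t{len(content)}\t{md5_b64}")
    return "\n".join(lines)

tsv = make_url_list([("https://example.com/data/file1.bin", b"hello")])
```

Note that, per the list above, a mismatch between the stated length and the fetched size causes the object to be skipped, and an MD5 mismatch fails the object transfer, so both fields must be computed from the exact bytes the server will serve.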
| Fields | |
|---|---|
| list_url | Required. The URL that points to the file that stores the object list entries. This file must allow public access. The URL is either an HTTP/HTTPS address (e.g. |
ListAgentPoolsRequest

The request passed to ListAgentPools.

project_id (string)
Required. The ID of the Google Cloud project that owns the job. Authorization requires the following IAM permission on the specified resource projectId:
- storagetransfer.agentpools.list

filter (string)
An optional list of query parameters specified as JSON text in the form of:

{"agentPoolNames":["agentpool1","agentpool2",...]}

Since agentPoolNames supports multiple values, its values must be specified with array notation. When the filter is either empty or not provided, the list returns all agent pools for the project.

page_size (int32)
The list page size. The max allowed value is 256.

page_token (string)
The list page token.
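The filter string above is JSON text, so it can be produced with a standard JSON serializer. A minimal sketch; the helper name is illustrative, and compact separators are used so the output contains no spaces.

```python
import json

def agent_pool_filter(pool_names: list) -> str:
    """Build the ListAgentPools filter JSON, using array notation for names."""
    return json.dumps({"agentPoolNames": pool_names}, separators=(",", ":"))

f = agent_pool_filter(["agentpool1", "agentpool2"])
# f == '{"agentPoolNames":["agentpool1","agentpool2"]}'
```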
ListAgentPoolsResponse

Response from ListAgentPools.

| Fields | |
|---|---|
| agent_pools[] | A list of agent pools. |
| next_page_token | The list next page token. |
ListTransferJobsRequest

projectId, jobNames, and jobStatuses are query parameters that can be specified when listing transfer jobs.

filter (string)
Required. A list of query parameters specified as JSON text in the form of:

{
"projectId":"my_project_id",
"jobNames":["jobid1","jobid2",...],
"jobStatuses":["status1","status2",...],
"dataBackend":"QUERY_REPLICATION_CONFIGS",
"sourceBucket":"source-bucket-name",
"sinkBucket":"sink-bucket-name",
}

The JSON formatting in the example is for display only; provide the query parameters without spaces or line breaks.

- projectId is required.
- Since jobNames and jobStatuses support multiple values, their values must be specified with array notation. jobNames and jobStatuses are optional. Valid values are case-insensitive.
- Specify "dataBackend":"QUERY_REPLICATION_CONFIGS" to return a list of cross-bucket replication jobs.
- Limit the results to jobs from a particular bucket with sourceBucket and/or to a particular bucket with sinkBucket.

Authorization requires the following IAM permission on the specified resource filter:
- storagetransfer.jobs.list

page_size (int32)
The list page size. The max allowed value is 256.

page_token (string)
The list page token.
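Because the filter must be provided without spaces or line breaks, it is safest to serialize it rather than hand-write it. A minimal sketch under the documented parameter names; only projectId is required, and the multi-value parameters use array notation.

```python
import json

def jobs_filter(project_id: str, job_names=None, job_statuses=None) -> str:
    """Build a compact ListTransferJobs filter (no spaces or line breaks)."""
    params = {"projectId": project_id}  # required
    if job_names:
        params["jobNames"] = job_names          # optional, array notation
    if job_statuses:
        params["jobStatuses"] = job_statuses    # optional, array notation
    return json.dumps(params, separators=(",", ":"))

f = jobs_filter("my_project_id", job_statuses=["ENABLED"])
# f == '{"projectId":"my_project_id","jobStatuses":["ENABLED"]}'
```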
ListTransferJobsResponse

Response from ListTransferJobs.

| Fields | |
|---|---|
| transfer_jobs[] | A list of transfer jobs. |
| next_page_token | The list next page token. |
LoggingConfig

Specifies the logging behavior for transfer operations.

Logs can be sent to Cloud Logging for all transfer types. See Read transfer logs for details.

| Fields | |
|---|---|
| log_actions[] | Specifies the actions to be logged. If empty, no logs are generated. |
| log_action_states[] | States in which log_actions are logged. |
| enable_onprem_gcs_transfer_logs | For PosixFilesystem transfers, enables file system transfer logs instead of, or in addition to, Cloud Logging. This option ignores [LoggableAction] and [LoggableActionState]. If these are set, Cloud Logging will also be enabled for this transfer. |

LoggableAction

Loggable actions.

| Enums | |
|---|---|
| LOGGABLE_ACTION_UNSPECIFIED | Default value. This value is unused. |
| FIND | Listing objects in a bucket. |
| DELETE | Deleting objects at the source or the destination. |
| COPY | Copying objects to the destination. |

LoggableActionState

Loggable action states.

| Enums | |
|---|---|
| LOGGABLE_ACTION_STATE_UNSPECIFIED | Default value. This value is unused. |
| SUCCEEDED | A LoggableAction completed successfully. SUCCEEDED actions are logged as INFO. |
| FAILED | A LoggableAction terminated in an error state. FAILED actions are logged as ERROR. |
| SKIPPED | The COPY action was skipped for this file. Only supported for agent-based transfers. SKIPPED actions are logged as INFO. |
MetadataOptions

Specifies the metadata options for running a transfer.

| Fields | |
|---|---|
| symlink | Specifies how symlinks should be handled by the transfer. By default, symlinks are not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers. |
| mode | Specifies how each file's mode attribute should be handled by the transfer. By default, mode is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers. |
| gid | Specifies how each file's POSIX group ID (GID) attribute should be handled by the transfer. By default, GID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers. |
| uid | Specifies how each file's POSIX user ID (UID) attribute should be handled by the transfer. By default, UID is not preserved. Only applicable to transfers involving POSIX file systems, and ignored for other transfers. |
| acl | Specifies how each object's ACLs should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as ACL_DESTINATION_BUCKET_DEFAULT. |
| storage_class | Specifies the storage class to set on objects being transferred to Google Cloud Storage buckets. If unspecified, the default behavior is the same as STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT. |
| temporary_hold | Specifies how each object's temporary hold status should be preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TEMPORARY_HOLD_PRESERVE. |
| kms_key | Specifies how each object's Cloud KMS customer-managed encryption key (CMEK) is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as KMS_KEY_DESTINATION_BUCKET_DEFAULT. |
| time_created | Specifies how each object's timeCreated metadata is preserved for transfers between Google Cloud Storage buckets. If unspecified, the default behavior is the same as TIME_CREATED_SKIP. |

Acl

Options for handling Cloud Storage object ACLs.

| Enums | |
|---|---|
| ACL_UNSPECIFIED | ACL behavior is unspecified. |
| ACL_DESTINATION_BUCKET_DEFAULT | Use the destination bucket's default object ACLs, if applicable. |
| ACL_PRESERVE | Preserve the object's original ACLs. This requires the service account to have storage.objects.getIamPolicy permission for the source object. Uniform bucket-level access must not be enabled on either the source or destination buckets. |

GID

Options for handling the file GID attribute.

| Enums | |
|---|---|
| GID_UNSPECIFIED | GID behavior is unspecified. |
| GID_SKIP | Do not preserve GID during a transfer job. |
| GID_NUMBER | Preserve GID during a transfer job. |

KmsKey

Options for handling the KmsKey setting for Google Cloud Storage objects.

| Enums | |
|---|---|
| KMS_KEY_UNSPECIFIED | KmsKey behavior is unspecified. |
| KMS_KEY_DESTINATION_BUCKET_DEFAULT | Use the destination bucket's default encryption settings. |
| KMS_KEY_PRESERVE | Preserve the object's original Cloud KMS customer-managed encryption key (CMEK) if present. Objects that do not use a Cloud KMS encryption key will be encrypted using the destination bucket's encryption settings. |

Mode

Options for handling the file mode attribute.

| Enums | |
|---|---|
| MODE_UNSPECIFIED | Mode behavior is unspecified. |
| MODE_SKIP | Do not preserve mode during a transfer job. |
| MODE_PRESERVE | Preserve mode during a transfer job. |

StorageClass

Options for handling the Google Cloud Storage object storage class.

| Enums | |
|---|---|
| STORAGE_CLASS_UNSPECIFIED | Storage class behavior is unspecified. |
| STORAGE_CLASS_DESTINATION_BUCKET_DEFAULT | Use the destination bucket's default storage class. |
| STORAGE_CLASS_PRESERVE | Preserve the object's original storage class. This is only supported for transfers from Google Cloud Storage buckets. REGIONAL and MULTI_REGIONAL storage classes will be mapped to STANDARD to ensure they can be written to the destination bucket. |
| STORAGE_CLASS_STANDARD | Set the storage class to STANDARD. |
| STORAGE_CLASS_NEARLINE | Set the storage class to NEARLINE. |
| STORAGE_CLASS_COLDLINE | Set the storage class to COLDLINE. |
| STORAGE_CLASS_ARCHIVE | Set the storage class to ARCHIVE. |

Symlink

Whether symlinks should be skipped or preserved during a transfer job.

| Enums | |
|---|---|
| SYMLINK_UNSPECIFIED | Symlink behavior is unspecified. |
| SYMLINK_SKIP | Do not preserve symlinks during a transfer job. |
| SYMLINK_PRESERVE | Preserve symlinks during a transfer job. |

TemporaryHold

Options for handling temporary holds for Google Cloud Storage objects.

| Enums | |
|---|---|
| TEMPORARY_HOLD_UNSPECIFIED | Temporary hold behavior is unspecified. |
| TEMPORARY_HOLD_SKIP | Do not set a temporary hold on the destination object. |
| TEMPORARY_HOLD_PRESERVE | Preserve the object's original temporary hold status. |

TimeCreated

Options for handling timeCreated metadata for Google Cloud Storage objects.

| Enums | |
|---|---|
| TIME_CREATED_UNSPECIFIED | TimeCreated behavior is unspecified. |
| TIME_CREATED_SKIP | Do not preserve the timeCreated metadata from the source object. |
| TIME_CREATED_PRESERVE_AS_CUSTOM_TIME | Preserves the source object's timeCreated or lastModified metadata in the customTime field in the destination object. Note that any value stored in the source object's customTime field will not be propagated to the destination object. |

UID

Options for handling the file UID attribute.

| Enums | |
|---|---|
| UID_UNSPECIFIED | UID behavior is unspecified. |
| UID_SKIP | Do not preserve UID during a transfer job. |
| UID_NUMBER | Preserve UID during a transfer job. |
NotificationConfig

Specification to configure notifications published to Pub/Sub. Notifications are published to the customer-provided topic using the following PubsubMessage.attributes:

- "eventType": one of the EventType values
- "payloadFormat": one of the PayloadFormat values
- "projectId": the project_id of the TransferOperation
- "transferJobName": the transfer_job_name of the TransferOperation
- "transferOperationName": the name of the TransferOperation

The PubsubMessage.data contains a TransferOperation resource formatted according to the specified PayloadFormat.
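A subscriber can rely on the attribute keys listed above. The sketch below shows a message's attributes as a plain dict; the attribute keys come from the list above, while the helper name and all attribute values are illustrative placeholders.

```python
def parse_notification(attributes: dict) -> tuple:
    """Extract the event type and operation name from a Pub/Sub notification,
    using the documented PubsubMessage.attributes keys."""
    return attributes["eventType"], attributes["transferOperationName"]

# Illustrative attributes; the keys match the documented list, the values
# are placeholders.
sample = {
    "eventType": "TRANSFER_OPERATION_SUCCESS",
    "payloadFormat": "JSON",
    "projectId": "my-project",
    "transferJobName": "transferJobs/12345",
    "transferOperationName": "transferOperations/abc",
}
event, op = parse_notification(sample)
```

When payload_format is JSON, the message's data field would additionally carry the TransferOperation resource itself.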
| Fields | |
|---|---|
| pubsub_topic | Required. The Topic.name of the Pub/Sub topic to which to publish notifications. Must be of the format: projects/{project}/topics/{topic}. |
| event_types[] | Event types for which a notification is desired. If empty, send notifications for all event types. |
| payload_format | Required. The desired format of the notification message payloads. |

EventType

Enum for specifying event types for which notifications are to be published.

Additional event types may be added in the future. Clients should either safely ignore unrecognized event types or explicitly specify which event types they are prepared to accept.

| Enums | |
|---|---|
| EVENT_TYPE_UNSPECIFIED | Illegal value, to avoid allowing a default. |
| TRANSFER_OPERATION_SUCCESS | TransferOperation completed with status SUCCESS. |
| TRANSFER_OPERATION_FAILED | TransferOperation completed with status FAILED. |
| TRANSFER_OPERATION_ABORTED | TransferOperation completed with status ABORTED. |

PayloadFormat

Enum for specifying the format of a notification message's payload.

| Enums | |
|---|---|
| PAYLOAD_FORMAT_UNSPECIFIED | Illegal value, to avoid allowing a default. |
| NONE | No payload is included with the notification. |
| JSON | TransferOperation is formatted as a JSON response, in application/json. |
ObjectConditions
Conditions that determine which objects are transferred. Applies only to Cloud Data Sources such as S3, Azure, and Cloud Storage.
The "last modification time" refers to the time of the last change to the object's content or metadata — specifically, this is the updated
property of Cloud Storage objects, the LastModified
field of S3 objects, and the Last-Modified
header of Azure blobs.
For S3 objects, the LastModified
value is the time the object begins uploading. If the object meets your "last modification time" criteria, but has not finished uploading, the object is not transferred. See Transfer from Amazon S3 to Cloud Storage
for more information.
Transfers with a PosixFilesystem
source or destination don't support ObjectConditions
.
min_time_elapsed_since_last_modification
Ensures that objects are not transferred until a specific minimum time has elapsed after the "last modification time". When a TransferOperation
begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time
of the TransferOperation
and the "last modification time" of the object is equal to or greater than the value of min_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
max_time_elapsed_since_last_modification
Ensures that objects are not transferred if a specific maximum time has elapsed since the "last modification time". When a TransferOperation
begins, objects with a "last modification time" are transferred only if the elapsed time between the start_time
of the TransferOperation
and the "last modification time" of the object is less than the value of max_time_elapsed_since_last_modification. Objects that do not have a "last modification time" are also transferred.
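The two conditions above can be summarized with a small sketch. This is not the service's implementation, only a model of the documented rule: an object is eligible when the elapsed time between the operation's start_time and the object's "last modification time" is at least the minimum (if set) and strictly less than the maximum (if set), and objects with no "last modification time" always pass.

```python
from datetime import datetime, timedelta, timezone

def is_transferred(start_time, last_modified, min_elapsed=None, max_elapsed=None):
    """Model of the min/max time-elapsed conditions described above.

    last_modified is None for objects without a "last modification time";
    such objects are always transferred.
    """
    if last_modified is None:
        return True
    elapsed = start_time - last_modified
    if min_elapsed is not None and elapsed < min_elapsed:
        return False  # too recently modified
    if max_elapsed is not None and elapsed >= max_elapsed:
        return False  # modified too long ago
    return True

start = datetime(2024, 6, 2, tzinfo=timezone.utc)   # hypothetical operation start
fresh = start - timedelta(minutes=30)               # modified 30 minutes before the run
stale = start - timedelta(days=10)                  # modified 10 days before the run

# min_time_elapsed_since_last_modification = 1 hour: the fresh object is skipped.
print(is_transferred(start, fresh, min_elapsed=timedelta(hours=1)))   # False
# max_time_elapsed_since_last_modification = 7 days: the stale object is skipped.
print(is_transferred(start, stale, max_elapsed=timedelta(days=7)))    # False
# No "last modification time": always transferred.
print(is_transferred(start, None, min_elapsed=timedelta(hours=1)))    # True
```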
include_prefixes[]
string
If you specify include_prefixes
, Storage Transfer Service uses the items in the include_prefixes
array to determine which objects to include in a transfer. Objects must start with one of the matching include_prefixes
for inclusion in the transfer. If exclude_prefixes
is specified, objects must not start with any of the exclude_prefixes
specified for inclusion in the transfer.
The following are requirements of include_prefixes
:
- Each include-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported.
- Each include-prefix must omit the leading slash. For example, to include the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the include-prefix as logs/y=2015/requests.gz.
- None of the include-prefix values can be empty, if specified.
- Each include-prefix must include a distinct portion of the object namespace. No include-prefix may be a prefix of another include-prefix.
The max size of include_prefixes is 1000.
For more information, see Filtering objects from transfers.
exclude_prefixes[]
string
If you specify exclude_prefixes
, Storage Transfer Service uses the items in the exclude_prefixes
array to determine which objects to exclude from a transfer. Objects must not start with one of the matching exclude_prefixes
for inclusion in a transfer.
The following are requirements of exclude_prefixes
:
- Each exclude-prefix can contain any sequence of Unicode characters, to a max length of 1024 bytes when UTF8-encoded, and must not contain Carriage Return or Line Feed characters. Wildcard matching and regular expression matching are not supported.
- Each exclude-prefix must omit the leading slash. For example, to exclude the object s3://my-aws-bucket/logs/y=2015/requests.gz, specify the exclude-prefix as logs/y=2015/requests.gz.
- None of the exclude-prefix values can be empty, if specified.
- Each exclude-prefix must exclude a distinct portion of the object namespace. No exclude-prefix may be a prefix of another exclude-prefix.
- If include_prefixes is specified, then each exclude-prefix must start with the value of a path explicitly included by include_prefixes.
The max size of exclude_prefixes is 1000.
For more information, see Filtering objects from transfers.
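The prefix requirements and filtering behavior above can be sketched as client-side checks. This is an illustrative model of the documented rules, not the service's own validation code.

```python
def check_prefixes(include, exclude):
    """Sketch of the documented requirements on include/exclude prefixes."""
    for plist, label in ((include, "include"), (exclude, "exclude")):
        if len(plist) > 1000:
            raise ValueError(f"max size of {label}_prefixes is 1000")
        for p in plist:
            if not p:
                raise ValueError("prefix values must not be empty")
            if p.startswith("/"):
                raise ValueError("prefixes must omit the leading slash")
            if len(p.encode("utf-8")) > 1024:
                raise ValueError("prefix exceeds 1024 bytes when UTF8-encoded")
            if "\r" in p or "\n" in p:
                raise ValueError("prefix contains Carriage Return or Line Feed")
        # No prefix in a list may be a prefix of another in the same list.
        for a in plist:
            for b in plist:
                if a != b and b.startswith(a):
                    raise ValueError(f"{a!r} is a prefix of {b!r}")
    # Each exclude-prefix must start with a path included by include_prefixes.
    if include:
        for e in exclude:
            if not any(e.startswith(i) for i in include):
                raise ValueError(f"exclude-prefix {e!r} not under any include-prefix")

def selected(name, include, exclude):
    """Whether an object name passes the prefix filters."""
    if include and not any(name.startswith(p) for p in include):
        return False
    return not any(name.startswith(p) for p in exclude)

check_prefixes(["logs/y=2015/"], ["logs/y=2015/tmp/"])
print(selected("logs/y=2015/requests.gz", ["logs/y=2015/"], ["logs/y=2015/tmp/"]))  # True
print(selected("logs/y=2015/tmp/x.gz", ["logs/y=2015/"], ["logs/y=2015/tmp/"]))     # False
```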
last_modified_since
If specified, only objects with a "last modification time" on or after this timestamp and objects that don't have a "last modification time" are transferred.
The last_modified_since
and last_modified_before
fields can be used together for chunked data processing. For example, consider a script that processes each day's worth of data at a time. For that you'd set each of the fields as follows:
- last_modified_since to the start of the day
- last_modified_before to the end of the day
last_modified_before
If specified, only objects with a "last modification time" before this timestamp and objects that don't have a "last modification time" are transferred.
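The chunked daily-processing pattern described above can be sketched as follows; the helper name and REST-style field names are illustrative.

```python
from datetime import datetime, timedelta, timezone

def day_window(day):
    """Build last_modified_since/last_modified_before covering one UTC day,
    for a job that processes each day's worth of data at a time."""
    start = datetime(day.year, day.month, day.day, tzinfo=timezone.utc)
    return {
        "lastModifiedSince": start.isoformat(),
        "lastModifiedBefore": (start + timedelta(days=1)).isoformat(),
    }

print(day_window(datetime(2015, 6, 1)))
# {'lastModifiedSince': '2015-06-01T00:00:00+00:00',
#  'lastModifiedBefore': '2015-06-02T00:00:00+00:00'}
```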
PauseTransferOperationRequest
Request passed to PauseTransferOperation.
name
string
Required. The name of the transfer operation.
Authorization requires the following IAM permission on the specified resource name:
- storagetransfer.operations.pause
PosixFilesystem
A POSIX filesystem resource.
Field | Description
---|---
root_directory | Root directory path to the filesystem.
ReplicationSpec
Specifies the configuration for a cross-bucket replication job. Cross-bucket replication copies new or updated objects from a source Cloud Storage bucket to a destination Cloud Storage bucket. Existing objects in the source bucket are not copied by a new cross-bucket replication job.
object_conditions
Object conditions that determine which objects are transferred. For replication jobs, only include_prefixes
and exclude_prefixes
are supported.
Union field data_source. The data source to be replicated. data_source can be only one of the following:
gcs_data_source
The Cloud Storage bucket from which to replicate objects.
Union field data_sink. The destination for replicated objects. data_sink can be only one of the following:
gcs_data_sink
The Cloud Storage bucket to which to replicate objects.
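An illustrative ReplicationSpec in REST-style JSON follows. The bucket names and prefix are placeholders; only include_prefixes and exclude_prefixes are supported in the object conditions, as noted above.

```python
# Illustrative ReplicationSpec (REST-style JSON field names).
# Bucket names are hypothetical placeholders.
replication_spec = {
    "gcsDataSource": {"bucketName": "source-bucket"},
    "gcsDataSink": {"bucketName": "destination-bucket"},
    # For replication jobs, only include_prefixes/exclude_prefixes apply.
    "objectConditions": {"includePrefixes": ["images/"]},
}
```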
ResumeTransferOperationRequest
Request passed to ResumeTransferOperation.
name
string
Required. The name of the transfer operation.
Authorization requires the following IAM permission on the specified resource name:
- storagetransfer.operations.resume
RunTransferJobRequest
Request passed to RunTransferJob.
job_name
string
Required. The name of the transfer job.
Authorization requires the following IAM permission on the specified resource jobName:
- storagetransfer.jobs.run
project_id
string
Required. The ID of the Google Cloud project that owns the transfer job.
S3CompatibleMetadata
S3CompatibleMetadata contains the metadata fields that apply to the basic types of S3-compatible data providers.
Field | Description
---|---
auth_method | Specifies the authentication and authorization method used by the storage service. When not specified, Transfer Service attempts to determine the right auth method to use.
request_model | Specifies the API request model used to call the storage service. When not specified, the default value REQUEST_MODEL_VIRTUAL_HOSTED_STYLE is used.
protocol | Specifies the network protocol of the agent. When not specified, the default value NETWORK_PROTOCOL_HTTPS is used.
list_api | The listing API to use for discovering objects. When not specified, Transfer Service attempts to determine the right API to use.
AuthMethod
The authentication and authorization method used by the storage service.
Enum | Description
---|---
AUTH_METHOD_UNSPECIFIED | AuthMethod is not specified.
AUTH_METHOD_AWS_SIGNATURE_V4 | Auth requests with AWS SigV4.
AUTH_METHOD_AWS_SIGNATURE_V2 | Auth requests with AWS SigV2.
ListApi
The Listing API to use for discovering objects.
Enum | Description
---|---
LIST_API_UNSPECIFIED | ListApi is not specified.
LIST_OBJECTS_V2 | Perform listing using the ListObjectsV2 API.
LIST_OBJECTS | Legacy ListObjects API.
NetworkProtocol
The agent network protocol to access the storage service.
Enum | Description
---|---
NETWORK_PROTOCOL_UNSPECIFIED | NetworkProtocol is not specified.
NETWORK_PROTOCOL_HTTPS | Perform requests using HTTPS.
NETWORK_PROTOCOL_HTTP | Perform requests using HTTP. Not recommended: this sends data in clear text and is only appropriate within a closed network or for publicly available data.
RequestModel
The request model of the API.
Enum | Description
---|---
REQUEST_MODEL_UNSPECIFIED | RequestModel is not specified.
REQUEST_MODEL_VIRTUAL_HOSTED_STYLE | Perform requests using Virtual Hosted Style. Example: https://bucket-name.s3.region.amazonaws.com/key-name
REQUEST_MODEL_PATH_STYLE | Perform requests using Path Style. Example: https://s3.region.amazonaws.com/bucket-name/key-name
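The difference between the two request models is only where the bucket name appears in the URL. A small sketch, using AWS-shaped endpoints as in the examples above (the bucket, key, and region are placeholders):

```python
def object_url(bucket, key, region, request_model="REQUEST_MODEL_VIRTUAL_HOSTED_STYLE"):
    """Sketch of the two S3 request-model URL shapes shown in the table above."""
    if request_model == "REQUEST_MODEL_VIRTUAL_HOSTED_STYLE":
        # Bucket name is part of the hostname.
        return f"https://{bucket}.s3.{region}.amazonaws.com/{key}"
    # Path style: bucket name is the first path segment.
    return f"https://s3.{region}.amazonaws.com/{bucket}/{key}"

print(object_url("bucket-name", "key-name", "us-east-1"))
# https://bucket-name.s3.us-east-1.amazonaws.com/key-name
print(object_url("bucket-name", "key-name", "us-east-1", "REQUEST_MODEL_PATH_STYLE"))
# https://s3.us-east-1.amazonaws.com/bucket-name/key-name
```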
Schedule
Transfers can be scheduled to recur or to run just once.
schedule_start_date
Required. The start date of a transfer. Date boundaries are determined relative to UTC time. If schedule_start_date
and start_time_of_day
are in the past relative to the job's creation time, the transfer starts the day after you schedule the transfer request.
Note: When starting jobs at or near midnight UTC, it is possible that a job starts later than expected. For example, if you send an outbound request on June 1 one millisecond prior to midnight UTC and the Storage Transfer Service server receives the request on June 2, then it creates a TransferJob with schedule_start_date
set to June 2 and a start_time_of_day
set to midnight UTC. The first scheduled TransferOperation
takes place on June 3 at midnight UTC.
schedule_end_date
The last day a transfer runs. Date boundaries are determined relative to UTC time. A job runs once per 24 hours within the following guidelines:
- If schedule_end_date and schedule_start_date are the same and in the future relative to UTC, the transfer is executed only one time.
- If schedule_end_date is later than schedule_start_date and schedule_end_date is in the future relative to UTC, the job runs each day at start_time_of_day through schedule_end_date.
start_time_of_day
The time in UTC that a transfer job is scheduled to run. Transfers may start later than this time.
If start_time_of_day is not specified:
- One-time transfers run immediately.
- Recurring transfers run immediately, and each day at midnight UTC, through schedule_end_date.
If start_time_of_day is specified:
- One-time transfers run at the specified time.
- Recurring transfers run at the specified time each day, through schedule_end_date.
end_time_of_day
The time in UTC that no further transfer operations are scheduled. Combined with schedule_end_date
, end_time_of_day
specifies the end date and time for starting new transfer operations. This field must be greater than or equal to the timestamp corresponding to the combination of schedule_start_date
and start_time_of_day
, and is subject to the following:
- If end_time_of_day is not set and schedule_end_date is set, then a default value of 23:59:59 is used for end_time_of_day.
- If end_time_of_day is set and schedule_end_date is not set, then INVALID_ARGUMENT is returned.
repeat_interval
Interval between the start of each scheduled TransferOperation. If unspecified, the default value is 24 hours. This value may not be less than 1 hour.
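Putting the Schedule fields together, here is an illustrative schedule in REST-style JSON for a job that runs daily at 02:00 UTC for one month. All dates, times, and the interval are placeholder values.

```python
# Illustrative Schedule: run daily at 02:00 UTC from June 1 through June 30.
schedule = {
    "scheduleStartDate": {"year": 2024, "month": 6, "day": 1},
    "scheduleEndDate": {"year": 2024, "month": 6, "day": 30},
    "startTimeOfDay": {"hours": 2, "minutes": 0, "seconds": 0, "nanos": 0},
    # end_time_of_day defaults to 23:59:59 when schedule_end_date is set.
    "repeatInterval": "86400s",  # 24 hours (the default); must be >= 1 hour
}
```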
TransferCounters
A collection of counters that report the progress of a transfer operation.
Field | Description
---|---
objects_found_from_source | Objects found in the data source that are scheduled to be transferred, excluding any that are filtered based on object conditions or skipped due to sync.
bytes_found_from_source | Bytes found in the data source that are scheduled to be transferred, excluding any that are filtered based on object conditions or skipped due to sync.
objects_found_only_from_sink | Objects found only in the data sink that are scheduled to be deleted.
bytes_found_only_from_sink | Bytes found only in the data sink that are scheduled to be deleted.
objects_from_source_skipped_by_sync | Objects in the data source that are not transferred because they already exist in the data sink.
bytes_from_source_skipped_by_sync | Bytes in the data source that are not transferred because they already exist in the data sink.
objects_copied_to_sink | Objects that are copied to the data sink.
bytes_copied_to_sink | Bytes that are copied to the data sink.
objects_deleted_from_source | Objects that are deleted from the data source.
bytes_deleted_from_source | Bytes that are deleted from the data source.
objects_deleted_from_sink | Objects that are deleted from the data sink.
bytes_deleted_from_sink | Bytes that are deleted from the data sink.
objects_from_source_failed | Objects in the data source that failed to be transferred or that failed to be deleted after being transferred.
bytes_from_source_failed | Bytes in the data source that failed to be transferred or that failed to be deleted after being transferred.
objects_failed_to_delete_from_sink | Objects that failed to be deleted from the data sink.
bytes_failed_to_delete_from_sink | Bytes that failed to be deleted from the data sink.
directories_found_from_source | For transfers involving PosixFilesystem only. Number of directories found while listing. For example, if the root directory of the transfer is
directories_failed_to_list_from_source | For transfers involving PosixFilesystem only. Number of listing failures for each directory found at the source. Potential failures when listing a directory include permission failure or block failure. If listing a directory fails, no files in the directory are transferred.
directories_successfully_listed_from_source | For transfers involving PosixFilesystem only. Number of successful listings for each directory found at the source.
intermediate_objects_cleaned_up | Number of intermediate objects successfully cleaned up.
intermediate_objects_failed_cleaned_up | Number of intermediate objects that failed to be cleaned up.
TransferJob
This resource represents the configuration of a transfer job that runs periodically.
Field | Description
---|---
name | A unique name (within the transfer project) assigned when the job is created. If this field is empty in a CreateTransferJobRequest, Storage Transfer Service assigns a unique name. Otherwise, the specified name is used as the unique name for this job. If the specified name is in use by a job, the creation request fails with an This name must start with Non-PosixFilesystem example: PosixFilesystem example: Applications must not rely on the enforcement of naming requirements involving OPI. Invalid job names fail with an
description | A description provided by the user for the job. Its max length is 1024 bytes when Unicode-encoded.
project_id | The ID of the Google Cloud project that owns the job.
service_account | Optional. The user-managed service account to which to delegate service agent permissions. You can grant Cloud Storage bucket permissions to this service account instead of to the Transfer Service service agent. Format is Either the service account email ( See https://cloud.google.com/storage-transfer/docs/delegate-service-agent-permissions for required permissions.
transfer_spec | Transfer specification.
replication_spec | Replication specification.
notification_config | Notification configuration.
logging_config | Logging configuration.
schedule | Specifies the schedule for the transfer job. This is an optional field. When the field is not set, the job never executes a transfer, unless you invoke RunTransferJob or update the job to have a non-empty schedule.
event_stream | Specifies the event stream for the transfer job for event-driven transfers. When EventStream is specified, the Schedule fields are ignored.
status | Status of the job. This value MUST be specified for Note: The effect of the new job status takes place during a subsequent job run. For example, if you change the job status from
creation_time | Output only. The time that the transfer job was created.
last_modification_time | Output only. The time that the transfer job was last modified.
deletion_time | Output only. The time that the transfer job was deleted.
latest_operation_name | The name of the most recently started TransferOperation of this JobConfig. Present if a TransferOperation has been created for this JobConfig.
Status
The status of the transfer job.
Enum | Description
---|---
STATUS_UNSPECIFIED | Zero is an illegal value.
ENABLED | New transfers are performed based on the schedule.
DISABLED | New transfers are not scheduled.
DELETED | This is a soft delete state. After a transfer job is set to this state, the job and all the transfer executions are subject to garbage collection. Transfer jobs become eligible for garbage collection 30 days after their status is set to DELETED.
TransferManifest
Specifies where the manifest is located.
Field | Description
---|---
location | Specifies the path to the manifest in Cloud Storage. The Google-managed service account for the transfer must have
TransferOperation
A description of the execution of a transfer.
Field | Description
---|---
name | A globally unique ID assigned by the system.
project_id | The ID of the Google Cloud project that owns the operation.
transfer_spec | Transfer specification.
notification_config | Notification configuration.
logging_config | Cloud Logging configuration.
start_time | Start time of this transfer execution.
end_time | End time of this transfer execution.
status | Status of the transfer operation.
counters | Information about the progress of the transfer operation.
error_breakdowns[] | Summarizes errors encountered with sample error log entries.
transfer_job_name | The name of the transfer job that triggers this transfer operation.
Status
The status of a TransferOperation.
Enum | Description
---|---
STATUS_UNSPECIFIED | Zero is an illegal value.
IN_PROGRESS | In progress.
PAUSED | Paused.
SUCCESS | Completed successfully.
FAILED | Terminated due to an unrecoverable failure.
ABORTED | Aborted by the user.
QUEUED | Temporarily delayed by the system. No user action is required.
SUSPENDING | The operation is suspending and draining the ongoing work to completion.
TransferOptions
TransferOptions define the actions to be performed on objects in a transfer.
Field | Description
---|---
overwrite_objects_already_existing_in_sink | When to overwrite objects that already exist in the sink. The default is that only objects that are different from the source are overwritten. If true, all objects in the sink whose name matches an object in the source are overwritten with the source object.
delete_objects_unique_in_sink | Whether objects that exist only in the sink should be deleted. Note: This option and
delete_objects_from_source_after_transfer | Whether objects should be deleted from the source after they are transferred to the sink. Note: This option and
overwrite_when | When to overwrite objects that already exist in the sink. If not set, overwrite behavior is determined by
metadata_options | Represents the selected metadata options for a transfer job.
OverwriteWhen
Specifies when to overwrite an object in the sink when an object with matching name is found in the source.
Enum | Description
---|---
OVERWRITE_WHEN_UNSPECIFIED | Overwrite behavior is unspecified.
DIFFERENT | Overwrites destination objects with the source objects, only if the objects have the same name but different HTTP ETags or checksum values.
NEVER | Never overwrites a destination object if a source object has the same name. In this case, the source object is not transferred.
ALWAYS | Always overwrites the destination object with the source object, even if the HTTP ETags or checksum values are the same.
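The overwrite_when semantics can be modeled with a small sketch. This is an illustration of the documented behavior, not the service's code; the etag arguments stand in for whatever ETag/checksum comparison the service performs.

```python
def should_overwrite(src_etag, sink_etag, overwrite_when="OVERWRITE_WHEN_UNSPECIFIED"):
    """Model of overwrite_when for a sink object whose name matches a source object."""
    if overwrite_when == "ALWAYS":
        return True   # rewrite even if content is identical
    if overwrite_when == "NEVER":
        return False  # matching-name source objects are not transferred
    # DIFFERENT, and the documented default: overwrite only when content differs.
    return src_etag != sink_etag

print(should_overwrite("abc", "abc", "DIFFERENT"))  # False: identical, skipped
print(should_overwrite("abc", "xyz", "NEVER"))      # False: never overwritten
print(should_overwrite("abc", "abc", "ALWAYS"))     # True: rewritten anyway
```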
TransferSpec
Configuration for running a transfer.
object_conditions
Only objects that satisfy these object conditions are included in the set of data source and data sink objects. Object conditions based on objects' "last modification time" do not exclude objects in a data sink.
transfer_manifest
A manifest file provides a list of objects to be transferred from the data source. This field points to the location of the manifest file. If no manifest is specified, the entire source bucket is used. ObjectConditions still apply.
source_agent_pool_name
string
Specifies the agent pool name associated with the POSIX data source. When unspecified, the default name is used.
sink_agent_pool_name
string
Specifies the agent pool name associated with the POSIX data sink. When unspecified, the default name is used.
Union field data_sink. The write sink for the data. data_sink can be only one of the following:
gcs_data_sink
A Cloud Storage data sink.
posix_data_sink
A POSIX filesystem data sink.
Union field data_source. The read source of the data. data_source can be only one of the following:
gcs_data_source
A Cloud Storage data source.
aws_s3_data_source
An AWS S3 data source.
http_data_source
An HTTP URL data source.
posix_data_source
A POSIX filesystem data source.
azure_blob_storage_data_source
An Azure Blob Storage data source.
aws_s3_compatible_data_source
An AWS S3 compatible data source.
hdfs_data_source
An HDFS cluster data source.
Union field intermediate_data_location. intermediate_data_location can be only one of the following:
gcs_intermediate_data_location
For transfers between file systems, specifies a Cloud Storage bucket to be used as an intermediate location through which to transfer data.
See Transfer data between file systems for more information.
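As an illustration of how a TransferSpec combines one data_source member, one data_sink member, and the optional conditions and options, here is a REST-style JSON sketch for an S3-to-Cloud-Storage transfer. Bucket names and prefixes are placeholders.

```python
# Illustrative TransferSpec (REST-style JSON field names); buckets are hypothetical.
transfer_spec = {
    "awsS3DataSource": {"bucketName": "my-aws-bucket"},   # one data_source member
    "gcsDataSink": {"bucketName": "my-gcs-bucket"},       # one data_sink member
    "objectConditions": {"includePrefixes": ["logs/"]},
    "transferOptions": {"overwriteWhen": "DIFFERENT"},
}

# Exactly one data_source member may be set in the union.
source_keys = [k for k in transfer_spec if k.endswith("DataSource")]
print(source_keys)  # ['awsS3DataSource']
```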
UpdateAgentPoolRequest
Specifies the request passed to UpdateAgentPool.
agent_pool
Required. The agent pool to update. agent_pool is expected to specify the following fields:
- bandwidth_limit
An UpdateAgentPoolRequest with any other fields is rejected with the error INVALID_ARGUMENT.
Authorization requires the following IAM permission on the specified resource agentPool:
- storagetransfer.agentpools.update
update_mask
The field mask of the fields in agentPool to update in this request. The following agentPool fields can be updated:
- bandwidth_limit
UpdateTransferJobRequest
Request passed to UpdateTransferJob.
job_name
string
Required. The name of job to update.
Authorization requires the following IAM permission on the specified resource jobName:
- storagetransfer.jobs.update
project_id
string
Required. The ID of the Google Cloud project that owns the job.
transfer_job
Required. The job to update. transferJob is expected to specify one or more of five fields: description, transfer_spec, notification_config, logging_config, and status. An UpdateTransferJobRequest that specifies other fields is rejected with the error INVALID_ARGUMENT. Updating a job status to DELETED requires storagetransfer.jobs.delete permission.
update_transfer_job_field_mask
The field mask of the fields in transferJob
that are to be updated in this request. Fields in transferJob
that can be updated are: description
, transfer_spec
, notification_config
, logging_config
, and status
. To update the transfer_spec
of the job, a complete transfer specification must be provided. An incomplete specification missing any required fields is rejected with the error INVALID_ARGUMENT
.
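The field-mask rules above can be illustrated with a REST-style request body. The project ID and job fields are placeholders; only the fields named in the mask are changed, and a transfer_spec, if listed, must be complete.

```python
# Illustrative UpdateTransferJob request body (REST-style JSON field names).
update_request = {
    "projectId": "my-project",          # hypothetical project ID
    "transferJob": {
        "description": "nightly logs sync",
        "status": "DISABLED",           # DELETED would additionally require
                                        # storagetransfer.jobs.delete permission
    },
    # Only the fields listed here are updated.
    "updateTransferJobFieldMask": "description,status",
}
```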