Cloud Storage (GCS) - Package cloud.google.com/go/storage (v1.42.0)

Package storage provides an easy way to work with Google Cloud Storage. Google Cloud Storage stores data in named objects, which are grouped into buckets.

More information about Google Cloud Storage is available at https://cloud.google.com/storage/docs.

See https://pkg.go.dev/cloud.google.com/go for authentication, timeouts, connection pooling and similar aspects of this package.

Creating a Client

To start working with this package, create a Client:

    ctx := context.Background()
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }

The client will use your default application credentials. Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

You may configure the client by passing in options from the google.golang.org/api/option package. You may also use options defined in this package, such as WithJSONReads.

If you only wish to access public data, you can create an unauthenticated client with

    client, err := storage.NewClient(ctx, option.WithoutAuthentication())

To use an emulator with this library, you can set the STORAGE_EMULATOR_HOST environment variable to the address at which your emulator is running. This will send requests to that address instead of to Cloud Storage. You can then create and use a client as usual:

    // Set STORAGE_EMULATOR_HOST environment variable.
    err := os.Setenv("STORAGE_EMULATOR_HOST", "localhost:9000")
    if err != nil {
        // TODO: Handle error.
    }
    // Create client as usual.
    client, err := storage.NewClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    // This request is now directed to http://localhost:9000/storage/v1/b
    // instead of https://storage.googleapis.com/storage/v1/b
    if err := client.Bucket("my-bucket").Create(ctx, projectID, nil); err != nil {
        // TODO: Handle error.
    }

Please note that there is no official emulator for Cloud Storage.

Buckets

A Google Cloud Storage bucket is a collection of objects. To work with a bucket, make a bucket handle:

    bkt := client.Bucket(bucketName)

A handle is a reference to a bucket. You can have a handle even if the bucket doesn't exist yet. To create a bucket in Google Cloud Storage, call BucketHandle.Create:

    if err := bkt.Create(ctx, projectID, nil); err != nil {
        // TODO: Handle error.
    }

Note that although buckets are associated with projects, bucket names are global across all projects.

Each bucket has associated metadata, represented in this package by BucketAttrs. The third argument to BucketHandle.Create allows you to set the initial BucketAttrs of a bucket. To retrieve a bucket's attributes, use BucketHandle.Attrs:

    attrs, err := bkt.Attrs(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Printf("bucket %s, created at %s, is located in %s with storage class %s\n",
        attrs.Name, attrs.Created, attrs.Location, attrs.StorageClass)

Objects

An object holds arbitrary data as a sequence of bytes, like a file. You refer to objects using a handle, just as with buckets, but unlike buckets you don't explicitly create an object. Instead, the first time you write to an object it will be created. You can use the standard Go io.Reader and io.Writer interfaces to read and write object data:

    obj := bkt.Object("data")
    // Write something to obj.
    // w implements io.Writer.
    w := obj.NewWriter(ctx)
    // Write some text to obj. This will either create the object or overwrite whatever is there already.
    if _, err := fmt.Fprintf(w, "This object contains text.\n"); err != nil {
        // TODO: Handle error.
    }
    // Close, just like writing a file.
    if err := w.Close(); err != nil {
        // TODO: Handle error.
    }
    // Read it back.
    r, err := obj.NewReader(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    defer r.Close()
    if _, err := io.Copy(os.Stdout, r); err != nil {
        // TODO: Handle error.
    }
    // Prints "This object contains text."

Objects also have attributes, which you can fetch with ObjectHandle.Attrs:

    objAttrs, err := obj.Attrs(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Printf("object %s has size %d and can be read using %s\n",
        objAttrs.Name, objAttrs.Size, objAttrs.MediaLink)

Listing objects

Listing objects in a bucket is done with the BucketHandle.Objects method:

    query := &storage.Query{Prefix: ""}
    var names []string
    it := bkt.Objects(ctx, query)
    for {
        attrs, err := it.Next()
        if err == iterator.Done {
            break
        }
        if err != nil {
            log.Fatal(err)
        }
        names = append(names, attrs.Name)
    }

Objects are listed lexicographically by name. To filter objects lexicographically, [Query.StartOffset] and/or [Query.EndOffset] can be used:

    query := &storage.Query{
        Prefix:      "",
        StartOffset: "bar/", // Only list objects lexicographically >= "bar/"
        EndOffset:   "foo/", // Only list objects lexicographically < "foo/"
    }
    // ... as before

If only a subset of object attributes is needed when listing, specifying this subset using Query.SetAttrSelection may speed up the listing process:

    query := &storage.Query{Prefix: ""}
    query.SetAttrSelection([]string{"Name"})
    // ... as before

ACLs

Both objects and buckets have ACLs (Access Control Lists). An ACL is a list of ACLRules, each of which specifies the role of a user, group or project. ACLs are suitable for fine-grained control, but you may prefer using IAM to control access at the project level (see the Cloud Storage IAM docs).

To list the ACLs of a bucket or object, obtain an ACLHandle and call ACLHandle.List:

    acls, err := obj.ACL().List(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    for _, rule := range acls {
        fmt.Printf("%s has role %s\n", rule.Entity, rule.Role)
    }

You can also set and delete ACLs.

Conditions

Every object has a generation and a metageneration. The generation changes whenever the content changes, and the metageneration changes whenever the metadata changes. Conditions let you check these values before an operation; the operation only executes if the conditions match. You can use conditions to prevent race conditions in read-modify-write operations.

For example, say you've read an object's metadata into objAttrs. Now you want to write to that object, but only if its contents haven't changed since you read it. Here is how to express that:

    w = obj.If(storage.Conditions{GenerationMatch: objAttrs.Generation}).NewWriter(ctx)
    // Proceed with writing as above.

Signed URLs

You can obtain a URL that lets anyone read or write an object for a limited time. Signing a URL requires credentials authorized to sign a URL. To use the same authentication that was used when instantiating the Storage client, use BucketHandle.SignedURL.

    url, err := client.Bucket(bucketName).SignedURL(objectName, opts)
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(url)

You can also sign a URL without creating a client. See the documentation of SignedURL for details.

    url, err := storage.SignedURL(bucketName, "shared-object", opts)
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Println(url)

Post Policy V4 Signed Request

A Post Policy V4 signed request is a type of signed request that allows uploads through HTML forms directly to Cloud Storage with temporary permission. Conditions can be applied to restrict how the HTML form can be used by a user.

For more information, please see the XML POST Object docs as well as the documentation of BucketHandle.GenerateSignedPostPolicyV4.

    pv4, err := client.Bucket(bucketName).GenerateSignedPostPolicyV4(objectName, opts)
    if err != nil {
        // TODO: Handle error.
    }
    fmt.Printf("URL: %s\nFields: %v\n", pv4.URL, pv4.Fields)

Credential requirements for signing

If the GoogleAccessID and PrivateKey option fields are not provided, BucketHandle.SignedURL and BucketHandle.GenerateSignedPostPolicyV4 will attempt to detect them automatically from the client's credentials.

Detecting GoogleAccessID may not be possible if you are authenticated using a token source or using option.WithHTTPClient . In this case, you can provide a service account email for GoogleAccessID and the client will attempt to sign the URL or Post Policy using that service account.

To generate the signature, you must have:

  • iam.serviceAccounts.signBlob permissions on the GoogleAccessID service account, and
  • the IAM Service Account Credentials API enabled (unless authenticating with a downloaded private key).

Errors

Errors returned by this client are often of the type googleapi.Error. These errors can be introspected for more information by using errors.As with the richer googleapi.Error type. For example:

    var e *googleapi.Error
    if ok := errors.As(err, &e); ok {
        if e.Code == 409 {
            // ...
        }
    }

Retrying failed requests

Methods in this package may retry calls that fail with transient errors. Retrying continues indefinitely unless the controlling context is canceled, the client is closed, or a non-transient error is received. To stop retries from continuing, use context timeouts or cancellation.

The retry strategy in this library follows best practices for Cloud Storage. By default, operations are retried only if they are idempotent, and exponential backoff with jitter is employed. In addition, errors are only retried if they are defined as transient by the service. See the Cloud Storage retry docs for more information.

Users can configure non-default retry behavior for a single library call (using BucketHandle.Retryer and ObjectHandle.Retryer) or for all calls made by a client (using Client.SetRetry). For example:

    o := client.Bucket(bucket).Object(object).Retryer(
        // Use WithBackoff to change the timing of the exponential backoff.
        storage.WithBackoff(gax.Backoff{
            Initial: 2 * time.Second,
        }),
        // Use WithPolicy to configure the idempotency policy. RetryAlways will
        // retry the operation even if it is non-idempotent.
        storage.WithPolicy(storage.RetryAlways),
    )
    // Use a context timeout to set an overall deadline on the call, including all
    // potential retries.
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()
    // Delete an object using the specified strategy and timeout.
    if err := o.Delete(ctx); err != nil {
        // Handle err.
    }

Sending Custom Headers

You can add custom headers to any API call made by this package by using callctx.SetHeaders on the context which is passed to the method. For example, to add a custom audit logging header:

    ctx := context.Background()
    ctx = callctx.SetHeaders(ctx, "x-goog-custom-audit-<key>", "<value>")
    // Use client as usual with the context and the additional headers will be sent.
    client.Bucket("my-bucket").Attrs(ctx)

Experimental gRPC API

This package includes support for the Cloud Storage gRPC API, which is currently in preview. This implementation uses gRPC rather than the current JSON & XML APIs to make requests to Cloud Storage. Please contact the Google Cloud Storage gRPC team at gcs-grpc-contact@google.com with a list of GCS buckets you would like to allowlist for access to this API. The Go Storage gRPC library is not yet generally available, so it may be subject to breaking changes.

To create a client which will use gRPC, use the alternate constructor:

    ctx := context.Background()
    client, err := storage.NewGRPCClient(ctx)
    if err != nil {
        // TODO: Handle error.
    }
    // Use client as usual.

If the application is running within GCP, users may get better performance by enabling Direct Google Access (enabling requests to skip some proxy steps). To enable, set the environment variable GOOGLE_CLOUD_ENABLE_DIRECT_PATH_XDS=true and add the following side-effect imports to your application:

    import (
        _ "google.golang.org/grpc/balancer/rls"
        _ "google.golang.org/grpc/xds/googledirectpath"
    )

Storage Control API

Certain control plane and long-running operations for Cloud Storage (including Folder and Managed Folder operations) are supported via the autogenerated Storage Control client, which is available as a subpackage in this module. See package docs at cloud.google.com/go/storage/control/apiv2 or reference the Storage Control API docs.

Constants

DeleteAction, SetStorageClassAction, AbortIncompleteMPUAction

    const (
        // DeleteAction is a lifecycle action that deletes live and/or archived
        // objects. Takes precedence over SetStorageClass actions.
        DeleteAction = "Delete"

        // SetStorageClassAction changes the storage class of live and/or archived
        // objects.
        SetStorageClassAction = "SetStorageClass"

        // AbortIncompleteMPUAction is a lifecycle action that aborts an incomplete
        // multipart upload when the multipart upload meets the conditions specified
        // in the lifecycle rule. The AgeInDays condition is the only allowed
        // condition for this action. AgeInDays is measured from the time the
        // multipart upload was created.
        AbortIncompleteMPUAction = "AbortIncompleteMultipartUpload"
    )

NoPayload, JSONPayload

    const (
        // Send no payload with notification messages.
        NoPayload = "NONE"

        // Send object metadata as JSON with notification messages.
        JSONPayload = "JSON_API_V1"
    )

Values for Notification.PayloadFormat.

ObjectFinalizeEvent, ObjectMetadataUpdateEvent, ObjectDeleteEvent, ObjectArchiveEvent

    const (
        // Event that occurs when an object is successfully created.
        ObjectFinalizeEvent = "OBJECT_FINALIZE"

        // Event that occurs when the metadata of an existing object changes.
        ObjectMetadataUpdateEvent = "OBJECT_METADATA_UPDATE"

        // Event that occurs when an object is permanently deleted.
        ObjectDeleteEvent = "OBJECT_DELETE"

        // Event that occurs when the live version of an object becomes an
        // archived version.
        ObjectArchiveEvent = "OBJECT_ARCHIVE"
    )

Values for Notification.EventTypes.

ScopeFullControl, ScopeReadOnly, ScopeReadWrite

    const (
        // ScopeFullControl grants permissions to manage your
        // data and permissions in Google Cloud Storage.
        ScopeFullControl = raw.DevstorageFullControlScope

        // ScopeReadOnly grants permissions to
        // view your data in Google Cloud Storage.
        ScopeReadOnly = raw.DevstorageReadOnlyScope

        // ScopeReadWrite grants permissions to manage your
        // data in Google Cloud Storage.
        ScopeReadWrite = raw.DevstorageReadWriteScope
    )

Variables

ErrBucketNotExist, ErrObjectNotExist

    var (
        // ErrBucketNotExist indicates that the bucket does not exist.
        ErrBucketNotExist = errors.New("storage: bucket doesn't exist")

        // ErrObjectNotExist indicates that the object does not exist.
        ErrObjectNotExist = errors.New("storage: object doesn't exist")
    )

Functions

func ShouldRetry

    func ShouldRetry(err error) bool

ShouldRetry returns true if an error is retryable, based on best practice guidance from GCS. See https://cloud.google.com/storage/docs/retry-strategy#go for more information on what errors are considered retryable.

If you would like to customize retryable errors, use the WithErrorFunc to supply a RetryOption to your library calls. For example, to retry additional errors, you can write a custom func that wraps ShouldRetry and also specifies additional errors that should return true.

func SignedURL

    func SignedURL(bucket, object string, opts *SignedURLOptions) (string, error)

SignedURL returns a URL for the specified object. Signed URLs allow anyone access to a restricted resource for a limited time without needing a Google account or signing in. For more information about signed URLs, see https://cloud.google.com/storage/docs/accesscontrol#signed_urls_query_string_authentication. If you are initializing a Storage Client, instead use the BucketHandle.SignedURL method, which uses the Client's credentials to handle authentication.

Example

    package main

    import (
        "fmt"
        "os"
        "time"

        "cloud.google.com/go/storage"
    )

    func main() {
        pkey, err := os.ReadFile("my-private-key.pem")
        if err != nil {
            // TODO: handle error.
        }
        url, err := storage.SignedURL("my-bucket", "my-object", &storage.SignedURLOptions{
            GoogleAccessID: "xxx@developer.gserviceaccount.com",
            PrivateKey:     pkey,
            Method:         "GET",
            Expires:        time.Now().Add(48 * time.Hour),
        })
        if err != nil {
            // TODO: handle error.
        }
        fmt.Println(url)
    }

func WithJSONReads

    func WithJSONReads() option.ClientOption

WithJSONReads is an option that may be passed to [NewClient]. It sets the client to use the Cloud Storage JSON API for object reads. Currently, the default API used for reads is XML, but JSON will become the default in a future release.

Setting this option is required to use the GenerationNotMatch condition. We also recommend using JSON reads to ensure consistency with other client operations (all of which use JSON by default).

Note that when this option is set, reads will return a zero date for [ReaderObjectAttrs].LastModified and may return a different value for [ReaderObjectAttrs].CacheControl.

func WithXMLReads

    func WithXMLReads() option.ClientOption

WithXMLReads is an option that may be passed to [NewClient]. It sets the client to use the Cloud Storage XML API for object reads.

This is the current default, but the default will switch to JSON in a future release.

ACLEntity

    type ACLEntity string

ACLEntity refers to a user or group. They are sometimes referred to as grantees.

It could be in the form of: "user-<userId>", "user-<email>", "group-<groupId>", "group-<email>", "domain-<domain>" or "project-team-<projectId>".

Or one of the predefined constants: AllUsers, AllAuthenticatedUsers.

AllUsers, AllAuthenticatedUsers

    const (
        AllUsers              ACLEntity = "allUsers"
        AllAuthenticatedUsers ACLEntity = "allAuthenticatedUsers"
    )

ACLHandle

    type ACLHandle struct {
        // contains filtered or unexported fields
    }

ACLHandle provides operations on an access control list for a Google Cloud Storage bucket or object. ACLHandle on an object operates on the latest generation of that object by default. Selecting a specific generation of an object is not currently supported by the client.

func (*ACLHandle) Delete

    func (a *ACLHandle) Delete(ctx context.Context, entity ACLEntity) (err error)

Delete permanently deletes the ACL entry for the given entity.

Example

    package main

    import (
        "context"

        "cloud.google.com/go/storage"
    )

    func main() {
        ctx := context.Background()
        client, err := storage.NewClient(ctx)
        if err != nil {
            // TODO: handle error.
        }
        // No longer grant access to the bucket to everyone on the Internet.
        if err := client.Bucket("my-bucket").ACL().Delete(ctx, storage.AllUsers); err != nil {
            // TODO: handle error.
        }
    }

func (*ACLHandle) List

    func (a *ACLHandle) List(ctx context.Context) (rules []ACLRule, err error)

List retrieves ACL entries.

Example

  package 
  
 main 
 import 
  
 ( 
  
 "context" 
  
 "fmt" 
  
 "cloud.google.com/go/storage" 
 ) 
 func 
  
 main 
 () 
  
 { 
  
 ctx 
  
 := 
  
 context 
 . 
 Background 
 () 
  
 client 
 , 
  
 err 
  
 := 
  
 storage 
 . 
 NewClient 
 ( 
 ctx 
 ) 
  
 if 
  
 err 
  
 != 
  
 nil 
  
 { 
  
 // TODO: handle error. 
  
 } 
  
 // List the default object ACLs for my-bucket. 
  
 aclRules 
 , 
  
 err 
  
 := 
  
 client 
 . 
  Bucket 
 
 ( 
 "my-bucket" 
 ). 
  DefaultObjectACL 
 
 (). 
  List 
 
 ( 
 ctx 
 ) 
  
 if 
  
 err 
  
 != 
  
 nil 
  
 { 
  
 // TODO: handle error. 
  
 } 
  
 fmt 
 . 
 Println 
 ( 
 aclRules 
 ) 
 } 
 

func (*ACLHandle) Set

    func (a *ACLHandle) Set(ctx context.Context, entity ACLEntity, role ACLRole) (err error)

Set sets the role for the given entity.

Example

    package main

    import (
        "context"

        "cloud.google.com/go/storage"
    )

    func main() {
        ctx := context.Background()
        client, err := storage.NewClient(ctx)
        if err != nil {
            // TODO: handle error.
        }
        // Let any authenticated user read my-bucket/my-object.
        obj := client.Bucket("my-bucket").Object("my-object")
        if err := obj.ACL().Set(ctx, storage.AllAuthenticatedUsers, storage.RoleReader); err != nil {
            // TODO: handle error.
        }
    }

ACLRole

    type ACLRole string

ACLRole is the level of access to grant.

RoleOwner, RoleReader, RoleWriter

    const (
        RoleOwner  ACLRole = "OWNER"
        RoleReader ACLRole = "READER"
        RoleWriter ACLRole = "WRITER"
    )

ACLRule

    type ACLRule struct {
        Entity      ACLEntity
        EntityID    string
        Role        ACLRole
        Domain      string
        Email       string
        ProjectTeam *ProjectTeam
    }

ACLRule represents a grant for a role to an entity (user, group or team) for a Google Cloud Storage object or bucket.

Autoclass

    type Autoclass struct {
        // Enabled specifies whether the autoclass feature is enabled
        // on the bucket.
        Enabled bool

        // ToggleTime is the time from which Autoclass was last toggled.
        // If Autoclass is enabled when the bucket is created, the ToggleTime
        // is set to the bucket creation time. This field is read-only.
        ToggleTime time.Time

        // TerminalStorageClass: The storage class that objects in the bucket
        // eventually transition to if they are not read for a certain length of
        // time. Valid values are NEARLINE and ARCHIVE.
        // To modify TerminalStorageClass, Enabled must be set to true.
        TerminalStorageClass string

        // TerminalStorageClassUpdateTime represents the time of the most recent
        // update to "TerminalStorageClass".
        TerminalStorageClassUpdateTime time.Time
    }

Autoclass holds the bucket's autoclass configuration. If enabled, allows for the automatic selection of the best storage class based on object access patterns. See https://cloud.google.com/storage/docs/using-autoclass for more information.

BucketAttrs

    type BucketAttrs struct {
        // Name is the name of the bucket.
        // This field is read-only.
        Name string

        // ACL is the list of access control rules on the bucket.
        ACL []ACLRule

        // BucketPolicyOnly is an alias for UniformBucketLevelAccess. Use of
        // UniformBucketLevelAccess is recommended above the use of this field.
        // Setting BucketPolicyOnly.Enabled OR UniformBucketLevelAccess.Enabled to
        // true, will enable UniformBucketLevelAccess.
        BucketPolicyOnly BucketPolicyOnly

        // UniformBucketLevelAccess configures access checks to use only bucket-level IAM
        // policies and ignore any ACL rules for the bucket.
        // See https://cloud.google.com/storage/docs/uniform-bucket-level-access
        // for more information.
        UniformBucketLevelAccess UniformBucketLevelAccess

        // PublicAccessPrevention is the setting for the bucket's
        // PublicAccessPrevention policy, which can be used to prevent public access
        // of data in the bucket. See
        // https://cloud.google.com/storage/docs/public-access-prevention for more
        // information.
        PublicAccessPrevention PublicAccessPrevention

        // DefaultObjectACL is the list of access controls to
        // apply to new objects when no object ACL is provided.
        DefaultObjectACL []ACLRule

        // DefaultEventBasedHold is the default value for event-based hold on
        // newly created objects in this bucket. It defaults to false.
        DefaultEventBasedHold bool

        // If not empty, applies a predefined set of access controls. It should be set
        // only when creating a bucket.
        // It is always empty for BucketAttrs returned from the service.
        // See https://cloud.google.com/storage/docs/json_api/v1/buckets/insert
        // for valid values.
        PredefinedACL string

        // If not empty, applies a predefined set of default object access controls.
        // It should be set only when creating a bucket.
        // It is always empty for BucketAttrs returned from the service.
        // See https://cloud.google.com/storage/docs/json_api/v1/buckets/insert
        // for valid values.
        PredefinedDefaultObjectACL string

        // Location is the location of the bucket. It defaults to "US".
        // If specifying a dual-region, CustomPlacementConfig should be set in conjunction.
        Location string

        // The bucket's custom placement configuration that holds a list of
        // regional locations for custom dual regions.
        CustomPlacementConfig *CustomPlacementConfig

        // MetaGeneration is the metadata generation of the bucket.
        // This field is read-only.
        MetaGeneration int64

        // StorageClass is the default storage class of the bucket. This defines
        // how objects in the bucket are stored and determines the SLA
        // and the cost of storage. Typical values are "STANDARD", "NEARLINE",
        // "COLDLINE" and "ARCHIVE". Defaults to "STANDARD".
        // See https://cloud.google.com/storage/docs/storage-classes for all
        // valid values.
        StorageClass string

        // Created is the creation time of the bucket.
        // This field is read-only.
        Created time.Time

        // VersioningEnabled reports whether this bucket has versioning enabled.
        VersioningEnabled bool

        // Labels are the bucket's labels.
        Labels map[string]string

        // RequesterPays reports whether the bucket is a Requester Pays bucket.
        // Clients performing operations on Requester Pays buckets must provide
        // a user project (see BucketHandle.UserProject), which will be billed
        // for the operations.
        RequesterPays bool

        // Lifecycle is the lifecycle configuration for objects in the bucket.
        Lifecycle Lifecycle

        // Retention policy enforces a minimum retention time for all objects
        // contained in the bucket. A RetentionPolicy of nil implies the bucket
        // has no minimum data retention.
        //
        // This feature is in private alpha release. It is not currently available to
        // most customers. It might be changed in backwards-incompatible ways and is not
        // subject to any SLA or deprecation policy.
        RetentionPolicy *RetentionPolicy

        // The bucket's Cross-Origin Resource Sharing (CORS) configuration.
        CORS []CORS

        // The encryption configuration used by default for newly inserted objects.
        Encryption *BucketEncryption

        // The logging configuration.
        Logging *BucketLogging

        // The website configuration.
        Website *BucketWebsite

        // Etag is the HTTP/1.1 Entity tag for the bucket.
        // This field is read-only.
        Etag string

        // LocationType describes how data is stored and replicated.
        // Typical values are "multi-region", "region" and "dual-region".
        // This field is read-only.
        LocationType string

        // The project number of the project the bucket belongs to.
        // This field is read-only.
        ProjectNumber uint64

        // RPO configures the Recovery Point Objective (RPO) policy of the bucket.
        // Set to RPOAsyncTurbo to turn on Turbo Replication for a bucket.
        // See https://cloud.google.com/storage/docs/managing-turbo-replication for
        // more information.
        RPO RPO

        // Autoclass holds the bucket's autoclass configuration. If enabled,
        // allows for the automatic selection of the best storage class
        // based on object access patterns.
        Autoclass *Autoclass

        // ObjectRetentionMode reports whether individual objects in the bucket can
        // be configured with a retention policy. An empty value means that object
        // retention is disabled.
        // This field is read-only. Object retention can be enabled only by creating
        // a bucket with SetObjectRetention set to true on the BucketHandle. It
        // cannot be modified once the bucket is created.
        // ObjectRetention cannot be configured or reported through the gRPC API.
        ObjectRetentionMode string

        // SoftDeletePolicy contains the bucket's soft delete policy, which defines
        // the period of time that soft-deleted objects will be retained, and cannot
        // be permanently deleted. By default, new buckets will be created with a
        // 7 day retention duration. In order to fully disable soft delete, you need
        // to set a policy with a RetentionDuration of 0.
        SoftDeletePolicy *SoftDeletePolicy

        // HierarchicalNamespace contains the bucket's hierarchical namespace
        // configuration. Hierarchical namespace enabled buckets can contain
        // [cloud.google.com/go/storage/control/apiv2/controlpb.Folder] resources.
        // It cannot be modified after bucket creation time.
        // UniformBucketLevelAccess must also be enabled on the bucket.
        HierarchicalNamespace *HierarchicalNamespace
    }

BucketAttrs represents the metadata for a Google Cloud Storage bucket. Read-only fields are ignored by BucketHandle.Create.

BucketAttrsToUpdate

type BucketAttrsToUpdate struct {
	// If set, updates whether the bucket uses versioning.
	VersioningEnabled optional.Bool

	// If set, updates whether the bucket is a Requester Pays bucket.
	RequesterPays optional.Bool

	// DefaultEventBasedHold is the default value for event-based hold on
	// newly created objects in this bucket.
	DefaultEventBasedHold optional.Bool

	// BucketPolicyOnly is an alias for UniformBucketLevelAccess. Use of
	// UniformBucketLevelAccess is recommended above the use of this field.
	// Setting BucketPolicyOnly.Enabled OR UniformBucketLevelAccess.Enabled to
	// true, will enable UniformBucketLevelAccess. If both BucketPolicyOnly and
	// UniformBucketLevelAccess are set, the value of UniformBucketLevelAccess
	// will take precedence.
	BucketPolicyOnly *BucketPolicyOnly

	// UniformBucketLevelAccess configures access checks to use only bucket-level IAM
	// policies and ignore any ACL rules for the bucket.
	// See https://cloud.google.com/storage/docs/uniform-bucket-level-access
	// for more information.
	UniformBucketLevelAccess *UniformBucketLevelAccess

	// PublicAccessPrevention is the setting for the bucket's
	// PublicAccessPrevention policy, which can be used to prevent public access
	// of data in the bucket. See
	// https://cloud.google.com/storage/docs/public-access-prevention for more
	// information.
	PublicAccessPrevention PublicAccessPrevention

	// StorageClass is the default storage class of the bucket. This defines
	// how objects in the bucket are stored and determines the SLA
	// and the cost of storage. Typical values are "STANDARD", "NEARLINE",
	// "COLDLINE" and "ARCHIVE". Defaults to "STANDARD".
	// See https://cloud.google.com/storage/docs/storage-classes for all
	// valid values.
	StorageClass string

	// If set, updates the retention policy of the bucket. Using
	// RetentionPolicy.RetentionPeriod = 0 will delete the existing policy.
	//
	// This feature is in private alpha release. It is not currently available to
	// most customers. It might be changed in backwards-incompatible ways and is not
	// subject to any SLA or deprecation policy.
	RetentionPolicy *RetentionPolicy

	// If set, replaces the CORS configuration with a new configuration.
	// An empty (rather than nil) slice causes all CORS policies to be removed.
	CORS []CORS

	// If set, replaces the encryption configuration of the bucket. Using
	// BucketEncryption.DefaultKMSKeyName = "" will delete the existing
	// configuration.
	Encryption *BucketEncryption

	// If set, replaces the lifecycle configuration of the bucket.
	Lifecycle *Lifecycle

	// If set, replaces the logging configuration of the bucket.
	Logging *BucketLogging

	// If set, replaces the website configuration of the bucket.
	Website *BucketWebsite

	// If not empty, applies a predefined set of access controls.
	// See https://cloud.google.com/storage/docs/json_api/v1/buckets/patch.
	PredefinedACL string

	// If not empty, applies a predefined set of default object access controls.
	// See https://cloud.google.com/storage/docs/json_api/v1/buckets/patch.
	PredefinedDefaultObjectACL string

	// RPO configures the Recovery Point Objective (RPO) policy of the bucket.
	// Set to RPOAsyncTurbo to turn on Turbo Replication for a bucket.
	// See https://cloud.google.com/storage/docs/managing-turbo-replication for
	// more information.
	RPO RPO

	// If set, updates the autoclass configuration of the bucket.
	// To disable autoclass on the bucket, set to an empty &Autoclass{}.
	// To update the configuration for Autoclass.TerminalStorageClass,
	// Autoclass.Enabled must also be set to true.
	// See https://cloud.google.com/storage/docs/using-autoclass for more information.
	Autoclass *Autoclass

	// If set, updates the soft delete policy of the bucket.
	SoftDeletePolicy *SoftDeletePolicy

	// contains filtered or unexported fields
}
 

BucketAttrsToUpdate defines the attributes to update during an Update call.

func (*BucketAttrsToUpdate) DeleteLabel

func (ua *BucketAttrsToUpdate) DeleteLabel(name string)
 

DeleteLabel causes a label to be deleted when ua is used in a call to Bucket.Update.

func (*BucketAttrsToUpdate) SetLabel

func (ua *BucketAttrsToUpdate) SetLabel(name, value string)
 

SetLabel causes a label to be added or modified when ua is used in a call to Bucket.Update.

BucketConditions

type BucketConditions struct {
	// MetagenerationMatch specifies that the bucket must have the given
	// metageneration for the operation to occur.
	// If MetagenerationMatch is zero, it has no effect.
	MetagenerationMatch int64

	// MetagenerationNotMatch specifies that the bucket must not have the given
	// metageneration for the operation to occur.
	// If MetagenerationNotMatch is zero, it has no effect.
	MetagenerationNotMatch int64
}
 

BucketConditions constrain bucket methods to act on specific metagenerations.

The zero value is an empty set of constraints.

BucketEncryption

type BucketEncryption struct {
	// A Cloud KMS key name, in the form
	// projects/P/locations/L/keyRings/R/cryptoKeys/K, that will be used to encrypt
	// objects inserted into this bucket, if no encryption method is specified.
	// The key's location must be the same as the bucket's.
	DefaultKMSKeyName string
}
 

BucketEncryption is a bucket's encryption configuration.
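For instance, a default Cloud KMS key can be attached when the bucket is created. A minimal sketch (the project, bucket, and key names are placeholders):

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// The key's location must match the bucket's location.
	attrs := &storage.BucketAttrs{
		Encryption: &storage.BucketEncryption{
			DefaultKMSKeyName: "projects/P/locations/L/keyRings/R/cryptoKeys/K",
		},
	}
	if err := client.Bucket("my-bucket").Create(ctx, "my-project", attrs); err != nil {
		// TODO: handle error.
	}
}
```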

BucketHandle

type BucketHandle struct {
	// contains filtered or unexported fields
}
 

BucketHandle provides operations on a Google Cloud Storage bucket. Use Client.Bucket to get a handle.

Example

exists

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	attrs, err := client.Bucket("my-bucket").Attrs(ctx)
	if err == storage.ErrBucketNotExist {
		fmt.Println("The bucket does not exist")
		return
	}
	if err != nil {
		// TODO: handle error.
	}
	fmt.Printf("The bucket exists and has attributes: %#v\n", attrs)
}
 

func (*BucketHandle) ACL

func (b *BucketHandle) ACL() *ACLHandle
 
 

ACL returns an ACLHandle, which provides access to the bucket's access control list. This controls who can list, create or overwrite the objects in a bucket. This call does not perform any network operations.
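As a sketch, the handle's List method can be used to inspect the bucket's current rules (the bucket name is a placeholder; List does perform a network call):

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// List retrieves the bucket's ACL entries over the network.
	rules, err := client.Bucket("my-bucket").ACL().List(ctx)
	if err != nil {
		// TODO: handle error.
	}
	for _, rule := range rules {
		fmt.Printf("%s has role %s\n", rule.Entity, rule.Role)
	}
}
```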

func (*BucketHandle) AddNotification

func (b *BucketHandle) AddNotification(ctx context.Context, n *Notification) (ret *Notification, err error)
 

AddNotification adds a notification to b. You must set n's TopicProjectID, TopicID and PayloadFormat, and must not set its ID. The other fields are all optional. The returned Notification's ID can be used to refer to it.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	b := client.Bucket("my-bucket")
	n, err := b.AddNotification(ctx, &storage.Notification{
		TopicProjectID: "my-project",
		TopicID:        "my-topic",
		PayloadFormat:  storage.JSONPayload,
	})
	if err != nil {
		// TODO: handle error.
	}
	fmt.Println(n.ID)
}
 

func (*BucketHandle) Attrs

func (b *BucketHandle) Attrs(ctx context.Context) (attrs *BucketAttrs, err error)
 

Attrs returns the metadata for the bucket.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	attrs, err := client.Bucket("my-bucket").Attrs(ctx)
	if err != nil {
		// TODO: handle error.
	}
	fmt.Println(attrs)
}
 

func (*BucketHandle) BucketName

func (b *BucketHandle) BucketName() string
 
 

BucketName returns the name of the bucket.

func (*BucketHandle) Create

func (b *BucketHandle) Create(ctx context.Context, projectID string, attrs *BucketAttrs) (err error)
 

Create creates the Bucket in the project. If attrs is nil the API defaults will be used.

Example

package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	if err := client.Bucket("my-bucket").Create(ctx, "my-project", nil); err != nil {
		// TODO: handle error.
	}
}
 

func (*BucketHandle) DefaultObjectACL

func (b *BucketHandle) DefaultObjectACL() *ACLHandle
 
 

DefaultObjectACL returns an ACLHandle, which provides access to the bucket's default object ACLs. These ACLs are applied to newly created objects in this bucket that do not have a defined ACL. This call does not perform any network operations.

func (*BucketHandle) Delete

func (b *BucketHandle) Delete(ctx context.Context) (err error)
 

Delete deletes the Bucket.

Example

package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	if err := client.Bucket("my-bucket").Delete(ctx); err != nil {
		// TODO: handle error.
	}
}
 

func (*BucketHandle) DeleteNotification

func (b *BucketHandle) DeleteNotification(ctx context.Context, id string) (err error)
 

DeleteNotification deletes the notification with the given ID.

Example

package main

import (
	"context"

	"cloud.google.com/go/storage"
)

var notificationID string

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	b := client.Bucket("my-bucket")
	// TODO: Obtain notificationID from BucketHandle.AddNotification
	// or BucketHandle.Notifications.
	err = b.DeleteNotification(ctx, notificationID)
	if err != nil {
		// TODO: handle error.
	}
}
 

func (*BucketHandle) GenerateSignedPostPolicyV4

func (b *BucketHandle) GenerateSignedPostPolicyV4(object string, opts *PostPolicyV4Options) (*PostPolicyV4, error)
 

GenerateSignedPostPolicyV4 generates a PostPolicyV4 value from bucket, object and opts. The generated URL and fields will then allow an unauthenticated client to perform multipart uploads.

This method requires the Expires field in the specified PostPolicyV4Options to be non-nil. You may need to set the GoogleAccessID and PrivateKey fields in some cases. Read more on the automatic detection of credentials for this method.
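A minimal sketch of generating a policy (the bucket and object names are placeholders; GoogleAccessID and PrivateKey are only needed when credentials cannot be detected automatically):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	opts := &storage.PostPolicyV4Options{
		// Expires is required.
		Expires: time.Now().Add(15 * time.Minute),
	}
	policy, err := client.Bucket("my-bucket").GenerateSignedPostPolicyV4("my-object", opts)
	if err != nil {
		// TODO: handle error.
	}
	// policy.URL is the form action; policy.Fields holds the hidden form fields
	// an unauthenticated client must submit with its upload.
	fmt.Println(policy.URL, policy.Fields)
}
```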

func (*BucketHandle) IAM

func (b *BucketHandle) IAM() *iam.Handle
 
 

IAM provides access to IAM access control for the bucket.

func (*BucketHandle) If

  func (b *BucketHandle) If(conds BucketConditions) *BucketHandle

If returns a new BucketHandle that applies a set of preconditions. Preconditions already set on the BucketHandle are ignored. The supplied BucketConditions must have exactly one field set to a non-zero value; otherwise an error will be returned from any operation on the BucketHandle. Operations on the new handle will return an error if the preconditions are not satisfied. The only valid preconditions for buckets are MetagenerationMatch and MetagenerationNotMatch.
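For example, a delete can be guarded so it only succeeds if the bucket's metadata has not changed since it was read. A sketch ("my-bucket" is a placeholder):

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	b := client.Bucket("my-bucket")
	attrs, err := b.Attrs(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// The delete fails with a precondition error if the bucket's
	// metageneration changed between the Attrs call and the Delete.
	cond := storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}
	if err := b.If(cond).Delete(ctx); err != nil {
		// TODO: handle error.
	}
}
```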

func (*BucketHandle) LockRetentionPolicy

func (b *BucketHandle) LockRetentionPolicy(ctx context.Context) error
 
 

LockRetentionPolicy locks a bucket's retention policy until a previously-configured RetentionPeriod past the EffectiveTime. Note that if RetentionPeriod is set to less than a day, the retention policy is treated as a development configuration and locking will have no effect. The BucketHandle must have a metageneration condition that matches the bucket's metageneration. See BucketHandle.If.

This feature is in private alpha release. It is not currently available to most customers. It might be changed in backwards-incompatible ways and is not subject to any SLA or deprecation policy.

Example

package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	b := client.Bucket("my-bucket")
	attrs, err := b.Attrs(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Note that locking the bucket without first attaching a RetentionPolicy
	// that's at least 1 day is a no-op.
	err = b.If(storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}).LockRetentionPolicy(ctx)
	if err != nil {
		// TODO: handle error.
	}
}
 

func (*BucketHandle) Notifications

func (b *BucketHandle) Notifications(ctx context.Context) (n map[string]*Notification, err error)
 

Notifications returns all the Notifications configured for this bucket, as a map indexed by notification ID.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	b := client.Bucket("my-bucket")
	ns, err := b.Notifications(ctx)
	if err != nil {
		// TODO: handle error.
	}
	for id, n := range ns {
		fmt.Printf("%s: %+v\n", id, n)
	}
}
 

func (*BucketHandle) Object

func (b *BucketHandle) Object(name string) *ObjectHandle
 
 

Object returns an ObjectHandle, which provides operations on the named object. This call does not perform any network operations such as fetching the object or verifying its existence. Use methods on ObjectHandle to perform network operations.

name must consist entirely of valid UTF-8-encoded runes. The full specification for valid object names can be found at:

 https://cloud.google.com/storage/docs/naming-objects 

func (*BucketHandle) Objects

  func (b *BucketHandle) Objects(ctx context.Context, q *Query) *ObjectIterator

Objects returns an iterator over the objects in the bucket that match the Query q. If q is nil, no filtering is done. Objects will be iterated over lexicographically by name.

Note: The returned iterator is not safe for concurrent operations without explicit synchronization.

Example

package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	it := client.Bucket("my-bucket").Objects(ctx, nil)
	_ = it // TODO: iterate using Next or iterator.Pager.
}
 
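Iteration itself follows the usual iterator.Done pattern; a sketch that lists the objects under a prefix (the bucket name and prefix are placeholders):

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Restrict the listing to objects whose names begin with "logs/".
	it := client.Bucket("my-bucket").Objects(ctx, &storage.Query{Prefix: "logs/"})
	for {
		objAttrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: handle error.
		}
		fmt.Println(objAttrs.Name)
	}
}
```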

func (*BucketHandle) Retryer

func (b *BucketHandle) Retryer(opts ...RetryOption) *BucketHandle
 
 

Retryer returns a bucket handle that is configured with custom retry behavior as specified by the options that are passed to it. All operations on the new handle will use the customized retry configuration. Retry options set on an object handle will take precedence over options set on the bucket handle. These retry options will merge with the client's retry configuration (if set) for the returned handle. Options passed into this method will take precedence over retry options on the client. Note that you must explicitly pass in each option you want to override.
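A sketch of a handle with a customized backoff and a policy that retries all operations (assumes the github.com/googleapis/gax-go/v2 module for the Backoff type; the bucket name and durations are placeholders):

```go
package main

import (
	"context"
	"time"

	"cloud.google.com/go/storage"
	"github.com/googleapis/gax-go/v2"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	b := client.Bucket("my-bucket").Retryer(
		// Customize the wait between retry attempts.
		storage.WithBackoff(gax.Backoff{
			Initial: 2 * time.Second,
			Max:     30 * time.Second,
		}),
		// Retry all operations, including ones that are not
		// idempotent by default.
		storage.WithPolicy(storage.RetryAlways),
	)
	_ = b // Operations on b use the customized retry configuration.
}
```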

func (*BucketHandle) SetObjectRetention

func (b *BucketHandle) SetObjectRetention(enable bool) *BucketHandle
 
 

SetObjectRetention returns a new BucketHandle that will enable object retention on bucket creation. To enable object retention, you must use the returned handle to create the bucket. This has no effect on an already existing bucket. ObjectRetention is not enabled by default. ObjectRetention cannot be configured through the gRPC API.
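A sketch of creating a bucket with object retention enabled (the bucket and project names are placeholders):

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Object retention must be requested at creation time; it cannot be
	// turned on for an existing bucket.
	b := client.Bucket("my-bucket").SetObjectRetention(true)
	if err := b.Create(ctx, "my-project", nil); err != nil {
		// TODO: handle error.
	}
}
```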

func (*BucketHandle) SignedURL

func (b *BucketHandle) SignedURL(object string, opts *SignedURLOptions) (string, error)
 

SignedURL returns a URL for the specified object. Signed URLs allow anyone access to a restricted resource for a limited time without needing a Google account or signing in. For more information about signed URLs, see "Overview of access control".

This method requires the Method and Expires fields in the specified SignedURLOptions to be non-nil. You may need to set the GoogleAccessID and PrivateKey fields in some cases. Read more on the automatic detection of credentials for this method.
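A sketch that signs a 15-minute GET URL using the V4 scheme (the bucket and object names are placeholders; GoogleAccessID and PrivateKey may be required when automatic credential detection is unavailable):

```go
package main

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	url, err := client.Bucket("my-bucket").SignedURL("my-object", &storage.SignedURLOptions{
		Scheme:  storage.SigningSchemeV4,
		Method:  "GET", // Method and Expires are required.
		Expires: time.Now().Add(15 * time.Minute),
	})
	if err != nil {
		// TODO: handle error.
	}
	fmt.Println(url)
}
```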

func (*BucketHandle) Update

func (b *BucketHandle) Update(ctx context.Context, uattrs BucketAttrsToUpdate) (attrs *BucketAttrs, err error)
 

Update updates a bucket's attributes.

Examples

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Enable versioning in the bucket, regardless of its previous value.
	attrs, err := client.Bucket("my-bucket").Update(ctx, storage.BucketAttrsToUpdate{VersioningEnabled: true})
	if err != nil {
		// TODO: handle error.
	}
	fmt.Println(attrs)
}
 
readModifyWrite

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	b := client.Bucket("my-bucket")
	attrs, err := b.Attrs(ctx)
	if err != nil {
		// TODO: handle error.
	}
	var au storage.BucketAttrsToUpdate
	au.SetLabel("lab", attrs.Labels["lab"]+"-more")
	if attrs.Labels["delete-me"] == "yes" {
		au.DeleteLabel("delete-me")
	}
	attrs, err = b.
		If(storage.BucketConditions{MetagenerationMatch: attrs.MetaGeneration}).
		Update(ctx, au)
	if err != nil {
		// TODO: handle error.
	}
	fmt.Println(attrs)
}
 

func (*BucketHandle) UserProject

func (b *BucketHandle) UserProject(projectID string) *BucketHandle
 
 

UserProject returns a new BucketHandle that passes the project ID as the user project for all subsequent calls. Calls with a user project will be billed to that project rather than to the bucket's owning project.

A user project is required for all operations on Requester Pays buckets.
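A sketch of reading from a Requester Pays bucket, billing the caller's project (the bucket, object, and project names are placeholders):

```go
package main

import (
	"context"
	"io"
	"os"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Reads are billed to "my-billing-project", not the bucket's owner.
	b := client.Bucket("requester-pays-bucket").UserProject("my-billing-project")
	r, err := b.Object("my-object").NewReader(ctx)
	if err != nil {
		// TODO: handle error.
	}
	defer r.Close()
	if _, err := io.Copy(os.Stdout, r); err != nil {
		// TODO: handle error.
	}
}
```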

BucketIterator

type BucketIterator struct {
	// Prefix restricts the iterator to buckets whose names begin with it.
	Prefix string

	// contains filtered or unexported fields
}
 

A BucketIterator is an iterator over BucketAttrs.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

func (*BucketIterator) Next

func (it *BucketIterator) Next() (*BucketAttrs, error)
 

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

Note: This method is not safe for concurrent operations without explicit synchronization.

Example

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	it := client.Buckets(ctx, "my-project")
	for {
		bucketAttrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: handle error.
		}
		fmt.Println(bucketAttrs)
	}
}
 

func (*BucketIterator) PageInfo

func (it *BucketIterator) PageInfo() *iterator.PageInfo
 
 

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

Note: This method is not safe for concurrent operations without explicit synchronization.
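A sketch of page-at-a-time listing with iterator.NewPager (the project name and page size are placeholders):

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	it := client.Buckets(ctx, "my-project")
	// Fetch pages of up to 10 buckets; "" starts from the first page.
	pager := iterator.NewPager(it, 10, "")
	var buckets []*storage.BucketAttrs
	nextPageToken, err := pager.NextPage(&buckets)
	if err != nil {
		// TODO: handle error.
	}
	// Pass nextPageToken to a later NewPager call to resume listing.
	fmt.Println(buckets, nextPageToken)
}
```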

BucketLogging

type BucketLogging struct {
	// The destination bucket where the current bucket's logs
	// should be placed.
	LogBucket string

	// A prefix for log object names.
	LogObjectPrefix string
}
 

BucketLogging holds the bucket's logging configuration, which defines the destination bucket and optional name prefix for the current bucket's logs.

BucketPolicyOnly

type BucketPolicyOnly struct {
	// Enabled specifies whether access checks use only bucket-level IAM
	// policies. Enabled may be disabled until the locked time.
	Enabled bool

	// LockedTime specifies the deadline for changing Enabled from true to
	// false.
	LockedTime time.Time
}
 

BucketPolicyOnly is an alias for UniformBucketLevelAccess. Use of UniformBucketLevelAccess is preferred above BucketPolicyOnly.
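A sketch that turns on uniform bucket-level access for an existing bucket using the preferred field (the bucket name is a placeholder):

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Prefer UniformBucketLevelAccess over the legacy BucketPolicyOnly field.
	_, err = client.Bucket("my-bucket").Update(ctx, storage.BucketAttrsToUpdate{
		UniformBucketLevelAccess: &storage.UniformBucketLevelAccess{Enabled: true},
	})
	if err != nil {
		// TODO: handle error.
	}
}
```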

BucketWebsite

type BucketWebsite struct {
	// If the requested object path is missing, the service will ensure the path has
	// a trailing '/', append this suffix, and attempt to retrieve the resulting
	// object. This allows the creation of index.html objects to represent directory
	// pages.
	MainPageSuffix string

	// If the requested object path is missing, and any mainPageSuffix object is
	// missing, if applicable, the service will return the named object from this
	// bucket as the content for a 404 Not Found result.
	NotFoundPage string
}
 

BucketWebsite holds the bucket's website configuration, controlling how the service behaves when accessing bucket contents as a web site. See https://cloud.google.com/storage/docs/static-website for more information.
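A sketch of configuring index and 404 pages for a bucket served as a static site (the bucket and object names are placeholders):

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	_, err = client.Bucket("my-bucket").Update(ctx, storage.BucketAttrsToUpdate{
		Website: &storage.BucketWebsite{
			MainPageSuffix: "index.html",
			NotFoundPage:   "404.html",
		},
	})
	if err != nil {
		// TODO: handle error.
	}
}
```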

CORS

type CORS struct {
	// MaxAge is the value to return in the Access-Control-Max-Age
	// header used in preflight responses.
	MaxAge time.Duration

	// Methods is the list of HTTP methods on which to include CORS response
	// headers (GET, OPTIONS, POST, etc.). Note: "*" is permitted in the list
	// of methods, and means "any method".
	Methods []string

	// Origins is the list of Origins eligible to receive CORS response
	// headers. Note: "*" is permitted in the list of origins, and means
	// "any Origin".
	Origins []string

	// ResponseHeaders is the list of HTTP headers other than the simple
	// response headers to give permission for the user-agent to share
	// across domains.
	ResponseHeaders []string
}
 

CORS is the bucket's Cross-Origin Resource Sharing (CORS) configuration.
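A sketch that replaces the bucket's CORS configuration (the bucket name, origin, and headers are placeholders):

```go
package main

import (
	"context"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	_, err = client.Bucket("my-bucket").Update(ctx, storage.BucketAttrsToUpdate{
		// A non-nil slice replaces the whole configuration; an empty
		// slice removes all CORS policies.
		CORS: []storage.CORS{{
			MaxAge:          time.Hour,
			Methods:         []string{"GET", "HEAD"},
			Origins:         []string{"https://example.com"},
			ResponseHeaders: []string{"Content-Type"},
		}},
	})
	if err != nil {
		// TODO: handle error.
	}
}
```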

Client

type Client struct {
	// contains filtered or unexported fields
}
 

Client is a client for interacting with Google Cloud Storage.

Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

func NewClient

func NewClient(ctx context.Context, opts ...option.ClientOption) (*Client, error)
 

NewClient creates a new Google Cloud Storage client using the HTTP transport. The default scope is ScopeFullControl. To use a different scope, like ScopeReadOnly, use option.WithScopes.

Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

You may configure the client by passing in options from the [google.golang.org/api/option] package. You may also use options defined in this package, such as [WithJSONReads].

Examples

package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	// Use Google Application Default Credentials to authorize and authenticate the client.
	// More information about Application Default Credentials and how to enable is at
	// https://developers.google.com/identity/protocols/application-default-credentials.
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Use the client.

	// Close the client when finished.
	if err := client.Close(); err != nil {
		// TODO: handle error.
	}
}
 
unauthenticated
package main

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx, option.WithoutAuthentication())
	if err != nil {
		// TODO: handle error.
	}
	// Use the client.

	// Close the client when finished.
	if err := client.Close(); err != nil {
		// TODO: handle error.
	}
}
 

func NewGRPCClient

func NewGRPCClient(ctx context.Context, opts ...option.ClientOption) (*Client, error)
 

NewGRPCClient creates a new Storage client using the gRPC transport and API. Client methods which have not been implemented in gRPC will return an error. In particular, methods for Cloud Pub/Sub notifications are not supported. Using a non-default universe domain is also not supported with the Storage gRPC client.

The storage gRPC API is still in preview and not yet publicly available. If you would like to use the API, please first contact your GCP account rep to request access. The API may be subject to breaking changes.

Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines.

You may configure the client by passing in options from the [google.golang.org/api/option] package.

func (*Client) Bucket

func (c *Client) Bucket(name string) *BucketHandle
 
 

Bucket returns a BucketHandle, which provides operations on the named bucket. This call does not perform any network operations.

The supplied name must contain only lowercase letters, numbers, dashes, underscores, and dots. The full specification for valid bucket names can be found at:

 https://cloud.google.com/storage/docs/bucket-naming 

func (*Client) Buckets

```go
func (c *Client) Buckets(ctx context.Context, projectID string) *BucketIterator
```
 
 

Buckets returns an iterator over the buckets in the project. You may optionally set the iterator's Prefix field to restrict the list to buckets whose names begin with the prefix. By default, all buckets in the project are returned.

Note: The returned iterator is not safe for concurrent operations without explicit synchronization.

Example

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	it := client.Buckets(ctx, "my-project")
	_ = it // TODO: iterate using Next or iterator.Pager.
}
```
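The iterator returned by Buckets can be drained with its Next method until it reports iterator.Done. A minimal sketch (the project ID and use of each bucket's attributes are placeholders, not part of the original example):

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// "my-project" is a placeholder project ID.
	it := client.Buckets(ctx, "my-project")
	for {
		battrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: handle error.
		}
		// Print each bucket's name as an illustration.
		fmt.Println(battrs.Name)
	}
}
```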
 

func (*Client) Close

```go
func (c *Client) Close() error
```
 
 

Close closes the Client.

Close need not be called at program exit.

func (*Client) CreateHMACKey

```go
func (c *Client) CreateHMACKey(ctx context.Context, projectID, serviceAccountEmail string, opts ...HMACKeyOption) (*HMACKey, error)
```
 

CreateHMACKey invokes an RPC for Google Cloud Storage to create a new HMACKey.

Example

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	hkey, err := client.CreateHMACKey(ctx, "project-id", "service-account-email")
	if err != nil {
		// TODO: handle error.
	}
	_ = hkey // TODO: Use the HMAC Key.
}
```
 

func (*Client) HMACKeyHandle

```go
func (c *Client) HMACKeyHandle(projectID, accessID string) *HMACKeyHandle
```
 
 

HMACKeyHandle creates a handle that will be used for HMACKey operations.

func (*Client) ListHMACKeys

```go
func (c *Client) ListHMACKeys(ctx context.Context, projectID string, opts ...HMACKeyOption) *HMACKeysIterator
```
 
 

ListHMACKeys returns an iterator for listing HMACKeys.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

Examples

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	iter := client.ListHMACKeys(ctx, "project-id")
	for {
		key, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: handle error.
		}
		_ = key // TODO: Use the key.
	}
}
```
 
forServiceAccountEmail

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	iter := client.ListHMACKeys(ctx, "project-id", storage.ForHMACKeyServiceAccountEmail("service@account.email"))
	for {
		key, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: handle error.
		}
		_ = key // TODO: Use the key.
	}
}
```
 
showDeletedKeys

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	iter := client.ListHMACKeys(ctx, "project-id", storage.ShowDeletedHMACKeys())
	for {
		key, err := iter.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: handle error.
		}
		_ = key // TODO: Use the key.
	}
}
```
 

func (*Client) ServiceAccount

```go
func (c *Client) ServiceAccount(ctx context.Context, projectID string) (string, error)
```
 

ServiceAccount fetches the email address of the given project's Google Cloud Storage service account.

func (*Client) SetRetry

```go
func (c *Client) SetRetry(opts ...RetryOption)
```
 

SetRetry configures the client with custom retry behavior as specified by the options that are passed to it. All operations using this client will use the customized retry configuration. This should be called once before using the client for network operations, as there could be indeterminate behaviour with operations in progress. Retry options set on a bucket or object handle will take precedence over these options.
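A minimal sketch of configuring retries, assuming the storage package's WithBackoff and WithPolicy retry options and the gax.Backoff type from github.com/googleapis/gax-go/v2 (the specific backoff values are illustrative, not recommendations):

```go
package main

import (
	"context"
	"time"

	"cloud.google.com/go/storage"
	"github.com/googleapis/gax-go/v2"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Configure retries once, before any network operations on the client.
	client.SetRetry(
		// Customize the backoff between retry attempts.
		storage.WithBackoff(gax.Backoff{
			Initial: 2 * time.Second,
			Max:     30 * time.Second,
		}),
		// Retry all operations, including ones that are not idempotent.
		storage.WithPolicy(storage.RetryAlways),
	)
	_ = client // TODO: Use the client.
}
```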

Composer

```go
type Composer struct {
	// ObjectAttrs are optional attributes to set on the destination object.
	// Any attributes must be initialized before any calls on the Composer. Nil
	// or zero-valued attributes are ignored.
	ObjectAttrs

	// SendCRC specifies whether to transmit a CRC32C field. It should be set
	// to true in addition to setting the Composer's CRC32C field, because zero
	// is a valid CRC and normally a zero would not be transmitted.
	// If a CRC32C is sent, and the data in the destination object does not match
	// the checksum, the compose will be rejected.
	SendCRC32C bool
	// contains filtered or unexported fields
}
```
 

A Composer composes source objects into a destination object.

For Requester Pays buckets, the user project of dst is billed.

func (*Composer) Run

```go
func (c *Composer) Run(ctx context.Context) (attrs *ObjectAttrs, err error)
```
 

Run performs the compose operation.

Example

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	bkt := client.Bucket("bucketname")
	src1 := bkt.Object("o1")
	src2 := bkt.Object("o2")
	dst := bkt.Object("o3")

	// Compose and modify metadata.
	c := dst.ComposerFrom(src1, src2)
	c.ContentType = "text/plain"

	// Set the expected checksum for the destination object to be validated by
	// the backend (if desired).
	c.CRC32C = 42
	c.SendCRC32C = true

	attrs, err := c.Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(attrs)

	// Just compose.
	attrs, err = dst.ComposerFrom(src1, src2).Run(ctx)
	if err != nil {
		// TODO: Handle error.
	}
	fmt.Println(attrs)
}
```
 

Conditions

```go
type Conditions struct {
	// GenerationMatch specifies that the object must have the given generation
	// for the operation to occur.
	// If GenerationMatch is zero, it has no effect.
	// Use DoesNotExist to specify that the object does not exist in the bucket.
	GenerationMatch int64

	// GenerationNotMatch specifies that the object must not have the given
	// generation for the operation to occur.
	// If GenerationNotMatch is zero, it has no effect.
	// This condition only works for object reads if the WithJSONReads client
	// option is set.
	GenerationNotMatch int64

	// DoesNotExist specifies that the object must not exist in the bucket for
	// the operation to occur.
	// If DoesNotExist is false, it has no effect.
	DoesNotExist bool

	// MetagenerationMatch specifies that the object must have the given
	// metageneration for the operation to occur.
	// If MetagenerationMatch is zero, it has no effect.
	MetagenerationMatch int64

	// MetagenerationNotMatch specifies that the object must not have the given
	// metageneration for the operation to occur.
	// If MetagenerationNotMatch is zero, it has no effect.
	// This condition only works for object reads if the WithJSONReads client
	// option is set.
	MetagenerationNotMatch int64
}
```
 

Conditions constrain methods to act on specific generations of objects.

The zero value is an empty set of constraints. Not all conditions or combinations of conditions are applicable to all methods. See https://cloud.google.com/storage/docs/generations-preconditions for details on how these operate.
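Conditions are applied through ObjectHandle.If. A minimal sketch of using DoesNotExist so a write only succeeds when the object is absent (the bucket and object names are placeholders):

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	obj := client.Bucket("my-bucket").Object("my-object")
	// Only create the object if it does not already exist.
	w := obj.If(storage.Conditions{DoesNotExist: true}).NewWriter(ctx)
	if _, err := w.Write([]byte("hello")); err != nil {
		// TODO: handle error.
	}
	if err := w.Close(); err != nil {
		// TODO: handle error. The error may indicate the precondition
		// failed, i.e. the object already exists.
	}
}
```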

Copier

```go
type Copier struct {
	// ObjectAttrs are optional attributes to set on the destination object.
	// Any attributes must be initialized before any calls on the Copier. Nil
	// or zero-valued attributes are ignored.
	ObjectAttrs

	// RewriteToken can be set before calling Run to resume a copy
	// operation. After Run returns a non-nil error, RewriteToken will
	// have been updated to contain the value needed to resume the copy.
	RewriteToken string

	// ProgressFunc can be used to monitor the progress of a multi-RPC copy
	// operation. If ProgressFunc is not nil and copying requires multiple
	// calls to the underlying service (see
	// https://cloud.google.com/storage/docs/json_api/v1/objects/rewrite), then
	// ProgressFunc will be invoked after each call with the number of bytes of
	// content copied so far and the total size in bytes of the source object.
	//
	// ProgressFunc is intended to make upload progress available to the
	// application. For example, the implementation of ProgressFunc may update
	// a progress bar in the application's UI, or log the result of
	// float64(copiedBytes)/float64(totalBytes).
	//
	// ProgressFunc should return quickly without blocking.
	ProgressFunc func(copiedBytes, totalBytes uint64)

	// The Cloud KMS key, in the form projects/P/locations/L/keyRings/R/cryptoKeys/K,
	// that will be used to encrypt the object. Overrides the object's KMSKeyName, if
	// any.
	//
	// Providing both a DestinationKMSKeyName and a customer-supplied encryption key
	// (via ObjectHandle.Key) on the destination object will result in an error when
	// Run is called.
	DestinationKMSKeyName string
	// contains filtered or unexported fields
}
```
 

A Copier copies a source object to a destination.

func (*Copier) Run

```go
func (c *Copier) Run(ctx context.Context) (attrs *ObjectAttrs, err error)
```
 

Run performs the copy.

Examples

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	src := client.Bucket("bucketname").Object("file1")
	dst := client.Bucket("another-bucketname").Object("file2")

	// Copy content and modify metadata.
	copier := dst.CopierFrom(src)
	copier.ContentType = "text/plain"
	attrs, err := copier.Run(ctx)
	if err != nil {
		// TODO: Handle error, possibly resuming with copier.RewriteToken.
	}
	fmt.Println(attrs)

	// Just copy content.
	attrs, err = dst.CopierFrom(src).Run(ctx)
	if err != nil {
		// TODO: Handle error. No way to resume.
	}
	fmt.Println(attrs)
}
```
 
progress

```go
package main

import (
	"context"
	"log"

	"cloud.google.com/go/storage"
)

func main() {
	// Display progress across multiple rewrite RPCs.
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	src := client.Bucket("bucketname").Object("file1")
	dst := client.Bucket("another-bucketname").Object("file2")

	copier := dst.CopierFrom(src)
	copier.ProgressFunc = func(copiedBytes, totalBytes uint64) {
		log.Printf("copy %.1f%% done", float64(copiedBytes)/float64(totalBytes)*100)
	}
	if _, err := copier.Run(ctx); err != nil {
		// TODO: handle error.
	}
}
```
 

CustomPlacementConfig

```go
type CustomPlacementConfig struct {
	// The list of regional locations in which data is placed.
	// Custom Dual Regions require exactly 2 regional locations.
	DataLocations []string
}
```
 

CustomPlacementConfig holds the bucket's custom placement configuration for Custom Dual Regions. See https://cloud.google.com/storage/docs/locations#location-dr for more information.

HMACKey

```go
type HMACKey struct {
	// The HMAC's secret key.
	Secret string

	// AccessID is the ID of the HMAC key.
	AccessID string

	// Etag is the HTTP/1.1 Entity tag.
	Etag string

	// ID is the ID of the HMAC key, including the ProjectID and AccessID.
	ID string

	// ProjectID is the ID of the project that owns the
	// service account to which the key authenticates.
	ProjectID string

	// ServiceAccountEmail is the email address
	// of the key's associated service account.
	ServiceAccountEmail string

	// CreatedTime is the creation time of the HMAC key.
	CreatedTime time.Time

	// UpdatedTime is the last modification time of the HMAC key metadata.
	UpdatedTime time.Time

	// State is the state of the HMAC key.
	// It can be one of StateActive, StateInactive or StateDeleted.
	State HMACState
}
```
 

HMACKey is the representation of a Google Cloud Storage HMAC key.

HMAC keys are used to authenticate signed access to objects. To enable HMAC key authentication, please visit https://cloud.google.com/storage/docs/migrating .

HMACKeyAttrsToUpdate

```go
type HMACKeyAttrsToUpdate struct {
	// State is required and must be either StateActive or StateInactive.
	State HMACState

	// Etag is an optional field and it is the HTTP/1.1 Entity tag.
	Etag string
}
```
 

HMACKeyAttrsToUpdate defines the attributes of an HMACKey that will be updated.

HMACKeyHandle

```go
type HMACKeyHandle struct {
	// contains filtered or unexported fields
}
```
 

HMACKeyHandle helps provide access and management for HMAC keys.

func (*HMACKeyHandle) Delete

```go
func (hkh *HMACKeyHandle) Delete(ctx context.Context, opts ...HMACKeyOption) error
```
 
 

Delete invokes an RPC to delete the key referenced by accessID, on Google Cloud Storage. Only inactive HMAC keys can be deleted. After deletion, a key cannot be used to authenticate requests.

Example

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	hkh := client.HMACKeyHandle("project-id", "access-key-id")
	// Make sure that the HMACKey being deleted has a status of inactive.
	if err := hkh.Delete(ctx); err != nil {
		// TODO: handle error.
	}
}
```
 

func (*HMACKeyHandle) Get

```go
func (hkh *HMACKeyHandle) Get(ctx context.Context, opts ...HMACKeyOption) (*HMACKey, error)
```
 

Get invokes an RPC to retrieve the HMAC key referenced by the HMACKeyHandle's accessID.

Options such as UserProjectForHMACKeys can be used to set the userProject to be billed against for operations.

Example

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	hkh := client.HMACKeyHandle("project-id", "access-key-id")
	hkey, err := hkh.Get(ctx)
	if err != nil {
		// TODO: handle error.
	}
	_ = hkey // TODO: Use the HMAC Key.
}
```
 

func (*HMACKeyHandle) Update

```go
func (hkh *HMACKeyHandle) Update(ctx context.Context, au HMACKeyAttrsToUpdate, opts ...HMACKeyOption) (*HMACKey, error)
```

Update mutates the HMACKey referred to by accessID.

Example

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	hkh := client.HMACKeyHandle("project-id", "access-key-id")
	ukey, err := hkh.Update(ctx, storage.HMACKeyAttrsToUpdate{
		State: storage.Inactive,
	})
	if err != nil {
		// TODO: handle error.
	}
	_ = ukey // TODO: Use the HMAC Key.
}
```
 

HMACKeyOption

```go
type HMACKeyOption interface {
	// contains filtered or unexported methods
}
```
 

HMACKeyOption configures the behavior of HMACKey related methods and actions.

func ForHMACKeyServiceAccountEmail

```go
func ForHMACKeyServiceAccountEmail(serviceAccountEmail string) HMACKeyOption
```
 
 

ForHMACKeyServiceAccountEmail returns HMAC Keys that are associated with the email address of a service account in the project.

Only one service account email can be used as a filter, so if multiple of these options are applied, the last email to be set will be used.

func ShowDeletedHMACKeys

```go
func ShowDeletedHMACKeys() HMACKeyOption
```
 
 

ShowDeletedHMACKeys will also list keys whose state is "DELETED".

func UserProjectForHMACKeys

```go
func UserProjectForHMACKeys(userProjectID string) HMACKeyOption
```
 
 

UserProjectForHMACKeys will bill the request against userProjectID if userProjectID is non-empty.

Note: This is a noop right now and only provided for API compatibility.

HMACKeysIterator

```go
type HMACKeysIterator struct {
	// contains filtered or unexported fields
}
```
 

An HMACKeysIterator is an iterator over HMACKeys.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

func (*HMACKeysIterator) Next

```go
func (it *HMACKeysIterator) Next() (*HMACKey, error)
```
 

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

func (*HMACKeysIterator) PageInfo

```go
func (it *HMACKeysIterator) PageInfo() *iterator.PageInfo
```
 
 

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

HMACState

```go
type HMACState string
```
 
 

HMACState is the state of the HMAC key.

Active, Inactive, Deleted

```go
const (
	// Active is the status for an active key that can be used to sign
	// requests.
	Active HMACState = "ACTIVE"

	// Inactive is the status for an inactive key thus requests signed by
	// this key will be denied.
	Inactive HMACState = "INACTIVE"

	// Deleted is the status for a key that is deleted.
	// Once in this state the key cannot be recovered
	// and does not count towards key limits. Deleted keys will be cleaned
	// up later.
	Deleted HMACState = "DELETED"
)
```
 

HierarchicalNamespace

```go
type HierarchicalNamespace struct {
	// Enabled indicates whether hierarchical namespace features are enabled on
	// the bucket. This can only be set at bucket creation time currently.
	Enabled bool
}
```
 

HierarchicalNamespace contains the bucket's hierarchical namespace configuration. Hierarchical namespace enabled buckets can contain [cloud.google.com/go/storage/control/apiv2/controlpb.Folder] resources.

Lifecycle

```go
type Lifecycle struct {
	Rules []LifecycleRule
}
```
 

Lifecycle is the lifecycle configuration for objects in the bucket.

LifecycleAction

```go
type LifecycleAction struct {
	// Type is the type of action to take on matching objects.
	//
	// Acceptable values are storage.DeleteAction, storage.SetStorageClassAction,
	// and storage.AbortIncompleteMPUAction.
	Type string

	// StorageClass is the storage class to set on matching objects if the Action
	// is "SetStorageClass".
	StorageClass string
}
```
 

LifecycleAction is a lifecycle configuration action.

LifecycleCondition

```go
type LifecycleCondition struct {
	// AllObjects is used to select all objects in a bucket by
	// setting AgeInDays to 0.
	AllObjects bool

	// AgeInDays is the age of the object in days.
	// If you want to set AgeInDays to `0` use AllObjects set to `true`.
	AgeInDays int64

	// CreatedBefore is the time the object was created.
	//
	// This condition is satisfied when an object is created before midnight of
	// the specified date in UTC.
	CreatedBefore time.Time

	// CustomTimeBefore is the CustomTime metadata field of the object. This
	// condition is satisfied when an object's CustomTime timestamp is before
	// midnight of the specified date in UTC.
	//
	// This condition can only be satisfied if CustomTime has been set.
	CustomTimeBefore time.Time

	// DaysSinceCustomTime is the days elapsed since the CustomTime date of the
	// object. This condition can only be satisfied if CustomTime has been set.
	// Note: Using `0` as the value will be ignored by the library and not sent to the API.
	DaysSinceCustomTime int64

	// DaysSinceNoncurrentTime is the days elapsed since the noncurrent timestamp
	// of the object. This condition is relevant only for versioned objects.
	// Note: Using `0` as the value will be ignored by the library and not sent to the API.
	DaysSinceNoncurrentTime int64

	// Liveness specifies the object's liveness. Relevant only for versioned objects.
	Liveness Liveness

	// MatchesPrefix is the condition matching an object if any of the
	// matches_prefix strings are an exact prefix of the object's name.
	MatchesPrefix []string

	// MatchesStorageClasses is the condition matching the object's storage
	// class.
	//
	// Values include "STANDARD", "NEARLINE", "COLDLINE" and "ARCHIVE".
	MatchesStorageClasses []string

	// MatchesSuffix is the condition matching an object if any of the
	// matches_suffix strings are an exact suffix of the object's name.
	MatchesSuffix []string

	// NoncurrentTimeBefore is the noncurrent timestamp of the object. This
	// condition is satisfied when an object's noncurrent timestamp is before
	// midnight of the specified date in UTC.
	//
	// This condition is relevant only for versioned objects.
	NoncurrentTimeBefore time.Time

	// NumNewerVersions is the condition matching objects with a number of newer versions.
	//
	// If the value is N, this condition is satisfied when there are at least N
	// versions (including the live version) newer than this version of the
	// object.
	// Note: Using `0` as the value will be ignored by the library and not sent to the API.
	NumNewerVersions int64
}
```
 

LifecycleCondition is a set of conditions used to match objects and take an action automatically.

All configured conditions must be met for the associated action to be taken.

LifecycleRule

```go
type LifecycleRule struct {
	// Action is the action to take when all of the associated conditions are
	// met.
	Action LifecycleAction

	// Condition is the set of conditions that must be met for the associated
	// action to be taken.
	Condition LifecycleCondition
}
```
 

LifecycleRule is a lifecycle configuration rule.

When all the configured conditions are met by an object in the bucket, the configured action will automatically be taken on that object.
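A minimal sketch of attaching a lifecycle rule at bucket creation time, assuming a hypothetical bucket and project name; the rule deletes objects older than 30 days:

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// "my-bucket" and "my-project" are placeholders.
	bucket := client.Bucket("my-bucket")
	if err := bucket.Create(ctx, "my-project", &storage.BucketAttrs{
		Lifecycle: storage.Lifecycle{Rules: []storage.LifecycleRule{{
			// Delete objects once they are more than 30 days old.
			Action:    storage.LifecycleAction{Type: storage.DeleteAction},
			Condition: storage.LifecycleCondition{AgeInDays: 30},
		}}},
	}); err != nil {
		// TODO: handle error.
	}
}
```

Lifecycle rules can also be added to an existing bucket via BucketHandle.Update with BucketAttrsToUpdate.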

Liveness

```go
type Liveness int
```
 
 

Liveness specifies whether the object is live or not.

LiveAndArchived, Live, Archived

```go
const (
	// LiveAndArchived includes both live and archived objects.
	LiveAndArchived Liveness = iota
	// Live specifies that the object is still live.
	Live
	// Archived specifies that the object is archived.
	Archived
)
```
 

Notification

```go
type Notification struct {
	// The ID of the notification.
	ID string

	// The ID of the topic to which this subscription publishes.
	TopicID string

	// The ID of the project to which the topic belongs.
	TopicProjectID string

	// Only send notifications about listed event types. If empty, send notifications
	// for all event types.
	// See https://cloud.google.com/storage/docs/pubsub-notifications#events.
	EventTypes []string

	// If present, only apply this notification configuration to object names that
	// begin with this prefix.
	ObjectNamePrefix string

	// An optional list of additional attributes to attach to each Cloud PubSub
	// message published for this notification subscription.
	CustomAttributes map[string]string

	// The contents of the message payload.
	// See https://cloud.google.com/storage/docs/pubsub-notifications#payload.
	PayloadFormat string
}
```
 

A Notification describes how to send Cloud PubSub messages when certain events occur in a bucket.

ObjectAttrs

```go
type ObjectAttrs struct {
	// Bucket is the name of the bucket containing this GCS object.
	// This field is read-only.
	Bucket string

	// Name is the name of the object within the bucket.
	// This field is read-only.
	Name string

	// ContentType is the MIME type of the object's content.
	ContentType string

	// ContentLanguage is the content language of the object's content.
	ContentLanguage string

	// CacheControl is the Cache-Control header to be sent in the response
	// headers when serving the object data.
	CacheControl string

	// EventBasedHold specifies whether an object is under event-based hold. New
	// objects created in a bucket whose DefaultEventBasedHold is set will
	// default to that value.
	EventBasedHold bool

	// TemporaryHold specifies whether an object is under temporary hold. While
	// this flag is set to true, the object is protected against deletion and
	// overwrites.
	TemporaryHold bool

	// RetentionExpirationTime is a server-determined value that specifies the
	// earliest time that the object's retention period expires.
	// This is a read-only field.
	RetentionExpirationTime time.Time

	// ACL is the list of access control rules for the object.
	ACL []ACLRule

	// If not empty, applies a predefined set of access controls. It should be set
	// only when writing, copying or composing an object. When copying or composing,
	// it acts as the destinationPredefinedAcl parameter.
	// PredefinedACL is always empty for ObjectAttrs returned from the service.
	// See https://cloud.google.com/storage/docs/json_api/v1/objects/insert
	// for valid values.
	PredefinedACL string

	// Owner is the owner of the object. This field is read-only.
	//
	// If non-zero, it is in the form of "user-
```
 

ObjectAttrs represents the metadata for a Google Cloud Storage (GCS) object.

ObjectAttrsToUpdate

```go
type ObjectAttrsToUpdate struct {
	EventBasedHold     optional.Bool
	TemporaryHold      optional.Bool
	ContentType        optional.String
	ContentLanguage    optional.String
	ContentEncoding    optional.String
	ContentDisposition optional.String
	CacheControl       optional.String
	CustomTime         time.Time         // Cannot be deleted or backdated from its current value.
	Metadata           map[string]string // Set to map[string]string{} to delete.
	ACL                []ACLRule

	// If not empty, applies a predefined set of access controls. ACL must be nil.
	// See https://cloud.google.com/storage/docs/json_api/v1/objects/patch.
	PredefinedACL string

	// Retention contains the retention configuration for this object.
	// Operations other than setting the retention for the first time or
	// extending the RetainUntil time on the object retention must be done
	// on an ObjectHandle with OverrideUnlockedRetention set to true.
	Retention *ObjectRetention
}
```
 

ObjectAttrsToUpdate is used to update the attributes of an object. Only fields set to non-nil values will be updated. For all fields except CustomTime and Retention, set the field to its zero value to delete it. CustomTime cannot be deleted or changed to an earlier time once set. Retention can be deleted (only if the Mode is Unlocked) by setting it to an empty value (not nil).

For example, to change ContentType and delete ContentEncoding, Metadata and Retention, use:

 ObjectAttrsToUpdate{
     ContentType:     "text/html",
     ContentEncoding: "",
     Metadata:        map[string]string{},
     Retention:       &ObjectRetention{},
 }

ObjectHandle

 type ObjectHandle struct {
     // contains filtered or unexported fields
 }


ObjectHandle provides operations on an object in a Google Cloud Storage bucket. Use BucketHandle.Object to get a handle.

Example

exists

 package main

 import (
     "context"
     "fmt"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     attrs, err := client.Bucket("my-bucket").Object("my-object").Attrs(ctx)
     if err == storage.ErrObjectNotExist {
         fmt.Println("The object does not exist")
         return
     }
     if err != nil {
         // TODO: handle error.
     }
     fmt.Printf("The object exists and has attributes: %#v\n", attrs)
 }


func (*ObjectHandle) ACL

 func (o *ObjectHandle) ACL() *ACLHandle

ACL provides access to the object's access control list. This controls who can read and write this object. This call does not perform any network operations.

func (*ObjectHandle) Attrs

 func (o *ObjectHandle) Attrs(ctx context.Context) (attrs *ObjectAttrs, err error)

Attrs returns meta information about the object. ErrObjectNotExist will be returned if the object is not found.

Examples

 package main

 import (
     "context"
     "fmt"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     objAttrs, err := client.Bucket("my-bucket").Object("my-object").Attrs(ctx)
     if err != nil {
         // TODO: handle error.
     }
     fmt.Println(objAttrs)
 }

withConditions
 package main

 import (
     "context"
     "fmt"
     "time"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     obj := client.Bucket("my-bucket").Object("my-object")
     // Read the object.
     objAttrs1, err := obj.Attrs(ctx)
     if err != nil {
         // TODO: handle error.
     }
     // Do something else for a while.
     time.Sleep(5 * time.Minute)
     // Now read the same contents, even if the object has been written since the last read.
     objAttrs2, err := obj.Generation(objAttrs1.Generation).Attrs(ctx)
     if err != nil {
         // TODO: handle error.
     }
     fmt.Println(objAttrs1, objAttrs2)
 }


func (*ObjectHandle) BucketName

 func (o *ObjectHandle) BucketName() string

BucketName returns the name of the bucket.

func (*ObjectHandle) ComposerFrom

 func (dst *ObjectHandle) ComposerFrom(srcs ...*ObjectHandle) *Composer

ComposerFrom creates a Composer that can compose srcs into dst. You can immediately call Run on the returned Composer, or you can configure it first.

The encryption key for the destination object will be used to decrypt all source objects and encrypt the destination object. It is an error to specify an encryption key for any of the source objects.

func (*ObjectHandle) CopierFrom

 func (dst *ObjectHandle) CopierFrom(src *ObjectHandle) *Copier

CopierFrom creates a Copier that can copy src to dst. You can immediately call Run on the returned Copier, or you can configure it first.

For Requester Pays buckets, the user project of dst is billed, unless it is empty, in which case the user project of src is billed.

Example

rotateEncryptionKeys
 package main

 import (
     "context"

     "cloud.google.com/go/storage"
 )

 var key1, key2 []byte

 func main() {
     // To rotate the encryption key on an object, copy it onto itself.
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     obj := client.Bucket("bucketname").Object("obj")
     // Assume obj is encrypted with key1, and we want to change to key2.
     _, err = obj.Key(key2).CopierFrom(obj.Key(key1)).Run(ctx)
     if err != nil {
         // TODO: handle error.
     }
 }


func (*ObjectHandle) Delete

 func (o *ObjectHandle) Delete(ctx context.Context) error

Delete deletes the single specified object.

Example

 package main

 import (
     "context"
     "fmt"

     "cloud.google.com/go/storage"
     "google.golang.org/api/iterator"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     // To delete multiple objects in a bucket, list them with an
     // ObjectIterator, then Delete them.

     // If you are using this package on the App Engine Flex runtime,
     // you can init a bucket client with your app's default bucket name.
     // See http://godoc.org/google.golang.org/appengine/file#DefaultBucketName.
     bucket := client.Bucket("my-bucket")
     it := bucket.Objects(ctx, nil)
     for {
         objAttrs, err := it.Next()
         if err != nil && err != iterator.Done {
             // TODO: Handle error.
         }
         if err == iterator.Done {
             break
         }
         if err := bucket.Object(objAttrs.Name).Delete(ctx); err != nil {
             // TODO: Handle error.
         }
     }
     fmt.Println("deleted all object items in the bucket specified.")
 }


func (*ObjectHandle) Generation

 func (o *ObjectHandle) Generation(gen int64) *ObjectHandle

Generation returns a new ObjectHandle that operates on a specific generation of the object. By default, the handle operates on the latest generation. Not all operations work when given a specific generation; check the API endpoints at https://cloud.google.com/storage/docs/json_api/ for details.

Example

 package main

 import (
     "context"
     "io"
     "os"

     "cloud.google.com/go/storage"
 )

 var gen int64

 func main() {
     // Read an object's contents from generation gen, regardless of the
     // current generation of the object.
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     obj := client.Bucket("my-bucket").Object("my-object")
     rc, err := obj.Generation(gen).NewReader(ctx)
     if err != nil {
         // TODO: handle error.
     }
     defer rc.Close()
     if _, err := io.Copy(os.Stdout, rc); err != nil {
         // TODO: handle error.
     }
 }


func (*ObjectHandle) If

 func (o *ObjectHandle) If(conds Conditions) *ObjectHandle

If returns a new ObjectHandle that applies a set of preconditions. Preconditions already set on the ObjectHandle are ignored. The supplied Conditions must have at least one field set to a non-default value; otherwise an error will be returned from any operation on the ObjectHandle. Operations on the new handle will return an error if the preconditions are not satisfied. See https://cloud.google.com/storage/docs/generations-preconditions for more details.

Example

 package main

 import (
     "context"
     "io"
     "net/http"
     "os"

     "cloud.google.com/go/storage"
     "google.golang.org/api/googleapi"
 )

 var gen int64

 func main() {
     // Read from an object only if the current generation is gen.
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     obj := client.Bucket("my-bucket").Object("my-object")
     rc, err := obj.If(storage.Conditions{GenerationMatch: gen}).NewReader(ctx)
     if err != nil {
         // TODO: handle error.
     }
     if _, err := io.Copy(os.Stdout, rc); err != nil {
         // TODO: handle error.
     }
     if err := rc.Close(); err != nil {
         switch ee := err.(type) {
         case *googleapi.Error:
             if ee.Code == http.StatusPreconditionFailed {
                 // The condition presented in the If failed.
                 // TODO: handle error.
             }
             // TODO: handle other status codes here.
         default:
             // TODO: handle error.
         }
     }
 }


func (*ObjectHandle) Key

 func (o *ObjectHandle) Key(encryptionKey []byte) *ObjectHandle

Key returns a new ObjectHandle that uses the supplied encryption key to encrypt and decrypt the object's contents.

The encryption key must be a 32-byte AES-256 key. See https://cloud.google.com/storage/docs/encryption for details.

Example

 package main

 import (
     "context"

     "cloud.google.com/go/storage"
 )

 var secretKey []byte

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     obj := client.Bucket("my-bucket").Object("my-object")
     // Encrypt the object's contents.
     w := obj.Key(secretKey).NewWriter(ctx)
     if _, err := w.Write([]byte("top secret")); err != nil {
         // TODO: handle error.
     }
     if err := w.Close(); err != nil {
         // TODO: handle error.
     }
 }


func (*ObjectHandle) NewRangeReader

 func (o *ObjectHandle) NewRangeReader(ctx context.Context, offset, length int64) (r *Reader, err error)

NewRangeReader reads part of an object, reading at most length bytes starting at the given offset. If length is negative, the object is read until the end. If offset is negative, the object is read abs(offset) bytes from the end, and length must also be negative to indicate all remaining bytes will be read.
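These offset/length rules can be sketched with a hypothetical helper (not part of this package) that resolves a requested range against a known object size:

```go
package main

import "fmt"

// resolveRange is an illustrative helper (not the package's implementation)
// that mirrors NewRangeReader's offset/length semantics: a negative length
// reads to the end, and a negative offset reads the last abs(offset) bytes
// (the docs require length to also be negative in that case).
func resolveRange(size, offset, length int64) (start, end int64) {
	if offset < 0 {
		// Negative offset: read the last abs(offset) bytes.
		start = size + offset
		if start < 0 {
			start = 0
		}
		return start, size
	}
	if length < 0 || offset+length > size {
		// Negative length (or a range past EOF): read until the end.
		return offset, size
	}
	return offset, offset + length
}

func main() {
	fmt.Println(resolveRange(1000, 0, 64))   // first 64 bytes
	fmt.Println(resolveRange(1000, -10, -1)) // last 10 bytes
	fmt.Println(resolveRange(1000, 100, -1)) // from byte 100 to the end
}
```

The three cases correspond to the first-64K, lastNBytes, and untilEnd examples below.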

If the object's metadata property "Content-Encoding" is set to "gzip" or satisfies decompressive transcoding per https://cloud.google.com/storage/docs/transcoding , that file will be served back whole, regardless of the requested range, as Google Cloud Storage dictates.

By default, reads are made using the Cloud Storage XML API. We recommend using the JSON API instead, which can be done by setting [WithJSONReads] when calling [NewClient]. This ensures consistency with other client operations, which all use JSON. JSON will become the default in a future release.

Examples

 package main

 import (
     "context"
     "fmt"
     "io/ioutil"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     // Read only the first 64K.
     rc, err := client.Bucket("bucketname").Object("filename1").NewRangeReader(ctx, 0, 64*1024)
     if err != nil {
         // TODO: handle error.
     }
     defer rc.Close()
     slurp, err := ioutil.ReadAll(rc)
     if err != nil {
         // TODO: handle error.
     }
     fmt.Printf("first 64K of file contents:\n%s\n", slurp)
 }

lastNBytes
 package main

 import (
     "context"
     "fmt"
     "io/ioutil"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     // Read only the last 10 bytes of the file.
     rc, err := client.Bucket("bucketname").Object("filename1").NewRangeReader(ctx, -10, -1)
     if err != nil {
         // TODO: handle error.
     }
     defer rc.Close()
     slurp, err := ioutil.ReadAll(rc)
     if err != nil {
         // TODO: handle error.
     }
     fmt.Printf("Last 10 bytes from the end of the file:\n%s\n", slurp)
 }

untilEnd
 package main

 import (
     "context"
     "fmt"
     "io/ioutil"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     // Read from the 101st byte until the end of the file.
     rc, err := client.Bucket("bucketname").Object("filename1").NewRangeReader(ctx, 100, -1)
     if err != nil {
         // TODO: handle error.
     }
     defer rc.Close()
     slurp, err := ioutil.ReadAll(rc)
     if err != nil {
         // TODO: handle error.
     }
     fmt.Printf("From 101st byte until the end:\n%s\n", slurp)
 }


func (*ObjectHandle) NewReader

 func (o *ObjectHandle) NewReader(ctx context.Context) (*Reader, error)

NewReader creates a new Reader to read the contents of the object. ErrObjectNotExist will be returned if the object is not found.

The caller must call Close on the returned Reader when done reading.

By default, reads are made using the Cloud Storage XML API. We recommend using the JSON API instead, which can be done by setting [WithJSONReads] when calling [NewClient]. This ensures consistency with other client operations, which all use JSON. JSON will become the default in a future release.

Example

 package main

 import (
     "context"
     "fmt"
     "io/ioutil"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     rc, err := client.Bucket("my-bucket").Object("my-object").NewReader(ctx)
     if err != nil {
         // TODO: handle error.
     }
     slurp, err := ioutil.ReadAll(rc)
     rc.Close()
     if err != nil {
         // TODO: handle error.
     }
     fmt.Println("file contents:", slurp)
 }


func (*ObjectHandle) NewWriter

 func (o *ObjectHandle) NewWriter(ctx context.Context) *Writer

NewWriter returns a storage Writer that writes to the GCS object associated with this ObjectHandle.

A new object will be created unless an object with this name already exists. Otherwise any previous object with the same name will be replaced. The object will not be available (and any previous object will remain) until Close has been called.

Attributes can be set on the object by modifying the returned Writer's ObjectAttrs field before the first call to Write. If no ContentType attribute is specified, the content type will be automatically sniffed using net/http.DetectContentType.

Note that each Writer allocates an internal buffer of size Writer.ChunkSize. See the ChunkSize docs for more information.

It is the caller's responsibility to call Close when writing is done. To stop writing without saving the data, cancel the context.

Example

 package main

 import (
     "context"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     wc := client.Bucket("bucketname").Object("filename1").NewWriter(ctx)
     _ = wc // TODO: Use the Writer.
 }


func (*ObjectHandle) ObjectName

 func (o *ObjectHandle) ObjectName() string

ObjectName returns the name of the object.

func (*ObjectHandle) OverrideUnlockedRetention

 func (o *ObjectHandle) OverrideUnlockedRetention(override bool) *ObjectHandle

OverrideUnlockedRetention provides an option for overriding an Unlocked Retention policy. This must be set to true in order to change a policy from Unlocked to Locked, to set it to null, or to reduce its RetainUntil attribute. It is not required for setting the ObjectRetention for the first time nor for extending the RetainUntil time.

func (*ObjectHandle) ReadCompressed

 func (o *ObjectHandle) ReadCompressed(compressed bool) *ObjectHandle

ReadCompressed when true causes the read to happen without decompressing.

func (*ObjectHandle) Restore

 func (o *ObjectHandle) Restore(ctx context.Context, opts *RestoreOptions) (*ObjectAttrs, error)

Restore will restore a soft-deleted object to a live object. Note that you must specify a generation to use this method.

func (*ObjectHandle) Retryer

 func (o *ObjectHandle) Retryer(opts ...RetryOption) *ObjectHandle

Retryer returns an object handle that is configured with custom retry behavior as specified by the options that are passed to it. All operations on the new handle will use the customized retry configuration. These retry options will merge with the bucket's retryer (if set) for the returned handle. Options passed into this method will take precedence over retry options on the bucket and client. Note that you must explicitly pass in each option you want to override.
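The per-option precedence described above can be pictured as a layered merge in which more specific layers win. This generic sketch is illustrative only, not the library's implementation; the map keys are hypothetical stand-ins for individual retry options (policy, backoff, and so on):

```go
package main

import "fmt"

// mergeRetryConfig illustrates per-option precedence: a value set on the
// object handle wins over the bucket's, which wins over the client's.
func mergeRetryConfig(client, bucket, handle map[string]string) map[string]string {
	merged := map[string]string{}
	// Apply layers from least to most specific; later layers override.
	for _, cfg := range []map[string]string{client, bucket, handle} {
		for k, v := range cfg {
			merged[k] = v
		}
	}
	return merged
}

func main() {
	clientCfg := map[string]string{"policy": "RetryIdempotent", "backoff": "default"}
	bucketCfg := map[string]string{"backoff": "max=30s"}
	handleCfg := map[string]string{"policy": "RetryAlways"}
	merged := mergeRetryConfig(clientCfg, bucketCfg, handleCfg)
	fmt.Println(merged["policy"], merged["backoff"]) // RetryAlways max=30s
}
```

Note the last sentence of the description: an option is only overridden if it is explicitly set on the more specific layer, exactly as in the per-key merge above.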

func (*ObjectHandle) SoftDeleted

 func (o *ObjectHandle) SoftDeleted() *ObjectHandle

SoftDeleted returns an object handle that can be used to get an object that has been soft deleted. To get a soft deleted object, the generation must be set on the object using ObjectHandle.Generation. Note that an error will be returned if a live object is queried using this handle.

func (*ObjectHandle) Update

 func (o *ObjectHandle) Update(ctx context.Context, uattrs ObjectAttrsToUpdate) (oa *ObjectAttrs, err error)

Update updates an object with the provided attributes. See ObjectAttrsToUpdate docs for details on treatment of zero values. ErrObjectNotExist will be returned if the object is not found.

Example

 package main

 import (
     "context"
     "fmt"

     "cloud.google.com/go/storage"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     // Change only the content type of the object.
     objAttrs, err := client.Bucket("my-bucket").Object("my-object").Update(ctx, storage.ObjectAttrsToUpdate{
         ContentType:        "text/html",
         ContentDisposition: "", // delete ContentDisposition
     })
     if err != nil {
         // TODO: handle error.
     }
     fmt.Println(objAttrs)
 }


ObjectIterator

 type ObjectIterator struct {
     // contains filtered or unexported fields
 }


An ObjectIterator is an iterator over ObjectAttrs.

Note: This iterator is not safe for concurrent operations without explicit synchronization.

func (*ObjectIterator) Next

 func (it *ObjectIterator) Next() (*ObjectAttrs, error)

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns iterator.Done, all subsequent calls will return iterator.Done.

In addition, if Next returns an error other than iterator.Done, all subsequent calls will return the same error. To continue iteration, a new ObjectIterator must be created. Since objects are ordered lexicographically by name, Query.StartOffset can be used to create a new iterator which will start at the desired place. See https://pkg.go.dev/cloud.google.com/go/storage?tab=doc#hdr-Listing_objects .

If Query.Delimiter is non-empty, some of the ObjectAttrs returned by Next will have a non-empty Prefix field, and a zero value for all other fields. These represent prefixes.

Note: This method is not safe for concurrent operations without explicit synchronization.

Example

 package main

 import (
     "context"
     "fmt"

     "cloud.google.com/go/storage"
     "google.golang.org/api/iterator"
 )

 func main() {
     ctx := context.Background()
     client, err := storage.NewClient(ctx)
     if err != nil {
         // TODO: handle error.
     }
     it := client.Bucket("my-bucket").Objects(ctx, nil)
     for {
         objAttrs, err := it.Next()
         if err == iterator.Done {
             break
         }
         if err != nil {
             // TODO: Handle error.
         }
         fmt.Println(objAttrs)
     }
 }


func (*ObjectIterator) PageInfo

 func (it *ObjectIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

Note: This method is not safe for concurrent operations without explicit synchronization.

ObjectRetention

 type ObjectRetention struct {
     // Mode is the retention policy's mode on this object. Valid values are
     // "Locked" and "Unlocked".
     // Locked retention policies cannot be changed. Unlocked policies require an
     // override to change.
     Mode string

     // RetainUntil is the time this object will be retained until.
     RetainUntil time.Time
 }


ObjectRetention contains the retention configuration for this object.

PolicyV4Fields

 type PolicyV4Fields struct {
     // ACL specifies the access control permissions for the object.
     // Optional.
     ACL string

     // CacheControl specifies the caching directives for the object.
     // Optional.
     CacheControl string

     // ContentType specifies the media type of the object.
     // Optional.
     ContentType string

     // ContentDisposition specifies how the file will be served back to requesters.
     // Optional.
     ContentDisposition string

     // ContentEncoding specifies the decompressive transcoding for the object.
     // This field is complementary to ContentType in that the file could be
     // compressed but ContentType specifies the file's original media type.
     // Optional.
     ContentEncoding string

     // Metadata specifies custom metadata for the object.
     // If any key doesn't begin with "x-goog-meta-", an error will be returned.
     // Optional.
     Metadata map[string]string

     // StatusCodeOnSuccess, when set, specifies the status code that Cloud Storage
     // will serve back on successful upload of the object.
     // Optional.
     StatusCodeOnSuccess int

     // RedirectToURLOnSuccess, when set, specifies the URL that Cloud Storage
     // will serve back on successful upload of the object.
     // Optional.
     RedirectToURLOnSuccess string
 }


PolicyV4Fields describes the attributes for a PostPolicyV4 request.
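The Metadata field's rule above (every custom key must begin with "x-goog-meta-") can be sketched as a small validation helper; this is a hypothetical illustration of the documented rule, not the package's own check:

```go
package main

import (
	"fmt"
	"strings"
)

// validateMetadata mirrors the documented rule for PolicyV4Fields.Metadata:
// every custom metadata key must begin with "x-goog-meta-".
func validateMetadata(md map[string]string) error {
	for k := range md {
		if !strings.HasPrefix(k, "x-goog-meta-") {
			return fmt.Errorf("invalid metadata key %q: must start with \"x-goog-meta-\"", k)
		}
	}
	return nil
}

func main() {
	ok := map[string]string{"x-goog-meta-owner": "alice"}
	bad := map[string]string{"owner": "alice"}
	fmt.Println(validateMetadata(ok))         // <nil>
	fmt.Println(validateMetadata(bad) != nil) // true
}
```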

PostPolicyV4

 type PostPolicyV4 struct {
     // URL is the generated URL that the file upload will be made to.
     URL string

     // Fields specifies the generated key-values that the file uploader
     // must include in their multipart upload form.
     Fields map[string]string
 }

 

PostPolicyV4 describes the URL and respective form fields for a generated PostPolicyV4 request.

func GenerateSignedPostPolicyV4

 func GenerateSignedPostPolicyV4(bucket, object string, opts *PostPolicyV4Options) (*PostPolicyV4, error)
 

GenerateSignedPostPolicyV4 generates a PostPolicyV4 value from bucket, object and opts. The generated URL and fields will then allow an unauthenticated client to perform multipart uploads. If you are initializing a Storage Client, use the BucketHandle.GenerateSignedPostPolicyV4 method instead, which uses the Client's credentials to handle authentication.

Example

 package main

 import (
     "bytes"
     "io"
     "mime/multipart"
     "net/http"
     "time"

     "cloud.google.com/go/storage"
 )

 func main() {
     pv4, err := storage.GenerateSignedPostPolicyV4("my-bucket", "my-object.txt", &storage.PostPolicyV4Options{
         GoogleAccessID: "my-access-id",
         PrivateKey:     []byte("my-private-key"),

         // The upload expires in 2 hours.
         Expires: time.Now().Add(2 * time.Hour),

         Fields: &storage.PolicyV4Fields{
             StatusCodeOnSuccess:    200,
             RedirectToURLOnSuccess: "https://example.org/",
             // It MUST only be a text file.
             ContentType: "text/plain",
         },

         // The conditions that the uploaded file will be expected to conform to.
         Conditions: []storage.PostPolicyV4Condition{
             // Make the file a maximum of 10 MB.
             storage.ConditionContentLengthRange(0, 10<<20),
         },
     })
     if err != nil {
         // TODO: handle error.
     }

     // Now you can upload your file using the generated post policy
     // with a plain HTTP client or even the browser.
     formBuf := new(bytes.Buffer)
     mw := multipart.NewWriter(formBuf)
     for fieldName, value := range pv4.Fields {
         if err := mw.WriteField(fieldName, value); err != nil {
             // TODO: handle error.
         }
     }
     file := bytes.NewReader(bytes.Repeat([]byte("a"), 100))

     mf, err := mw.CreateFormFile("file", "myfile.txt")
     if err != nil {
         // TODO: handle error.
     }
     if _, err := io.Copy(mf, file); err != nil {
         // TODO: handle error.
     }
     if err := mw.Close(); err != nil {
         // TODO: handle error.
     }

     // Compose the request.
     req, err := http.NewRequest("POST", pv4.URL, formBuf)
     if err != nil {
         // TODO: handle error.
     }
     // Ensure the Content-Type is derived from the multipart writer.
     req.Header.Set("Content-Type", mw.FormDataContentType())
     res, err := http.DefaultClient.Do(req)
     if err != nil {
         // TODO: handle error.
     }
     _ = res
 }

 

PostPolicyV4Condition

 type PostPolicyV4Condition interface {
     json.Marshaler
     // contains filtered or unexported methods
 }

 

PostPolicyV4Condition describes the constraints that the subsequent object upload's multipart form fields will be expected to conform to.

func ConditionContentLengthRange

 func ConditionContentLengthRange(start, end uint64) PostPolicyV4Condition

ConditionContentLengthRange constrains the limits that the multipart upload's range header will be expected to be within.
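Since PostPolicyV4Condition embeds json.Marshaler, each condition ultimately serializes into the signed policy document. In the standard POST policy format this condition appears as a JSON array of the form ["content-length-range", start, end]; the helper below is an illustrative sketch of that serialization under that assumption, not the package's actual marshaler:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// contentLengthRange sketches how a content-length-range condition is
// represented in a POST policy document: ["content-length-range", start, end].
func contentLengthRange(start, end uint64) ([]byte, error) {
	return json.Marshal([]interface{}{"content-length-range", start, end})
}

func main() {
	b, err := contentLengthRange(0, 10<<20) // at most 10 MiB
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b)) // ["content-length-range",0,10485760]
}
```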

func ConditionStartsWith

 func ConditionStartsWith(key, value string) PostPolicyV4Condition

ConditionStartsWith checks that an attribute starts with value. An empty value will cause this condition to be ignored.

PostPolicyV4Options

type PostPolicyV4Options struct {
	// GoogleAccessID represents the authorizer of the signed post policy generation.
	// It is typically the Google service account client email address from
	// the Google Developers Console in the form of "xxx@developer.gserviceaccount.com".
	// Required.
	GoogleAccessID string

	// PrivateKey is the Google service account private key. It is obtainable
	// from the Google Developers Console.
	// At https://console.developers.google.com/project/

PostPolicyV4Options are used to construct a signed post policy. Please see https://cloud.google.com/storage/docs/xml-api/post-object for reference about the fields.

ProjectTeam

type ProjectTeam struct {
	ProjectNumber string
	Team          string
}
 

ProjectTeam is the project team associated with the entity, if any.

Projection

type Projection int

Projection is enumerated type for Query.Projection.

ProjectionDefault, ProjectionFull, ProjectionNoACL

const (
	// ProjectionDefault returns all fields of objects.
	ProjectionDefault Projection = iota
	// ProjectionFull returns all fields of objects.
	ProjectionFull
	// ProjectionNoACL returns all fields of objects except for Owner and ACL.
	ProjectionNoACL
)
 

func (Projection) String

func (p Projection) String() string
 

PublicAccessPrevention

type PublicAccessPrevention int
 

PublicAccessPrevention configures the Public Access Prevention feature, which can be used to disallow public access to any data in a bucket. See https://cloud.google.com/storage/docs/public-access-prevention for more information.
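As a minimal sketch (the bucket name is a placeholder), Public Access Prevention can be enforced on an existing bucket via a metadata update:

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Enforce Public Access Prevention on the bucket.
	_, err = client.Bucket("bucketname").Update(ctx, storage.BucketAttrsToUpdate{
		PublicAccessPrevention: storage.PublicAccessPreventionEnforced,
	})
	if err != nil {
		// TODO: handle error.
	}
}
```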

PublicAccessPreventionUnknown, PublicAccessPreventionUnspecified, PublicAccessPreventionEnforced, PublicAccessPreventionInherited

const (
	// PublicAccessPreventionUnknown is a zero value, used only if this field is
	// not set in a call to GCS.
	PublicAccessPreventionUnknown PublicAccessPrevention = iota
	// PublicAccessPreventionUnspecified corresponds to a value of "unspecified".
	// Deprecated: use PublicAccessPreventionInherited
	PublicAccessPreventionUnspecified
	// PublicAccessPreventionEnforced corresponds to a value of "enforced". This
	// enforces Public Access Prevention on the bucket.
	PublicAccessPreventionEnforced
	// PublicAccessPreventionInherited corresponds to a value of "inherited"
	// and is the default for buckets.
	PublicAccessPreventionInherited
)
 

func (PublicAccessPrevention) String

func (p PublicAccessPrevention) String() string

Query

type Query struct {
	// Delimiter returns results in a directory-like fashion.
	// Results will contain only objects whose names, aside from the
	// prefix, do not contain delimiter. Objects whose names,
	// aside from the prefix, contain delimiter will have their name,
	// truncated after the delimiter, returned in prefixes.
	// Duplicate prefixes are omitted.
	// Must be set to / when used with the MatchGlob parameter to filter results
	// in a directory-like mode.
	// Optional.
	Delimiter string

	// Prefix is the prefix filter to query objects
	// whose names begin with this prefix.
	// Optional.
	Prefix string

	// Versions indicates whether multiple versions of the same
	// object will be included in the results.
	Versions bool

	// StartOffset is used to filter results to objects whose names are
	// lexicographically equal to or after startOffset. If endOffset is also set,
	// the objects listed will have names between startOffset (inclusive) and
	// endOffset (exclusive).
	StartOffset string

	// EndOffset is used to filter results to objects whose names are
	// lexicographically before endOffset. If startOffset is also set, the objects
	// listed will have names between startOffset (inclusive) and endOffset (exclusive).
	EndOffset string

	// Projection defines the set of properties to return. It will default to ProjectionFull,
	// which returns all properties. Passing ProjectionNoACL will omit Owner and ACL,
	// which may improve performance when listing many objects.
	Projection Projection

	// IncludeTrailingDelimiter controls how objects which end in a single
	// instance of Delimiter (for example, if Query.Delimiter = "/" and the
	// object name is "foo/bar/") are included in the results. By default, these
	// objects only show up as prefixes. If IncludeTrailingDelimiter is set to
	// true, they will also be included as objects and their metadata will be
	// populated in the returned ObjectAttrs.
	IncludeTrailingDelimiter bool

	// MatchGlob is a glob pattern used to filter results (for example, foo*bar). See
	// https://cloud.google.com/storage/docs/json_api/v1/objects/list#list-object-glob
	// for syntax details. When Delimiter is set in conjunction with MatchGlob,
	// it must be set to /.
	MatchGlob string

	// IncludeFoldersAsPrefixes includes Folders and Managed Folders in the set of
	// prefixes returned by the query. Only applicable if Delimiter is set to /.
	// IncludeFoldersAsPrefixes is not yet implemented in the gRPC API.
	IncludeFoldersAsPrefixes bool

	// SoftDeleted indicates whether to list soft-deleted objects.
	// If true, only objects that have been soft-deleted will be listed.
	// By default, soft-deleted objects are not listed.
	SoftDeleted bool
	// contains filtered or unexported fields
}
 

Query represents a query to filter objects from a bucket.

func (*Query) SetAttrSelection

func (q *Query) SetAttrSelection(attrs []string) error
 

SetAttrSelection makes the query populate only specific attributes of objects. When iterating over objects, if you only need each object's name and size, pass []string{"Name", "Size"} to this method. Only these fields will be fetched for each object across the network; the other fields of ObjectAttr will remain at their default values. This is a performance optimization; for more information, see https://cloud.google.com/storage/docs/json_api/v1/how-tos/performance
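A sketch of the pattern described above (bucket name and prefix are placeholders): list objects while fetching only Name and Size.

```go
package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
	"google.golang.org/api/iterator"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	q := &storage.Query{Prefix: "logs/"}
	// Fetch only each object's Name and Size over the network.
	if err := q.SetAttrSelection([]string{"Name", "Size"}); err != nil {
		// TODO: handle error.
	}
	it := client.Bucket("bucketname").Objects(ctx, q)
	for {
		attrs, err := it.Next()
		if err == iterator.Done {
			break
		}
		if err != nil {
			// TODO: handle error.
		}
		fmt.Println(attrs.Name, attrs.Size)
	}
}
```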

RPO

type RPO int
 

RPO (Recovery Point Objective) configures the turbo replication feature. See https://cloud.google.com/storage/docs/managing-turbo-replication for more information.
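As a minimal sketch (the bucket name is a placeholder and must refer to a dual-region bucket), turbo replication can be enabled via a metadata update:

```go
package main

import (
	"context"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Enable turbo replication on an existing dual-region bucket.
	_, err = client.Bucket("dual-region-bucket").Update(ctx, storage.BucketAttrsToUpdate{
		RPO: storage.RPOAsyncTurbo,
	})
	if err != nil {
		// TODO: handle error.
	}
}
```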

RPOUnknown, RPODefault, RPOAsyncTurbo

const (
	// RPOUnknown is a zero value. It may be returned from bucket.Attrs() if RPO
	// is not present in the bucket metadata, that is, the bucket is not dual-region.
	// This value is also used if the RPO field is not set in a call to GCS.
	RPOUnknown RPO = iota
	// RPODefault represents default replication. It is used to reset RPO on an
	// existing bucket that has this field set to RPOAsyncTurbo. Otherwise it
	// is equivalent to RPOUnknown, and is always ignored. This value is valid
	// for dual- or multi-region buckets.
	RPODefault
	// RPOAsyncTurbo represents turbo replication and is used to enable Turbo
	// Replication on a bucket. This value is only valid for dual-region buckets.
	RPOAsyncTurbo
)
 

func (RPO) String

func (rpo RPO) String() string
 

Reader

type Reader struct {
	Attrs ReaderObjectAttrs
	// contains filtered or unexported fields
}
 

Reader reads a Cloud Storage object. It implements io.Reader.

Typically, a Reader computes the CRC of the downloaded content and compares it to the stored CRC, returning an error from Read if there is a mismatch. This integrity check is skipped if transcoding occurs. See https://cloud.google.com/storage/docs/transcoding .
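A sketch of the typical read path (bucket and object names are placeholders); the CRC integrity check runs as the content is consumed:

```go
package main

import (
	"context"
	"fmt"
	"io"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	rc, err := client.Bucket("bucketname").Object("filename1").NewReader(ctx)
	if err != nil {
		// TODO: handle error.
	}
	defer rc.Close()
	// A CRC mismatch surfaces as an error from Read (via io.ReadAll).
	data, err := io.ReadAll(rc)
	if err != nil {
		// TODO: handle error.
	}
	fmt.Println("read", len(data), "bytes")
}
```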

func (*Reader) CacheControl (deprecated)

func (r *Reader) CacheControl() string
 

CacheControl returns the cache control of the object.

Deprecated: use Reader.Attrs.CacheControl.

func (*Reader) Close

func (r *Reader) Close() error
 

Close closes the Reader. It must be called when done reading.

func (*Reader) ContentEncoding (deprecated)

func (r *Reader) ContentEncoding() string
 

ContentEncoding returns the content encoding of the object.

Deprecated: use Reader.Attrs.ContentEncoding.

func (*Reader) ContentType (deprecated)

func (r *Reader) ContentType() string
 

ContentType returns the content type of the object.

Deprecated: use Reader.Attrs.ContentType.

func (*Reader) LastModified (deprecated)

func (r *Reader) LastModified() (time.Time, error)
 

LastModified returns the value of the Last-Modified header.

Deprecated: use Reader.Attrs.LastModified.

func (*Reader) Read

func (r *Reader) Read(p []byte) (int, error)
 

func (*Reader) Remain

func (r *Reader) Remain() int64
 

Remain returns the number of bytes left to read, or -1 if unknown.

func (*Reader) Size (deprecated)

func (r *Reader) Size() int64
 

Size returns the size of the object in bytes. The returned value is always the same and is not affected by calls to Read or Close.

Deprecated: use Reader.Attrs.Size.

func (*Reader) WriteTo

func (r *Reader) WriteTo(w io.Writer) (int64, error)
 

WriteTo writes all the data from the Reader to w. Fulfills the io.WriterTo interface. This is called implicitly when calling io.Copy on a Reader.

ReaderObjectAttrs

type ReaderObjectAttrs struct {
	// Size is the length of the object's content.
	Size int64

	// StartOffset is the byte offset within the object
	// from which reading begins.
	// This value is only non-zero for range requests.
	StartOffset int64

	// ContentType is the MIME type of the object's content.
	ContentType string

	// ContentEncoding is the encoding of the object's content.
	ContentEncoding string

	// CacheControl specifies whether and for how long browser and Internet
	// caches are allowed to cache your objects.
	CacheControl string

	// LastModified is the time that the object was last modified.
	LastModified time.Time

	// Generation is the generation number of the object's content.
	Generation int64

	// Metageneration is the version of the metadata for this object at
	// this generation. This field is used for preconditions and for
	// detecting changes in metadata. A metageneration number is only
	// meaningful in the context of a particular generation of a
	// particular object.
	Metageneration int64
}
 

ReaderObjectAttrs are attributes about the object being read. These are populated during the New call. This struct only holds a subset of object attributes: to get the full set of attributes, use ObjectHandle.Attrs.

Each field is read-only.

RestoreOptions

type RestoreOptions struct {
	// CopySourceACL indicates whether the restored object should copy the
	// access controls of the source object. Only valid for buckets with
	// fine-grained access. If uniform bucket-level access is enabled, setting
	// CopySourceACL will cause an error.
	CopySourceACL bool
}
 

RestoreOptions allows you to set options when restoring an object.

RetentionPolicy

type RetentionPolicy struct {
	// RetentionPeriod specifies the duration that objects need to be
	// retained. Retention duration must be greater than zero and less than
	// 100 years. Note that enforcement of retention periods less than a day
	// is not guaranteed. Such periods should only be used for testing
	// purposes.
	RetentionPeriod time.Duration

	// EffectiveTime is the time from which the policy was enforced and
	// effective. This field is read-only.
	EffectiveTime time.Time

	// IsLocked describes whether the bucket is locked. Once locked, an
	// object retention policy cannot be modified.
	// This field is read-only.
	IsLocked bool
}
 

RetentionPolicy enforces a minimum retention time for all objects contained in the bucket.

Any attempt to overwrite or delete objects younger than the retention period will result in an error. An unlocked retention policy can be modified or removed from the bucket via the Update method. A locked retention policy cannot be removed or shortened in duration for the lifetime of the bucket.

This feature is in private alpha release. It is not currently available to most customers. It might be changed in backwards-incompatible ways and is not subject to any SLA or deprecation policy.
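As a minimal sketch (the bucket name is a placeholder), an unlocked 48-hour retention policy can be set via the Update method described above:

```go
package main

import (
	"context"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Set (or replace) an unlocked retention policy of 48 hours.
	_, err = client.Bucket("bucketname").Update(ctx, storage.BucketAttrsToUpdate{
		RetentionPolicy: &storage.RetentionPolicy{
			RetentionPeriod: 48 * time.Hour,
		},
	})
	if err != nil {
		// TODO: handle error.
	}
}
```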

RetryOption

type RetryOption interface {
	// contains filtered or unexported methods
}
 

RetryOption allows users to configure non-default retry behavior for API calls made to GCS.

func WithBackoff

func WithBackoff(backoff gax.Backoff) RetryOption
 

WithBackoff allows configuration of the backoff timing used for retries. Available configuration options (Initial, Max and Multiplier) are described at https://pkg.go.dev/github.com/googleapis/gax-go/v2#Backoff . If any fields are not supplied by the user, gax default values will be used.

func WithErrorFunc

func WithErrorFunc(shouldRetry func(err error) bool) RetryOption
 

WithErrorFunc allows users to pass a custom function to the retryer. Errors will be retried if and only if shouldRetry(err) returns true. By default, the following errors are retried (see ShouldRetry for the default function):

  • HTTP responses with codes 408, 429, 502, 503, and 504.

  • Transient network errors such as connection reset and io.ErrUnexpectedEOF.

  • Errors which are considered transient using the Temporary() interface.

  • Wrapped versions of these errors.

This option can be used to retry on a different set of errors than the default. Users can use the default ShouldRetry function inside their custom function if they only want to make minor modifications to default behavior.

func WithMaxAttempts

func WithMaxAttempts(maxAttempts int) RetryOption
 
 

WithMaxAttempts configures the maximum number of times an API call can be made in the case of retryable errors. For example, if you set WithMaxAttempts(5), the operation will be attempted up to 5 times total (initial call plus 4 retries). Without this setting, operations will continue retrying indefinitely until either the context is canceled or a deadline is reached.

func WithPolicy

func WithPolicy(policy RetryPolicy) RetryOption
 
 

WithPolicy allows the configuration of which operations should be performed with retries for transient errors.
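A sketch combining the retry options above on an object handle (bucket and object names are placeholders); the custom error function here simply delegates to the default ShouldRetry:

```go
package main

import (
	"context"
	"time"

	"github.com/googleapis/gax-go/v2"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	// Custom backoff, retry all operations, cap at 5 total attempts, and use
	// a custom retry predicate built on the default ShouldRetry.
	o := client.Bucket("bucketname").Object("filename1").Retryer(
		storage.WithBackoff(gax.Backoff{
			Initial:    2 * time.Second,
			Max:        60 * time.Second,
			Multiplier: 3,
		}),
		storage.WithPolicy(storage.RetryAlways),
		storage.WithMaxAttempts(5),
		storage.WithErrorFunc(func(err error) bool {
			return storage.ShouldRetry(err)
		}),
	)
	if _, err := o.Attrs(ctx); err != nil {
		// TODO: handle error.
	}
}
```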

RetryPolicy

type RetryPolicy int
 

RetryPolicy describes the available policies for which operations should be retried. The default is RetryIdempotent .

RetryIdempotent, RetryAlways, RetryNever

const (
	// RetryIdempotent causes only idempotent operations to be retried when the
	// service returns a transient error. Using this policy, fully idempotent
	// operations (such as `ObjectHandle.Attrs()`) will always be retried.
	// Conditionally idempotent operations (for example `ObjectHandle.Update()`)
	// will be retried only if the necessary conditions have been supplied (in
	// the case of `ObjectHandle.Update()` this would mean supplying a
	// `Conditions.MetagenerationMatch` condition is required).
	RetryIdempotent RetryPolicy = iota
	// RetryAlways causes all operations to be retried when the service returns a
	// transient error, regardless of idempotency considerations.
	RetryAlways
	// RetryNever causes the client to not perform retries on failed operations.
	RetryNever
)
 

SignedURLOptions

type SignedURLOptions struct {
	// GoogleAccessID represents the authorizer of the signed URL generation.
	// It is typically the Google service account client email address from
	// the Google Developers Console in the form of "xxx@developer.gserviceaccount.com".
	// Required.
	GoogleAccessID string

	// PrivateKey is the Google service account private key. It is obtainable
	// from the Google Developers Console.
	// At https://console.developers.google.com/project/
 

SignedURLOptions allows you to restrict the access to the signed URL.

SigningScheme

type SigningScheme int
 
 

SigningScheme determines the API version to use when signing URLs.

SigningSchemeDefault, SigningSchemeV2, SigningSchemeV4

const (
	// SigningSchemeDefault is presently V2 and will change to V4 in the future.
	SigningSchemeDefault SigningScheme = iota
	// SigningSchemeV2 uses the V2 scheme to sign URLs.
	SigningSchemeV2
	// SigningSchemeV4 uses the V4 scheme to sign URLs.
	SigningSchemeV4
)
 

SoftDeletePolicy

type SoftDeletePolicy struct {
	// EffectiveTime indicates the time from which the policy, or one with a
	// greater retention, was effective. This field is read-only.
	EffectiveTime time.Time

	// RetentionDuration is the amount of time that soft-deleted objects in the
	// bucket will be retained and cannot be permanently deleted.
	RetentionDuration time.Duration
}
 

SoftDeletePolicy contains the bucket's soft delete policy, which defines the period of time that soft-deleted objects will be retained, and cannot be permanently deleted.

URLStyle

type URLStyle interface {
	// contains filtered or unexported methods
}
 

URLStyle determines the style to use for the signed URL. PathStyle is the default. All non-default options work with V4 scheme only. See https://cloud.google.com/storage/docs/request-endpoints for details.

func BucketBoundHostname

func BucketBoundHostname(hostname string) URLStyle
 

BucketBoundHostname generates a URL with a custom hostname tied to a specific GCS bucket. The desired hostname should be passed in using the hostname argument. Generated URLs will be of the form "<bucket-bound-hostname>/<object-name>".

func PathStyle

func PathStyle() URLStyle
 

PathStyle is the default style, and will generate a URL of the form "https://storage.googleapis.com/<bucket-name>/<object-name>".

func VirtualHostedStyle

func VirtualHostedStyle() URLStyle
 

VirtualHostedStyle generates a URL relative to the bucket's virtual hostname, e.g. "https://<bucket-name>.storage.googleapis.com/<object-name>".

UniformBucketLevelAccess

type UniformBucketLevelAccess struct {
	// Enabled specifies whether access checks use only bucket-level IAM
	// policies. Enabled may be disabled until the locked time.
	Enabled bool

	// LockedTime specifies the deadline for changing Enabled from true to
	// false.
	LockedTime time.Time
}
 

UniformBucketLevelAccess configures access checks to use only bucket-level IAM policies.

Writer

type Writer struct {
	// ObjectAttrs are optional attributes to set on the object. Any attributes
	// must be initialized before the first Write call. Nil or zero-valued
	// attributes are ignored.
	ObjectAttrs

	// SendCRC32C specifies whether to transmit a CRC32C field. It should be set
	// to true in addition to setting the Writer's CRC32C field, because zero
	// is a valid CRC and normally a zero would not be transmitted.
	// If a CRC32C is sent, and the data written does not match the checksum,
	// the write will be rejected.
	//
	// Note: SendCRC32C must be set to true BEFORE the first call to
	// Writer.Write() in order to send the checksum. If it is set after that
	// point, the checksum will be ignored.
	SendCRC32C bool

	// ChunkSize controls the maximum number of bytes of the object that the
	// Writer will attempt to send to the server in a single request. Objects
	// smaller than the size will be sent in a single request, while larger
	// objects will be split over multiple requests. The value will be rounded up
	// to the nearest multiple of 256K. The default ChunkSize is 16MiB.
	//
	// Each Writer will internally allocate a buffer of size ChunkSize. This is
	// used to buffer input data and allow for the input to be sent again if a
	// request must be retried.
	//
	// If you upload small objects (< 16MiB), you should set ChunkSize
	// to a value slightly larger than the objects' sizes to avoid memory bloat.
	// This is especially important if you are uploading many small objects
	// concurrently. See
	// https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload#size
	// for more information about performance trade-offs related to ChunkSize.
	//
	// If ChunkSize is set to zero, chunking will be disabled and the object will
	// be uploaded in a single request without the use of a buffer. This will
	// further reduce memory used during uploads, but will also prevent the
	// writer from retrying in case of a transient error from the server or
	// resuming an upload that fails midway through, since the buffer is
	// required in order to retry the failed request.
	//
	// ChunkSize must be set before the first Write call.
	ChunkSize int

	// ChunkRetryDeadline sets a per-chunk retry deadline for multi-chunk
	// resumable uploads.
	//
	// For uploads of larger files, the Writer will attempt to retry if the
	// request to upload a particular chunk fails with a transient error.
	// If a single chunk has been attempting to upload for longer than this
	// deadline and the request fails, it will no longer be retried, and the error
	// will be returned to the caller. This is only applicable for files which are
	// large enough to require a multi-chunk resumable upload. The default value
	// is 32s. Users may want to pick a longer deadline if they are using larger
	// values for ChunkSize or if they expect to have a slow or unreliable
	// internet connection.
	//
	// To set a deadline on the entire upload, use context timeout or
	// cancellation.
	ChunkRetryDeadline time.Duration

	// ForceEmptyContentType is an optional parameter that is used to disable
	// auto-detection of Content-Type. By default, if a blank Content-Type
	// is provided, then gax.DetermineContentType is called to sniff the type.
	ForceEmptyContentType bool

	// ProgressFunc can be used to monitor the progress of a large write
	// operation. If ProgressFunc is not nil and writing requires multiple
	// calls to the underlying service (see
	// https://cloud.google.com/storage/docs/json_api/v1/how-tos/resumable-upload),
	// then ProgressFunc will be invoked after each call with the number of bytes of
	// content copied so far.
	//
	// ProgressFunc should return quickly without blocking.
	ProgressFunc func(int64)
	// contains filtered or unexported fields
}
 

A Writer writes a Cloud Storage object.

func (*Writer) Attrs

func (w *Writer) Attrs() *ObjectAttrs

Attrs returns metadata about a successfully-written object. It's only valid to call it after Close returns nil.

func (*Writer) Close

func (w *Writer) Close() error
 

Close completes the write operation and flushes any buffered data. If Close doesn't return an error, metadata about the written object can be retrieved by calling Attrs.

func (*Writer) CloseWithError (deprecated)

func (w *Writer) CloseWithError(err error) error
 

CloseWithError aborts the write operation with the provided error. CloseWithError always returns nil.

Deprecated: cancel the context passed to NewWriter instead.

func (*Writer) Write

func (w *Writer) Write(p []byte) (n int, err error)
 

Write appends to w. It implements the io.Writer interface.

Since writes happen asynchronously, Write may return a nil error even though the write failed (or will fail). Always use the error returned from Writer.Close to determine if the upload was successful.

Writes will be retried on transient errors from the server, unless Writer.ChunkSize has been set to zero.

Examples

package main

import (
	"context"
	"fmt"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	wc := client.Bucket("bucketname").Object("filename1").NewWriter(ctx)
	wc.ContentType = "text/plain"
	wc.ACL = []storage.ACLRule{{Entity: storage.AllUsers, Role: storage.RoleReader}}
	if _, err := wc.Write([]byte("hello world")); err != nil {
		// TODO: handle error.
		// Note that Write may return nil in some error situations,
		// so always check the error from Close.
	}
	if err := wc.Close(); err != nil {
		// TODO: handle error.
	}
	fmt.Println("updated object:", wc.Attrs())
}
 
checksum
package main

import (
	"context"
	"fmt"
	"hash/crc32"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	data := []byte("verify me")
	wc := client.Bucket("bucketname").Object("filename1").NewWriter(ctx)
	wc.CRC32C = crc32.Checksum(data, crc32.MakeTable(crc32.Castagnoli))
	wc.SendCRC32C = true
	// Write the same bytes the checksum was computed over; otherwise the
	// server will reject the upload.
	if _, err := wc.Write(data); err != nil {
		// TODO: handle error.
		// Note that Write may return nil in some error situations,
		// so always check the error from Close.
	}
	if err := wc.Close(); err != nil {
		// TODO: handle error.
	}
	fmt.Println("updated object:", wc.Attrs())
}
 
timeout
package main

import (
	"context"
	"fmt"
	"time"

	"cloud.google.com/go/storage"
)

func main() {
	ctx := context.Background()
	client, err := storage.NewClient(ctx)
	if err != nil {
		// TODO: handle error.
	}
	tctx, cancel := context.WithTimeout(ctx, 30*time.Second)
	defer cancel() // Cancel when done, whether we time out or not.
	wc := client.Bucket("bucketname").Object("filename1").NewWriter(tctx)
	wc.ContentType = "text/plain"
	wc.ACL = []storage.ACLRule{{Entity: storage.AllUsers, Role: storage.RoleReader}}
	if _, err := wc.Write([]byte("hello world")); err != nil {
		// TODO: handle error.
		// Note that Write may return nil in some error situations,
		// so always check the error from Close.
	}
	if err := wc.Close(); err != nil {
		// TODO: handle error.
	}
	fmt.Println("updated object:", wc.Attrs())
}
 