Package cloud.google.com/go/vertexai/genai (v0.11.0)

Package genai is a client for the Vertex AI generative models.

Functions

func Ptr

  func Ptr[T any](t T) *T

Ptr returns a pointer to its argument. It can be used to initialize pointer fields:

 model.Temperature = genai.Ptr[float32](0.1) 

func WithREST

  func WithREST() option.ClientOption

WithREST is an option that enables REST transport for the client. The default transport (if this option isn't provided) is gRPC.
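
For example, a minimal sketch of passing this option to NewClient (the project and location strings are placeholders):

  ctx := context.Background()
  client, err := genai.NewClient(ctx, "your-project", "us-central1", genai.WithREST())
  if err != nil {
      log.Fatal(err)
  }
  defer client.Close()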

Blob

  type Blob struct {
      // Required. The IANA standard MIME type of the source data.
      MIMEType string

      // Required. Raw bytes.
      Data []byte
  }

Blob contains binary data like images. Use [Text] for text.

func ImageData

  func ImageData(format string, data []byte) Blob

ImageData is a convenience function for creating an image Blob for input to a model. The format should be the second part of the MIME type, after "image/". For example, for a PNG image, pass "png".
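
For example, a minimal sketch that reads a hypothetical local PNG file and passes it to a previously created model along with a text prompt:

  data, err := os.ReadFile("fish.png") // hypothetical local file
  if err != nil {
      log.Fatal(err)
  }
  img := genai.ImageData("png", data) // the MIME type becomes "image/png"
  resp, err := model.GenerateContent(ctx, img, genai.Text("What kind of fish is this?"))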

BlockedError

  type BlockedError struct {
      // If non-nil, the model's response was blocked.
      // Consult the Candidate and SafetyRatings fields for details.
      Candidate *Candidate

      // If non-nil, there was a problem with the prompt.
      PromptFeedback *PromptFeedback
  }

A BlockedError indicates that the model's response was blocked. There can be two underlying causes: the prompt or a candidate response.
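
A sketch of inspecting a blocked response, assuming the error returned by GenerateContent is (or wraps) a *BlockedError:

  resp, err := model.GenerateContent(ctx, genai.Text("some prompt"))
  var blocked *genai.BlockedError
  if errors.As(err, &blocked) {
      if blocked.PromptFeedback != nil {
          log.Printf("prompt blocked: %v", blocked.PromptFeedback.BlockReason)
      }
      if blocked.Candidate != nil {
          log.Printf("response blocked: %v", blocked.Candidate.FinishReason)
      }
  }
  _ = resp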

func (*BlockedError) Error

  func (e *BlockedError) Error() string

BlockedReason

  type BlockedReason int32

BlockedReason enumerates the reasons why a prompt or response was blocked.

BlockedReasonUnspecified, BlockedReasonSafety, BlockedReasonOther, BlockedReasonBlocklist, BlockedReasonProhibitedContent

  const (
      // BlockedReasonUnspecified means unspecified blocked reason.
      BlockedReasonUnspecified BlockedReason = 0
      // BlockedReasonSafety means candidates blocked due to safety.
      BlockedReasonSafety BlockedReason = 1
      // BlockedReasonOther means candidates blocked due to other reason.
      BlockedReasonOther BlockedReason = 2
      // BlockedReasonBlocklist means candidates blocked due to the terms which are included from the
      // terminology blocklist.
      BlockedReasonBlocklist BlockedReason = 3
      // BlockedReasonProhibitedContent means candidates blocked due to prohibited content.
      BlockedReasonProhibitedContent BlockedReason = 4
  )

func (BlockedReason) String

  func (v BlockedReason) String() string

CachedContent

  type CachedContent struct {
      // Expiration time of the cached content.
      //
      // Types that are assignable to Expiration:
      //
      //    *CachedContent_ExpireTime
      //    *CachedContent_Ttl
      Expiration ExpireTimeOrTTL

      // Immutable. Identifier. The resource name of the cached content.
      // Format:
      // projects/{project}/locations/{location}/cachedContents/{cached_content}
      Name string

      // Immutable. The name of the publisher model to use for cached content.
      // Format:
      // projects/{project}/locations/{location}/publishers/{publisher}/models/{model}
      Model string

      // Optional. Input only. Immutable. Developer-set system instruction.
      // Currently, text only.
      SystemInstruction *Content

      // Optional. Input only. Immutable. The content to cache.
      Contents []*Content

      // Optional. Input only. Immutable. A list of `Tools` the model may use to
      // generate the next response.
      Tools []*Tool

      // Optional. Input only. Immutable. Tool config. This config is shared for all
      // tools.
      ToolConfig *ToolConfig

      // Output only. Creation time of the cache entry.
      CreateTime time.Time

      // Output only. When the cache entry was last updated in UTC time.
      UpdateTime time.Time
  }

CachedContent is a resource used in LLM queries for users to explicitly specify what to cache and how to cache.

CachedContentIterator

  type CachedContentIterator struct {
      // contains filtered or unexported fields
  }

A CachedContentIterator iterates over CachedContents.

func (*CachedContentIterator) Next

Next returns the next result. Its second return value is iterator.Done if there are no more results. Once Next returns Done, all subsequent calls will return Done.
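
A sketch of draining the iterator returned by [Client.ListCachedContents], assuming Next returns (*CachedContent, error) and a client created as in the package examples:

  it := client.ListCachedContents(ctx)
  for {
      cc, err := it.Next()
      if err == iterator.Done {
          break
      }
      if err != nil {
          log.Fatal(err)
      }
      fmt.Println(cc.Name, cc.Model)
  }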

func (*CachedContentIterator) PageInfo

  func (it *CachedContentIterator) PageInfo() *iterator.PageInfo

PageInfo supports pagination. See the google.golang.org/api/iterator package for details.

CachedContentToUpdate

  type CachedContentToUpdate struct {
      // If non-nil, update the expire time or TTL.
      Expiration *ExpireTimeOrTTL
  }

CachedContentToUpdate specifies which fields of a CachedContent to modify in a call to [Client.UpdateCachedContent].

Candidate

  type Candidate struct {
      // Output only. Index of the candidate.
      Index int32

      // Output only. Content parts of the candidate.
      Content *Content

      // Output only. The reason why the model stopped generating tokens.
      // If empty, the model has not stopped generating the tokens.
      FinishReason FinishReason

      // Output only. List of ratings for the safety of a response candidate.
      //
      // There is at most one rating per category.
      SafetyRatings []*SafetyRating

      // Output only. Describes the reason the model stopped generating tokens in
      // more detail. This is only filled when `finish_reason` is set.
      FinishMessage string

      // Output only. Source attribution of the generated content.
      CitationMetadata *CitationMetadata
  }

Candidate is a response candidate generated from the model.

func (*Candidate) FunctionCalls

  func (c *Candidate) FunctionCalls() []FunctionCall

FunctionCalls returns all the FunctionCall parts in the candidate.
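
For example, a minimal sketch that lists the function calls requested in the first candidate of a response:

  for _, fc := range resp.Candidates[0].FunctionCalls() {
      fmt.Printf("model wants to call %s with args %v\n", fc.Name, fc.Args)
  }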

ChatSession

  type ChatSession struct {
      History []*Content
      // contains filtered or unexported fields
  }

A ChatSession provides interactive chat.

Example

  package main

  import (
      "context"
      "fmt"
      "log"

      "cloud.google.com/go/vertexai/genai"
      "google.golang.org/api/iterator"
  )

  // Your GCP project
  const projectID = "your-project"

  // A GCP location like "us-central1"; if you're using standard Google-published
  // models (like untuned Gemini models), you can keep location blank ("").
  const location = "some-gcp-location"

  // A model name like "gemini-1.0-pro".
  // For custom models from different publishers, prepend the full publisher
  // prefix for the model, e.g.:
  //
  //    modelName = publishers/some-publisher/models/some-model-name
  const modelName = "some-model"

  func main() {
      ctx := context.Background()
      client, err := genai.NewClient(ctx, projectID, location)
      if err != nil {
          log.Fatal(err)
      }
      defer client.Close()

      model := client.GenerativeModel(modelName)
      cs := model.StartChat()

      send := func(msg string) *genai.GenerateContentResponse {
          fmt.Printf("== Me: %s\n== Model:\n", msg)
          res, err := cs.SendMessage(ctx, genai.Text(msg))
          if err != nil {
              log.Fatal(err)
          }
          return res
      }

      res := send("Can you name some brands of air fryer?")
      printResponse(res)

      iter := cs.SendMessageStream(ctx, genai.Text("Which one of those do you recommend?"))
      for {
          res, err := iter.Next()
          if err == iterator.Done {
              break
          }
          if err != nil {
              log.Fatal(err)
          }
          printResponse(res)
      }

      for i, c := range cs.History {
          log.Printf("    %d: %+v", i, c)
      }

      res = send("Why do you like the Philips?")
      printResponse(res)
  }

  func printResponse(resp *genai.GenerateContentResponse) {
      for _, cand := range resp.Candidates {
          for _, part := range cand.Content.Parts {
              fmt.Println(part)
          }
      }
      fmt.Println("---")
  }

func (*ChatSession) SendMessage

  func (cs *ChatSession) SendMessage(ctx context.Context, parts ...Part) (*GenerateContentResponse, error)

SendMessage sends a request to the model as part of a chat session.

func (*ChatSession) SendMessageStream

  func (cs *ChatSession) SendMessageStream(ctx context.Context, parts ...Part) *GenerateContentResponseIterator

SendMessageStream is like SendMessage, but with a streaming request.

Citation

  type Citation struct {
      // Output only. Start index into the content.
      StartIndex int32
      // Output only. End index into the content.
      EndIndex int32
      // Output only. URL reference of the attribution.
      URI string
      // Output only. Title of the attribution.
      Title string
      // Output only. License of the attribution.
      License string
      // Output only. Publication date of the attribution.
      PublicationDate civil.Date
  }

Citation contains source attributions for content.

CitationMetadata

  type CitationMetadata struct {
      // Output only. List of citations.
      Citations []*Citation
  }

CitationMetadata is a collection of source attributions for a piece of content.

Client

  type Client struct {
      // contains filtered or unexported fields
  }

A Client is a Google Vertex AI client.

Example (cachedContent)

  package main

  import (
      "context"
      "log"
      "os"

      "cloud.google.com/go/vertexai/genai"
  )

  // Your GCP project
  const projectID = "your-project"

  // A GCP location like "us-central1"; if you're using standard Google-published
  // models (like untuned Gemini models), you can keep location blank ("").
  const location = "some-gcp-location"

  // A model name like "gemini-1.0-pro".
  // For custom models from different publishers, prepend the full publisher
  // prefix for the model, e.g.:
  //
  //    modelName = publishers/some-publisher/models/some-model-name
  const modelName = "some-model"

  func main() {
      ctx := context.Background()
      client, err := genai.NewClient(ctx, projectID, location)
      if err != nil {
          log.Fatal(err)
      }
      defer client.Close()

      file := genai.FileData{MIMEType: "application/pdf", FileURI: "gs://my-bucket/my-doc.pdf"}
      cc, err := client.CreateCachedContent(ctx, &genai.CachedContent{
          Model:    modelName,
          Contents: []*genai.Content{{Parts: []genai.Part{file}}},
      })
      if err != nil {
          log.Fatal(err)
      }
      model := client.GenerativeModelFromCachedContent(cc)

      // Work with the model as usual in this program.
      _ = model

      // Store the CachedContent name for later use.
      if err := os.WriteFile("my-cached-content-name", []byte(cc.Name), 0o644); err != nil {
          log.Fatal(err)
      }

      ///////////////////////////////
      // Later, in another process...
      bytes, err := os.ReadFile("my-cached-content-name")
      if err != nil {
          log.Fatal(err)
      }
      ccName := string(bytes)

      // No need to call [Client.GetCachedContent]; the name is sufficient.
      model = client.GenerativeModel(modelName)
      model.CachedContentName = ccName
      // Proceed as usual.
  }

func NewClient

  func NewClient(ctx context.Context, projectID, location string, opts ...option.ClientOption) (*Client, error)

NewClient creates a new Google Vertex AI client.

Clients should be reused instead of created as needed. The methods of Client are safe for concurrent use by multiple goroutines. projectID is your GCP project ID; location is a GCP region/location per https://cloud.google.com/vertex-ai/docs/general/locations. If location is empty, this function attempts to infer it from environment variables and falls back to a default location if unsuccessful.

You may configure the client by passing in options from the [google.golang.org/api/option] package. You may also use options defined in this package, such as [WithREST].

func (*Client) Close

  func (c *Client) Close() error

Close closes the client.

func (*Client) CreateCachedContent

  func (c *Client) CreateCachedContent(ctx context.Context, cc *CachedContent) (*CachedContent, error)

CreateCachedContent creates a new CachedContent. The argument should contain a model name and some data to be cached, which can include contents, a system instruction, tools and/or tool configuration. It can also include an expiration time or TTL. But it should not include a name; the system will generate one.

The return value will contain the name, which should be used to refer to the CachedContent in other API calls. It will also hold various metadata like expiration and creation time. It will not contain any of the actual content provided as input.

You can use the return value to create a model with [Client.GenerativeModelFromCachedContent]. Or you can set [GenerativeModel.CachedContentName] to the name of the CachedContent, in which case you must ensure that the model provided in this call matches the name in the [GenerativeModel].

func (*Client) DeleteCachedContent

  func (c *Client) DeleteCachedContent(ctx context.Context, name string) error

DeleteCachedContent deletes the CachedContent with the given name.

func (*Client) GenerativeModel

  func (c *Client) GenerativeModel(name string) *GenerativeModel

GenerativeModel creates a new instance of the named model. name is a string model name like "gemini-1.0-pro" or "models/gemini-1.0-pro" for Google-published models. See https://cloud.google.com/vertex-ai/generative-ai/docs/learn/model-versioning for details on model naming and versioning, and https://cloud.google.com/vertex-ai/generative-ai/docs/model-garden/explore-models for Model Garden model names. The SDK does not validate custom Model Garden model names; it passes your model name to the backend API server as-is.

func (*Client) GenerativeModelFromCachedContent

  func (c *Client) GenerativeModelFromCachedContent(cc *CachedContent) *GenerativeModel

GenerativeModelFromCachedContent returns a [GenerativeModel] that uses the given [CachedContent]. The argument should come from a call to [Client.CreateCachedContent] or [Client.GetCachedContent].

func (*Client) GetCachedContent

  func (c *Client) GetCachedContent(ctx context.Context, name string) (*CachedContent, error)

GetCachedContent retrieves the CachedContent with the given name.

func (*Client) ListCachedContents

  func (c *Client) ListCachedContents(ctx context.Context) *CachedContentIterator

ListCachedContents lists all the CachedContents associated with the project and location.

func (*Client) UpdateCachedContent

  func (c *Client) UpdateCachedContent(ctx context.Context, cc *CachedContent, ccu *CachedContentToUpdate) (*CachedContent, error)

UpdateCachedContent modifies the [CachedContent] according to the values of the [CachedContentToUpdate] struct. It returns the modified CachedContent.

The argument CachedContent must have its Name field populated. If its UpdateTime field is non-zero, it will be compared with the update time of the stored CachedContent and the call will fail if they differ. This avoids a race condition when two updates are attempted concurrently. All other fields of the argument CachedContent are ignored.
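
For example, a minimal sketch that extends the TTL of a previously stored CachedContent (ccName is assumed to hold its resource name, and client and ctx to come from the usual setup):

  cc, err := client.GetCachedContent(ctx, ccName)
  if err != nil {
      log.Fatal(err)
  }
  updated, err := client.UpdateCachedContent(ctx, cc, &genai.CachedContentToUpdate{
      Expiration: &genai.ExpireTimeOrTTL{TTL: 2 * time.Hour},
  })
  if err != nil {
      log.Fatal(err)
  }
  fmt.Println("new expiration:", updated.Expiration.ExpireTime)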

Content

  type Content struct {
      // Optional. The producer of the content. Must be either 'user' or 'model'.
      //
      // Useful to set for multi-turn conversations, otherwise can be left blank
      // or unset.
      Role string

      // Required. Ordered `Parts` that constitute a single message. Parts may have
      // different IANA MIME types.
      Parts []Part
  }

Content is the base structured datatype containing multi-part content of a message.

A Content includes a role field designating the producer of the Content and a parts field containing multi-part data that contains the content of the message turn.
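
For example, a minimal sketch that seeds a chat session with a prior user/model exchange by constructing Content values directly (the model is assumed to be created as in the examples above):

  cs := model.StartChat()
  cs.History = []*genai.Content{
      {Role: "user", Parts: []genai.Part{genai.Text("Hello, who are you?")}},
      {Role: "model", Parts: []genai.Part{genai.Text("I am a helpful assistant.")}},
  }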

CountTokensResponse

  type CountTokensResponse struct {
      // The total number of tokens counted across all instances from the request.
      TotalTokens int32

      // The total number of billable characters counted across all instances from
      // the request.
      TotalBillableCharacters int32
  }

CountTokensResponse is the response message for [PredictionService.CountTokens][google.cloud.aiplatform.v1beta1.PredictionService.CountTokens].

ExpireTimeOrTTL

  type ExpireTimeOrTTL struct {
      ExpireTime time.Time
      TTL        time.Duration
  }

ExpireTimeOrTTL describes the time when a resource expires. If ExpireTime is non-zero, it is the expiration time. Otherwise, the expiration time is the value of TTL ("time to live") added to the current time.

FileData

  type FileData struct {
      // Required. The IANA standard MIME type of the source data.
      MIMEType string

      // Required. URI.
      FileURI string
  }

FileData is URI-based data.

FinishReason

  type FinishReason int32

FinishReason is the reason why the model stopped generating tokens. If empty, the model has not stopped generating the tokens.

FinishReasonUnspecified, FinishReasonStop, FinishReasonMaxTokens, FinishReasonSafety, FinishReasonRecitation, FinishReasonOther, FinishReasonBlocklist, FinishReasonProhibitedContent, FinishReasonSpii

  const (
      // FinishReasonUnspecified means the finish reason is unspecified.
      FinishReasonUnspecified FinishReason = 0
      // FinishReasonStop means natural stop point of the model or provided stop sequence.
      FinishReasonStop FinishReason = 1
      // FinishReasonMaxTokens means the maximum number of tokens as specified in the request was reached.
      FinishReasonMaxTokens FinishReason = 2
      // FinishReasonSafety means the token generation was stopped as the response was flagged for safety
      // reasons. NOTE: When streaming, the Candidate.Content will be empty if
      // content filters blocked the output.
      FinishReasonSafety FinishReason = 3
      // FinishReasonRecitation means the token generation was stopped as the response was flagged for
      // unauthorized citations.
      FinishReasonRecitation FinishReason = 4
      // FinishReasonOther means all other reasons that stopped the token generation.
      FinishReasonOther FinishReason = 5
      // FinishReasonBlocklist means the token generation was stopped as the response was flagged for the
      // terms which are included from the terminology blocklist.
      FinishReasonBlocklist FinishReason = 6
      // FinishReasonProhibitedContent means the token generation was stopped as the response was flagged for
      // the prohibited contents.
      FinishReasonProhibitedContent FinishReason = 7
      // FinishReasonSpii means the token generation was stopped as the response was flagged for
      // Sensitive Personally Identifiable Information (SPII) contents.
      FinishReasonSpii FinishReason = 8
  )

func (FinishReason) String

  func (v FinishReason) String() string

FunctionCall

  type FunctionCall struct {
      // Required. The name of the function to call.
      // Matches [FunctionDeclaration.name].
      Name string

      // Required. The function parameters and values in JSON object
      // format. See [FunctionDeclaration.parameters] for parameter details.
      Args map[string]any
  }

FunctionCall is a predicted [FunctionCall] returned from the model that contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing the parameters and their values.

FunctionCallingConfig

  type FunctionCallingConfig struct {
      // Optional. Function calling mode.
      Mode FunctionCallingMode

      // Optional. Function names to call. Only set when the Mode is ANY. Function
      // names should match [FunctionDeclaration.name]. With mode set to ANY, model
      // will predict a function call from the set of function names provided.
      AllowedFunctionNames []string
  }

FunctionCallingConfig holds configuration for function calling.
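
For example, a minimal sketch that forces the model to call one of the declared functions (the function name is taken from the [Tool] example later in this document):

  model.ToolConfig = &genai.ToolConfig{
      FunctionCallingConfig: &genai.FunctionCallingConfig{
          Mode:                 genai.FunctionCallingAny,
          AllowedFunctionNames: []string{"CurrentWeather"},
      },
  }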

FunctionCallingMode

  type FunctionCallingMode int32

FunctionCallingMode specifies how the model should use the provided function declarations.

FunctionCallingUnspecified, FunctionCallingAuto, FunctionCallingAny, FunctionCallingNone

  const (
      // FunctionCallingUnspecified means unspecified function calling mode. This value should not be used.
      FunctionCallingUnspecified FunctionCallingMode = 0
      // FunctionCallingAuto means default model behavior; the model decides to predict either a function call
      // or a natural language response.
      FunctionCallingAuto FunctionCallingMode = 1
      // FunctionCallingAny means the model is constrained to always predicting a function call only.
      // If "allowed_function_names" are set, the predicted function call will be
      // limited to any one of "allowed_function_names", else the predicted
      // function call will be any one of the provided "function_declarations".
      FunctionCallingAny FunctionCallingMode = 2
      // FunctionCallingNone means the model will not predict any function call. Model behavior is the same as when
      // not passing any function declarations.
      FunctionCallingNone FunctionCallingMode = 3
  )

func (FunctionCallingMode) String

  func (v FunctionCallingMode) String() string

FunctionDeclaration

  type FunctionDeclaration struct {
      // Required. The name of the function to call.
      // Must start with a letter or an underscore.
      // Must be a-z, A-Z, 0-9, or contain underscores, dots and dashes, with a
      // maximum length of 64.
      Name string

      // Optional. Description and purpose of the function.
      // Model uses it to decide how and whether to call the function.
      Description string

      // Optional. Describes the parameters to this function in JSON Schema Object
      // format. Reflects the Open API 3.03 Parameter Object. string Key: the name
      // of the parameter. Parameter names are case sensitive. Schema Value: the
      // Schema defining the type used for the parameter. For function with no
      // parameters, this can be left unset. Parameter names must start with a
      // letter or an underscore and must only contain chars a-z, A-Z, 0-9, or
      // underscores with a maximum length of 64. Example with 1 required and 1
      // optional parameter: type: OBJECT properties:
      //
      //    param1:
      //      type: STRING
      //    param2:
      //      type: INTEGER
      //
      // required:
      //   - param1
      Parameters *Schema

      // Optional. Describes the output from this function in JSON Schema format.
      // Reflects the Open API 3.03 Response Object. The Schema defines the type
      // used for the response value of the function.
      Response *Schema
  }

FunctionDeclaration is a structured representation of a function declaration as defined by the OpenAPI 3.0 specification. Included in this declaration are the function name and parameters. This FunctionDeclaration is a representation of a block of code that can be used as a Tool by the model and executed by the client.

FunctionResponse

  type FunctionResponse struct {
      // Required. The name of the function to call.
      // Matches [FunctionDeclaration.name] and [FunctionCall.name].
      Name string

      // Required. The function response in JSON object format.
      Response map[string]any
  }

FunctionResponse is the result output from a [FunctionCall]. It contains a string representing the [FunctionDeclaration.name] and a structured JSON object containing any output from the function; this output is used as context for the model. It should contain the result of a [FunctionCall] made based on model prediction.

GenerateContentResponse

  type GenerateContentResponse struct {
      // Output only. Generated candidates.
      Candidates []*Candidate

      // Output only. Content filter results for a prompt sent in the request.
      // Note: Sent only in the first stream chunk.
      // Only happens when no candidates were generated due to content violations.
      PromptFeedback *PromptFeedback

      // Usage metadata about the response(s).
      UsageMetadata *UsageMetadata
  }

GenerateContentResponse is the response from a GenerateContent or GenerateContentStream call.

GenerateContentResponseIterator

  type GenerateContentResponseIterator struct {
      // contains filtered or unexported fields
  }

GenerateContentResponseIterator is an iterator over GenerateContentResponse.

func (*GenerateContentResponseIterator) Next

Next returns the next response.

GenerationConfig

  type GenerationConfig struct {
      // Optional. Controls the randomness of predictions.
      Temperature *float32
      // Optional. If specified, nucleus sampling will be used.
      TopP *float32
      // Optional. If specified, top-k sampling will be used.
      TopK *int32
      // Optional. Number of candidates to generate.
      CandidateCount *int32
      // Optional. The maximum number of output tokens to generate per message.
      MaxOutputTokens *int32
      // Optional. Stop sequences.
      StopSequences []string
      // Optional. Positive penalties.
      PresencePenalty *float32
      // Optional. Frequency penalties.
      FrequencyPenalty *float32
      // Optional. Output response MIME type of the generated candidate text.
      // Supported MIME types:
      // - `text/plain`: (default) Text output.
      // - `application/json`: JSON response in the candidates.
      // The model needs to be prompted to output the appropriate response type,
      // otherwise the behavior is undefined.
      // This is a preview feature.
      ResponseMIMEType string
      // Optional. The `Schema` object allows the definition of input and output
      // data types. These types can be objects, but also primitives and arrays.
      // Represents a select subset of an [OpenAPI 3.0 schema
      // object](https://spec.openapis.org/oas/v3.0.3#schema).
      // If set, a compatible response_mime_type must also be set.
      // Compatible MIME types:
      // `application/json`: Schema for JSON response.
      ResponseSchema *Schema
  }

GenerationConfig configures how the model generates its response.
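
For example, a minimal sketch that requests JSON output conforming to a schema; since GenerativeModel embeds GenerationConfig, the fields can be set directly on the model (the model name and the schema are illustrative placeholders):

  model := client.GenerativeModel(modelName)
  model.SetTemperature(0.2)
  model.ResponseMIMEType = "application/json"
  model.ResponseSchema = &genai.Schema{
      Type: genai.TypeArray,
      Items: &genai.Schema{
          Type: genai.TypeObject,
          Properties: map[string]*genai.Schema{
              "name":  {Type: genai.TypeString},
              "count": {Type: genai.TypeInteger},
          },
          Required: []string{"name"},
      },
  }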

func (*GenerationConfig) SetCandidateCount

  func (c *GenerationConfig) SetCandidateCount(x int32)

SetCandidateCount sets the CandidateCount field.

func (*GenerationConfig) SetMaxOutputTokens

  func (c *GenerationConfig) SetMaxOutputTokens(x int32)

SetMaxOutputTokens sets the MaxOutputTokens field.

func (*GenerationConfig) SetTemperature

  func (c *GenerationConfig) SetTemperature(x float32)

SetTemperature sets the Temperature field.

func (*GenerationConfig) SetTopK

  func (c *GenerationConfig) SetTopK(x int32)

SetTopK sets the TopK field.

func (*GenerationConfig) SetTopP

  func (c *GenerationConfig) SetTopP(x float32)

SetTopP sets the TopP field.

GenerativeModel

  type GenerativeModel struct {
      GenerationConfig
      SafetySettings    []*SafetySetting
      Tools             []*Tool
      ToolConfig        *ToolConfig // configuration for tools
      SystemInstruction *Content

      // The name of the CachedContent to use.
      // Must have already been created with [Client.CreateCachedContent].
      CachedContentName string
      // contains filtered or unexported fields
  }

GenerativeModel is a model that can generate text. Create one with [Client.GenerativeModel], then configure it by setting the exported fields.

The model holds all the config for a GenerateContentRequest, so the GenerateContent method can use a vararg for the content.

func (*GenerativeModel) CountTokens

  func (m *GenerativeModel) CountTokens(ctx context.Context, parts ...Part) (*CountTokensResponse, error)

CountTokens counts the number of tokens in the content.

Example

  package main

  import (
      "context"
      "fmt"
      "log"

      "cloud.google.com/go/vertexai/genai"
  )

  // Your GCP project
  const projectID = "your-project"

  // A GCP location like "us-central1"; if you're using standard Google-published
  // models (like untuned Gemini models), you can keep location blank ("").
  const location = "some-gcp-location"

  // A model name like "gemini-1.0-pro".
  // For custom models from different publishers, prepend the full publisher
  // prefix for the model, e.g.:
  //
  //    modelName = publishers/some-publisher/models/some-model-name
  const modelName = "some-model"

  func main() {
      ctx := context.Background()
      client, err := genai.NewClient(ctx, projectID, location)
      if err != nil {
          log.Fatal(err)
      }
      defer client.Close()

      model := client.GenerativeModel(modelName)
      resp, err := model.CountTokens(ctx, genai.Text("What kind of fish is this?"))
      if err != nil {
          log.Fatal(err)
      }
      fmt.Println("Num tokens:", resp.TotalTokens)
  }

func (*GenerativeModel) GenerateContent

  func (m *GenerativeModel) GenerateContent(ctx context.Context, parts ...Part) (*GenerateContentResponse, error)

GenerateContent produces a single request and response.

Examples

  package main

  import (
      "context"
      "fmt"
      "log"

      "cloud.google.com/go/vertexai/genai"
  )

  // Your GCP project
  const projectID = "your-project"

  // A GCP location like "us-central1"; if you're using standard Google-published
  // models (like untuned Gemini models), you can keep location blank ("").
  const location = "some-gcp-location"

  // A model name like "gemini-1.0-pro".
  // For custom models from different publishers, prepend the full publisher
  // prefix for the model, e.g.:
  //
  //    modelName = publishers/some-publisher/models/some-model-name
  const modelName = "some-model"

  func main() {
      ctx := context.Background()
      client, err := genai.NewClient(ctx, projectID, location)
      if err != nil {
          log.Fatal(err)
      }
      defer client.Close()

      model := client.GenerativeModel(modelName)
      model.SetTemperature(0.9)
      resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
      if err != nil {
          log.Fatal(err)
      }
      printResponse(resp)
  }

  func printResponse(resp *genai.GenerateContentResponse) {
      for _, cand := range resp.Candidates {
          for _, part := range cand.Content.Parts {
              fmt.Println(part)
          }
      }
      fmt.Println("---")
  }
config
  package main

  import (
      "context"
      "fmt"
      "log"

      "cloud.google.com/go/vertexai/genai"
  )

  func main() {
      ctx := context.Background()
      const projectID = "YOUR PROJECT ID"
      const location = "GCP LOCATION"
      client, err := genai.NewClient(ctx, projectID, location)
      if err != nil {
          log.Fatal(err)
      }
      defer client.Close()

      model := client.GenerativeModel("gemini-1.0-pro")
      model.SetTemperature(0.9)
      model.SetTopP(0.5)
      model.SetTopK(20)
      model.SetMaxOutputTokens(100)
      model.SystemInstruction = &genai.Content{
          Parts: []genai.Part{genai.Text("You are Yoda from Star Wars.")},
      }
      resp, err := model.GenerateContent(ctx, genai.Text("What is the average size of a swallow?"))
      if err != nil {
          log.Fatal(err)
      }
      printResponse(resp)
  }

  func printResponse(resp *genai.GenerateContentResponse) {
      for _, cand := range resp.Candidates {
          for _, part := range cand.Content.Parts {
              fmt.Println(part)
          }
      }
      fmt.Println("---")
  }

func (*GenerativeModel) GenerateContentStream

  func (m *GenerativeModel) GenerateContentStream(ctx context.Context, parts ...Part) *GenerateContentResponseIterator
 

GenerateContentStream returns an iterator that enumerates responses.

Example

  package main

  import (
      "context"
      "fmt"
      "log"

      "cloud.google.com/go/vertexai/genai"
      "google.golang.org/api/iterator"
  )

  // Your GCP project
  const projectID = "your-project"

  // A GCP location like "us-central1"; if you're using standard Google-published
  // models (like untuned Gemini models), you can keep location blank ("").
  const location = "some-gcp-location"

  // A model name like "gemini-1.0-pro".
  // For custom models from different publishers, prepend the full publisher
  // prefix for the model, e.g.:
  //
  //    modelName = publishers/some-publisher/models/some-model-name
  const modelName = "some-model"

  func main() {
      ctx := context.Background()
      client, err := genai.NewClient(ctx, projectID, location)
      if err != nil {
          log.Fatal(err)
      }
      defer client.Close()

      model := client.GenerativeModel(modelName)
      iter := model.GenerateContentStream(ctx, genai.Text("Tell me a story about a lumberjack and his giant ox. Keep it very short."))
      for {
          resp, err := iter.Next()
          if err == iterator.Done {
              break
          }
          if err != nil {
              log.Fatal(err)
          }
          printResponse(resp)
      }
  }

  func printResponse(resp *genai.GenerateContentResponse) {
      for _, cand := range resp.Candidates {
          for _, part := range cand.Content.Parts {
              fmt.Println(part)
          }
      }
      fmt.Println("---")
  }

func (*GenerativeModel) Name

  func (m *GenerativeModel) Name() string

Name returns the name of the model.

func (*GenerativeModel) StartChat

  func (m *GenerativeModel) StartChat() *ChatSession

StartChat starts a chat session.

HarmBlockMethod

  type HarmBlockMethod int32

HarmBlockMethod determines how harm blocking is done.

HarmBlockMethodUnspecified, HarmBlockMethodSeverity, HarmBlockMethodProbability

  const (
      // HarmBlockMethodUnspecified means the harm block method is unspecified.
      HarmBlockMethodUnspecified HarmBlockMethod = 0
      // HarmBlockMethodSeverity means the harm block method uses both probability and severity scores.
      HarmBlockMethodSeverity HarmBlockMethod = 1
      // HarmBlockMethodProbability means the harm block method uses the probability score.
      HarmBlockMethodProbability HarmBlockMethod = 2
  )

func (HarmBlockMethod) String

  func (v HarmBlockMethod) String() string
 

HarmBlockThreshold

  type HarmBlockThreshold int32

HarmBlockThreshold specifies probability-based threshold levels for blocking.

HarmBlockUnspecified, HarmBlockLowAndAbove, HarmBlockMediumAndAbove, HarmBlockOnlyHigh, HarmBlockNone

  const (
      // HarmBlockUnspecified means unspecified harm block threshold.
      HarmBlockUnspecified HarmBlockThreshold = 0
      // HarmBlockLowAndAbove means block low threshold and above (i.e. block more).
      HarmBlockLowAndAbove HarmBlockThreshold = 1
      // HarmBlockMediumAndAbove means block medium threshold and above.
      HarmBlockMediumAndAbove HarmBlockThreshold = 2
      // HarmBlockOnlyHigh means block only high threshold (i.e. block less).
      HarmBlockOnlyHigh HarmBlockThreshold = 3
      // HarmBlockNone means block none.
      HarmBlockNone HarmBlockThreshold = 4
  )

func (HarmBlockThreshold) String

  func (v HarmBlockThreshold) String() string
 

HarmCategory

  type HarmCategory int32

HarmCategory specifies harm categories that will block the content.

HarmCategoryUnspecified, HarmCategoryHateSpeech, HarmCategoryDangerousContent, HarmCategoryHarassment, HarmCategorySexuallyExplicit

  const (
      // HarmCategoryUnspecified means the harm category is unspecified.
      HarmCategoryUnspecified HarmCategory = 0
      // HarmCategoryHateSpeech means the harm category is hate speech.
      HarmCategoryHateSpeech HarmCategory = 1
      // HarmCategoryDangerousContent means the harm category is dangerous content.
      HarmCategoryDangerousContent HarmCategory = 2
      // HarmCategoryHarassment means the harm category is harassment.
      HarmCategoryHarassment HarmCategory = 3
      // HarmCategorySexuallyExplicit means the harm category is sexually explicit content.
      HarmCategorySexuallyExplicit HarmCategory = 4
  )

func (HarmCategory) String

  func (v HarmCategory) String() string
 

HarmProbability

  type HarmProbability int32

HarmProbability specifies harm probability levels in the content.

HarmProbabilityUnspecified, HarmProbabilityNegligible, HarmProbabilityLow, HarmProbabilityMedium, HarmProbabilityHigh

  const (
      // HarmProbabilityUnspecified means harm probability unspecified.
      HarmProbabilityUnspecified HarmProbability = 0
      // HarmProbabilityNegligible means negligible level of harm.
      HarmProbabilityNegligible HarmProbability = 1
      // HarmProbabilityLow means low level of harm.
      HarmProbabilityLow HarmProbability = 2
      // HarmProbabilityMedium means medium level of harm.
      HarmProbabilityMedium HarmProbability = 3
      // HarmProbabilityHigh means high level of harm.
      HarmProbabilityHigh HarmProbability = 4
  )

func (HarmProbability) String

  func (v HarmProbability) String() string
 

HarmSeverity

  type HarmSeverity int32

HarmSeverity specifies harm severity levels.

HarmSeverityUnspecified, HarmSeverityNegligible, HarmSeverityLow, HarmSeverityMedium, HarmSeverityHigh

  const (
      // HarmSeverityUnspecified means harm severity unspecified.
      HarmSeverityUnspecified HarmSeverity = 0
      // HarmSeverityNegligible means negligible level of harm severity.
      HarmSeverityNegligible HarmSeverity = 1
      // HarmSeverityLow means low level of harm severity.
      HarmSeverityLow HarmSeverity = 2
      // HarmSeverityMedium means medium level of harm severity.
      HarmSeverityMedium HarmSeverity = 3
      // HarmSeverityHigh means high level of harm severity.
      HarmSeverityHigh HarmSeverity = 4
  )

func (HarmSeverity) String

  func (v HarmSeverity) String() string
 

Part

  type Part interface {
      // contains filtered or unexported methods
  }

A Part is either a Text, a Blob, or a FileData.

PromptFeedback

  type PromptFeedback struct {
      // Output only. Blocked reason.
      BlockReason BlockedReason

      // Output only. Safety ratings.
      SafetyRatings []*SafetyRating

      // Output only. A readable block reason message.
      BlockReasonMessage string
  }

PromptFeedback contains content filter results for a prompt sent in the request.

SafetyRating

  type SafetyRating struct {
      // Output only. Harm category.
      Category HarmCategory
      // Output only. Harm probability levels in the content.
      Probability HarmProbability
      // Output only. Harm probability score.
      ProbabilityScore float32
      // Output only. Harm severity levels in the content.
      Severity HarmSeverity
      // Output only. Harm severity score.
      SeverityScore float32
      // Output only. Indicates whether the content was filtered out because of this
      // rating.
      Blocked bool
  }

SafetyRating is the safety rating corresponding to the generated content.

SafetySetting

  type SafetySetting struct {
      // Required. Harm category.
      Category HarmCategory
      // Required. The harm block threshold.
      Threshold HarmBlockThreshold
      // Optional. Specify if the threshold is used for probability or severity
      // score. If not specified, the threshold is used for probability score.
      Method HarmBlockMethod
  }

A SafetySetting specifies the blocking threshold (and optionally the blocking method) for a harm category.
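
For example, a minimal sketch that relaxes blocking for one category and tightens it for another before generating content:

  model.SafetySettings = []*genai.SafetySetting{
      {Category: genai.HarmCategoryHateSpeech, Threshold: genai.HarmBlockOnlyHigh},
      {Category: genai.HarmCategoryDangerousContent, Threshold: genai.HarmBlockLowAndAbove},
  }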

Schema

  type Schema struct {
      // Optional. The type of the data.
      Type Type
      // Optional. The format of the data.
      // Supported formats:
      //
      //    for NUMBER type: "float", "double"
      //    for INTEGER type: "int32", "int64"
      //    for STRING type: "email", "byte", etc
      Format string
      // Optional. The title of the Schema.
      Title string
      // Optional. The description of the data.
      Description string
      // Optional. Indicates if the value may be null.
      Nullable bool
      // Optional. SCHEMA FIELDS FOR TYPE ARRAY
      // Schema of the elements of Type.ARRAY.
      Items *Schema
      // Optional. Minimum number of the elements for Type.ARRAY.
      MinItems int64
      // Optional. Maximum number of the elements for Type.ARRAY.
      MaxItems int64
      // Optional. Possible values of the element of Type.STRING with enum format.
      // For example we can define an Enum Direction as:
      // {type:STRING, format:enum, enum:["EAST", "NORTH", "SOUTH", "WEST"]}
      Enum []string
      // Optional. SCHEMA FIELDS FOR TYPE OBJECT
      // Properties of Type.OBJECT.
      Properties map[string]*Schema
      // Optional. Required properties of Type.OBJECT.
      Required []string
      // Optional. Minimum number of the properties for Type.OBJECT.
      MinProperties int64
      // Optional. Maximum number of the properties for Type.OBJECT.
      MaxProperties int64
      // Optional. SCHEMA FIELDS FOR TYPE INTEGER and NUMBER
      // Minimum value of the Type.INTEGER and Type.NUMBER.
      Minimum float64
      // Optional. Maximum value of the Type.INTEGER and Type.NUMBER.
      Maximum float64
      // Optional. SCHEMA FIELDS FOR TYPE STRING
      // Minimum length of the Type.STRING.
      MinLength int64
      // Optional. Maximum length of the Type.STRING.
      MaxLength int64
      // Optional. Pattern of the Type.STRING to restrict a string to a regular
      // expression.
      Pattern string
  }

Schema is used to define the format of input/output data. It represents a select subset of an OpenAPI 3.0 schema object. More fields may be added in the future as needed.

Text

  type Text string

A Text is a piece of text, like a question or phrase.

Tool

  type Tool struct {
      // Optional. Function tool type.
      // One or more function declarations to be passed to the model along with the
      // current user query. Model may decide to call a subset of these functions
      // by populating [FunctionCall][content.part.function_call] in the response.
      // User should provide a [FunctionResponse][content.part.function_response]
      // for each function call in the next turn. Based on the function responses,
      // Model will generate the final response back to the user.
      // Maximum 64 function declarations can be provided.
      FunctionDeclarations []*FunctionDeclaration
  }

A Tool contains details that the model may use to generate a response.

A Tool is a piece of code that enables the system to interact with external systems to perform an action, or set of actions, outside of the knowledge and scope of the model. A Tool object should contain exactly one type of Tool (e.g. FunctionDeclaration, Retrieval or GoogleSearchRetrieval).

Example

  package main

  import (
      "context"
      "fmt"
      "log"

      "cloud.google.com/go/vertexai/genai"
  )

  // Your GCP project
  const projectID = "your-project"

  // A GCP location like "us-central1"; if you're using standard Google-published
  // models (like untuned Gemini models), you can keep location blank ("").
  const location = "some-gcp-location"

  func main() {
      ctx := context.Background()
      client, err := genai.NewClient(ctx, projectID, location)
      if err != nil {
          log.Fatal(err)
      }
      defer client.Close()

      currentWeather := func(city string) string {
          switch city {
          case "New York, NY":
              return "cold"
          case "Miami, FL":
              return "warm"
          default:
              return "unknown"
          }
      }

      // To use functions / tools, we have to first define a schema that describes
      // the function to the model. The schema is similar to OpenAPI 3.0.
      //
      // In this example, we create a single function that provides the model with
      // a weather forecast in a given location.
      schema := &genai.Schema{
          Type: genai.TypeObject,
          Properties: map[string]*genai.Schema{
              "location": {
                  Type:        genai.TypeString,
                  Description: "The city and state, e.g. San Francisco, CA",
              },
              "unit": {
                  Type: genai.TypeString,
                  Enum: []string{"celsius", "fahrenheit"},
              },
          },
          Required: []string{"location"},
      }

      weatherTool := &genai.Tool{
          FunctionDeclarations: []*genai.FunctionDeclaration{{
              Name:        "CurrentWeather",
              Description: "Get the current weather in a given location",
              Parameters:  schema,
          }},
      }

      model := client.GenerativeModel("gemini-1.0-pro")

      // Before initiating a conversation, we tell the model which tools it has
      // at its disposal.
      model.Tools = []*genai.Tool{weatherTool}

      // For using tools, the chat mode is useful because it provides the required
      // chat context. A model needs to have tools supplied to it in the chat
      // history so it can use them in subsequent conversations.
      //
      // The flow of messages expected here is:
      //
      // 1. We send a question to the model
      // 2. The model recognizes that it needs to use a tool to answer the question,
      //    and returns a FunctionCall response asking to use the CurrentWeather
      //    tool.
      // 3. We send a FunctionResponse message, simulating the return value of
      //    CurrentWeather for the model's query.
      // 4. The model provides its text answer in response to this message.
      session := model.StartChat()

      res, err := session.SendMessage(ctx, genai.Text("What is the weather like in New York?"))
      if err != nil {
          log.Fatal(err)
      }

      part := res.Candidates[0].Content.Parts[0]
      funcall, ok := part.(genai.FunctionCall)
      if !ok {
          log.Fatalf("expected FunctionCall: %v", part)
      }
      if funcall.Name != "CurrentWeather" {
          log.Fatalf("expected CurrentWeather: %v", funcall.Name)
      }

      // Expect the model to pass a proper string "location" argument to the tool.
      locArg, ok := funcall.Args["location"].(string)
      if !ok {
          log.Fatalf("expected string: %v", funcall.Args["location"])
      }

      weatherData := currentWeather(locArg)
      res, err = session.SendMessage(ctx, genai.FunctionResponse{
          Name: weatherTool.FunctionDeclarations[0].Name,
          Response: map[string]any{
              "weather": weatherData,
          },
      })
      if err != nil {
          log.Fatal(err)
      }
      printResponse(res)
  }

  func printResponse(resp *genai.GenerateContentResponse) {
      for _, cand := range resp.Candidates {
          for _, part := range cand.Content.Parts {
              fmt.Println(part)
          }
      }
      fmt.Println("---")
  }

ToolConfig

  type ToolConfig struct {
      // Optional. Function calling config.
      FunctionCallingConfig *FunctionCallingConfig
  }

ToolConfig configures tools.

Type

  type Type int32
 

Type contains the list of OpenAPI data types as defined by https://swagger.io/docs/specification/data-models/data-types/

TypeUnspecified, TypeString, TypeNumber, TypeInteger, TypeBoolean, TypeArray, TypeObject

  const (
      // TypeUnspecified means not specified, should not be used.
      TypeUnspecified Type = 0
      // TypeString means openAPI string type
      TypeString Type = 1
      // TypeNumber means openAPI number type
      TypeNumber Type = 2
      // TypeInteger means openAPI integer type
      TypeInteger Type = 3
      // TypeBoolean means openAPI boolean type
      TypeBoolean Type = 4
      // TypeArray means openAPI array type
      TypeArray Type = 5
      // TypeObject means openAPI object type
      TypeObject Type = 6
  )

func (Type) String

  func (v Type) String() string

UsageMetadata

  type UsageMetadata struct {
      // Number of tokens in the request.
      PromptTokenCount int32
      // Number of tokens in the response(s).
      CandidatesTokenCount int32
      TotalTokenCount int32
  }

UsageMetadata is usage metadata about response(s).
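
For example, a minimal sketch that reports token usage after a call to GenerateContent (model and ctx are assumed to come from the usual setup):

  resp, err := model.GenerateContent(ctx, genai.Text("Tell me a joke."))
  if err != nil {
      log.Fatal(err)
  }
  if um := resp.UsageMetadata; um != nil {
      fmt.Printf("prompt tokens: %d, candidate tokens: %d, total: %d\n",
          um.PromptTokenCount, um.CandidatesTokenCount, um.TotalTokenCount)
  }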
