Interface GenerationConfig (1.9.0)

Configuration options for model generation and outputs.

Package

@google-cloud/vertexai

Properties

candidateCount

candidateCount?: number;

Optional. Number of candidates to generate.

frequencyPenalty

frequencyPenalty?: number;

Optional. Positive values penalize tokens that repeatedly appear in the generated text, decreasing the probability of repeated content. The maximum value for frequencyPenalty is up to, but not including, 2.0; the minimum value is -2.0. Supported by gemini-1.5-pro and gemini-1.5-flash only.
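A minimal sketch of the documented range check, assuming a hypothetical helper (the function name is illustrative, not part of the SDK):

```typescript
// Hypothetical helper enforcing the documented frequencyPenalty range:
// minimum -2.0 (inclusive), maximum 2.0 (exclusive).
function isValidFrequencyPenalty(value: number): boolean {
  return value >= -2.0 && value < 2.0;
}

console.log(isValidFrequencyPenalty(1.5)); // true
console.log(isValidFrequencyPenalty(2.0)); // false: 2.0 itself is excluded
```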

maxOutputTokens

maxOutputTokens?: number;

Optional. The maximum number of output tokens to generate per message.

responseMimeType

responseMimeType?: string;

Optional. Output response MIME type of the generated candidate text. Supported MIME types:
- text/plain: (default) Text output.
- application/json: JSON response in the candidates.
The model must be prompted to output the appropriate response type; otherwise the behavior is undefined.

responseSchema

responseSchema?: ResponseSchema;

Optional. The schema that generated candidate text must follow. For more information, see https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output. If set, a compatible responseMimeType must also be set.
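As a sketch of how these fields fit together, the following builds a config requesting JSON output constrained by a schema. The local interface only mirrors the properties documented on this page; real code would import GenerationConfig and ResponseSchema from @google-cloud/vertexai, and the schema shown is an illustrative assumption:

```typescript
// Local mirror of the documented GenerationConfig fields, for illustration.
interface GenerationConfig {
  candidateCount?: number;
  frequencyPenalty?: number;
  maxOutputTokens?: number;
  responseMimeType?: string;
  responseSchema?: object; // ResponseSchema in the actual package
  stopSequences?: string[];
  temperature?: number;
  topK?: number;
  topP?: number;
}

// Request JSON output that follows a simple object schema.
// responseSchema requires a compatible responseMimeType (application/json).
const jsonConfig: GenerationConfig = {
  responseMimeType: "application/json",
  responseSchema: {
    type: "object",
    properties: {
      title: { type: "string" },
      year: { type: "integer" },
    },
    required: ["title"],
  },
  maxOutputTokens: 1024,
};

console.log(jsonConfig.responseMimeType); // "application/json"
```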

stopSequences

stopSequences?: string[];

Optional. A list of strings; the model stops generating output when it produces one of these sequences.

temperature

temperature?: number;

Optional. Controls the randomness of predictions; higher values produce more varied output, lower values more deterministic output.

topK

topK?: number;

Optional. If specified, topK sampling will be used.

topP

topP?: number;

Optional. If specified, nucleus sampling will be used.
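As a sketch of the sampling-related properties (the values below are illustrative, not defaults), a config tuned for fairly deterministic output might look like this:

```typescript
// Illustrative sampling configuration; shaped like GenerationConfig,
// which real code would import from @google-cloud/vertexai.
const samplingConfig = {
  temperature: 0.4,        // lower temperature -> less random output
  topK: 40,                // sample only among the 40 most likely tokens
  topP: 0.95,              // nucleus sampling over cumulative probability 0.95
  candidateCount: 1,       // generate a single candidate
  stopSequences: ["\n\n"], // stop when a blank line is produced
};

console.log(samplingConfig.temperature); // 0.4
```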
