Module language_models (1.33.1)

Classes for working with language models.

Classes

ChatMessage

  ChatMessage(content: str, author: str)

A chat message.

author: Author of the message.

ChatSession

  ChatSession(
      model: vertexai.language_models.ChatModel,
      context: typing.Optional[str] = None,
      examples: typing.Optional[typing.List[vertexai.language_models.InputOutputTextPair]] = None,
      max_output_tokens: typing.Optional[int] = None,
      temperature: typing.Optional[float] = None,
      top_k: typing.Optional[int] = None,
      top_p: typing.Optional[float] = None,
      message_history: typing.Optional[typing.List[vertexai.language_models.ChatMessage]] = None,
      stop_sequences: typing.Optional[typing.List[str]] = None,
  )

ChatSession represents a chat session with a language model.

Within a chat session, the model keeps context and remembers the previous conversation.
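
A chat session is usually created through ChatModel.start_chat() rather than by constructing ChatSession directly. The following is a minimal, illustrative sketch; the project ID, location, and "chat-bison" model name are placeholders you would replace with your own values.

    import vertexai
    from vertexai.language_models import ChatModel, InputOutputTextPair

    # Placeholder project and location.
    vertexai.init(project="my-project", location="us-central1")

    chat_model = ChatModel.from_pretrained("chat-bison")

    # start_chat() returns a ChatSession configured with the parameters above.
    chat = chat_model.start_chat(
        context="You are a concise assistant for a software team.",
        examples=[
            InputOutputTextPair(
                input_text="What is the capital of France?",
                output_text="Paris.",
            )
        ],
        temperature=0.2,
        max_output_tokens=256,
    )

    # The session keeps the conversation history between calls.
    response = chat.send_message("Name three Python web frameworks.")
    print(response.text)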

CodeChatSession

  CodeChatSession(
      model: vertexai.language_models.CodeChatModel,
      context: typing.Optional[str] = None,
      max_output_tokens: typing.Optional[int] = None,
      temperature: typing.Optional[float] = None,
      message_history: typing.Optional[typing.List[vertexai.language_models.ChatMessage]] = None,
      stop_sequences: typing.Optional[typing.List[str]] = None,
  )

CodeChatSession represents a chat session with a code chat language model.

Within a code chat session, the model keeps context and remembers the previous conversation.
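
As with ChatSession, a code chat session is usually obtained from CodeChatModel.start_chat(). A minimal sketch, using "codechat-bison" as a placeholder model name:

    from vertexai.language_models import CodeChatModel

    code_chat_model = CodeChatModel.from_pretrained("codechat-bison")

    # start_chat() returns a CodeChatSession; earlier messages stay in context.
    code_chat = code_chat_model.start_chat(
        max_output_tokens=512,
        temperature=0.2,
    )

    response = code_chat.send_message(
        "Write a Python function that reverses a string."
    )
    print(response.text)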

EvaluationClassificationMetric

  EvaluationClassificationMetric(
      label_name: typing.Optional[str] = None,
      auPrc: typing.Optional[float] = None,
      auRoc: typing.Optional[float] = None,
      logLoss: typing.Optional[float] = None,
      confidenceMetrics: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None,
      confusionMatrix: typing.Optional[typing.Dict[str, typing.Any]] = None,
  )

The evaluation metric response for classification metrics.

Parameters

label_name (str)
    Optional. The name of the label associated with the metrics. This is only returned when only_summary_metrics=False is passed to evaluate().

auPrc (float)
    Optional. The area under the precision-recall curve.

auRoc (float)
    Optional. The area under the receiver operating characteristic curve.

logLoss (float)
    Optional. Logarithmic loss.

confidenceMetrics (List[Dict[str, Any]])
    Optional. This is only returned when only_summary_metrics=False is passed to evaluate().

confusionMatrix (Dict[str, Any])
    Optional. This is only returned when only_summary_metrics=False is passed to evaluate().

EvaluationMetric

  EvaluationMetric(
      bleu: typing.Optional[float] = None,
      rougeLSum: typing.Optional[float] = None,
  )

The evaluation metric response.

Parameters

bleu (float)
    Optional. BLEU (Bilingual Evaluation Understudy). Scores based on the sacrebleu implementation.

rougeLSum (float)
    Optional. ROUGE-L (Longest Common Subsequence) scoring at summary level.

EvaluationQuestionAnsweringSpec

  EvaluationQuestionAnsweringSpec(
      ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame],
      task_name: str = "question-answering",
  )

Spec for question answering model evaluation tasks.

EvaluationTextClassificationSpec

  EvaluationTextClassificationSpec(
      ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame],
      target_column_name: str,
      class_names: typing.List[str],
  )

Spec for text classification model evaluation tasks.

Parameters

target_column_name (str)
    Required. The label column in the dataset provided in ground_truth_data. Required when task_name='text-classification'.

class_names (List[str])
    Required. A list of all possible label names in your dataset. Required when task_name='text-classification'.
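
A minimal sketch of running a text classification evaluation. It assumes the model class exposes the evaluate(task_spec=...) method referenced in the parameter notes above (in some SDK versions this lives under vertexai.preview.language_models); the Cloud Storage URI, class names, and "text-bison" model name are placeholders.

    from vertexai.language_models import (
        EvaluationTextClassificationSpec,
        TextGenerationModel,
    )

    model = TextGenerationModel.from_pretrained("text-bison")

    task_spec = EvaluationTextClassificationSpec(
        ground_truth_data="gs://my-bucket/eval/classification.jsonl",  # placeholder URI
        target_column_name="label",
        class_names=["positive", "negative", "neutral"],
    )

    # With the default only_summary_metrics=True, the result carries the
    # summary fields documented above (auPrc, auRoc, logLoss).
    metrics = model.evaluate(task_spec=task_spec)
    print(metrics.auPrc, metrics.auRoc, metrics.logLoss)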

EvaluationTextGenerationSpec

  EvaluationTextGenerationSpec(
      ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame]
  )

Spec for text generation model evaluation tasks.

EvaluationTextSummarizationSpec

  EvaluationTextSummarizationSpec(
      ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame],
      task_name: str = "summarization",
  )

Spec for text summarization model evaluation tasks.
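
A similar sketch for a summarization evaluation, whose result exposes the EvaluationMetric fields documented above (bleu, rougeLSum). The data URI is a placeholder, and the evaluate() method carries the same caveats as the classification example.

    from vertexai.language_models import (
        EvaluationTextSummarizationSpec,
        TextGenerationModel,
    )

    model = TextGenerationModel.from_pretrained("text-bison")

    task_spec = EvaluationTextSummarizationSpec(
        ground_truth_data="gs://my-bucket/eval/summarization.jsonl",  # placeholder URI
    )

    metrics = model.evaluate(task_spec=task_spec)
    print(metrics.rougeLSum)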

InputOutputTextPair

  InputOutputTextPair(input_text: str, output_text: str)

InputOutputTextPair represents a pair of input and output texts.

TextEmbedding

  TextEmbedding(
      values: typing.List[float],
      statistics: typing.Optional[vertexai.language_models.TextEmbeddingStatistics] = None,
      _prediction_response: typing.Optional[google.cloud.aiplatform.models.Prediction] = None,
  )

Text embedding vector and statistics.

TextEmbeddingInput

  TextEmbeddingInput(
      text: str,
      task_type: typing.Optional[str] = None,
      title: typing.Optional[str] = None,
  )

Structural text embedding input.

task_type: The name of the downstream task the embeddings will be used for. Valid values:

RETRIEVAL_QUERY: Specifies the given text is a query in a search/retrieval setting.
RETRIEVAL_DOCUMENT: Specifies the given text is a document from the corpus being searched.
SEMANTIC_SIMILARITY: Specifies the given text will be used for STS.
CLASSIFICATION: Specifies that the given text will be classified.
CLUSTERING: Specifies that the embeddings will be used for clustering.
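
TextEmbeddingInput objects are passed to TextEmbeddingModel.get_embeddings() in place of plain strings. A minimal sketch; the "textembedding-gecko" model name is a placeholder, and task_type support depends on the embedding model version.

    from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

    model = TextEmbeddingModel.from_pretrained("textembedding-gecko")

    inputs = [
        TextEmbeddingInput(
            text="What is a transformer model?",
            task_type="RETRIEVAL_QUERY",
        ),
        TextEmbeddingInput(
            text="Transformers are a neural network architecture based on attention.",
            task_type="RETRIEVAL_DOCUMENT",
            title="Transformers overview",
        ),
    ]

    embeddings = model.get_embeddings(inputs)
    for embedding in embeddings:
        # TextEmbedding.values is the embedding vector; statistics may be None.
        print(len(embedding.values), embedding.statistics)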

TextGenerationResponse

  TextGenerationResponse(
      text: str,
      _prediction_response: typing.Any,
      is_blocked: bool = False,
      safety_attributes: typing.Dict[str, float] = <factory>,
  )

TextGenerationResponse represents a response of a language model.

Attributes:

text: The generated text.

safety_attributes: Scores for safety attributes. Learn more about the safety attributes here: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/responsible-ai#safety_attribute_descriptions
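
Responses of this type are returned by calls such as TextGenerationModel.predict() and ChatSession.send_message(). A short sketch reading the documented fields ("text-bison" is a placeholder model name):

    from vertexai.language_models import TextGenerationModel

    model = TextGenerationModel.from_pretrained("text-bison")
    response = model.predict(
        "Write a haiku about the ocean.",
        temperature=0.7,
        max_output_tokens=128,
    )

    print(response.text)               # the generated text
    print(response.is_blocked)         # True if the response was blocked
    print(response.safety_attributes)  # dict of safety attribute scores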

TuningEvaluationSpec

  TuningEvaluationSpec(
      evaluation_data: str,
      evaluation_interval: typing.Optional[int] = None,
      enable_early_stopping: typing.Optional[bool] = None,
      tensorboard: typing.Optional[typing.Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None,
  )

Specification for model evaluation to perform during tuning.

evaluation_interval: The evaluation runs at every evaluation_interval tuning steps. Default: 20.

tensorboard: The Vertex TensorBoard resource where the evaluation metrics are written. The TensorBoard must be in the same location as the tuning job.
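
A minimal sketch of attaching an evaluation spec to a tuning run. It assumes tune_model() accepts a tuning_evaluation_spec argument in your SDK version; the Cloud Storage URIs, the TensorBoard resource name, and the "text-bison" model name are placeholders.

    from vertexai.language_models import TextGenerationModel, TuningEvaluationSpec

    eval_spec = TuningEvaluationSpec(
        evaluation_data="gs://my-bucket/eval/eval.jsonl",  # placeholder URI
        evaluation_interval=20,
        enable_early_stopping=True,
        tensorboard=(
            "projects/PROJECT_NUMBER/locations/LOCATION/tensorboards/TENSORBOARD_ID"
        ),
    )

    model = TextGenerationModel.from_pretrained("text-bison")
    model.tune_model(
        training_data="gs://my-bucket/train/train.jsonl",  # placeholder URI
        train_steps=100,
        tuning_evaluation_spec=eval_spec,
    )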
