Module language_models (1.37.0)

Classes for working with language models.

Classes

ChatMessage

  ChatMessage(content: str, author: str)

A chat message.

CountTokensResponse

  CountTokensResponse(total_tokens: int, total_billable_characters: int, _count_tokens_response: typing.Any)

The response from a count_tokens request.

Attributes
Name
Description
total_tokens
int

The total number of tokens counted across all instances passed to the request.

EvaluationClassificationMetric

  EvaluationClassificationMetric(label_name: typing.Optional[str] = None, auPrc: typing.Optional[float] = None, auRoc: typing.Optional[float] = None, logLoss: typing.Optional[float] = None, confidenceMetrics: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None, confusionMatrix: typing.Optional[typing.Dict[str, typing.Any]] = None)
 

The evaluation metric response for classification metrics.

Parameters
Name
Description
label_name
str

Optional. The name of the label associated with the metrics. This is only returned when only_summary_metrics=False is passed to evaluate().

auPrc
float

Optional. The area under the precision-recall curve.

auRoc
float

Optional. The area under the receiver operating characteristic curve.

logLoss
float

Optional. Logarithmic loss.

confidenceMetrics
List[Dict[str, Any]]

Optional. This is only returned when only_summary_metrics=False is passed to evaluate().

confusionMatrix
Dict[str, Any]

Optional. This is only returned when only_summary_metrics=False is passed to evaluate().

EvaluationMetric

  EvaluationMetric(bleu: typing.Optional[float] = None, rougeLSum: typing.Optional[float] = None)
 

The evaluation metric response.

Parameters
Name
Description
bleu
float

Optional. BLEU (bilingual evaluation understudy) score, based on the sacrebleu implementation.

rougeLSum
float

Optional. ROUGE-L (Longest Common Subsequence) scoring at summary level.

EvaluationQuestionAnsweringSpec

  EvaluationQuestionAnsweringSpec(ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame], task_name: str = "question-answering")
 

Spec for question answering model evaluation tasks.

EvaluationTextClassificationSpec

  EvaluationTextClassificationSpec(ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame], target_column_name: str, class_names: typing.List[str])
 

Spec for text classification model evaluation tasks.

Parameters
Name
Description
target_column_name
str

Required. The label column in the dataset provided in ground_truth_data. Required when task_name='text-classification'.

class_names
List[str]

Required. A list of all possible label names in your dataset. Required when task_name='text-classification'.

EvaluationTextGenerationSpec

  EvaluationTextGenerationSpec(ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame])
 

Spec for text generation model evaluation tasks.

EvaluationTextSummarizationSpec

  EvaluationTextSummarizationSpec(ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame], task_name: str = "summarization")
 

Spec for text summarization model evaluation tasks.

InputOutputTextPair

  InputOutputTextPair(input_text: str, output_text: str)
 

InputOutputTextPair represents a pair of input and output texts.

TextEmbedding

  TextEmbedding(values: typing.List[float], statistics: typing.Optional[vertexai.language_models.TextEmbeddingStatistics] = None, _prediction_response: typing.Optional[google.cloud.aiplatform.models.Prediction] = None)
 

Text embedding vector and statistics.

TextEmbeddingInput

  TextEmbeddingInput(text: str, task_type: typing.Optional[str] = None, title: typing.Optional[str] = None)
 

Structural text embedding input.

TextGenerationResponse

  TextGenerationResponse(text: str, _prediction_response: typing.Any, is_blocked: bool = False, errors: typing.Tuple[int] = (), safety_attributes: typing.Dict[str, float] = <factory>, grounding_metadata: typing.Optional[vertexai.language_models._language_models.GroundingMetadata] = None)
 

TextGenerationResponse represents a response of a language model.

Attributes
Name
Description
text
str

The generated text.

TuningEvaluationSpec

  TuningEvaluationSpec(evaluation_data: typing.Optional[str] = None, evaluation_interval: typing.Optional[int] = None, enable_early_stopping: typing.Optional[bool] = None, enable_checkpoint_selection: typing.Optional[bool] = None, tensorboard: typing.Optional[typing.Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None)
 

Specification for model evaluation to perform during tuning.
