Classes for working with language models.
Classes
ChatMessage
ChatMessage(content: str, author: str)
A chat message.
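For illustration, a minimal sketch of seeding a chat session with prior ChatMessage history. The chat-bison model name, the project settings, and the message_history parameter of start_chat() are assumptions based on common usage of this SDK and may vary by version:

```python
import vertexai
from vertexai.language_models import ChatMessage, ChatModel

vertexai.init(project="my-project", location="us-central1")  # hypothetical project

chat_model = ChatModel.from_pretrained("chat-bison@001")  # assumed model name

# Replay an earlier exchange so the model has context for the next turn.
history = [
    ChatMessage(content="What is Vertex AI?", author="user"),
    ChatMessage(content="Vertex AI is Google Cloud's ML platform.", author="bot"),
]
chat = chat_model.start_chat(message_history=history)
print(chat.send_message("How do I tune a model there?").text)
```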
CountTokensResponse
CountTokensResponse(total_tokens: int, total_billable_characters: int, _count_tokens_response: typing.Any)
The response from a count_tokens request.
total_tokens
int
The total number of tokens counted across all instances passed to the request.
total_billable_characters
int
The total number of billable characters counted across all instances from the request.
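A short sketch of obtaining a CountTokensResponse; the count_tokens() method shown here exists on the text generation and chat model classes in recent SDK versions, and the model name is an assumption:

```python
from vertexai.preview.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@001")  # assumed model name

# count_tokens accepts a list of prompt instances and aggregates across them.
response = model.count_tokens(["How many tokens is this prompt?"])
print(response.total_tokens, response.total_billable_characters)
```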
EvaluationClassificationMetric
EvaluationClassificationMetric(
    label_name: typing.Optional[str] = None,
    auPrc: typing.Optional[float] = None,
    auRoc: typing.Optional[float] = None,
    logLoss: typing.Optional[float] = None,
    confidenceMetrics: typing.Optional[typing.List[typing.Dict[str, typing.Any]]] = None,
    confusionMatrix: typing.Optional[typing.Dict[str, typing.Any]] = None,
)
The evaluation metric response for classification metrics.
label_name
str
Optional. The name of the label associated with the metrics. This is only returned when only_summary_metrics=False is passed to evaluate().
auPrc
float
Optional. The area under the precision recall curve.
auRoc
float
Optional. The area under the receiver operating characteristic curve.
logLoss
float
Optional. Logarithmic loss.
confidenceMetrics
List[Dict[str, Any]]
Optional. This is only returned when only_summary_metrics=False is passed to evaluate().
confusionMatrix
Dict[str, Any]
Optional. This is only returned when only_summary_metrics=False is passed to evaluate().
EvaluationMetric
EvaluationMetric(bleu: typing.Optional[float] = None, rougeLSum: typing.Optional[float] = None)
The evaluation metric response.
bleu
float
Optional. BLEU (Bilingual Evaluation Understudy). Scores are based on the sacrebleu implementation.
rougeLSum
float
Optional. ROUGE-L (Longest Common Subsequence) scoring at summary level.
EvaluationQuestionAnsweringSpec
EvaluationQuestionAnsweringSpec(
    ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame],
    task_name: str = "question-answering",
)
Spec for question answering model evaluation tasks.
EvaluationTextClassificationSpec
EvaluationTextClassificationSpec(
    ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame],
    target_column_name: str,
    class_names: typing.List[str],
)
Spec for text classification model evaluation tasks.
target_column_name
str
Required. The label column in the dataset provided in ground_truth_data. Required when task_name='text-classification'.
class_names
List[str]
Required. A list of all possible label names in your dataset. Required when task_name='text-classification'.
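As an illustration, a minimal sketch of a text classification evaluation built from this spec; the evaluate(task_spec=...) entry point is inferred from the evaluate() references above (typically exposed on the preview model classes), and the model name, GCS path, column name, and label set are placeholders:

```python
from vertexai.preview.language_models import (
    EvaluationTextClassificationSpec,
    TextGenerationModel,
)

model = TextGenerationModel.from_pretrained("text-bison@001")  # assumed model name

task_spec = EvaluationTextClassificationSpec(
    ground_truth_data="gs://my-bucket/eval.jsonl",  # placeholder GCS path
    target_column_name="label",                     # placeholder column name
    class_names=["positive", "negative"],           # placeholder label set
)

# Returns an EvaluationClassificationMetric as described above.
metrics = model.evaluate(task_spec=task_spec)
print(metrics.auPrc, metrics.auRoc, metrics.logLoss)
```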
EvaluationTextGenerationSpec
EvaluationTextGenerationSpec(ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame])
Spec for text generation model evaluation tasks.
EvaluationTextSummarizationSpec
EvaluationTextSummarizationSpec(
    ground_truth_data: typing.Union[typing.List[str], str, pandas.DataFrame],
    task_name: str = "summarization",
)
Spec for text summarization model evaluation tasks.
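The other task specs follow the same pattern; for example, a hedged sketch of a summarization evaluation that yields the generic EvaluationMetric described above (same assumptions about the evaluate() entry point; the data path is a placeholder):

```python
from vertexai.preview.language_models import (
    EvaluationTextSummarizationSpec,
    TextGenerationModel,
)

model = TextGenerationModel.from_pretrained("text-bison@001")  # assumed model name

task_spec = EvaluationTextSummarizationSpec(
    ground_truth_data="gs://my-bucket/summaries.jsonl",  # placeholder GCS path
)

# Returns an EvaluationMetric; rougeLSum is the headline summarization score.
metrics = model.evaluate(task_spec=task_spec)
print(metrics.rougeLSum)
```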
InputOutputTextPair
InputOutputTextPair(input_text: str, output_text: str)
InputOutputTextPair represents a pair of input and output texts.
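A minimal sketch of supplying few-shot examples to a chat session with InputOutputTextPair; the examples parameter of start_chat() and the model name are assumptions from common usage:

```python
from vertexai.language_models import ChatModel, InputOutputTextPair

chat_model = ChatModel.from_pretrained("chat-bison@001")  # assumed model name

chat = chat_model.start_chat(
    context="Answer in one short sentence.",
    examples=[
        InputOutputTextPair(
            input_text="What is BLEU?",
            output_text="A metric that scores machine translation against references.",
        ),
    ],
)
print(chat.send_message("What is ROUGE-L?").text)
```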
TextEmbedding
TextEmbedding(
    values: typing.List[float],
    statistics: typing.Optional[vertexai.language_models.TextEmbeddingStatistics] = None,
    _prediction_response: typing.Optional[google.cloud.aiplatform.models.Prediction] = None,
)
Text embedding vector and statistics.
TextEmbeddingInput
TextEmbeddingInput(
    text: str,
    task_type: typing.Optional[str] = None,
    title: typing.Optional[str] = None,
)
Structural text embedding input.
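A brief sketch of requesting embeddings with TextEmbeddingInput and reading the resulting TextEmbedding objects; the textembedding-gecko model name and the RETRIEVAL_DOCUMENT task type are assumptions that depend on the models available to your project:

```python
from vertexai.language_models import TextEmbeddingInput, TextEmbeddingModel

model = TextEmbeddingModel.from_pretrained("textembedding-gecko@001")  # assumed model name

inputs = [
    TextEmbeddingInput(
        text="Vertex AI hosts foundation models.",
        task_type="RETRIEVAL_DOCUMENT",  # assumed task type value
        title="Vertex AI overview",
    ),
]

# Returns one TextEmbedding per input; values is the embedding vector.
embeddings = model.get_embeddings(inputs)
print(len(embeddings[0].values), embeddings[0].statistics)
```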
TextGenerationResponse
TextGenerationResponse(
    text: str,
    _prediction_response: typing.Any,
    is_blocked: bool = False,
    errors: typing.Tuple[int] = (),
    safety_attributes: typing.Dict[str, float] = <factory>,
    grounding_metadata: typing.Optional[vertexai.language_models._language_models.GroundingMetadata] = None,
)
TextGenerationResponse represents a response of a language model.
text
str
The generated text.
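A minimal sketch of obtaining and inspecting a TextGenerationResponse from predict(); the model name and sampling parameters are placeholders:

```python
from vertexai.language_models import TextGenerationModel

model = TextGenerationModel.from_pretrained("text-bison@001")  # assumed model name

response = model.predict(
    "Summarize the benefits of model tuning in two sentences.",
    temperature=0.2,        # placeholder sampling settings
    max_output_tokens=256,
)

# Responses blocked by safety filters carry no text; check before reading it.
if response.is_blocked:
    print("Blocked:", response.safety_attributes)
else:
    print(response.text)
```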
TuningEvaluationSpec
TuningEvaluationSpec(
    evaluation_data: typing.Optional[str] = None,
    evaluation_interval: typing.Optional[int] = None,
    enable_early_stopping: typing.Optional[bool] = None,
    enable_checkpoint_selection: typing.Optional[bool] = None,
    tensorboard: typing.Optional[typing.Union[google.cloud.aiplatform.tensorboard.tensorboard_resource.Tensorboard, str]] = None,
)
Specification for model evaluation to perform during tuning.
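Finally, a hedged sketch of attaching a TuningEvaluationSpec to a tuning run; the tune_model() parameters shown (training_data, train_steps, tuning_evaluation_spec) follow common usage of the preview SDK but may differ by version, and the GCS paths are placeholders:

```python
from vertexai.preview.language_models import TextGenerationModel, TuningEvaluationSpec

model = TextGenerationModel.from_pretrained("text-bison@001")  # assumed model name

eval_spec = TuningEvaluationSpec(
    evaluation_data="gs://my-bucket/eval.jsonl",  # placeholder GCS path
    evaluation_interval=20,      # evaluate every 20 tuning steps
    enable_early_stopping=True,  # stop when evaluation stops improving
)

tuning_job = model.tune_model(
    training_data="gs://my-bucket/train.jsonl",  # placeholder GCS path
    train_steps=100,
    tuning_evaluation_spec=eval_spec,
)
```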