Class PointwiseMetric (1.95.1)
```
PointwiseMetric(
    *,
    metric: str,
    metric_prompt_template: typing.Union[
        vertexai.evaluation.metrics.metric_prompt_template.PointwiseMetricPromptTemplate,
        str,
    ]
)
```
A model-based pointwise metric.

A model-based evaluation metric that evaluates a single generative model's response.

For more details on when to use model-based pointwise metrics, see Evaluation methods and metrics.
Usage Examples:
```
candidate_model = GenerativeModel("gemini-1.5-pro")
eval_dataset = pd.DataFrame({
    "prompt": [...],
})
fluency_metric = PointwiseMetric(
    metric="fluency",
    metric_prompt_template=MetricPromptTemplateExamples.get_prompt_template('fluency'),
)
pointwise_eval_task = EvalTask(
    dataset=eval_dataset,
    metrics=[
        fluency_metric,
        MetricPromptTemplateExamples.Pointwise.GROUNDEDNESS,
    ],
)
pointwise_result = pointwise_eval_task.evaluate(
    model=candidate_model,
)
```
Methods

PointwiseMetric

```
PointwiseMetric(
    *,
    metric: str,
    metric_prompt_template: typing.Union[
        vertexai.evaluation.metrics.metric_prompt_template.PointwiseMetricPromptTemplate,
        str,
    ]
)
```
Initializes a pointwise evaluation metric.
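Because `metric_prompt_template` also accepts a plain string, a custom metric can be defined without `PointwiseMetricPromptTemplate`. The sketch below shows what such a template string might look like and how its placeholder gets filled per dataset row; the metric name `conciseness` and the `{response}` placeholder name are illustrative assumptions, not guaranteed by this reference.

```python
# A hypothetical custom pointwise template (assumption: the evaluation
# service substitutes placeholders such as {response} from each dataset row).
CONCISENESS_TEMPLATE = """You are evaluating the conciseness of a model response.
Rate the response from 1 (verbose) to 5 (concise), then briefly justify the score.

Response to evaluate:
{response}
"""

# In practice this string would be passed as metric_prompt_template to
# PointwiseMetric(metric="conciseness", ...). Locally, the substitution
# step is equivalent to a plain str.format call:
rendered = CONCISENESS_TEMPLATE.format(response="Paris is the capital of France.")
print(rendered)
```

This keeps the metric definition entirely in one string, at the cost of the structured fields (criteria, rating rubric) that `PointwiseMetricPromptTemplate` provides.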
Except as otherwise noted, the content of this page is licensed under the Creative Commons Attribution 4.0 License, and code samples are licensed under the Apache 2.0 License. For details, see the Google Developers Site Policies. Java is a registered trademark of Oracle and/or its affiliates.

Last updated 2025-09-04 UTC.