EvalResult(
    summary_metrics: typing.Dict[str, float],
    metrics_table: typing.Optional[pd.DataFrame] = None,
    metadata: typing.Optional[typing.Dict[str, str]] = None,
)
Evaluation result.
Attributes

| Name | Type | Description |
| --- | --- | --- |
| summary_metrics | Dict[str, float] | A dictionary of summary evaluation metrics for an evaluation run. |
| metrics_table | Optional[pd.DataFrame] | A pandas.DataFrame containing the evaluation dataset inputs, predictions, explanations, and per-row metric results. |
| metadata | Optional[Dict[str, str]] | The metadata for the evaluation run. |
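To illustrate how the three fields fit together, here is a minimal sketch that mirrors the documented signature with a stand-in dataclass (the real class lives in the SDK; the field values, column names, and metric keys below are invented examples, not part of the documented API):

```python
# Sketch only: a dataclass stand-in mirroring the documented EvalResult signature.
from dataclasses import dataclass
from typing import Dict, Optional

import pandas as pd


@dataclass
class EvalResult:
    """Stand-in with the same fields as the documented class."""

    summary_metrics: Dict[str, float]
    metrics_table: Optional[pd.DataFrame] = None
    metadata: Optional[Dict[str, str]] = None


# Hypothetical evaluation run over two rows; metric names are illustrative.
result = EvalResult(
    summary_metrics={"row_count": 2.0, "exact_match/mean": 0.5},
    metrics_table=pd.DataFrame(
        {
            "prompt": ["What is 2+2?", "Capital of France?"],
            "response": ["4", "Berlin"],
            "exact_match/score": [1.0, 0.0],
        }
    ),
    metadata={"experiment": "demo-run"},
)

# summary_metrics aggregates what metrics_table reports per row.
print(result.summary_metrics["exact_match/mean"])  # 0.5
```

Note that `summary_metrics` is required while `metrics_table` and `metadata` both default to `None`, so a result carrying only aggregate numbers is valid.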

