# MultiConfidenceMetrics

Metrics across multiple confidence levels.

- `confidenceLevelMetrics` and `confidenceLevelMetricsExact` hold metrics for fuzzy and exact matching respectively, each represented as an array of `ConfidenceLevelMetrics` objects.
- The area under the precision-recall curve (AUPRC) and the estimated calibration error (ECE) are computed both with and without fuzzy matching, reported in `auprc`, `estimatedCalibrationError`, `auprcExact`, and `estimatedCalibrationErrorExact` respectively.
- The `metricsType` field specifies the type of metrics used for a given label, referencing the defined `MetricsType`.

Last updated 2025-06-10 UTC.
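As a rough illustration of the fields described above, the snippet below parses a `MultiConfidenceMetrics` payload and compares the fuzzy- and exact-matching scores. This is a minimal sketch: the sample values, the `METRICS_TYPE_AGGREGATE` string, and the inner shape of each `ConfidenceLevelMetrics` entry (`confidenceLevel` plus per-level `metrics`) are illustrative assumptions, not taken from a real API response.

```python
import json

# Illustrative MultiConfidenceMetrics payload. Field names follow the
# reference above; all values and the nested ConfidenceLevelMetrics
# structure are assumptions for demonstration only.
sample = json.loads("""
{
  "auprc": 0.92,
  "estimatedCalibrationError": 0.04,
  "auprcExact": 0.88,
  "estimatedCalibrationErrorExact": 0.06,
  "metricsType": "METRICS_TYPE_AGGREGATE",
  "confidenceLevelMetrics": [
    {"confidenceLevel": 0.5, "metrics": {"precision": 0.90, "recall": 0.85}}
  ],
  "confidenceLevelMetricsExact": [
    {"confidenceLevel": 0.5, "metrics": {"precision": 0.84, "recall": 0.79}}
  ]
}
""")

# Fuzzy matching tolerates near-miss answers, so its AUPRC is typically
# at least as high as the exact-matching AUPRC for the same evaluation.
auprc_gap = sample["auprc"] - sample["auprcExact"]
print(f"fuzzy AUPRC: {sample['auprc']}, exact AUPRC: {sample['auprcExact']}")
print(f"gap attributable to fuzzy matching: {auprc_gap:.2f}")

# Walk the per-confidence-level metrics for the fuzzy-matching variant.
for level in sample["confidenceLevelMetrics"]:
    m = level["metrics"]
    print(f"confidence {level['confidenceLevel']}: "
          f"precision={m['precision']}, recall={m['recall']}")
```

The same traversal applies to `confidenceLevelMetricsExact`; only the matching rule used to produce the numbers differs.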