Output only. Only available when the model is for Smart Reply.
↳ raw_human_eval_template_csv
string
Output only. Human eval template in CSV format. It takes real-world conversations provided through the input dataset and generates example suggestions for the customer to verify the quality of the model. For Smart Reply, the generated CSV file contains the columns Context, (Suggestions,Q1,Q2)*3, Actual reply. Context contains at most the 10 latest messages in the conversation prior to the current suggestion. Q1: "Would you send it as the next message of the agent?" Evaluated based on whether the suggestion is appropriate to be sent by the agent in the current context. Q2: "Does the suggestion move the conversation closer to resolution?" Evaluated based on whether the suggestion provides a solution, answers the customer's question, or collects information from the customer to resolve the customer's issue. The Actual reply column contains the actual agent reply sent in the context.
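As a hypothetical illustration of the column layout described above (the exact header labels in the generated file may differ), the template's header row could look like:

```csv
Context,Suggestion 1,Q1,Q2,Suggestion 2,Q1,Q2,Suggestion 3,Q1,Q2,Actual reply
```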
getName
The resource name of the evaluation. Format: `projects/<Project ID>/conversationModels/<Conversation Model ID>/evaluations/<Evaluation ID>`
Returns
Type
Description
string
setName
The resource name of the evaluation. Format: `projects/<Project ID>/conversationModels/<Conversation Model ID>/evaluations/<Evaluation ID>`
Parameter
Name
Description
var
string
Returns
Type
Description
$this
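As a minimal sketch of working with these accessors: since the setters return `$this`, calls can be chained. The project, model, and evaluation IDs below are hypothetical placeholders, and the snippet assumes the `google/cloud-dialogflow` Composer package is installed.

```php
<?php
require 'vendor/autoload.php';

use Google\Cloud\Dialogflow\V2\ConversationModelEvaluation;

$evaluation = new ConversationModelEvaluation();

// setName() and setDisplayName() return $this, so calls can be chained.
$evaluation
    ->setName('projects/my-project/conversationModels/my-model/evaluations/my-eval')
    ->setDisplayName('Baseline Smart Reply eval');

echo $evaluation->getName();
```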
getDisplayName
Optional. The display name of the model evaluation. At most 64 bytes long.
Returns
Type
Description
string
setDisplayName
Optional. The display name of the model evaluation. At most 64 bytes long.
Parameter
Name
Description
var
string
Returns
Type
Description
$this
getEvaluationConfig
Optional. The configuration of the evaluation task.
getRawHumanEvalTemplateCsv
Output only. Human eval template in CSV format.
It takes real-world conversations provided through the input dataset and
generates example suggestions for the customer to verify the quality of
the model.
For Smart Reply, the generated CSV file contains the columns
Context, (Suggestions,Q1,Q2)*3, Actual reply.
Context contains at most the 10 latest messages in the conversation prior to
the current suggestion.
Q1: "Would you send it as the next message of the agent?"
Evaluated based on whether the suggestion is appropriate to be sent by the
agent in the current context.
Q2: "Does the suggestion move the conversation closer to resolution?"
Evaluated based on whether the suggestion provides a solution, answers the
customer's question, or collects information from the customer to resolve
the customer's issue.
The Actual reply column contains the actual agent reply sent in the context.
Returns
Type
Description
string
setRawHumanEvalTemplateCsv
Output only. Human eval template in CSV format.
It takes real-world conversations provided through the input dataset and
generates example suggestions for the customer to verify the quality of
the model.
For Smart Reply, the generated CSV file contains the columns
Context, (Suggestions,Q1,Q2)*3, Actual reply.
Context contains at most the 10 latest messages in the conversation prior to
the current suggestion.
Q1: "Would you send it as the next message of the agent?"
Evaluated based on whether the suggestion is appropriate to be sent by the
agent in the current context.
Q2: "Does the suggestion move the conversation closer to resolution?"
Evaluated based on whether the suggestion provides a solution, answers the
customer's question, or collects information from the customer to resolve
the customer's issue.
The Actual reply column contains the actual agent reply sent in the context.
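As a hedged usage sketch: assuming `$evaluation` holds a `ConversationModelEvaluation` already fetched from the API, the output-only template can be saved to a local file for distribution to human raters. The filename is an arbitrary choice.

```php
<?php
// Assumes $evaluation is a ConversationModelEvaluation instance retrieved
// from the API; raw_human_eval_template_csv is output only, so writing it
// out locally is how the template reaches human raters.
$csv = $evaluation->getRawHumanEvalTemplateCsv();

if ($csv !== '') {
    file_put_contents('smart_reply_human_eval.csv', $csv);
}
```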
Google Cloud Dialogflow V2 Client - Class ConversationModelEvaluation (2.1.2)
Reference documentation and code samples for the Google Cloud Dialogflow V2 Client class ConversationModelEvaluation.
Represents evaluation result of a conversation model.
Generated from protobuf message `google.cloud.dialogflow.v2.ConversationModelEvaluation`
Namespace: Google \ Cloud \ Dialogflow \ V2
Methods: __construct, getName, setName, getDisplayName, setDisplayName, getEvaluationConfig, hasEvaluationConfig, clearEvaluationConfig, setEvaluationConfig, getCreateTime, hasCreateTime, clearCreateTime, setCreateTime, getSmartReplyMetrics, hasSmartReplyMetrics, setSmartReplyMetrics, getRawHumanEvalTemplateCsv, setRawHumanEvalTemplateCsv, getMetrics
Last updated 2025-09-04 UTC.