Quality AI basics

Quality AI uses an AI model to automatically analyze customer service conversations, or interactions between contact center agents and users. The AI model analyzes chat or voice transcripts.

You can perform the following operations on custom tags when editing a scorecard:

  • Add tags to existing questions.
  • Remove tags from existing questions.

Conversation details

Conversations contain the following details, which include identifiers and metrics for analysis.

  • Agent ID: A unique number assigned to each agent which identifies the conversations they have handled.
  • Agent total score: The average score of an agent's performance across that agent's conversations.
  • AHT: Average Handling Time, the average duration of an agent's conversations in a specified timeframe.
  • Average agent score: The average across all your agents' total scores. (See Agent total score.)
  • Average agent quality score: Average of the quality scores produced by a single agent's conversations over a specified period of time. (See Quality score.)
  • Average conversation score: Average score across all conversations.
  • Average quality score: Average of the quality score over a specified period of time. (See Quality score.)
  • Channel: The medium of conversation between a customer and an agent. Channel has one of two values: voice or chat.
  • Conversation ID: A unique number assigned to identify each customer service conversation.
  • Conversation total score: Sum of question scores in a single conversation.
  • CSAT: Customer satisfaction rating, generally ranging from 1 to 5.
  • Duration: Time the conversation spans, beginning to end.
  • Primary topic: The concern discussed during a conversation, determined by topic modeling. Quality AI only displays a primary topic if you've used topic modeling on that conversation.
  • Quality score: The overall score assigned for a scorecard.
  • Question: Used to evaluate an agent's performance in a conversation. You enter your questions into Quality AI, and the agent is then rated on whether they satisfied the criteria for each question.
  • Sentiment: The main emotional state conveyed by the conversation. Sentiment has one of three values: positive, neutral, or negative. Quality AI displays a sentiment only if you've used sentiment analysis on the conversation.
  • Silence: Time during which neither the customer nor the agent spoke or typed.
  • Start date: The date on which the conversation began.
  • Start time: The time at which the conversation began.
  • Total volume: The total number of conversations that a single agent handled in a specified timeframe.
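To make the relationship between a few of these aggregate metrics concrete, here is a minimal Python sketch that derives Total volume, AHT, and Average agent quality score for one agent from a list of conversation records. The record fields and function names are hypothetical illustrations, not the Quality AI data schema or API.

```python
from dataclasses import dataclass

# Hypothetical conversation record. Field names are illustrative only,
# not the Quality AI data schema.
@dataclass
class Conversation:
    conversation_id: str
    agent_id: str
    duration_seconds: float  # Duration: time the conversation spans
    quality_score: float     # Quality score for one scorecard, as a percentage

def agent_metrics(conversations: list[Conversation], agent_id: str) -> dict:
    """Aggregate per-agent metrics over a set of conversations."""
    own = [c for c in conversations if c.agent_id == agent_id]
    if not own:
        return {}
    total_volume = len(own)  # Total volume
    return {
        "total_volume": total_volume,
        # AHT: average duration of the agent's conversations in the timeframe
        "aht_seconds": sum(c.duration_seconds for c in own) / total_volume,
        # Average agent quality score: mean quality score across the agent's conversations
        "average_quality_score": sum(c.quality_score for c in own) / total_volume,
    }
```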

Scorecards

The scorecard is a structured framework used to assess conversation quality and the performance of contact center agents during conversations. Each contact center has its own scorecards.

Each scorecard consists of the following information:

  • Question (Example: Did the agent provide an appropriate product compliment?).
  • Optional: Tag to group the questions into categories.
  • Instructions for interpreting the question and defining each answer choice.
  • Answer type (can be text, numbers, or yes/no).
  • Answer choices that define the possible answers based on answer type (For example, yes and no, a list of numbers, or some text responses).
  • Score to set the points earned for each answer choice. The maximum score for a single question is determined by the highest score among all the answer choices.
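The following is a minimal sketch of how a scorecard with this structure might be represented in code. The class and field names are illustrative only and don't correspond to the Conversational Insights API.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative structures only; not the Conversational Insights API schema.
@dataclass
class AnswerChoice:
    label: str    # for example "Yes" or "No"
    score: float  # points earned when this choice is assigned

@dataclass
class Question:
    text: str                  # the question asked about the conversation
    instructions: str          # how to interpret the question and each answer choice
    answer_type: str           # "text", "number", or "yes/no"
    choices: list[AnswerChoice]
    tag: Optional[str] = None  # optional tag to group questions into categories

    @property
    def max_score(self) -> float:
        # The maximum score for a question is the highest score among its answer choices.
        return max(c.score for c in self.choices)

@dataclass
class Scorecard:
    name: str
    questions: list[Question] = field(default_factory=list)
```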

You can create multiple scorecards in a single Google Cloud project. If each scorecard contains different questions, you can see multiple quality scores for the same conversations. Each score is then based on different criteria.

Predefined questions

You can use predefined questions in your scorecard. Conversational Insights defines the instructions and answer choices for these questions, and you can't edit them. Each predefined question has an ID that indicates its version. You can add predefined questions to any scorecard to use for Quality AI analysis.

Predefined questions identify the following conversation outcome metrics:

  • Conversation outcome
  • Escalation initiator
  • Agent helpfulness
  • User satisfaction

Conversation outcome

The conversation outcome identifies how the conversation ended in the context of the user's task.

  • Abandoned: The user stopped responding or dropped off the conversation before any of the user's tasks were completed and without being escalated to a human agent.
  • Partially resolved: The agent understood the user's intent and completed some, but not all of the user's tasks. The conversation ended before all the user's tasks were fully completed.
  • Escalated: Conversation transferred to a human agent, initiated by the user's request.
  • Redirected: Conversation transferred to a human agent, initiated by the agent.
  • Successfully resolved: The agent understood the user's intent, completed the user's tasks, and received an explicit acknowledgement from the user at the end of the conversation.

Escalation initiator

The escalation initiator identifies if there is an escalation to a human agent and who initiated that process.

  • User: The user initiated an escalation to a human agent.
  • Agent: The agent initiated an escalation to a human agent.
  • No transfer: The conversation was not escalated to a human agent. This includes cases where the agent redirected the user to a different resource to resolve their request, but did not escalate to a human agent.

Agent helpfulness

Agent helpfulness identifies whether or not the agent's response was helpful from the perspective of the user. This metric is based on the full conversation.

  • Helpful: The agent provided helpful information to the user: the agent resolved or partially fulfilled the user's intent, or redirected the user to an appropriate resource to resolve their request.
  • Unhelpful: The agent did not provide any helpful information to the user and did not fulfill the user's intent. This includes cases where the agent misinterpreted the user's request or provided incorrect information for the task.

User satisfaction

User satisfaction identifies whether the user expressed dissatisfaction with the end solution, rejected it, got upset, or became abusive.

  • Unsatisfied: The user expressed dissatisfaction with the agent's response, explicitly rejected the solution by stating it, or became upset.
  • Unclear or satisfied: The user did not express any dissatisfaction or anger with the agent's main response and did not reject it.

Custom tags

Use tags to group questions into categories on a given scorecard. Each project includes default BUSINESS, COMPLIANCE, and CUSTOMER tags. In addition to these three tags, you can use Conversational Insights to create your own custom tags. Custom tags are limited to 10 per Google Cloud project. If you need more than 10 tags, contact your Conversational Insights point of contact.

Conversation scores

Quality AI automatically evaluates conversations against the scorecards you supply. For each question, do the following:

  • Define the answer type.
  • List the possible answer choices.
  • Set the score for each answer choice.

One scorecard

A conversation score consists of the total received score divided by the maximum possible score for that conversation. The total received score is the sum of all the points obtained from the assigned answer choices for each question. The maximum possible score is the sum of the maximum scores for each question. Any question assigned an N/A response is removed from this conversation score calculation. The conversation score is displayed as a percentage.
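As a concrete sketch of this calculation (illustrative only, not Quality AI's implementation), the function below sums the received and maximum possible scores, excludes N/A answers, and returns the result as a rounded percentage. The function and parameter names are hypothetical.

```python
def conversation_score(answers):
    """Compute a conversation score for one scorecard.

    `answers` maps each question ID to a (received_score, max_score) tuple,
    or to None when the question was answered N/A. N/A questions are
    excluded from the calculation entirely.
    """
    scored = [pair for pair in answers.values() if pair is not None]
    if not scored:
        return None  # no scorable questions on this scorecard
    total_received = sum(received for received, _ in scored)
    total_possible = sum(maximum for _, maximum in scored)
    return round(100 * total_received / total_possible)  # displayed as a percentage
```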

Multiple scorecards

A conversation can receive multiple conversation scores. Each conversation score reflects the agent's performance during that conversation according to the questions on a single scorecard. When each scorecard contains a different group of questions, you can evaluate a single conversation against multiple types of questions.

Manual updates

After analyzing a conversation, you can manually update the answer to any question. When you manually update an answer, Quality AI automatically adjusts the score for that question and the corresponding conversation score. In addition, the Quality AI console marks that question and conversation with a visual icon to indicate that the answer was manually updated. Lastly, Quality AI automatically adds any manually updated answer as an example conversation to improve the AI model.

Source menu

In the Conversational Insights console, each page in the Quality AI section includes a Source menu. This menu lists your scorecards so that you can choose which information to display.

For example, on the Conversations page, you can view the scores for each conversation. Those scores depend on which scorecard Quality AI evaluated the conversations against. So, if you select a different scorecard from the Source menu, the scores for the same conversations might change.

Examples

The following examples illustrate how a conversation score is calculated.

Example 1

If the following is true:

  • A scorecard has 10 questions
  • Each question is a yes or no question
  • Yes receives a score of 1 and No gets 0
  • A conversation has received all "Yes" answers

Then the conversation score is 100%.

Example 2

If the following is true:

  • A scorecard has 10 questions
  • Each question is a yes or no question
  • Yes receives a score of 1 and No gets 0
  • A conversation receives 7 "Yes" responses, 2 "No" responses, and 1 "N/A" response

Then the "N/A" question is removed so there are 9 total possible points. The conversation received 7 out of 9 possible points. The conversation score is rounded and displayed as 78%.
