- Resource: ConversationProfile
- JSON representation
- AutomatedAgentConfig
- HumanAgentAssistantConfig
- NotificationConfig
- MessageFormat
- SuggestionConfig
- SuggestionFeatureConfig
- SuggestionFeature
- Type
- SuggestionTriggerSettings
- SuggestionQueryConfig
- KnowledgeBaseQuerySource
- DocumentQuerySource
- DialogflowQuerySource
- HumanAgentSideConfig
- ContextFilterSettings
- Sections
- SectionType
- ConversationModelConfig
- ConversationProcessConfig
- MessageAnalysisConfig
- HumanAgentHandoffConfig
- LivePersonConfig
- SalesforceLiveAgentConfig
- LoggingConfig
- SpeechToTextConfig
- SpeechModelVariant
- AudioEncoding
- Methods
Resource: ConversationProfile
Defines the services to connect to incoming Dialogflow conversations.
JSON representation |
---|
{ "name" : string , "displayName" : string , "createTime" : string , "updateTime" : string , "automatedAgentConfig" : { object ( |
Fields | |
---|---|
name
|
The unique identifier of this conversation profile. Format: |
displayName
|
Required. Human readable name for this profile. Max length 1024 bytes. |
createTime
|
Output only. Create time of the conversation profile. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: |
updateTime
|
Output only. Update time of the conversation profile. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: |
automatedAgentConfig
|
Configuration for an automated agent to use with this profile. |
humanAgentAssistantConfig
|
Configuration for agent assistance to use with this profile. |
humanAgentHandoffConfig
|
Configuration for connecting to a live agent. Currently, this feature is not generally available; please contact Google to get access. |
notificationConfig
|
Configuration for publishing conversation lifecycle events. |
loggingConfig
|
Configuration for logging conversation lifecycle events. |
newRecognitionResultNotificationConfig
|
Optional. Configuration for publishing transcription intermediate results. Events will be sent in the format of |
sttConfig
|
Settings for speech transcription. |
languageCode
|
Language code for the conversation profile. If not specified, the language is en-US. The language must be set on the ConversationProfile for all non-en-US languages. This should be a BCP-47 language tag. Example: "en-US".
timeZone
|
The time zone of this conversation profile from the time zone database, e.g., America/New_York, Europe/Paris. Defaults to America/New_York. |
securitySettings
|
Name of the CX SecuritySettings reference for the agent. Format: |
ttsConfig
|
Configuration for Text-to-Speech synthesis. Used by the Phone Gateway to specify synthesis options. If the agent also defines synthesis options, the agent settings override the options here. |
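Putting the fields above together, a minimal ConversationProfile body can be sketched as plain JSON. The project, topic, and display names below are hypothetical placeholders, not values from this reference:

```python
import json

# Hypothetical placeholder values for illustration only.
profile = {
    "displayName": "support-profile",        # required; max length 1024 bytes
    "languageCode": "en-US",                 # BCP-47 tag; en-US is the default
    "timeZone": "America/New_York",          # time zone database name
    "humanAgentAssistantConfig": {},         # per-feature configs go here
    "loggingConfig": {"enableStackdriverLogging": True},
}

body = json.dumps(profile, indent=2)
print(body)
```

Output-only fields (name, createTime, updateTime) are omitted because the service populates them.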
AutomatedAgentConfig
Defines the Automated Agent to connect to a conversation.
JSON representation |
---|
{ "agent" : string , "sessionTtl" : string } |
agent
string
Required. ID of the Dialogflow agent environment to use.
This project needs to either be the same project as the conversation or you need to grant service-<Conversation Project Number>@gcp-sa-dialogflow.iam.gserviceaccount.com the Dialogflow API Service Agent role in this project.
- For ES agents, use the format: projects/<Project ID>/locations/<Location ID>/agent/environments/<Environment ID or '-'>. If the environment is not specified, the default draft environment is used. Refer to DetectIntentRequest for more details.
- For CX agents, use the format: projects/<Project ID>/locations/<Location ID>/agents/<Agent ID>/environments/<Environment ID or '-'>. If the environment is not specified, the default draft environment is used.
sessionTtl
string (
Duration
format)
Optional. Configure lifetime of the Dialogflow session. By default, a Dialogflow CX session remains active and its data is stored for 30 minutes after the last request is sent for the session. This value should be no longer than 1 day.
A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
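As a sketch, the agent path formats above can be assembled with ordinary string formatting. The IDs are hypothetical; '-' selects the default draft environment:

```python
# Hypothetical IDs for illustration.
project, location, environment = "my-project", "global", "-"

automated_agent_config = {
    # ES agent format; a CX path instead uses "agents/<Agent ID>/environments/...".
    "agent": f"projects/{project}/locations/{location}/agent/environments/{environment}",
    # Duration JSON format: seconds with up to nine fractional digits, 's' suffix.
    "sessionTtl": "1800s",
}
print(automated_agent_config["agent"])
```

The sessionTtl of 1800s matches the default 30-minute CX session lifetime and stays under the 1-day limit.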
HumanAgentAssistantConfig
Defines the Human Agent Assistant to connect to a conversation.
JSON representation |
---|
{ "notificationConfig" : { object ( |
Fields | |
---|---|
notificationConfig
|
Pub/Sub topic on which to publish new agent assistant events. |
humanAgentSuggestionConfig
|
Configuration for agent assistance of human agent participant. |
endUserSuggestionConfig
|
Configuration for agent assistance of end user participant. Currently, this feature is not generally available; please contact Google to get access. |
NotificationConfig
Defines notification behavior.
JSON representation |
---|
{ "topic" : string , "messageFormat" : enum ( |
Fields | |
---|---|
topic
|
Name of the Pub/Sub topic on which to publish conversation events. For telephony integrations to receive notifications, make sure either this topic is in the same project as the conversation or that the appropriate service account is granted publish access. For chat integrations to receive notifications, make sure the API caller has been granted the appropriate role. Format: |
MessageFormat
Format of the Cloud Pub/Sub message.
Enums | |
---|---|
MESSAGE_FORMAT_UNSPECIFIED
|
If it is unspecified, PROTO will be used. |
PROTO
|
The Pub/Sub message will be a serialized proto. |
JSON
|
The Pub/Sub message will be serialized as JSON. |
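A NotificationConfig sketch under the constraints above (the topic name is a hypothetical placeholder):

```python
import json

notification_config = {
    # Full topic resource name; must be publishable by the Dialogflow service account.
    "topic": "projects/my-project/topics/conversation-events",
    "messageFormat": "JSON",   # PROTO is used when the format is unspecified
}
print(json.dumps(notification_config))
```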
SuggestionConfig
Detailed human agent assistant config.
JSON representation |
---|
{ "featureConfigs" : [ { object ( |
Fields | |
---|---|
featureConfigs[]
|
Configuration of different suggestion features. One feature can have only one config. |
groupSuggestionResponses
|
If If |
generators[]
|
Optional. List of various generator resource names used in the conversation profile. |
disableHighLatencyFeaturesSyncDelivery
|
Optional. When disableHighLatencyFeaturesSyncDelivery is true and the AnalyzeContent API is used, responses from high-latency features are not delivered in the API response. humanAgentAssistantConfig.notification_config must be configured and enableEventBasedSuggestion must be set to true to receive the responses from high-latency features in Pub/Sub. High-latency feature(s): KNOWLEDGE_ASSIST |
SuggestionFeatureConfig
Config for suggestion features.
JSON representation |
---|
{ "suggestionFeature" : { object ( |
Fields | |
---|---|
suggestionFeature
|
The suggestion feature. |
enableEventBasedSuggestion
|
Automatically iterates all participants and tries to compile suggestions. Supported features: ARTICLE_SUGGESTION, FAQ, DIALOGFLOW_ASSIST, ENTITY_EXTRACTION, KNOWLEDGE_ASSIST. |
disableAgentQueryLogging
|
Optional. Disable the logging of search queries sent by human agents. It can prevent those queries from being stored at answer records. Supported features: KNOWLEDGE_SEARCH. |
enableQuerySuggestionWhenNoAnswer
|
Optional. Enable query suggestion even if no answer can be found for the query. By default, queries are suggested only if an answer can be found. Supported features: KNOWLEDGE_ASSIST |
enableConversationAugmentedQuery
|
Optional. Enable including conversation context during query answer generation. Supported features: KNOWLEDGE_SEARCH. |
enableQuerySuggestionOnly
|
Optional. Enable query suggestion only. Supported features: KNOWLEDGE_ASSIST |
suggestionTriggerSettings
|
Settings of suggestion trigger. Currently, only ARTICLE_SUGGESTION, FAQ, and DIALOGFLOW_ASSIST will use this field. |
queryConfig
|
Configs of query. |
conversationModelConfig
|
Configs of custom conversation model. |
conversationProcessConfig
|
Configs for processing conversation. |
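A single featureConfigs entry tying these fields together might look like this sketch (one config per feature; the values are illustrative):

```python
import json

article_suggestion = {
    "suggestionFeature": {"type": "ARTICLE_SUGGESTION"},
    # Event-based suggestion is supported for ARTICLE_SUGGESTION (see above).
    "enableEventBasedSuggestion": True,
    "suggestionTriggerSettings": {"noSmallTalk": True, "onlyEndUser": True},
    "queryConfig": {"maxResults": 5},
}
suggestion_config = {"featureConfigs": [article_suggestion]}
print(json.dumps(suggestion_config))
```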
SuggestionFeature
The type of Human Agent Assistant API suggestion to perform, and the maximum number of results to return for that type. Multiple Feature objects can be specified in the features list.
JSON representation |
---|
{ "type" : enum ( |
Fields | |
---|---|
type
|
Type of Human Agent Assistant API feature to request. |
Type
Defines the type of Human Agent Assistant feature.
Enums | |
---|---|
TYPE_UNSPECIFIED
|
Unspecified feature type. |
ARTICLE_SUGGESTION
|
Run article suggestion model for chat. |
FAQ
|
Run FAQ model. |
SMART_REPLY
|
Run smart reply model for chat. |
DIALOGFLOW_ASSIST
|
Run Dialogflow assist model for chat, which will return an automated agent response as a suggestion. |
CONVERSATION_SUMMARIZATION
|
Run conversation summarization model for chat. |
KNOWLEDGE_SEARCH
|
Run knowledge search with text input from agent or text generated query. |
KNOWLEDGE_ASSIST
|
Run knowledge assist with automatic query generation. |
SuggestionTriggerSettings
Settings of suggestion trigger.
JSON representation |
---|
{ "noSmallTalk" : boolean , "onlyEndUser" : boolean } |
Fields | |
---|---|
noSmallTalk
|
Do not trigger if last utterance is small talk. |
onlyEndUser
|
Only trigger suggestion if participant role of last utterance is END_USER. |
SuggestionQueryConfig
Config for suggestion query.
JSON representation |
---|
{ "maxResults" : integer , "confidenceThreshold" : number , "contextFilterSettings" : { object ( |
maxResults
integer
Maximum number of results to return. If unset, currently defaults to 10; the maximum is 20.
confidenceThreshold
number
Confidence threshold of query result.
Agent Assist gives each suggestion a score in the range [0.0, 1.0], based on the relevance between the suggestion and the current conversation context. A score of 0.0 indicates no relevance, while a score of 1.0 indicates high relevance. Only suggestions with a score greater than or equal to the value of this field are included in the results.
For a baseline model (the default), the recommended value is in the range [0.05, 0.1].
For a custom model, there is no recommended value. Tune this value by starting from a very low value and slowly increasing until you have desired results.
If this field is not set, it defaults to 0.0, which means that all suggestions are returned.
Supported features: ARTICLE_SUGGESTION, FAQ, SMART_REPLY, SMART_COMPOSE, KNOWLEDGE_SEARCH, KNOWLEDGE_ASSIST, ENTITY_EXTRACTION.
contextFilterSettings
object (
ContextFilterSettings
)
Determines how recent conversation context is filtered when generating suggestions. If unspecified, no messages will be dropped.
sections
object (
Sections
)
Optional. The customized sections chosen to return when requesting a summary of a conversation.
contextSize
integer
Optional. The number of recent messages to include in the context. Supported features: KNOWLEDGE_ASSIST.
Union field query_source. Source of the query. query_source can be only one of the following:
knowledgeBaseQuerySource
object (
KnowledgeBaseQuerySource
)
Query from knowledge base. It is used by: ARTICLE_SUGGESTION, FAQ.
documentQuerySource
object (
DocumentQuerySource
)
Query from knowledge base document. It is used by: SMART_REPLY, SMART_COMPOSE.
dialogflowQuerySource
object (
DialogflowQuerySource
)
Query from Dialogflow agent. It is used by DIALOGFLOW_ASSIST, ENTITY_EXTRACTION.
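The query_source union allows exactly one source. A sketch with a knowledge base source (the knowledge base ID is a hypothetical placeholder) and a client-side check of the union constraint:

```python
import json

query_config = {
    "maxResults": 10,              # defaults to 10 if unset; maximum is 20
    "confidenceThreshold": 0.05,   # baseline-model guidance: [0.05, 0.1]
    "contextFilterSettings": {"dropIvrMessages": True},
    "knowledgeBaseQuerySource": {
        "knowledgeBases": ["projects/my-project/knowledgeBases/MY_KB_ID"]
    },
}

# Exactly one query_source member may be set.
UNION = {"knowledgeBaseQuerySource", "documentQuerySource", "dialogflowQuerySource"}
assert len(UNION & query_config.keys()) == 1
print(json.dumps(query_config))
```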
KnowledgeBaseQuerySource
Knowledge base source settings.
Supported features: ARTICLE_SUGGESTION, FAQ.
JSON representation |
---|
{ "knowledgeBases" : [ string ] } |
Fields | |
---|---|
knowledgeBases[]
|
Required. Knowledge bases to query. Format: |
DocumentQuerySource
Document source settings.
Supported features: SMART_REPLY, SMART_COMPOSE.
JSON representation |
---|
{ "documents" : [ string ] } |
Fields | |
---|---|
documents[]
|
Required. Knowledge documents to query from. Format: |
DialogflowQuerySource
Dialogflow source setting.
Supported features: DIALOGFLOW_ASSIST, ENTITY_EXTRACTION.
JSON representation |
---|
{ "agent" : string , "humanAgentSideConfig" : { object ( |
Fields | |
---|---|
agent
|
Required. The name of a Dialogflow virtual agent used for end-user-side intent detection and suggestion. Format: |
humanAgentSideConfig
|
The Dialogflow assist configuration for human agent. |
HumanAgentSideConfig
The configuration used for human agent side Dialogflow assist suggestion.
JSON representation |
---|
{ "agent" : string } |
Fields | |
---|---|
agent
|
Optional. The name of a Dialogflow virtual agent used for intent detection and suggestion triggered by the human agent. Format: |
ContextFilterSettings
Settings that determine how to filter recent conversation context when generating suggestions.
JSON representation |
---|
{ "dropHandoffMessages" : boolean , "dropVirtualAgentMessages" : boolean , "dropIvrMessages" : boolean } |
Fields |
---|
Sections
Custom sections to return when requesting a summary of a conversation. This is only supported when baselineModelVersion == '2.0'.
Supported features: CONVERSATION_SUMMARIZATION, CONVERSATION_SUMMARIZATION_VOICE.
JSON representation |
---|
{ "sectionTypes" : [ enum ( |
Fields | |
---|---|
sectionTypes[]
|
The selected sections chosen to return when requesting a summary of a conversation. A duplicate selected section will be treated as a single selected section. If section types are not provided, the default will be {SITUATION, ACTION, RESULT}. |
SectionType
Selectable sections to return when requesting a summary of a conversation.
Enums | |
---|---|
SECTION_TYPE_UNSPECIFIED
|
Undefined section type, does not return anything. |
SITUATION
|
What the customer needs help with or has questions about. Section name: "situation". |
ACTION
|
What the agent does to help the customer. Section name: "action". |
RESOLUTION
|
Result of the customer service. A single word describing the result of the conversation. Section name: "resolution". |
REASON_FOR_CANCELLATION
|
Reason for cancellation if the customer requests a cancellation; "N/A" otherwise. Section name: "reason_for_cancellation". |
CUSTOMER_SATISFACTION
|
"Unsatisfied" or "Satisfied" depending on the customer's feelings at the end of the conversation. Section name: "customer_satisfaction". |
ENTITIES
|
Key entities extracted from the conversation, such as ticket number, order number, dollar amount, etc. Section names are prefixed by "entities/". |
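The deduplication rule for sectionTypes can be sketched client-side:

```python
# Duplicate selected sections are treated as a single selected section.
requested = ["SITUATION", "ACTION", "ACTION", "ENTITIES"]
sections = {"sectionTypes": sorted(set(requested))}
print(sections)
```

When sectionTypes is left empty, the service falls back to the default set described above.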
ConversationModelConfig
Custom conversation models used in the Agent Assist feature.
Supported features: ARTICLE_SUGGESTION, SMART_COMPOSE, SMART_REPLY, CONVERSATION_SUMMARIZATION.
JSON representation |
---|
{ "model" : string , "baselineModelVersion" : string } |
model
string
Conversation model resource name. Format: projects/<Project ID>/conversationModels/<Model ID>.
baselineModelVersion
string
Version of the current baseline model. It will be ignored if model is set. Valid versions are:
- Article Suggestion baseline model:
- 0.9
- 1.0 (default)
- Summarization baseline model:
- 1.0
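Since baselineModelVersion is ignored when model is set, a config sketch should set only one of the two (the model ID below is a hypothetical placeholder):

```python
import json

# Option A: custom model (hypothetical resource name).
custom = {"model": "projects/my-project/conversationModels/MY_MODEL_ID"}
# Option B: baseline model version (1.0 is the Article Suggestion default).
baseline = {"baselineModelVersion": "1.0"}

conversation_model_config = baseline  # pick exactly one option
print(json.dumps(conversation_model_config))
```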
ConversationProcessConfig
Config to process conversation.
JSON representation |
---|
{ "recentSentencesCount" : integer } |
Fields | |
---|---|
recentSentencesCount
|
Number of recent non-small-talk sentences to use as context for article and FAQ suggestions. |
MessageAnalysisConfig
Configuration for analyses to run on each conversation message.
JSON representation |
---|
{ "enableEntityExtraction" : boolean , "enableSentimentAnalysis" : boolean } |
Fields | |
---|---|
enableEntityExtraction
|
Enable entity extraction in conversation messages on the agent assist stage. If unspecified, defaults to false. Currently, this feature is not generally available; please contact Google to get access. |
enableSentimentAnalysis
|
Enable sentiment analysis in conversation messages on the agent assist stage. If unspecified, defaults to false. Sentiment analysis inspects user input and identifies the prevailing subjective opinion, especially to determine a user's attitude as positive, negative, or neutral: https://cloud.google.com/natural-language/docs/basics#sentimentAnalysis
For |
HumanAgentHandoffConfig
Defines the handoff to a live agent, typically specifying which external agent service provider to connect to a conversation.
Currently, this feature is not generally available; please contact Google to get access.
JSON representation |
---|
{ // Union field |
Union field agent_service. Required. Specifies which agent service to connect to for human agent handoff. agent_service can be only one of the following:
livePersonConfig
object (
LivePersonConfig
)
Uses LivePerson.
salesforceLiveAgentConfig
object (
SalesforceLiveAgentConfig
)
Uses Salesforce Live Agent.
LivePersonConfig
Configuration specific to LivePerson.
JSON representation |
---|
{ "accountNumber" : string } |
Fields | |
---|---|
accountNumber
|
Required. Account number of the LivePerson account to connect. This is the account number you input at the login page. |
SalesforceLiveAgentConfig
Configuration specific to Salesforce Live Agent.
JSON representation |
---|
{ "organizationId" : string , "deploymentId" : string , "buttonId" : string , "endpointDomain" : string } |
Fields | |
---|---|
organizationId
|
Required. The organization ID of the Salesforce account. |
deploymentId
|
Required. Live Agent deployment ID. |
buttonId
|
Required. Live Agent chat button ID. |
endpointDomain
|
Required. Domain of the Live Agent endpoint for this agent. You can find the endpoint URL in the |
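All four SalesforceLiveAgentConfig fields are required; a handoff sketch with hypothetical placeholder IDs:

```python
import json

human_agent_handoff_config = {
    # agent_service union: exactly one provider config may be set.
    "salesforceLiveAgentConfig": {
        "organizationId": "00Dxx0000000000",        # hypothetical
        "deploymentId": "572xx0000000000",          # hypothetical
        "buttonId": "573xx0000000000",              # hypothetical
        "endpointDomain": "d.la1-core1.salesforceliveagent.com",  # hypothetical
    }
}
print(json.dumps(human_agent_handoff_config))
```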
LoggingConfig
Defines logging behavior for conversation lifecycle events.
JSON representation |
---|
{ "enableStackdriverLogging" : boolean } |
Fields | |
---|---|
enableStackdriverLogging
|
Whether to log conversation events like |
SpeechToTextConfig
Configures speech transcription for ConversationProfile.
JSON representation |
---|
{ "speechModelVariant" : enum ( |
speechModelVariant
enum (
SpeechModelVariant
)
The speech model used in speech-to-text. SPEECH_MODEL_VARIANT_UNSPECIFIED and USE_BEST_AVAILABLE will be treated as USE_ENHANCED. It can be overridden in AnalyzeContentRequest and StreamingAnalyzeContentRequest requests. If an enhanced model variant is specified and an enhanced version of the specified model for the language does not exist, an error is emitted.
model
string
Which Speech model to select. Select the model best suited to your domain to get the best results. If a model is not explicitly specified, Dialogflow auto-selects a model based on other parameters in the SpeechToTextConfig and Agent settings. If the enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, the speech is recognized using the standard version of the specified model. Refer to the Cloud Speech API documentation for more details. If you specify a model, the following models typically have the best performance:
- phone_call (best for Agent Assist and telephony)
- latest_short (best for Dialogflow non-telephony)
- command_and_search
Leave this field unspecified to use Agent Speech settings for model selection.
phraseSets[]
string
List of names of Cloud Speech phrase sets that are used for transcription. For phrase set limitations, please refer to Cloud Speech API quotas and limits .
audioEncoding
enum (
AudioEncoding
)
Audio encoding of the audio content to process.
sampleRateHertz
integer
Sample rate (in Hertz) of the audio content sent in the query. Refer to Cloud Speech API documentation for more details.
languageCode
string
The language of the supplied audio. Dialogflow does not do translations. See Language Support
for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language. If not specified, the default language configured at ConversationProfile
is used.
enableWordInfo
boolean
If true, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
useTimeoutBasedEndpointing
boolean
Use timeout-based endpointing, interpreting endpointer sensitivity as the timeout value in seconds.
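A SpeechToTextConfig sketch using the telephony-oriented settings described above:

```python
import json

stt_config = {
    "speechModelVariant": "USE_ENHANCED",
    "model": "phone_call",                        # best for Agent Assist / telephony
    "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
    "sampleRateHertz": 16000,
    "languageCode": "en-US",
    "enableWordInfo": True,                       # word-level start/end offsets
}
print(json.dumps(stt_config))
```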
SpeechModelVariant
Variant of the specified Speech model to use.
See the Cloud Speech documentation for which models have different variants. For example, the "phone_call" model has both a standard and an enhanced variant. When you use an enhanced model, you will generally receive higher quality results than for a standard model.
SPEECH_MODEL_VARIANT_UNSPECIFIED
USE_BEST_AVAILABLE
Use the best available variant of the [Speech model][InputAudioConfig.model] that the caller is eligible for.
Please see the Dialogflow docs for how to make your project eligible for enhanced models.
USE_STANDARD
USE_ENHANCED
Use an enhanced model variant:
- If an enhanced variant does not exist for the given
model
and request language, Dialogflow falls back to the standard variant.
The Cloud Speech documentation describes which models have enhanced variants.
- If the API caller isn't eligible for enhanced models, Dialogflow returns an error. Please see the Dialogflow docs for how to make your project eligible.
AudioEncoding
Audio encoding of the audio content sent in the conversational query request. Refer to the Cloud Speech API documentation for more details.
Enums | |
---|---|
AUDIO_ENCODING_UNSPECIFIED
|
Not specified. |
AUDIO_ENCODING_LINEAR_16
|
Uncompressed 16-bit signed little-endian samples (Linear PCM). |
AUDIO_ENCODING_FLAC
|
FLAC
(Free Lossless Audio Codec) is the recommended encoding because it is lossless (therefore recognition is not compromised) and requires only about half the bandwidth of LINEAR16
. FLAC
stream encoding supports 16-bit and 24-bit samples, however, not all fields in STREAMINFO
are supported. |
AUDIO_ENCODING_MULAW
|
8-bit samples that compand 14-bit audio samples using G.711 PCMU/mu-law. |
AUDIO_ENCODING_AMR
|
Adaptive Multi-Rate Narrowband codec. sampleRateHertz
must be 8000. |
AUDIO_ENCODING_AMR_WB
|
Adaptive Multi-Rate Wideband codec. sampleRateHertz
must be 16000. |
AUDIO_ENCODING_OGG_OPUS
|
Opus encoded audio frames in Ogg container ( OggOpus
). sampleRateHertz
must be 16000. |
AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE
|
Although the use of lossy encodings is not recommended, if a very low bitrate encoding is required, OGG_OPUS
is highly preferred over Speex encoding. The Speex
encoding supported by Dialogflow API has a header byte in each block, as in MIME type audio/x-speex-with-header-byte
. It is a variant of the RTP Speex encoding defined in RFC 5574
. The stream is a sequence of blocks, one block per RTP packet. Each block starts with a byte containing the length of the block, in bytes, followed by one or more frames of Speex data, padded to an integral number of bytes (octets) as specified in RFC 5574. In other words, each RTP header is replaced with a single byte containing the block length. Only Speex wideband is supported. sampleRateHertz
must be 16000. |
AUDIO_ENCODING_ALAW
|
8-bit samples that compand 13-bit audio samples using G.711 PCMA/A-law. |
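Several of the encodings above fix the allowed sample rate. A small lookup table derived from those enum descriptions can validate a config client-side before sending it:

```python
# Encodings with a fixed required sampleRateHertz, per the enum descriptions.
REQUIRED_RATE = {
    "AUDIO_ENCODING_AMR": 8000,
    "AUDIO_ENCODING_AMR_WB": 16000,
    "AUDIO_ENCODING_OGG_OPUS": 16000,
    "AUDIO_ENCODING_SPEEX_WITH_HEADER_BYTE": 16000,
}

def rate_ok(encoding: str, sample_rate_hertz: int) -> bool:
    """True when the encoding has no fixed rate or the rate matches it."""
    required = REQUIRED_RATE.get(encoding)
    return required is None or required == sample_rate_hertz

print(rate_ok("AUDIO_ENCODING_AMR", 8000))       # True
print(rate_ok("AUDIO_ENCODING_OGG_OPUS", 8000))  # False
```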
Methods | |
---|---|
clearSuggestionFeatureConfig
|
Clears a suggestion feature from a conversation profile for the given participant role. |
create
|
Creates a conversation profile in the specified project. |
delete
|
Deletes the specified conversation profile. |
get
|
Retrieves the specified conversation profile. |
list
|
Returns the list of all conversation profiles in the specified project. |
patch
|
Updates the specified conversation profile. |
setSuggestionFeatureConfig
|
Adds or updates a suggestion feature in a conversation profile. |