Reference documentation and code samples for the Google Cloud Dialogflow V2 Client class StreamingAnalyzeContentRequest.
The top-level message sent by the client to the Participants.StreamingAnalyzeContent method.
Multiple request messages should be sent in order:
- The first message must contain participant, config, and optionally query_params. If you want to receive an audio response, it should also contain reply_audio_config. The message must not contain input.
- If config in the first message was set to audio_config, all subsequent messages must contain input_audio to continue with Speech recognition. If you decide to rather analyze text input after you have already started Speech recognition, please send a message with StreamingAnalyzeContentRequest.input_text. However, note that:
  - Dialogflow will bill you for the audio so far.
  - Dialogflow discards all Speech recognition results in favor of the text input.
- If StreamingAnalyzeContentRequest.config in the first message was set to StreamingAnalyzeContentRequest.text_config, then the second message must contain only input_text. Moreover, you must not send more than two messages. After you have sent all input, you must half-close or abort the request stream.
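The ordering rules above can be sketched in plain PHP, using associative arrays to stand in for request messages (no client library required). This validator is a simplified illustration: it checks only the core constraints listed above and ignores input_dtmf and the switch-to-text case.

```php
<?php
// Hypothetical validator for a sequence of StreamingAnalyzeContent
// request messages, represented as associative arrays whose keys mirror
// the field names in this reference.
function validateRequestSequence(array $messages): bool
{
    if (empty($messages)) {
        return false;
    }
    $first = $messages[0];
    // The first message must carry participant and one config, and no input.
    if (!isset($first['participant'])
        || (!isset($first['audio_config']) && !isset($first['text_config']))
        || isset($first['input_audio']) || isset($first['input_text'])) {
        return false;
    }
    if (isset($first['text_config'])) {
        // Text mode: exactly one follow-up message carrying input_text.
        return count($messages) === 2 && isset($messages[1]['input_text']);
    }
    // Audio mode: every subsequent message must carry input_audio.
    for ($i = 1; $i < count($messages); $i++) {
        if (!isset($messages[$i]['input_audio'])) {
            return false;
        }
    }
    return true;
}
```

After the last message passes such a check, the client would still need to half-close the stream, as described above.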
Generated from protobuf message google.cloud.dialogflow.v2.StreamingAnalyzeContentRequest
Namespace
Google\Cloud\Dialogflow\V2

Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ participant
string
Required. The name of the participant this text comes from. Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.
↳ audio_config
InputAudioConfig
Instructs the speech recognizer how to process the speech audio.
↳ text_config
InputTextConfig
The natural language text to be processed.
↳ reply_audio_config
OutputAudioConfig
Speech synthesis configuration. The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.
↳ input_audio
string
The input audio content to be recognized. Must be sent if audio_config
is set in the first message. The complete audio over all streaming messages must not exceed 1 minute.
↳ input_text
string
The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once, and cancels any ongoing speech recognition.
↳ input_dtmf
TelephonyDtmfEvents
The DTMF digits used to invoke intents and fill in parameter values. This input is ignored if the previous response indicated that DTMF input is not accepted.
↳ query_params
QueryParameters
Parameters for a Dialogflow virtual-agent query.
↳ assist_query_params
AssistQueryParameters
Parameters for a human assist query.
↳ cx_parameters
Google\Protobuf\Struct
Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null. Note: this field should only be used if you are connecting to a Dialogflow CX agent.
↳ enable_extended_streaming
bool
Optional. Enable full bidirectional streaming. You can keep streaming the audio until timeout; there is no need to half-close the stream to get the response. Restrictions: - Timeout: 3 mins. - Audio Encoding: only AudioEncoding.AUDIO_ENCODING_LINEAR_16 and AudioEncoding.AUDIO_ENCODING_MULAW are supported. - Lifecycle: the conversation should be in the Assist Stage; see Conversations.CreateConversation for more information. An InvalidArgument error will be returned if one of the restriction checks fails. You can find more details at https://cloud.google.com/agent-assist/docs/extended-streaming
↳ enable_partial_automated_agent_reply
bool
Optional. Enable partial responses from the Dialogflow CX agent. If this flag is not enabled, the response stream still contains only one final response even if some Fulfillments in the Dialogflow CX agent have been configured to return partial responses.
↳ enable_debugging_info
bool
If true, StreamingAnalyzeContentResponse.debugging_info
will get populated.
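To make the constructor's data array concrete, here is the shape of a first message for an audio session, as a plain PHP array. Only participant plus one config is required; everything else is optional. All IDs and nested config values below are placeholders, and the nested shapes are sketched rather than exhaustive. Such an array would be passed to new StreamingAnalyzeContentRequest($firstMessageData).

```php
<?php
// Hypothetical first-message data array for an audio session. Field
// names mirror the constructor table above; values are illustrative.
$firstMessageData = [
    'participant' =>
        'projects/my-project/locations/global/conversations/my-conv/participants/my-part',
    // Instructs the speech recognizer how to process the audio.
    'audio_config' => [
        'audio_encoding' => 'AUDIO_ENCODING_LINEAR_16',
        'sample_rate_hertz' => 16000,
        'language_code' => 'en-US',
    ],
    // Optional: enables a synthesized audio reply.
    'reply_audio_config' => [
        'audio_encoding' => 'AUDIO_ENCODING_LINEAR_16',
    ],
    'enable_debugging_info' => true,
];
// Per the ordering rules, the first message must not contain any input.
```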
getParticipant
Required. The name of the participant this text comes from.
Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.
string
setParticipant
Required. The name of the participant this text comes from.
Format: projects/<Project ID>/locations/<Location ID>/conversations/<Conversation ID>/participants/<Participant ID>.
var
string
$this
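The participant value passed to setParticipant must follow the resource-name format above. A small helper makes that explicit; the ID values in the test are placeholders. (Generated client classes typically expose their own resource-name helpers as well.)

```php
<?php
// Builds the participant resource name in the documented format:
// projects/<Project ID>/locations/<Location ID>/conversations/
// <Conversation ID>/participants/<Participant ID>
function formatParticipantName(
    string $projectId,
    string $locationId,
    string $conversationId,
    string $participantId
): string {
    return sprintf(
        'projects/%s/locations/%s/conversations/%s/participants/%s',
        $projectId,
        $locationId,
        $conversationId,
        $participantId
    );
}
```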
getAudioConfig
Instructs the speech recognizer how to process the speech audio.
hasAudioConfig
setAudioConfig
Instructs the speech recognizer how to process the speech audio.
$this
getTextConfig
The natural language text to be processed.
hasTextConfig
setTextConfig
The natural language text to be processed.
$this
getReplyAudioConfig
Speech synthesis configuration.
The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.
hasReplyAudioConfig
clearReplyAudioConfig
setReplyAudioConfig
Speech synthesis configuration.
The speech synthesis settings for a virtual agent that may be configured for the associated conversation profile are not used when calling StreamingAnalyzeContent. If this configuration is not supplied, speech synthesis is disabled.
$this
getInputAudio
The input audio content to be recognized. Must be sent if audio_config
is set in the first message. The complete audio over all streaming
messages must not exceed 1 minute.
string
hasInputAudio
setInputAudio
The input audio content to be recognized. Must be sent if audio_config
is set in the first message. The complete audio over all streaming
messages must not exceed 1 minute.
var
string
$this
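Because the complete audio over all streaming messages must not exceed 1 minute, a client may want to track how much it has already sent. A minimal sketch, assuming 16-bit linear PCM (2 bytes per sample) at a known sample rate; both assumptions are illustrative, since the actual byte rate depends on the audio_config you chose.

```php
<?php
// Estimates seconds of audio streamed so far from the total byte count,
// assuming LINEAR16 encoding (2 bytes per sample, single channel).
function streamedAudioSeconds(int $totalBytes, int $sampleRateHertz): float
{
    return $totalBytes / ($sampleRateHertz * 2);
}

// A caller could stop sending input_audio once this approaches 60 seconds.
```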
getInputText
The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once, and cancels any ongoing speech recognition.
string
hasInputText
setInputText
The UTF-8 encoded natural language text to be processed. Must be sent if text_config is set in the first message. Text length must not exceed 256 bytes for virtual agent interactions. The input_text field can only be sent once, and cancels any ongoing speech recognition.
var
string
$this
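Note that the 256-byte limit above is measured in bytes of the UTF-8 encoding, not in characters, so multi-byte text hits the limit sooner than its character count suggests. PHP's strlen() counts bytes, which makes a pre-check straightforward; this helper is an illustrative sketch, not part of the library.

```php
<?php
// Checks the documented 256-byte limit for input_text. strlen() returns
// the byte length of a PHP string, which is the relevant measure here.
function inputTextWithinLimit(string $text): bool
{
    return strlen($text) <= 256;
}
```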
getInputDtmf
The DTMF digits used to invoke intents and fill in parameter values.
This input is ignored if the previous response indicated that DTMF input is not accepted.
hasInputDtmf
setInputDtmf
The DTMF digits used to invoke intents and fill in parameter values.
This input is ignored if the previous response indicated that DTMF input is not accepted.
$this
getQueryParams
Parameters for a Dialogflow virtual-agent query.
hasQueryParams
clearQueryParams
setQueryParams
Parameters for a Dialogflow virtual-agent query.
$this
getAssistQueryParams
Parameters for a human assist query.
hasAssistQueryParams
clearAssistQueryParams
setAssistQueryParams
Parameters for a human assist query.
$this
getCxParameters
Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null.
Note: this field should only be used if you are connecting to a Dialogflow CX agent.
hasCxParameters
clearCxParameters
setCxParameters
Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null.
Note: this field should only be used if you are connecting to a Dialogflow CX agent.
$this
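The null-removes-parameter semantics of cx_parameters can be sketched in plain PHP, modeling the session parameters as an associative array. The merge logic here is a hypothetical illustration of the documented behavior, not how Dialogflow implements it internally (the real field is a Google\Protobuf\Struct).

```php
<?php
// Applies an update set to session parameters: a null value removes the
// parameter from the session, any other value sets or replaces it.
function applyCxParameters(array $session, array $updates): array
{
    foreach ($updates as $key => $value) {
        if ($value === null) {
            unset($session[$key]);
        } else {
            $session[$key] = $value;
        }
    }
    return $session;
}
```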
getEnableExtendedStreaming
Optional. Enable full bidirectional streaming. You can keep streaming the audio until timeout; there is no need to half-close the stream to get the response.
Restrictions:
- Timeout: 3 mins.
- Audio Encoding: only AudioEncoding.AUDIO_ENCODING_LINEAR_16 and AudioEncoding.AUDIO_ENCODING_MULAW are supported.
- Lifecycle: the conversation should be in the Assist Stage; see Conversations.CreateConversation for more information.
An InvalidArgument error will be returned if one of the restriction checks fails. You can find more details at https://cloud.google.com/agent-assist/docs/extended-streaming
bool
setEnableExtendedStreaming
Optional. Enable full bidirectional streaming. You can keep streaming the audio until timeout; there is no need to half-close the stream to get the response.
Restrictions:
- Timeout: 3 mins.
- Audio Encoding: only AudioEncoding.AUDIO_ENCODING_LINEAR_16 and AudioEncoding.AUDIO_ENCODING_MULAW are supported.
- Lifecycle: the conversation should be in the Assist Stage; see Conversations.CreateConversation for more information.
An InvalidArgument error will be returned if one of the restriction checks fails. You can find more details at https://cloud.google.com/agent-assist/docs/extended-streaming
var
bool
$this
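Since an InvalidArgument error is returned when an extended-streaming restriction is violated, a client can pre-check its own configuration before opening the stream. This helper is a hypothetical client-side sketch; it compares encoding names as strings rather than enum constants, and the 180-second value reflects the 3-minute timeout above.

```php
<?php
// Client-side pre-check for the documented enable_extended_streaming
// restrictions: allowed audio encodings and the 3-minute timeout.
function meetsExtendedStreamingRestrictions(string $encoding, int $plannedSeconds): bool
{
    $allowedEncodings = [
        'AUDIO_ENCODING_LINEAR_16',
        'AUDIO_ENCODING_MULAW',
    ];
    return in_array($encoding, $allowedEncodings, true)
        && $plannedSeconds <= 180;
}
```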
getEnablePartialAutomatedAgentReply
Optional. Enable partial responses from the Dialogflow CX agent. If this flag
is not enabled, the response stream still contains only one final response even
if some Fulfillments in the Dialogflow CX agent have been configured to
return partial responses.
bool
setEnablePartialAutomatedAgentReply
Optional. Enable partial responses from the Dialogflow CX agent. If this flag
is not enabled, the response stream still contains only one final response even
if some Fulfillments in the Dialogflow CX agent have been configured to
return partial responses.
var
bool
$this
getEnableDebuggingInfo
If true, StreamingAnalyzeContentResponse.debugging_info
will get
populated.
bool
setEnableDebuggingInfo
If true, StreamingAnalyzeContentResponse.debugging_info
will get
populated.
var
bool
$this
getConfig
Returns the name of the field currently set inside the config oneof, or an empty string if none is set.
string
getInput
Returns the name of the field currently set inside the input oneof, or an empty string if none is set.
string