Reference documentation and code samples for the Google Cloud Dialogflow V2 Client class StreamingRecognitionResult.
Contains a speech recognition result corresponding to a portion of the audio that is currently being processed or an indication that this is the end of the single requested utterance.
While end-user audio is being processed, Dialogflow sends a series of
results. Each result may contain a transcript
value. A transcript
represents a portion of the utterance. While the recognizer is processing
audio, transcript values may be interim values or finalized values.
Once a transcript is finalized, the is_final
value is set to true and
processing continues for the next transcript.
If StreamingDetectIntentRequest.query_input.audio_config.single_utterance
was true, and the recognizer has completed processing audio,
the message_type
value is set to END_OF_SINGLE_UTTERANCE and the
following (last) result contains the last finalized transcript.
The complete end-user utterance is determined by concatenating the
finalized transcript values received for the series of results.
In the following example, single utterance is enabled. In the case where
single utterance is not enabled, result 7 would not occur.
Num | transcript              | message_type            | is_final
--- | ----------------------- | ----------------------- | --------
1   | "tube"                  | TRANSCRIPT              | false
2   | "to be a"               | TRANSCRIPT              | false
3   | "to be"                 | TRANSCRIPT              | false
4   | "to be or not to be"    | TRANSCRIPT              | true
5   | "that's"                | TRANSCRIPT              | false
6   | "that is"               | TRANSCRIPT              | false
7   | unset                   | END_OF_SINGLE_UTTERANCE | unset
8   | " that is the question" | TRANSCRIPT              | true
Concatenating the finalized transcripts with is_final
set to true,
the complete utterance becomes "to be or not to be that is the question".
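The concatenation rule above can be sketched in plain PHP. Here each result is modeled as a simple associative array rather than a StreamingRecognitionResult object, so the sketch runs without the client library:

```php
<?php
// Concatenate only the finalized transcripts, as in the table above.
// Interim results (is_final = false) are superseded and are skipped.
$results = [
    ['transcript' => 'to be or not to be',   'is_final' => true],
    ['transcript' => "that's",               'is_final' => false],
    ['transcript' => ' that is the question', 'is_final' => true],
];

$utterance = '';
foreach ($results as $result) {
    if ($result['is_final']) {
        $utterance .= $result['transcript'];
    }
}
// $utterance is now "to be or not to be that is the question"
```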
Generated from protobuf message google.cloud.dialogflow.v2.StreamingRecognitionResult
Namespace
Google \ Cloud \ Dialogflow \ V2
Methods
__construct
Constructor.
data
array
Optional. Data for populating the Message object.
↳ message_type
int
Type of the result message.
↳ transcript
string
Transcript text representing the words that the user spoke. Populated if and only if message_type = TRANSCRIPT.
↳ is_final
bool
If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.
↳ confidence
float
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.
↳ speech_word_info
array<Google\Cloud\Dialogflow\V2\SpeechWordInfo>
Word-specific information for the words recognized by Speech in transcript. Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
↳ speech_end_offset
Google\Protobuf\Duration
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for message_type = TRANSCRIPT.
↳ language_code
string
Detected language code for the transcript.
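As a sketch, the optional data array maps snake_case field names to values, following the usual convention for generated protobuf messages in PHP. The enum constant shown here assumes the message type is generated as the nested class Google\Cloud\Dialogflow\V2\StreamingRecognitionResult\MessageType; verify the class name against your installed google/cloud-dialogflow version:

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingRecognitionResult;
use Google\Cloud\Dialogflow\V2\StreamingRecognitionResult\MessageType;

// Populate the message through the optional $data array. Every field can
// also be set after construction through the corresponding setter.
$result = new StreamingRecognitionResult([
    'message_type' => MessageType::TRANSCRIPT,
    'transcript'   => 'to be or not to be',
    'is_final'     => true,
    'confidence'   => 0.92,
]);
```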
getMessageType
Type of the result message.
int
setMessageType
Type of the result message.
var
int
$this
getTranscript
Transcript text representing the words that the user spoke.
Populated if and only if message_type = TRANSCRIPT.
string
setTranscript
Transcript text representing the words that the user spoke.
Populated if and only if message_type = TRANSCRIPT.
var
string
$this
getIsFinal
If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.
bool
setIsFinal
If false, the StreamingRecognitionResult represents an interim result that may change. If true, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for message_type = TRANSCRIPT.
var
bool
$this
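Putting getMessageType, getIsFinal, and getTranscript together, a consumption loop over a streaming call might look like the following sketch. It assumes $responses is the iterable of StreamingDetectIntentResponse messages returned by a streamingDetectIntent call, and that each response exposes the recognition result via getRecognitionResult(); treat those surrounding names as assumptions, since this page documents only StreamingRecognitionResult itself:

```php
<?php
use Google\Cloud\Dialogflow\V2\StreamingRecognitionResult\MessageType;

$utterance = '';
foreach ($responses as $response) {
    $result = $response->getRecognitionResult();
    if ($result === null) {
        // This response carries a query result rather than a recognition result.
        continue;
    }
    if ($result->getMessageType() === MessageType::END_OF_SINGLE_UTTERANCE) {
        // single_utterance was true: stop sending audio, but keep reading —
        // the following (last) result still carries the final transcript.
        continue;
    }
    if ($result->getIsFinal()) {
        // Only finalized transcripts contribute to the complete utterance.
        $utterance .= $result->getTranscript();
    }
}
```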
getConfidence
The Speech confidence between 0.0 and 1.0 for the current portion of audio.
A higher number indicates an estimated greater likelihood that the
recognized words are correct. The default of 0.0 is a sentinel value
indicating that confidence was not set.
This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.
float
setConfidence
The Speech confidence between 0.0 and 1.0 for the current portion of audio.
A higher number indicates an estimated greater likelihood that the
recognized words are correct. The default of 0.0 is a sentinel value
indicating that confidence was not set.
This field is typically only provided if is_final is true and you should not rely on it being accurate or even set.
var
float
$this
getSpeechWordInfo
Word-specific information for the words recognized by Speech in transcript.
Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
setSpeechWordInfo
Word-specific information for the words recognized by Speech in transcript.
Populated if and only if message_type = TRANSCRIPT and [InputAudioConfig.enable_word_info] is set.
$this
getSpeechEndOffset
Time offset of the end of this Speech recognition result relative to the
beginning of the audio. Only populated for message_type = TRANSCRIPT.
hasSpeechEndOffset
clearSpeechEndOffset
setSpeechEndOffset
Time offset of the end of this Speech recognition result relative to the
beginning of the audio. Only populated for message_type = TRANSCRIPT.
$this
getLanguageCode
Detected language code for the transcript.
string
setLanguageCode
Detected language code for the transcript.
var
string
$this