Reference documentation and code samples for the Google Cloud Dialogflow V2 Client class StreamingRecognitionResult.
Contains a speech recognition result corresponding to a portion of the audio
that is currently being processed or an indication that this is the end
of the single requested utterance.
While end-user audio is being processed, Dialogflow sends a series of
results. Each result may contain a `transcript` value. A transcript
represents a portion of the utterance. While the recognizer is processing
audio, transcript values may be interim values or finalized values.
Once a transcript is finalized, the `is_final` value is set to true and
processing continues for the next transcript.
If `StreamingDetectIntentRequest.query_input.audio_config.single_utterance`
was true, and the recognizer has completed processing audio,
the `message_type` value is set to `END_OF_SINGLE_UTTERANCE` and the
following (last) result contains the last finalized transcript.
The complete end-user utterance is determined by concatenating the
finalized transcript values received for the series of results.
In the following example, single utterance is enabled. In the case where
single utterance is not enabled, result 7 would not occur.
Num | transcript              | message_type            | is_final
--- | ----------------------- | ----------------------- | --------
1   | "tube"                  | TRANSCRIPT              | false
2   | "to be a"               | TRANSCRIPT              | false
3   | "to be"                 | TRANSCRIPT              | false
4   | "to be or not to be"    | TRANSCRIPT              | true
5   | "that's"                | TRANSCRIPT              | false
6   | "that is"               | TRANSCRIPT              | false
7   | unset                   | END_OF_SINGLE_UTTERANCE | unset
8   | " that is the question" | TRANSCRIPT              | true
Concatenating the finalized transcripts with `is_final` set to true,
the complete utterance becomes "to be or not to be that is the question".
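The concatenation rule above can be sketched in plain PHP. The snippet below is illustrative only: associative arrays stand in for the `StreamingRecognitionResult` messages received on the stream (the keys mirror the protobuf field names), and it reproduces the example table.

```php
<?php
// Illustrative only: associative arrays stand in for
// StreamingRecognitionResult messages received on the stream.
$results = [
    ['message_type' => 'TRANSCRIPT', 'transcript' => 'tube', 'is_final' => false],
    ['message_type' => 'TRANSCRIPT', 'transcript' => 'to be a', 'is_final' => false],
    ['message_type' => 'TRANSCRIPT', 'transcript' => 'to be', 'is_final' => false],
    ['message_type' => 'TRANSCRIPT', 'transcript' => 'to be or not to be', 'is_final' => true],
    ['message_type' => 'TRANSCRIPT', 'transcript' => "that's", 'is_final' => false],
    ['message_type' => 'TRANSCRIPT', 'transcript' => 'that is', 'is_final' => false],
    ['message_type' => 'END_OF_SINGLE_UTTERANCE'],
    ['message_type' => 'TRANSCRIPT', 'transcript' => ' that is the question', 'is_final' => true],
];

// Keep only finalized transcripts and concatenate them in order.
$utterance = '';
foreach ($results as $result) {
    if (($result['message_type'] ?? '') === 'TRANSCRIPT' && ($result['is_final'] ?? false)) {
        $utterance .= $result['transcript'];
    }
}

echo $utterance . PHP_EOL; // "to be or not to be that is the question"
```

Note that interim results (rows 1-3, 5-6) are discarded; only the finalized rows 4 and 8 contribute to the utterance.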
Generated from protobuf message `google.cloud.dialogflow.v2.StreamingRecognitionResult`
Namespace
Google \ Cloud \ Dialogflow \ V2
Methods
__construct
Constructor.
Parameters
Name
Description
data
array
Optional. Data for populating the Message object.
↳ message_type
int
Type of the result message.
↳ transcript
string
Transcript text representing the words that the user spoke. Populated if and only if `message_type` = `TRANSCRIPT`.
↳ is_final
bool
If `false`, the `StreamingRecognitionResult` represents an interim result that may change. If `true`, the recognizer will not return any further hypotheses about this piece of the audio. May only be populated for `message_type` = `TRANSCRIPT`.
↳ confidence
float
The Speech confidence between 0.0 and 1.0 for the current portion of audio. A higher number indicates an estimated greater likelihood that the recognized words are correct. The default of 0.0 is a sentinel value indicating that confidence was not set. This field is typically only provided if `is_final` is true and you should not rely on it being accurate or even set.
↳ speech_word_info
array<Google\Cloud\Dialogflow\V2\SpeechWordInfo>
Word-specific information for the words recognized by Speech in `transcript`. Populated if and only if `message_type` = `TRANSCRIPT` and [InputAudioConfig.enable_word_info] is set.
↳ speech_end_offset
Google\Protobuf\Duration
Time offset of the end of this Speech recognition result relative to the beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
↳ language_code
string
Detected language code for the transcript.
getMessageType
Type of the result message.
Returns
Type
Description
int
setMessageType
Type of the result message.
Parameter
Name
Description
var
int
Returns
Type
Description
$this
getTranscript
Transcript text representing the words that the user spoke.
Populated if and only if `message_type` = `TRANSCRIPT`.
Returns
Type
Description
string
setTranscript
Transcript text representing the words that the user spoke.
Populated if and only if `message_type` = `TRANSCRIPT`.
Parameter
Name
Description
var
string
Returns
Type
Description
$this
getIsFinal
If `false`, the `StreamingRecognitionResult` represents an
interim result that may change. If `true`, the recognizer will not return
any further hypotheses about this piece of the audio. May only be populated
for `message_type` = `TRANSCRIPT`.
Returns
Type
Description
bool
setIsFinal
If `false`, the `StreamingRecognitionResult` represents an
interim result that may change. If `true`, the recognizer will not return
any further hypotheses about this piece of the audio. May only be populated
for `message_type` = `TRANSCRIPT`.
Parameter
Name
Description
var
bool
Returns
Type
Description
$this
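A typical consumer treats interim and final results differently: interim hypotheses update a live caption and may be superseded, while finalized text is committed. A minimal sketch of that pattern in plain PHP (arrays stand in for the messages; the helper name and state shape are our own, not part of the library):

```php
<?php
// Sketch: commit finalized text, keep only the latest interim hypothesis
// for display. Arrays stand in for StreamingRecognitionResult messages.
function foldResult(array $state, array $result): array
{
    if (($result['message_type'] ?? '') !== 'TRANSCRIPT') {
        return $state; // e.g. END_OF_SINGLE_UTTERANCE carries no transcript
    }
    if ($result['is_final'] ?? false) {
        $state['committed'] .= $result['transcript'];
        $state['interim'] = ''; // the finalized piece replaces any interim text
    } else {
        $state['interim'] = $result['transcript']; // latest hypothesis wins
    }
    return $state;
}

$state = ['committed' => '', 'interim' => ''];
foreach ([
    ['message_type' => 'TRANSCRIPT', 'transcript' => 'to be', 'is_final' => false],
    ['message_type' => 'TRANSCRIPT', 'transcript' => 'to be or not to be', 'is_final' => true],
] as $result) {
    $state = foldResult($state, $result);
}

echo $state['committed'] . PHP_EOL; // "to be or not to be"
```

Each finalized transcript replaces the interim hypotheses that preceded it, which is why only the last interim value needs to be retained.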
getConfidence
The Speech confidence between 0.0 and 1.0 for the current portion of audio.
A higher number indicates an estimated greater likelihood that the
recognized words are correct. The default of 0.0 is a sentinel value
indicating that confidence was not set.
This field is typically only provided if `is_final` is true and you should
not rely on it being accurate or even set.
Returns
Type
Description
float
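Because 0.0 is a sentinel meaning "not set", callers should not treat it as a genuinely low confidence score. One way to guard for that, sketched in plain PHP (the helper name and threshold are our own, for illustration only):

```php
<?php
// 0.0 is a sentinel for "confidence not set", so distinguish
// "unknown" from "genuinely low" before acting on the value.
function isLowConfidence(float $confidence, float $threshold = 0.5): ?bool
{
    if ($confidence === 0.0) {
        return null; // unknown: confidence was not populated
    }
    return $confidence < $threshold;
}

var_dump(isLowConfidence(0.0)); // NULL  (not set)
var_dump(isLowConfidence(0.3)); // bool(true)
var_dump(isLowConfidence(0.9)); // bool(false)
```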
setConfidence
The Speech confidence between 0.0 and 1.0 for the current portion of audio.
A higher number indicates an estimated greater likelihood that the
recognized words are correct. The default of 0.0 is a sentinel value
indicating that confidence was not set.
This field is typically only provided if `is_final` is true and you should
not rely on it being accurate or even set.
Parameter
Name
Description
var
float
Returns
Type
Description
$this
getSpeechWordInfo
Word-specific information for the words recognized by Speech in `transcript`.
Populated if and only if `message_type` = `TRANSCRIPT` and
[InputAudioConfig.enable_word_info] is set.
Returns
Type
Description
Google\Protobuf\Internal\RepeatedField
setSpeechWordInfo
Word-specific information for the words recognized by Speech in `transcript`.
Populated if and only if `message_type` = `TRANSCRIPT` and
[InputAudioConfig.enable_word_info] is set.
Parameter
Name
Description
var
array<Google\Cloud\Dialogflow\V2\SpeechWordInfo>
Returns
Type
Description
$this
getSpeechEndOffset
Time offset of the end of this Speech recognition result relative to the
beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
Returns
Type
Description
Google\Protobuf\Duration|null
hasSpeechEndOffset
clearSpeechEndOffset
setSpeechEndOffset
Time offset of the end of this Speech recognition result relative to the
beginning of the audio. Only populated for `message_type` = `TRANSCRIPT`.
Parameter
Name
Description
var
Google\Protobuf\Duration
Returns
Type
Description
$this
getLanguageCode
Detected language code for the transcript.
Returns
Type
Description
string
setLanguageCode
Detected language code for the transcript.
Parameter
Name
Description
var
string
Returns
Type
Description
$this
Last updated 2025-09-04 UTC.