Reference documentation and code samples for the Cloud Speech V2 Client class StreamingRecognizeResponse.
`StreamingRecognizeResponse` is the only message returned to the client by `StreamingRecognize`. A series of zero or more `StreamingRecognizeResponse` messages are streamed back to the client. If there is no recognizable audio then no messages are streamed back to the client.
Here are some examples of `StreamingRecognizeResponse`s that might be returned while processing audio:

1. results { alternatives { transcript: "tube" } stability: 0.01 }
2. results { alternatives { transcript: "to be a" } stability: 0.01 }
3. results { alternatives { transcript: "to be" } stability: 0.9 } results { alternatives { transcript: " or not to be" } stability: 0.01 }
4. results { alternatives { transcript: "to be or not to be" confidence: 0.92 } alternatives { transcript: "to bee or not to bee" } is_final: true }
5. results { alternatives { transcript: " that's" } stability: 0.01 }
6. results { alternatives { transcript: " that is" } stability: 0.9 } results { alternatives { transcript: " the question" } stability: 0.01 }
7. results { alternatives { transcript: " that is the question" confidence: 0.98 } alternatives { transcript: " that was the question" } is_final: true }
Notes:

1. Only two of the above responses, #4 and #7, contain final results; they are indicated by `is_final: true`. Concatenating these together generates the full transcript: "to be or not to be that is the question".
2. The others contain interim `results`. #3 and #6 contain two interim `results`: the first portion has a high stability and is less likely to change; the second portion has a low stability and is very likely to change. A UI designer might choose to show only high-stability `results`.
3. The specific `stability` and `confidence` values shown above are only for illustrative purposes. Actual values may vary.
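The notes above can be checked with a small simulation. This is an illustrative sketch only: it models each of the seven example responses as a plain Python dictionary rather than the actual protobuf `StreamingRecognizeResponse` messages the client library returns.

```python
# The seven example responses above, modeled as plain dicts.
responses = [
    {"results": [{"alternatives": [{"transcript": "tube"}], "stability": 0.01}]},
    {"results": [{"alternatives": [{"transcript": "to be a"}], "stability": 0.01}]},
    {"results": [{"alternatives": [{"transcript": "to be"}], "stability": 0.9},
                 {"alternatives": [{"transcript": " or not to be"}], "stability": 0.01}]},
    {"results": [{"alternatives": [{"transcript": "to be or not to be", "confidence": 0.92},
                                   {"transcript": "to bee or not to bee"}],
                  "is_final": True}]},
    {"results": [{"alternatives": [{"transcript": " that's"}], "stability": 0.01}]},
    {"results": [{"alternatives": [{"transcript": " that is"}], "stability": 0.9},
                 {"alternatives": [{"transcript": " the question"}], "stability": 0.01}]},
    {"results": [{"alternatives": [{"transcript": " that is the question", "confidence": 0.98},
                                   {"transcript": " that was the question"}],
                  "is_final": True}]},
]

# Concatenating the top alternative of every is_final result (responses #4
# and #7) yields the full transcript, exactly as note 1 describes.
final_transcript = "".join(
    result["alternatives"][0]["transcript"]
    for response in responses
    for result in response["results"]
    if result.get("is_final")
)
print(final_transcript)  # to be or not to be that is the question
```

A UI following note 2 would apply the same loop with a `stability` threshold instead of the `is_final` check to display only the high-stability interim portions.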
In each response, only one of these fields will be set: `error`, `speech_event_type`, or one or more (repeated) `results`.
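Because these fields are mutually exclusive, client code can branch on whichever one is populated. A minimal sketch, again using plain dicts as stand-ins for the protobuf message (the field values shown are placeholders, not real API values):

```python
def handle_response(response):
    """Branch on whichever of the mutually exclusive fields is set.

    `response` is a plain dict standing in for StreamingRecognizeResponse;
    returns a tag naming the branch taken.
    """
    if "error" in response:
        return "error"
    if "speech_event_type" in response:
        return "speech_event"
    if response.get("results"):
        return "results"
    return "empty"

# Example usage (placeholder payloads):
print(handle_response({"error": {"message": "placeholder"}}))   # error
print(handle_response({"speech_event_type": "some_event"}))     # speech_event
print(handle_response({"results": [{"is_final": True}]}))       # results
```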
Generated from protobuf message `google.cloud.speech.v2.StreamingRecognizeResponse`
Namespace
---------

Google \ Cloud \ Speech \ V2

Methods
-------

### __construct

Constructor.

### getResults

This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

### setResults

This repeated list contains zero or more results that correspond to consecutive portions of the audio currently being processed. It contains zero or one `is_final=true` result (the newly settled portion), followed by zero or more `is_final=false` results (the interim results).

### getSpeechEventType

Indicates the type of speech event.

### setSpeechEventType

Indicates the type of speech event.

### getSpeechEventOffset

Time offset between the beginning of the audio and event emission.

### hasSpeechEventOffset

### clearSpeechEventOffset

### setSpeechEventOffset

Time offset between the beginning of the audio and event emission.

### getMetadata

Metadata about the recognition.

### hasMetadata

### clearMetadata

### setMetadata

Metadata about the recognition.

Last updated 2025-09-04 UTC.
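The documented ordering of the results list (the settled portion, if any, always comes first) can be exploited directly when consuming a response. A hedged sketch with dict stand-ins; `split_results` is a hypothetical helper, not part of the client library:

```python
def split_results(results):
    """Split a results list into (settled, interim) using the documented
    ordering: zero or one is_final=true result comes first, followed by
    zero or more interim (is_final=false) results."""
    if results and results[0].get("is_final"):
        return results[0], results[1:]
    return None, list(results)

settled, interim = split_results([
    {"alternatives": [{"transcript": "to be or not to be"}], "is_final": True},
    {"alternatives": [{"transcript": " that"}], "stability": 0.01},
])
print(settled["alternatives"][0]["transcript"])  # to be or not to be
print(len(interim))  # 1
```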