Optional. If true, the timestamp of the audio will be included in the response.
candidateCount
candidateCount?: number;
Optional. Number of candidates to generate.
frequencyPenalty
frequencyPenalty?: number;
Optional. Positive values penalize tokens that repeatedly appear in the generated text, decreasing the probability of repeating content. The maximum value for frequencyPenalty is up to, but not including, 2.0; its minimum value is -2.0. Supported by gemini-1.5-pro and gemini-1.5-flash only.
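For illustration, a minimal sketch of passing frequencyPenalty (together with candidateCount) through getGenerativeModel. The project ID, location, model name, and prompt are placeholders, and a Gemini 1.5 model is assumed since only gemini-1.5-pro and gemini-1.5-flash support this field:

```ts
import { VertexAI } from '@google-cloud/vertexai';

// Placeholders: substitute your own project ID and location.
const vertexAI = new VertexAI({ project: 'your-project-id', location: 'us-central1' });

const model = vertexAI.getGenerativeModel({
  model: 'gemini-1.5-flash',
  generationConfig: {
    candidateCount: 1,     // request a single candidate
    frequencyPenalty: 0.5, // must stay within [-2.0, 2.0); positive values discourage repetition
  },
});

async function run() {
  const result = await model.generateContent('List three uses for a paperclip.');
  // Log the first candidate's content; candidates may be absent if generation was blocked.
  console.log(JSON.stringify(result.response.candidates?.[0]?.content, null, 2));
}

run().catch(console.error);
```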
maxOutputTokens
maxOutputTokens?: number;
Optional. The maximum number of output tokens to generate per message.
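As a sketch, a config that caps reply length; the limit of 256 is arbitrary, and the import assumes the GenerationConfig interface is re-exported from the package root as in these reference docs:

```ts
import { GenerationConfig } from '@google-cloud/vertexai';

// Cap each reply at 256 output tokens; generation stops once the limit is reached,
// which can cut off the final sentence.
const generationConfig: GenerationConfig = {
  maxOutputTokens: 256,
};
```

The same object can then be passed as the generationConfig field of getGenerativeModel, as in the previous sketch.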
responseMimeType
responseMimeType?: string;
Optional. Output response MIME type of the generated candidate text. Supported MIME types: text/plain (default) for text output, and application/json for a JSON response in the candidates. The model needs to be prompted to output the appropriate response type; otherwise the behavior is undefined.
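To illustrate the application/json setting, a sketch (project, location, model name, and prompt are placeholders) in which the prompt also spells out the expected JSON shape, since the MIME type alone leaves the structure undefined:

```ts
import { VertexAI } from '@google-cloud/vertexai';

const vertexAI = new VertexAI({ project: 'your-project-id', location: 'us-central1' });

const model = vertexAI.getGenerativeModel({
  model: 'gemini-1.5-flash',
  // Ask for JSON output; the prompt below still has to describe the expected fields.
  generationConfig: { responseMimeType: 'application/json' },
});

async function run() {
  const result = await model.generateContent(
    'Return a JSON object with the fields "city" and "country" for the capital of France.'
  );
  const part = result.response.candidates?.[0]?.content.parts[0];
  // Parts form a union type, so narrow to the text variant before reading it.
  if (part && 'text' in part && part.text) {
    console.log(JSON.parse(part.text));
  }
}

run().catch(console.error);
```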
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Interface GenerationConfig (1.10.0)\n\nVersion latestkeyboard_arrow_down\n\n- [1.10.0 (latest)](/nodejs/docs/reference/vertexai/latest/vertexai/generationconfig)\n- [1.9.0](/nodejs/docs/reference/vertexai/1.9.0/vertexai/generationconfig)\n- [1.8.1](/nodejs/docs/reference/vertexai/1.8.1/vertexai/generationconfig)\n- [1.7.0](/nodejs/docs/reference/vertexai/1.7.0/vertexai/generationconfig)\n- [1.6.0](/nodejs/docs/reference/vertexai/1.6.0/vertexai/generationconfig)\n- [1.5.0](/nodejs/docs/reference/vertexai/1.5.0/vertexai/generationconfig)\n- [1.4.1](/nodejs/docs/reference/vertexai/1.4.1/vertexai/generationconfig)\n- [1.3.1](/nodejs/docs/reference/vertexai/1.3.1/vertexai/generationconfig)\n- [1.2.0](/nodejs/docs/reference/vertexai/1.2.0/vertexai/generationconfig)\n- [1.1.0](/nodejs/docs/reference/vertexai/1.1.0/vertexai/generationconfig)\n- [1.0.0](/nodejs/docs/reference/vertexai/1.0.0/vertexai/generationconfig)\n- [0.5.0](/nodejs/docs/reference/vertexai/0.5.0/vertexai/generationconfig)\n- [0.4.0](/nodejs/docs/reference/vertexai/0.4.0/vertexai/generationconfig)\n- [0.3.1](/nodejs/docs/reference/vertexai/0.3.1/vertexai/generationconfig)\n- [0.2.1](/nodejs/docs/reference/vertexai/0.2.1/vertexai/generationconfig) \nConfiguration options for model generation and outputs.\n\nPackage\n-------\n\n[@google-cloud/vertexai](../overview.html)\n\nProperties\n----------\n\n### audioTimestamp\n\n audioTimestamp?: boolean;\n\nOptional. If true, the timestamp of the audio will be included in the response.\n\n### candidateCount\n\n candidateCount?: number;\n\nOptional. Number of candidates to generate.\n\n### frequencyPenalty\n\n frequencyPenalty?: number;\n\nOptional. Positive values penalize tokens that repeatedly appear in the generated text, decreasing the probability of repeating content. This maximum value for frequencyPenalty is up to, but not including, 2.0. Its minimum value is -2.0. Supported by gemini-1.5-pro and gemini-1.5-flash only.\n\n### maxOutputTokens\n\n maxOutputTokens?: number;\n\nOptional. The maximum number of output tokens to generate per message.\n\n### responseMimeType\n\n responseMimeType?: string;\n\nOptional. Output response mimetype of the generated candidate text. Supported mimetype: - `text/plain`: (default) Text output. - `application/json`: JSON response in the candidates. The model needs to be prompted to output the appropriate response type, otherwise the behavior is undefined.\n\n### responseSchema\n\n responseSchema?: ResponseSchema;\n\nOptional. The schema that generated candidate text must follow. For more information, see \u003chttps://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output\u003e. If set, a compatible responseMimeType must also be set.\n\n### stopSequences\n\n stopSequences?: string[];\n\nOptional. Stop sequences.\n\n### temperature\n\n temperature?: number;\n\nOptional. Controls the randomness of predictions.\n\n### topK\n\n topK?: number;\n\nOptional. If specified, topK sampling will be used.\n\n### topP\n\n topP?: number;\n\nOptional. 
If specified, nucleus sampling will be used."]]
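Finally, a sketch tying the sampling-related fields together in one config. The specific values are illustrative only, the project, location, and prompt are placeholders, and GenerationConfig is assumed to be importable from the package root:

```ts
import { VertexAI, GenerationConfig } from '@google-cloud/vertexai';

// Illustrative values: lower temperature and tighter topP/topK make output more deterministic.
const generationConfig: GenerationConfig = {
  temperature: 0.2,        // less randomness in predictions
  topP: 0.95,              // nucleus sampling cutoff
  topK: 40,                // sample only from the 40 most likely tokens
  stopSequences: ['\n\n'], // stop at the first blank line
  maxOutputTokens: 512,
};

const vertexAI = new VertexAI({ project: 'your-project-id', location: 'us-central1' });
const model = vertexAI.getGenerativeModel({
  model: 'gemini-1.5-pro',
  generationConfig,
});

async function run() {
  const result = await model.generateContent('Summarize the plot of Hamlet in one paragraph.');
  console.log(JSON.stringify(result.response.candidates?.[0]?.content, null, 2));
}

run().catch(console.error);
```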