Reference documentation and code samples for the Google Cloud Dialogflow V2 Client class InferenceParameter.
The parameters of inference.
Generated from protobuf message google.cloud.dialogflow.v2.InferenceParameter
Namespace
Google\Cloud\Dialogflow\V2
Methods
__construct
Constructor.
Parameters
data (array): Optional. Data for populating the Message object.
↳ max_output_tokens (int): Optional. Maximum number of output tokens for the generator.
↳ temperature (float): Optional. Controls the randomness of LLM predictions. Low temperature = less random; high temperature = more random. If unset (or 0), uses a default value of 0.
↳ top_k (int): Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled; tokens are then further filtered based on topP, and the final token is selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. The acceptable range is [1, 40]; the default is 40.
↳ top_p (float): Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C. Specify a lower value for less random responses and a higher value for more random responses. The acceptable range is [0.0, 1.0]; the default is 0.95.
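The snippet below is a minimal sketch of building an InferenceParameter through the constructor's data array described above; the concrete field values are illustrative assumptions, not recommendations.

```php
use Google\Cloud\Dialogflow\V2\InferenceParameter;

// A minimal sketch: populate every optional field via the data array.
// The values are illustrative only.
$inferenceParameter = new InferenceParameter([
    'max_output_tokens' => 256,  // cap on tokens the generator may emit
    'temperature'       => 0.2,  // lower = less random predictions
    'top_k'             => 20,   // sample from the 20 most probable tokens
    'top_p'             => 0.8,  // nucleus (top-p) sampling cutoff
]);
```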
getMaxOutputTokens
Optional. Maximum number of output tokens for the generator.
Returns: int
hasMaxOutputTokens
clearMaxOutputTokens
setMaxOutputTokens
Optional. Maximum number of output tokens for the generator.
Parameter: var (int)
Returns: $this
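The following sketch exercises the max_output_tokens accessors documented above; the value 512 is purely illustrative.

```php
use Google\Cloud\Dialogflow\V2\InferenceParameter;

$param = new InferenceParameter();

// setMaxOutputTokens() returns $this, so calls can be chained.
$param->setMaxOutputTokens(512);

if ($param->hasMaxOutputTokens()) {
    $maxTokens = $param->getMaxOutputTokens(); // int, here 512
}

// clearMaxOutputTokens() unsets the optional field again.
$param->clearMaxOutputTokens();
```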
getTemperature
Optional. Controls the randomness of LLM predictions.
Low temperature = less random. High temperature = more random.
If unset (or 0), uses a default value of 0.
Returns: float
hasTemperature
clearTemperature
setTemperature
Optional. Controls the randomness of LLM predictions.
Low temperature = less random. High temperature = more random.
If unset (or 0), uses a default value of 0.
Parameter: var (float)
Returns: $this
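A brief sketch of the temperature accessors; 0.3 is an illustrative value, not a recommendation.

```php
use Google\Cloud\Dialogflow\V2\InferenceParameter;

$param = new InferenceParameter();
$param->setTemperature(0.3); // returns $this

// hasTemperature() reports whether the optional field has been set.
$temperature = $param->hasTemperature() ? $param->getTemperature() : 0.0;

$param->clearTemperature(); // unsets the field
```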
getTopK
Optional. Top-k changes how the model selects tokens for output. A top-k of
1 means the selected token is the most probable among all tokens in the
model's vocabulary (also called greedy decoding), while a top-k of 3 means
that the next token is selected from among the 3 most probable tokens
(using temperature). For each token selection step, the top K tokens with
the highest probabilities are sampled. Then tokens are further filtered
based on topP with the final token selected using temperature sampling.
Specify a lower value for less random responses and a higher value for more
random responses. Acceptable value is [1, 40], default to 40.
Returns
Type
Description
int
hasTopK
clearTopK
setTopK
Optional. Top-k changes how the model selects tokens for output. A top-k of
1 means the selected token is the most probable among all tokens in the
model's vocabulary (also called greedy decoding), while a top-k of 3 means
that the next token is selected from among the 3 most probable tokens
(using temperature). For each token selection step, the top K tokens with
the highest probabilities are sampled. Then tokens are further filtered
based on topP with the final token selected using temperature sampling.
Specify a lower value for less random responses and a higher value for more
random responses. Acceptable value is [1, 40], default to 40.
Parameter
Name
Description
var
int
Returns
Type
Description
$this
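The sketch below exercises the top_k accessors; 10 is an illustrative value within the documented [1, 40] range.

```php
use Google\Cloud\Dialogflow\V2\InferenceParameter;

$param = new InferenceParameter();
$param->setTopK(10); // returns $this; acceptable range is [1, 40]

if ($param->hasTopK()) {
    $topK = $param->getTopK(); // int, here 10
}

$param->clearTopK(); // unset: the documented default of 40 applies
```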
getTopP
Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C.
Specify a lower value for less random responses and a higher value for more random responses. The acceptable range is [0.0, 1.0]; the default is 0.95.
Returns: float
hasTopP
clearTopP
setTopP
Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable to the least probable (see the topK parameter) until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, the model selects either A or B as the next token (using temperature) and does not consider C.
Specify a lower value for less random responses and a higher value for more random responses. The acceptable range is [0.0, 1.0]; the default is 0.95.
Parameter: var (float)
Returns: $this
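A final sketch for the top_p accessors; the value 0.5 mirrors the example in the description above.

```php
use Google\Cloud\Dialogflow\V2\InferenceParameter;

$param = new InferenceParameter();
$param->setTopP(0.5); // returns $this; acceptable range is [0.0, 1.0]

if ($param->hasTopP()) {
    $topP = $param->getTopP(); // float, here 0.5
}

$param->clearTopP(); // unset: the documented default of 0.95 applies
```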
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[],[],null,["# Google Cloud Dialogflow V2 Client - Class InferenceParameter (2.1.2)\n\nVersion latestkeyboard_arrow_down\n\n- [2.1.2 (latest)](/php/docs/reference/cloud-dialogflow/latest/V2.InferenceParameter)\n- [2.1.1](/php/docs/reference/cloud-dialogflow/2.1.1/V2.InferenceParameter)\n- [2.0.1](/php/docs/reference/cloud-dialogflow/2.0.1/V2.InferenceParameter)\n- [1.17.2](/php/docs/reference/cloud-dialogflow/1.17.2/V2.InferenceParameter)\n- [1.16.0](/php/docs/reference/cloud-dialogflow/1.16.0/V2.InferenceParameter)\n- [1.15.1](/php/docs/reference/cloud-dialogflow/1.15.1/V2.InferenceParameter)\n- [1.14.0](/php/docs/reference/cloud-dialogflow/1.14.0/V2.InferenceParameter)\n- [1.13.0](/php/docs/reference/cloud-dialogflow/1.13.0/V2.InferenceParameter)\n- [1.12.3](/php/docs/reference/cloud-dialogflow/1.12.3/V2.InferenceParameter)\n- [1.11.0](/php/docs/reference/cloud-dialogflow/1.11.0/V2.InferenceParameter)\n- [1.10.2](/php/docs/reference/cloud-dialogflow/1.10.2/V2.InferenceParameter)\n- [1.9.0](/php/docs/reference/cloud-dialogflow/1.9.0/V2.InferenceParameter)\n- [1.8.0](/php/docs/reference/cloud-dialogflow/1.8.0/V2.InferenceParameter)\n- [1.7.2](/php/docs/reference/cloud-dialogflow/1.7.2/V2.InferenceParameter)\n- [1.6.0](/php/docs/reference/cloud-dialogflow/1.6.0/V2.InferenceParameter)\n- [1.5.0](/php/docs/reference/cloud-dialogflow/1.5.0/V2.InferenceParameter)\n- [1.4.0](/php/docs/reference/cloud-dialogflow/1.4.0/V2.InferenceParameter)\n- [1.3.2](/php/docs/reference/cloud-dialogflow/1.3.2/V2.InferenceParameter)\n- [1.2.0](/php/docs/reference/cloud-dialogflow/1.2.0/V2.InferenceParameter)\n- [1.1.1](/php/docs/reference/cloud-dialogflow/1.1.1/V2.InferenceParameter)\n- [1.0.1](/php/docs/reference/cloud-dialogflow/1.0.1/V2.InferenceParameter) \nReference documentation and code samples for the Google Cloud Dialogflow V2 Client class InferenceParameter.\n\nThe parameters of inference.\n\nGenerated from protobuf message `google.cloud.dialogflow.v2.InferenceParameter`\n\nNamespace\n---------\n\nGoogle \\\\ Cloud \\\\ Dialogflow \\\\ V2\n\nMethods\n-------\n\n### __construct\n\nConstructor.\n\n### getMaxOutputTokens\n\nOptional. Maximum number of the output tokens for the generator.\n\n### hasMaxOutputTokens\n\n### clearMaxOutputTokens\n\n### setMaxOutputTokens\n\nOptional. Maximum number of the output tokens for the generator.\n\n### getTemperature\n\nOptional. Controls the randomness of LLM predictions.\n\nLow temperature = less random. High temperature = more random.\nIf unset (or 0), uses a default value of 0.\n\n### hasTemperature\n\n### clearTemperature\n\n### setTemperature\n\nOptional. Controls the randomness of LLM predictions.\n\nLow temperature = less random. High temperature = more random.\nIf unset (or 0), uses a default value of 0.\n\n### getTopK\n\nOptional. Top-k changes how the model selects tokens for output. 
A top-k of\n1 means the selected token is the most probable among all tokens in the\nmodel's vocabulary (also called greedy decoding), while a top-k of 3 means\nthat the next token is selected from among the 3 most probable tokens\n(using temperature). For each token selection step, the top K tokens with\nthe highest probabilities are sampled. Then tokens are further filtered\nbased on topP with the final token selected using temperature sampling.\n\nSpecify a lower value for less random responses and a higher value for more\nrandom responses. Acceptable value is \\[1, 40\\], default to 40.\n\n### hasTopK\n\n### clearTopK\n\n### setTopK\n\nOptional. Top-k changes how the model selects tokens for output. A top-k of\n1 means the selected token is the most probable among all tokens in the\nmodel's vocabulary (also called greedy decoding), while a top-k of 3 means\nthat the next token is selected from among the 3 most probable tokens\n(using temperature). For each token selection step, the top K tokens with\nthe highest probabilities are sampled. Then tokens are further filtered\nbased on topP with the final token selected using temperature sampling.\n\nSpecify a lower value for less random responses and a higher value for more\nrandom responses. Acceptable value is \\[1, 40\\], default to 40.\n\n### getTopP\n\nOptional. Top-p changes how the model selects tokens for output. Tokens are\nselected from most K (see topK parameter) probable to least until the sum\nof their probabilities equals the top-p value. For example, if tokens A, B,\nand C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5,\nthen the model will select either A or B as the next token (using\ntemperature) and doesn't consider C. The default top-p value is 0.95.\n\nSpecify a lower value for less random responses and a higher value for more\nrandom responses. Acceptable value is \\[0.0, 1.0\\], default to 0.95.\n\n### hasTopP\n\n### clearTopP\n\n### setTopP\n\nOptional. Top-p changes how the model selects tokens for output. Tokens are\nselected from most K (see topK parameter) probable to least until the sum\nof their probabilities equals the top-p value. For example, if tokens A, B,\nand C have a probability of 0.3, 0.2, and 0.1 and the top-p value is 0.5,\nthen the model will select either A or B as the next token (using\ntemperature) and doesn't consider C. The default top-p value is 0.95.\n\nSpecify a lower value for less random responses and a higher value for more\nrandom responses. Acceptable value is \\[0.0, 1.0\\], default to 0.95."]]