- Resource: Generator
- Methods
Resource: Generator
LLM generator.
JSON representation

```json
{
  "name": string,
  "description": string,
  "inferenceParameter": { object (InferenceParameter) },
  "triggerEvent": enum (TriggerEvent),
  "createTime": string,
  "updateTime": string,
  "tools": [ string ],

  // Union field context can be only one of the following:
  "freeFormContext": { object (FreeFormContext) },
  "summarizationContext": { object (SummarizationContext) },
  // End of list of possible types for union field context.

  // Union field foundation_model can be only one of the following:
  "publishedModel": string
  // End of list of possible types for union field foundation_model.
}
```
| Fields | |
|---|---|
| `name` (string) | Output only. Identifier. The resource name of the generator. Format: `projects/<Project ID>/locations/<Location ID>/generators/<Generator ID>` |
| `description` (string) | Optional. Human readable description of the generator. |
| `inferenceParameter` (object (InferenceParameter)) | Optional. Inference parameters for this generator. |
| `triggerEvent` (enum (TriggerEvent)) | Optional. The trigger event of the generator. It defines when the generator is triggered in a conversation. |
| `createTime` (string (Timestamp format)) | Output only. Creation time of this generator. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30". |
| `updateTime` (string (Timestamp format)) | Output only. Update time of this generator. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30". |
| `tools[]` (string) | Optional. Resource names of the tools that the generator can choose from. Format: `projects/<Project ID>/locations/<Location ID>/tools/<tool ID>`. |
| Union field `context`. Required. Input context of the generator. `context` can be only one of the following: | |
| `freeFormContext` (object (FreeFormContext)) | Input of free form generator to LLM. |
| `summarizationContext` (object (SummarizationContext)) | Input of Summarization feature. |
| Union field `foundation_model`. The foundation model to use for generating suggestions. If a foundation model isn't specified here, a model specifically tuned for the feature type (and version when applicable) will be used. `foundation_model` can be only one of the following: | |
| `publishedModel` (string) | Optional. The published Large Language Model name. To use the latest model version, specify the model name without a version number (example: `text-bison`). To use a stable model version, specify the version number as well (example: `text-bison@002`). |
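For illustration, a `Generator` resource configured for summarization might look like the sketch below. The project, location, and generator IDs, the model choice, the trigger event, and the section selection are placeholder values, not defaults. Because `name` is output only, it would be omitted from a create request and populated in the response.

```json
{
  "name": "projects/my-project/locations/global/generators/my-generator",
  "description": "Summarizes the conversation after each customer message.",
  "triggerEvent": "CUSTOMER_MESSAGE",
  "summarizationContext": {
    "summarizationSections": [
      { "type": "SITUATION" },
      { "type": "ACTION" }
    ],
    "version": "1.0"
  },
  "publishedModel": "text-bison@002",
  "inferenceParameter": {
    "temperature": 0.2,
    "maxOutputTokens": 512
  }
}
```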
FreeFormContext
Free form generator context that the customer can configure.

JSON representation

```json
{
  "text": string
}
```

| Fields | |
|---|---|
| `text` (string) | Optional. Free form text input to LLM. |
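As a minimal sketch, a free-form context simply wraps the prompt text; the wording below is illustrative only:

```json
{
  "text": "You are an assistant helping a customer support agent. Suggest the agent's next reply in one short sentence."
}
```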
SummarizationContext
Summarization context that the customer can configure.

JSON representation

```json
{
  "summarizationSections": [ { object (SummarizationSection) } ],
  "fewShotExamples": [ { object (FewShotExample) } ],
  "version": string,
  "outputLanguageCode": string
}
```

| Fields | |
|---|---|
| `summarizationSections[]` (object (SummarizationSection)) | Optional. List of sections. Note it contains both predefined sections and customer-defined sections. |
| `fewShotExamples[]` (object (FewShotExample)) | Optional. List of few shot examples. |
| `version` (string) | Optional. Version of the feature. If not set, defaults to the latest version. Current candidates are ["1.0"]. |
| `outputLanguageCode` (string) | Optional. The target language of the generated summary. The language code of the conversation will be used if this field is empty. Supported in versions 2.0 and later. |
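A hypothetical summarization context selecting a few predefined sections with feature version 1.0 might look like the following; the section choices are examples, not defaults:

```json
{
  "summarizationSections": [
    { "type": "SITUATION" },
    { "type": "ACTION" },
    { "type": "RESOLUTION" }
  ],
  "version": "1.0"
}
```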
SummarizationSection
Represents the section of summarization.
JSON representation

```json
{
  "key": string,
  "definition": string,
  "type": enum (Type)
}
```

| Fields | |
|---|---|
| `key` (string) | Optional. Name of the section, for example, "situation". |
| `definition` (string) | Optional. Definition of the section, for example, "what the customer needs help with or has a question about." |
| `type` (enum (Type)) | Optional. Type of the summarization section. |
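A customer-defined section supplies its own key and definition alongside the CUSTOMER_DEFINED type; the key and wording here are made up for illustration:

```json
{
  "key": "refund_amount",
  "definition": "The dollar amount refunded to the customer, or \"N/A\" if no refund was issued.",
  "type": "CUSTOMER_DEFINED"
}
```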
Type
Type enum of the summarization sections.
| Enums | |
|---|---|
| `TYPE_UNSPECIFIED` | Undefined section type, does not return anything. |
| `SITUATION` | What the customer needs help with or has a question about. Section name: "situation". |
| `ACTION` | What the agent does to help the customer. Section name: "action". |
| `RESOLUTION` | Result of the customer service. A single word describing the result of the conversation. Section name: "resolution". |
| `REASON_FOR_CANCELLATION` | Reason for cancellation if the customer requests a cancellation, "N/A" otherwise. Section name: "reason_for_cancellation". |
| `CUSTOMER_SATISFACTION` | "Unsatisfied" or "Satisfied" depending on the customer's feelings at the end of the conversation. Section name: "customer_satisfaction". |
| `ENTITIES` | Key entities extracted from the conversation, such as ticket number, order number, dollar amount, etc. Section names are prefixed by "entities/". |
| `CUSTOMER_DEFINED` | Customer defined sections. |
| `SITUATION_CONCISE` | Concise version of the situation section. This type is only available if type SITUATION is not selected. |
| `ACTION_CONCISE` | Concise version of the action section. This type is only available if type ACTION is not selected. |
FewShotExample
Providing examples in the generator (i.e. building a few-shot generator) helps convey the desired format of the LLM response.
JSON representation

```json
{
  "conversationContext": { object (ConversationContext) },
  "extraInfo": { string: string, ... },
  "output": { object (GeneratorSuggestion) },

  // Union field instruction_list can be only one of the following:
  "summarizationSectionList": { object (SummarizationSectionList) }
  // End of list of possible types for union field instruction_list.
}
```

| Fields | |
|---|---|
| `conversationContext` (object (ConversationContext)) | Optional. Conversation transcripts. |
| `extraInfo` (map (key: string, value: string)) | Optional. Key is the placeholder field name in the input, value is the value of the placeholder. For example, if the instruction contains "@price" and the ingested data has <"price", "10">, the placeholder is resolved to "10". An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. |
| `output` (object (GeneratorSuggestion)) | Required. Example output of the model. |
| Union field `instruction_list`. Instruction list of this few-shot example. `instruction_list` can be only one of the following: | |
| `summarizationSectionList` (object (SummarizationSectionList)) | Summarization sections. |
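The sketch below shows how the pieces of a few-shot example fit together, reusing the "@price" placeholder from the extraInfo description. The transcript, placeholder value, and summary text are invented, and the shape of `output` assumes the summarization variant of GeneratorSuggestion (a summarySuggestion with summarySections), which is documented separately:

```json
{
  "conversationContext": {
    "messageEntries": [
      { "role": "END_USER", "text": "I was charged @price twice for the same order." },
      { "role": "HUMAN_AGENT", "text": "Sorry about that. I have refunded the duplicate charge." }
    ]
  },
  "extraInfo": { "price": "10" },
  "summarizationSectionList": {
    "summarizationSections": [ { "type": "SITUATION" } ]
  },
  "output": {
    "summarySuggestion": {
      "summarySections": [
        {
          "section": "situation",
          "summary": "The customer was double-charged $10 and asked for a refund."
        }
      ]
    }
  }
}
```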
ConversationContext
Context of the conversation, including transcripts.
JSON representation

```json
{
  "messageEntries": [ { object (MessageEntry) } ]
}
```

| Fields | |
|---|---|
| `messageEntries[]` (object (MessageEntry)) | Optional. Message entries of the conversation. |
MessageEntry
Represents a message entry of a conversation.
JSON representation

```json
{
  "role": enum (Role),
  "text": string,
  "languageCode": string,
  "createTime": string
}
```

| Fields | |
|---|---|
| `role` (enum (Role)) | Optional. Participant role of the message. |
| `text` (string) | Optional. Transcript content of the message. |
| `languageCode` (string) | Optional. The language of the text. See Language Support for a list of the currently supported language codes. |
| `createTime` (string (Timestamp format)) | Optional. Create time of the message entry. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30". |
Role
Enumeration of the roles a participant can play in a conversation.
| Enums | |
|---|---|
| `ROLE_UNSPECIFIED` | Participant role not set. |
| `HUMAN_AGENT` | Participant is a human agent. |
| `AUTOMATED_AGENT` | Participant is an automated agent, such as a Dialogflow agent. |
| `END_USER` | Participant is an end user that has called or chatted with Dialogflow services. |
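For example, a short two-turn transcript (timestamps and text invented for illustration) would be represented as:

```json
{
  "messageEntries": [
    {
      "role": "END_USER",
      "text": "Hi, I need to cancel my subscription.",
      "languageCode": "en-US",
      "createTime": "2024-05-01T10:00:00Z"
    },
    {
      "role": "HUMAN_AGENT",
      "text": "I can help with that. Could you confirm the email on the account?",
      "languageCode": "en-US",
      "createTime": "2024-05-01T10:00:12Z"
    }
  ]
}
```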
SummarizationSectionList
List of summarization sections.
JSON representation

```json
{
  "summarizationSections": [ { object (SummarizationSection) } ]
}
```

| Fields | |
|---|---|
| `summarizationSections[]` (object (SummarizationSection)) | Optional. Summarization sections. |
InferenceParameter
The parameters of inference.
JSON representation

```json
{
  "maxOutputTokens": integer,
  "temperature": number,
  "topK": integer,
  "topP": number
}
```

| Fields | |
|---|---|
| `maxOutputTokens` (integer) | Optional. Maximum number of output tokens for the generator. |
| `temperature` (number) | Optional. Controls the randomness of LLM predictions. Low temperature = less random. High temperature = more random. If unset (or 0), uses a default value of 0. |
| `topK` (integer) | Optional. Top-k changes how the model selects tokens for output. A top-k of 1 means the selected token is the most probable among all tokens in the model's vocabulary (also called greedy decoding), while a top-k of 3 means that the next token is selected from among the 3 most probable tokens (using temperature). For each token selection step, the top K tokens with the highest probabilities are sampled, then tokens are further filtered based on topP, with the final token selected using temperature sampling. Specify a lower value for less random responses and a higher value for more random responses. Acceptable range is [1, 40]; defaults to 40. |
| `topP` (number) | Optional. Top-p changes how the model selects tokens for output. Tokens are selected from the most probable (see the topK parameter) to the least probable until the sum of their probabilities equals the top-p value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 and the top-p value is 0.5, then the model will select either A or B as the next token (using temperature) and doesn't consider C. Specify a lower value for less random responses and a higher value for more random responses. Acceptable range is [0.0, 1.0]; defaults to 0.95. |
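As a sketch, a configuration biased toward deterministic, concise output might set a low temperature while leaving the sampling parameters at their documented defaults; these particular numbers are illustrative, not recommendations:

```json
{
  "maxOutputTokens": 256,
  "temperature": 0.1,
  "topK": 40,
  "topP": 0.95
}
```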
TriggerEvent
The event that triggers the generator and LLM execution.
| Enums | |
|---|---|
| `TRIGGER_EVENT_UNSPECIFIED` | Default value for TriggerEvent. |
| `END_OF_UTTERANCE` | Triggers when each chat message or voice utterance ends. |
| `MANUAL_CALL` | Triggers on the conversation manually, via API calls such as Conversations.GenerateStatelessSuggestion and Conversations.GenerateSuggestions. |
| `CUSTOMER_MESSAGE` | Triggers after each customer message only. |
| `AGENT_MESSAGE` | Triggers after each agent message only. |
Methods

| Methods | |
|---|---|
| `create` | Creates a generator. |
| `list` | Lists generators. |