The GenerativeModel class is the base class for the generative models on Vertex AI.

NOTE: Don't instantiate this class directly. Use vertexai.getGenerativeModel() instead.
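Since instances are obtained through vertexai.getGenerativeModel() rather than the constructor, a typical setup looks like the sketch below. The project ID, location, and model name are placeholders, not values from this page; substitute your own.

```javascript
// Sketch: obtaining a GenerativeModel via the VertexAI client rather than
// calling the constructor directly. Project, location, and model name are
// placeholder assumptions.
const {VertexAI} = require('@google-cloud/vertexai');

const vertexAI = new VertexAI({
  project: 'your-project-id', // placeholder
  location: 'us-central1',    // placeholder region
});

const generativeModel = vertexAI.getGenerativeModel({
  model: 'gemini-1.0-pro',    // placeholder model name
});
```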
Package
@google-cloud/vertexai

Constructors
(constructor)(getGenerativeModelParams)
constructor(getGenerativeModelParams: GetGenerativeModelParams);

Constructs a new instance of the GenerativeModel class.

Parameter: getGenerativeModelParams
Methods
countTokens(request)
countTokens(request: CountTokensRequest): Promise<CountTokensResponse>;

Makes an async request to count tokens. The countTokens function returns the token count and the number of billable characters for a prompt.

Parameter: request
Returns: Promise<CountTokensResponse>
The CountTokensResponse object with the token count.
Example:
const request = {
  contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
};
const resp = await generativeModel.countTokens(request);
console.log('count tokens response: ', resp);
generateContent(request)
generateContent(request: GenerateContentRequest | string): Promise<GenerateContentResult>;

Makes an async call to generate content. The response is returned in GenerateContentResult.response.

Parameter: request
Returns: Promise<GenerateContentResult>
The GenerateContentResponse object with the response candidates.
Example:
const request = {
  contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
};
const result = await generativeModel.generateContent(request);
console.log('Response: ', JSON.stringify(result.response));
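Because the signature also accepts a plain string, a minimal single-turn call can skip the request object entirely; a sketch (the prompt text is illustrative):

```javascript
// Sketch: generateContent also accepts a bare string prompt, per the
// GenerateContentRequest | string signature above. The SDK wraps it into a
// single-turn user request; the prompt text here is illustrative.
const result = await generativeModel.generateContent('How are you doing today?');
console.log('Response: ', JSON.stringify(result.response));
```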
generateContentStream(request)
generateContentStream(request: GenerateContentRequest | string): Promise<StreamGenerateContentResult>;

Makes an async stream request to generate content. The response is returned chunk by chunk as it's being generated in StreamGenerateContentResult.stream. After all chunks of the response are returned, the aggregated response is available in StreamGenerateContentResult.response.

Parameter: request
Returns: Promise<StreamGenerateContentResult>
Promise of StreamGenerateContentResult.
Example:
const request = {
  contents: [{role: 'user', parts: [{text: 'How are you doing today?'}]}],
};
const streamingResult = await generativeModel.generateContentStream(request);
for await (const item of streamingResult.stream) {
  console.log('stream chunk: ', JSON.stringify(item));
}
const aggregatedResponse = await streamingResult.response;
console.log('aggregated response: ', JSON.stringify(aggregatedResponse));
startChat(request)
startChat(request?: StartChatParams): ChatSession;

Instantiates a ChatSession.

The ChatSession class is a stateful class that holds the state of the conversation with the model and provides methods to interact with the model in chat mode. Calling this method doesn't make any calls to a remote endpoint. To make a remote call, use ChatSession.sendMessage() or ChatSession.sendMessageStream().
Example:
const chat = generativeModel.startChat();
const result1 = await chat.sendMessage("How can I learn more about Node.js?");
const response1 = await result1.response;
console.log('Response: ', JSON.stringify(response1));

const result2 = await chat.sendMessageStream("What about python?");
const response2 = await result2.response;
console.log('Response: ', JSON.stringify(response2));
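The optional StartChatParams argument can seed the session before the first remote call; a sketch assuming it carries a history field of prior Content turns in the same {role, parts} shape used by the request objects on this page (the turn text is illustrative):

```javascript
// Sketch: seeding a ChatSession with prior turns via StartChatParams.
// The history field is an assumption hedged in the lead-in; the Content
// shape mirrors the request format shown elsewhere on this page.
const chat = generativeModel.startChat({
  history: [
    {role: 'user', parts: [{text: 'Hello.'}]},
    {role: 'model', parts: [{text: 'Hi! How can I help?'}]},
  ],
});
const result = await chat.sendMessage('Summarize our conversation so far.');
const response = await result.response;
console.log('Response: ', JSON.stringify(response));
```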