Defining AI workflows
TypeScript

The core of your app’s AI features is generative model requests, but it’s rare that you can simply take user input, pass it to the model, and display the model output back to the user. Usually, there are pre- and post-processing steps that must accompany the model call. For example:
- Retrieving contextual information to send with the model call
- Retrieving the history of the user’s current session, for example in a chat app
- Using one model to reformat the user input in a way that’s suitable to pass to another model
- Evaluating the “safety” of a model’s output before presenting it to the user
- Combining the output of several models
Every step of this workflow must work together for any AI-related task to succeed.
In Genkit, you represent this tightly-linked logic using a construction called a flow. Flows are written just like functions, using ordinary TypeScript code, but they add additional capabilities intended to ease the development of AI features:
- Type safety: Input and output schemas defined using Zod, which provides both static and runtime type checking
- Integration with developer UI: Debug flows independently of your application code using the developer UI. In the developer UI, you can run flows and view traces for each step of the flow.
- Simplified deployment: Deploy flows directly as web API endpoints, using Cloud Functions for Firebase or any platform that can host a web app.
Unlike similar features in other frameworks, Genkit’s flows are lightweight and unobtrusive, and don’t force your app to conform to any specific abstraction. All of the flow’s logic is written in standard TypeScript, and code inside a flow doesn’t need to be flow-aware.
Defining and calling flows
In its simplest form, a flow just wraps a function. The following example wraps a function that calls `generate()`:
```ts
export const menuSuggestionFlow = ai.defineFlow(
  {
    name: 'menuSuggestionFlow',
    inputSchema: z.object({ theme: z.string() }),
    outputSchema: z.object({ menuItem: z.string() }),
  },
  async ({ theme }) => {
    const { text } = await ai.generate({
      model: googleAI.model('gemini-2.5-flash'),
      prompt: `Invent a menu item for a ${theme} themed restaurant.`,
    });
    return { menuItem: text };
  },
);
```
Just by wrapping your `generate()` calls like this, you add some functionality: doing so lets you run the flow from the Genkit CLI and from the developer UI, and is a requirement for several of Genkit’s features, including deployment and observability (later sections discuss these topics).
Input and output schemas
One of the most important advantages Genkit flows have over directly calling a model API is type safety of both inputs and outputs. When defining flows, you can define schemas for them using Zod, in much the same way as you define the output schema of a `generate()` call; however, unlike with `generate()`, you can also specify an input schema.
While it’s not mandatory to wrap your input and output schemas in `z.object()`, it’s considered best practice for these reasons:
- Better developer experience: Wrapping schemas in objects provides a better experience in the Developer UI by giving you labeled input fields.
- Future-proof API design: Object-based schemas allow for easy extensibility in the future. You can add new fields to your input or output schemas without breaking existing clients, which is a core principle of robust API design.
All examples in this documentation use object-based schemas to follow these best practices.
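For illustration, here is a minimal sketch contrasting the two styles, assuming `ai` is your initialized Genkit instance as in the examples above; the flow names and bodies are hypothetical:

```ts
import { z } from 'genkit';

// Works, but the developer UI shows a single unlabeled input box, and the
// input type can't grow new fields later without breaking callers:
export const bareFlow = ai.defineFlow(
  { name: 'bareFlow', inputSchema: z.string(), outputSchema: z.string() },
  async (theme) => `You asked about: ${theme}`,
);

// Preferred: labeled fields in the developer UI, and new optional fields can
// be added to the schemas later without breaking existing clients:
export const objectFlow = ai.defineFlow(
  {
    name: 'objectFlow',
    inputSchema: z.object({ theme: z.string() }),
    outputSchema: z.object({ text: z.string() }),
  },
  async ({ theme }) => ({ text: `You asked about: ${theme}` }),
);
```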
Here’s a refinement of the earlier example: a flow that takes an object with a `theme` string as input and outputs a structured object:
```ts
import { z } from 'genkit';

const MenuItemSchema = z.object({
  dishname: z.string(),
  description: z.string(),
});

export const menuSuggestionFlowWithSchema = ai.defineFlow(
  {
    name: 'menuSuggestionFlow',
    inputSchema: z.object({ theme: z.string() }),
    outputSchema: MenuItemSchema,
  },
  async ({ theme }) => {
    const { output } = await ai.generate({
      model: googleAI.model('gemini-2.5-flash'),
      prompt: `Invent a menu item for a ${theme} themed restaurant.`,
      output: { schema: MenuItemSchema },
    });
    if (output == null) {
      throw new Error("Response doesn't satisfy schema.");
    }
    return output;
  },
);
```
Note that the schema of a flow does not necessarily have to line up with the schema of the `generate()` calls within the flow (in fact, a flow might not even contain `generate()` calls). Here’s a variation of the example that passes a schema to `generate()`, but uses the structured output to format a simple string, which the flow returns.
```ts
export const menuSuggestionFlowMarkdown = ai.defineFlow(
  {
    name: 'menuSuggestionFlow',
    inputSchema: z.object({ theme: z.string() }),
    outputSchema: z.object({ formattedMenuItem: z.string() }),
  },
  async ({ theme }) => {
    const { output } = await ai.generate({
      model: googleAI.model('gemini-2.5-flash'),
      prompt: `Invent a menu item for a ${theme} themed restaurant.`,
      output: { schema: MenuItemSchema },
    });
    if (output == null) {
      throw new Error("Response doesn't satisfy schema.");
    }
    return {
      formattedMenuItem: `**${output.dishname}**: ${output.description}`,
    };
  },
);
```
Calling flows
Once you’ve defined a flow, you can call it from your Node.js code:
```ts
const { menuItem } = await menuSuggestionFlow({ theme: 'bistro' });
```
The argument to the flow must conform to the input schema, if you defined one. If you defined an output schema, the flow response will conform to it. For example, if you set the output schema to `MenuItemSchema`, the flow output will contain its properties:
```ts
const { dishname, description } = await menuSuggestionFlowWithSchema({ theme: 'bistro' });
```
Streaming flows
Flows support streaming using an interface similar to `generate()`’s streaming interface. Streaming is useful when your flow generates a large amount of output, because you can present the output to the user as it’s being generated, which improves the perceived responsiveness of your app. As a familiar example, chat-based LLM interfaces often stream their responses to the user as they are generated.
Here’s an example of a flow that supports streaming:
```ts
export const menuSuggestionStreamingFlow = ai.defineFlow(
  {
    name: 'menuSuggestionFlow',
    inputSchema: z.object({ theme: z.string() }),
    streamSchema: z.string(),
    outputSchema: z.object({ theme: z.string(), menuItem: z.string() }),
  },
  async ({ theme }, { sendChunk }) => {
    const { stream, response } = ai.generateStream({
      model: googleAI.model('gemini-2.5-flash'),
      prompt: `Invent a menu item for a ${theme} themed restaurant.`,
    });
    for await (const chunk of stream) {
      // Here, you could process the chunk in some way before sending it to
      // the output stream via sendChunk(). In this example, we output
      // the text of the chunk, unmodified.
      sendChunk(chunk.text);
    }
    const { text: menuItem } = await response;
    return {
      theme,
      menuItem,
    };
  },
);
```
- The `streamSchema` option specifies the type of values your flow streams. This does not necessarily need to be the same type as the `outputSchema`, which is the type of the flow’s complete output.
- The second parameter to your flow definition is called `sideChannel`. It provides features such as request context and the `sendChunk` callback. The `sendChunk` callback takes a single parameter, of the type specified by `streamSchema`. Whenever data becomes available within your flow, send the data to the output stream by calling this function.
In the above example, the values streamed by the flow are directly coupled to the values streamed by the `generate()` call inside the flow. Although this is often the case, it doesn’t have to be: you can output values to the stream using the callback as often as is useful for your flow.
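For example, here is a hypothetical sketch of a flow that streams its own progress messages while doing non-model work; the flow name and per-item logic are illustrative only:

```ts
export const statusFlow = ai.defineFlow(
  {
    name: 'statusFlow',
    inputSchema: z.object({ items: z.array(z.string()) }),
    streamSchema: z.string(),
    outputSchema: z.object({ count: z.number() }),
  },
  async ({ items }, { sendChunk }) => {
    let count = 0;
    for (const item of items) {
      // Stream a progress update that has nothing to do with model output.
      sendChunk(`processing ${item}...`);
      // ... do some per-item work here ...
      count++;
    }
    return { count };
  },
);
```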
Calling streaming flows
Streaming flows are also callable, but they immediately return a response object rather than a promise:
```ts
const response = menuSuggestionStreamingFlow.stream({ theme: 'Danube' });
```
The response object has a `stream` property, which you can use to iterate over the streaming output of the flow as it’s generated:
```ts
for await (const chunk of response.stream) {
  console.log('chunk', chunk);
}
```
You can also get the complete output of the flow, as you can with a non-streaming flow:
```ts
const output = await response.output;
```
Note that the streaming output of a flow might not be the same type as the complete output; the streaming output conforms to `streamSchema`, whereas the complete output conforms to `outputSchema`.
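To make the distinction concrete, here is a minimal sketch reusing `menuSuggestionStreamingFlow` from above, with the types written out explicitly:

```ts
const response = menuSuggestionStreamingFlow.stream({ theme: 'Danube' });

for await (const chunk of response.stream) {
  const partial: string = chunk; // chunks conform to streamSchema (z.string())
  console.log(partial);
}

// The complete output conforms to outputSchema:
const final: { theme: string; menuItem: string } = await response.output;
console.log(final.menuItem);
```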
Running flows from the command line
You can run flows from the command line using the Genkit CLI tool:
```bash
genkit flow:run menuSuggestionFlow '{"theme": "French"}'
```
For streaming flows, you can print the streaming output to the console by adding the `-s` flag:
```bash
genkit flow:run menuSuggestionFlow '{"theme": "French"}' -s
```
Running a flow from the command line is useful for testing a flow, or for running flows that perform tasks needed on an ad hoc basis—for example, to run a flow that ingests a document into your vector database.
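For example, a hypothetical ingestion flow could be invoked ad hoc like this (the flow name and input are illustrative only):

```bash
genkit flow:run ingestMenuFlow '{"filePath": "./menus/today.pdf"}'
```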
Debugging flows
One of the advantages of encapsulating AI logic within a flow is that you can test and debug the flow independently from your app using the Genkit developer UI.
To start the developer UI, run the following command from your project directory:
```bash
genkit start -- tsx --watch src/your-code.ts
```
From the Run tab of the developer UI, you can run any of the flows defined in your project.
After you’ve run a flow, you can inspect a trace of the flow invocation by either clicking View trace or looking at the Inspect tab.
In the trace viewer, you can see details about the execution of the entire flow, as well as details for each of the individual steps within the flow. For example, consider the following flow, which contains several generation requests:
```ts
const PrixFixeMenuSchema = z.object({
  starter: z.string(),
  soup: z.string(),
  main: z.string(),
  dessert: z.string(),
});

export const complexMenuSuggestionFlow = ai.defineFlow(
  {
    name: 'complexMenuSuggestionFlow',
    inputSchema: z.object({ theme: z.string() }),
    outputSchema: PrixFixeMenuSchema,
  },
  async ({ theme }): Promise<z.infer<typeof PrixFixeMenuSchema>> => {
    const chat = ai.chat({ model: googleAI.model('gemini-2.5-flash') });
    await chat.send('What makes a good prix fixe menu?');
    await chat.send(
      'What are some ingredients, seasonings, and cooking techniques that ' +
        `would work for a ${theme} themed menu?`,
    );
    const { output } = await chat.send({
      prompt:
        `Based on our discussion, invent a prix fixe menu for a ${theme} ` +
        'themed restaurant.',
      output: {
        schema: PrixFixeMenuSchema,
      },
    });
    if (!output) {
      throw new Error('No data generated.');
    }
    return output;
  },
);
```
When you run this flow, the trace viewer shows you details about each generation request, including its output.
Flow steps
In the last example, you saw that each `generate()` call showed up as a separate step in the trace viewer. Each of Genkit’s fundamental actions shows up as a separate step of a flow:
- `generate()`
- `Chat.send()`
- `embed()`
- `index()`
- `retrieve()`
If you want to include code other than the above in your traces, you can do so by wrapping the code in a `run()` call. You might do this for calls to third-party libraries that are not Genkit-aware, or for any critical section of code.
For example, here’s a flow with two steps: the first step retrieves a menu using some unspecified method, and the second step includes the menu as context for a `generate()` call.
```ts
export const menuQuestionFlow = ai.defineFlow(
  {
    name: 'menuQuestionFlow',
    inputSchema: z.object({ question: z.string() }),
    outputSchema: z.object({ answer: z.string() }),
  },
  async ({ question }): Promise<{ answer: string }> => {
    const menu = await ai.run('retrieve-daily-menu', async (): Promise<string> => {
      // Retrieve today's menu. (This could be a database access or simply
      // fetching the menu from your website.)
      // ...
      return menu;
    });
    const { text } = await ai.generate({
      model: googleAI.model('gemini-2.5-flash'),
      system: "Help the user answer questions about today's menu.",
      prompt: question,
      docs: [{ content: [{ text: menu }] }],
    });
    return { answer: text };
  },
);
```
Because the retrieval step is wrapped in a `run()` call, it’s included as a step in the trace viewer.
Deploying flows
You can deploy your flows directly as web API endpoints, ready for you to call from your app clients. Deployment is discussed in detail on several other pages, but this section gives brief overviews of your deployment options.
Cloud Functions for Firebase
To deploy flows with Cloud Functions for Firebase, use the `onCallGenkit` feature of `firebase-functions/https`. `onCallGenkit` wraps your flow in a callable function. You may set an auth policy and configure App Check.
```ts
import { hasClaim, onCallGenkit } from 'firebase-functions/https';
import { defineSecret } from 'firebase-functions/params';

const apiKey = defineSecret('GOOGLE_AI_API_KEY');

const menuSuggestionFlow = ai.defineFlow(
  {
    name: 'menuSuggestionFlow',
    inputSchema: z.object({ theme: z.string() }),
    outputSchema: z.object({ menuItem: z.string() }),
  },
  async ({ theme }) => {
    // ...
    return { menuItem: 'Generated menu item would go here' };
  },
);

export const menuSuggestion = onCallGenkit(
  {
    secrets: [apiKey],
    authPolicy: hasClaim('email_verified'),
  },
  menuSuggestionFlow,
);
```
For more information, see the Firebase deployment documentation.
Express.js
To deploy flows using any Node.js hosting platform, such as Cloud Run, define your flows using `defineFlow()` and then call `startFlowServer()`:
```ts
import { startFlowServer } from '@genkit-ai/express';

export const menuSuggestionFlow = ai.defineFlow(
  {
    name: 'menuSuggestionFlow',
    inputSchema: z.object({ theme: z.string() }),
    outputSchema: z.object({ result: z.string() }),
  },
  async ({ theme }) => {
    // ...
  },
);

startFlowServer({
  flows: [menuSuggestionFlow],
});
```
By default, `startFlowServer` will serve all the flows defined in your codebase as HTTP endpoints (for example, `http://localhost:3400/menuSuggestionFlow`). You can call a flow with a POST request as follows:
```bash
curl -X POST "http://localhost:3400/menuSuggestionFlow" \
  -H "Content-Type: application/json" -d '{"data": {"theme": "banana"}}'
```
If needed, you can customize the flows server to serve a specific list of flows, as shown below. You can also specify a custom port (it will use the PORT environment variable if set) or specify CORS settings.
```ts
export const flowA = ai.defineFlow(
  {
    name: 'flowA',
    inputSchema: z.object({ subject: z.string() }),
    outputSchema: z.object({ response: z.string() }),
  },
  async ({ subject }) => {
    // ...
    return { response: 'Generated response would go here' };
  },
);

export const flowB = ai.defineFlow(
  {
    name: 'flowB',
    inputSchema: z.object({ subject: z.string() }),
    outputSchema: z.object({ response: z.string() }),
  },
  async ({ subject }) => {
    // ...
    return { response: 'Generated response would go here' };
  },
);

startFlowServer({
  flows: [flowB],
  port: 4567,
  cors: {
    origin: '*',
  },
});
```
For information on deploying to specific platforms, see Deploy with Cloud Run and Deploy flows to any Node.js platform.
Go

The core of your app’s AI features is generative model requests, but it’s rare that you can simply take user input, pass it to the model, and display the model output back to the user. Usually, there are pre- and post-processing steps that must accompany the model call. For example:
- Retrieving contextual information to send with the model call.
- Retrieving the history of the user’s current session, for example in a chat app.
- Using one model to reformat the user input in a way that’s suitable to pass to another model.
- Evaluating the “safety” of a model’s output before presenting it to the user.
- Combining the output of several models.
Every step of this workflow must work together for any AI-related task to succeed.
In Genkit, you represent this tightly-linked logic using a construction called a flow. Flows are written just like functions, using ordinary Go code, but they add additional capabilities intended to ease the development of AI features:
- Type safety: Input and output schemas, which provide both static and runtime type checking.
- Integration with developer UI: Debug flows independently of your application code using the developer UI. In the developer UI, you can run flows and view traces for each step of the flow.
- Simplified deployment: Deploy flows directly as web API endpoints, using any platform that can host a web app.
Genkit’s flows are lightweight and unobtrusive, and don’t force your app to conform to any specific abstraction. All of the flow’s logic is written in standard Go, and code inside a flow doesn’t need to be flow-aware.
Defining and calling flows
In its simplest form, a flow just wraps a function. The following example wraps a function that calls `genkit.Generate()`:
```go
menuSuggestionFlow := genkit.DefineFlow(g, "menuSuggestionFlow",
    func(ctx context.Context, theme string) (string, error) {
        resp, err := genkit.Generate(ctx, g,
            ai.WithPrompt("Invent a menu item for a %s themed restaurant.", theme),
        )
        if err != nil {
            return "", err
        }
        return resp.Text(), nil
    })
```
Just by wrapping your `genkit.Generate()` calls like this, you add some functionality: doing so lets you run the flow from the Genkit CLI and from the developer UI, and is a requirement for several of Genkit’s features, including deployment and observability (later sections discuss these topics).
Input and output schemas
One of the most important advantages Genkit flows have over directly calling a model API is type safety of both inputs and outputs. When defining flows, you can define schemas, in much the same way as you define the output schema of a `genkit.Generate()` call; however, unlike with `genkit.Generate()`, you can also specify an input schema.
Here’s a refinement of the last example, which defines a flow that takes a string as input and outputs an object:
```go
type MenuItem struct {
    Name        string `json:"name"`
    Description string `json:"description"`
}

menuSuggestionFlow := genkit.DefineFlow(g, "menuSuggestionFlow",
    func(ctx context.Context, theme string) (MenuItem, error) {
        item, _, err := genkit.GenerateData[MenuItem](ctx, g,
            ai.WithPrompt("Invent a menu item for a %s themed restaurant.", theme),
        )
        if err != nil {
            return MenuItem{}, err
        }
        return *item, nil
    })
```
Note that the schema of a flow does not necessarily have to line up with the schema of the `genkit.Generate()` calls within the flow (in fact, a flow might not even contain `genkit.Generate()` calls). Here’s a variation of the example that calls `genkit.GenerateData()`, but uses the structured output to format a simple string, which the flow returns. Note how we pass `MenuItem` as a type parameter; this is the equivalent of passing the `WithOutputType()` option and getting a value of that type in response.
```go
type MenuItem struct {
    Name        string `json:"name"`
    Description string `json:"description"`
}

menuSuggestionMarkdownFlow := genkit.DefineFlow(g, "menuSuggestionMarkdownFlow",
    func(ctx context.Context, theme string) (string, error) {
        item, _, err := genkit.GenerateData[MenuItem](ctx, g,
            ai.WithPrompt("Invent a menu item for a %s themed restaurant.", theme),
        )
        if err != nil {
            return "", err
        }
        return fmt.Sprintf("**%s**: %s", item.Name, item.Description), nil
    })
```
Calling flows
Once you’ve defined a flow, you can call it from your Go code:
```go
item, err := menuSuggestionFlow.Run(context.Background(), "bistro")
```
The argument to the flow must conform to the input schema. If you defined an output schema, the flow response will conform to it. For example, if you set the output schema to `MenuItem`, the flow output will contain its properties:
```go
item, err := menuSuggestionFlow.Run(context.Background(), "bistro")
if err != nil {
    log.Fatal(err)
}

log.Println(item.Name)
log.Println(item.Description)
```
Streaming flows
Flows support streaming using an interface similar to `genkit.Generate()`’s streaming interface. Streaming is useful when your flow generates a large amount of output, because you can present the output to the user as it’s being generated, which improves the perceived responsiveness of your app. As a familiar example, chat-based LLM interfaces often stream their responses to the user as they are generated.
Here’s an example of a flow that supports streaming:
```go
type Menu struct {
    Theme string     `json:"theme"`
    Items []MenuItem `json:"items"`
}

type MenuItem struct {
    Name        string `json:"name"`
    Description string `json:"description"`
}

menuSuggestionFlow := genkit.DefineStreamingFlow(g, "menuSuggestionFlow",
    func(ctx context.Context, theme string, callback core.StreamCallback[string]) (Menu, error) {
        item, _, err := genkit.GenerateData[MenuItem](ctx, g,
            ai.WithPrompt("Invent a menu item for a %s themed restaurant.", theme),
            ai.WithStreaming(func(ctx context.Context, chunk *ai.ModelResponseChunk) error {
                // Here, you could process the chunk in some way before sending it to
                // the output stream using StreamCallback. In this example, we output
                // the text of the chunk, unmodified.
                return callback(ctx, chunk.Text())
            }),
        )
        if err != nil {
            return Menu{}, err
        }

        return Menu{
            Theme: theme,
            Items: []MenuItem{*item},
        }, nil
    })
```
The `string` type in `StreamCallback[string]` specifies the type of values your flow streams. This does not necessarily need to be the same type as the return type, which is the type of the flow’s complete output (`Menu` in this example).
In this example, the values streamed by the flow are directly coupled to the values streamed by the `genkit.Generate()` call inside the flow. Although this is often the case, it doesn’t have to be: you can output values to the stream using the callback as often as is useful for your flow.
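For example, here is a hypothetical sketch of a flow that streams its own progress messages while doing non-model work; the flow name and per-item logic are illustrative only:

```go
statusFlow := genkit.DefineStreamingFlow(g, "statusFlow",
    func(ctx context.Context, items []string, callback core.StreamCallback[string]) (int, error) {
        count := 0
        for _, item := range items {
            // Stream a progress update that has nothing to do with model
            // output. (We assume callback may be nil when the flow is run
            // without streaming, so guard the call.)
            if callback != nil {
                if err := callback(ctx, "processing "+item+"..."); err != nil {
                    return count, err
                }
            }
            // ... do some per-item work here ...
            count++
        }
        return count, nil
    })
```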
Calling streaming flows
Streaming flows can be run like non-streaming flows with `menuSuggestionFlow.Run(ctx, "bistro")` or they can be streamed:
```go
streamCh, err := menuSuggestionFlow.Stream(context.Background(), "bistro")
if err != nil {
    log.Fatal(err)
}

for result := range streamCh {
    if result.Err != nil {
        log.Fatalf("Stream error: %v", result.Err)
    }
    if result.Done {
        log.Printf("Menu with %s theme:\n", result.Output.Theme)
        for _, item := range result.Output.Items {
            log.Printf(" - %s: %s", item.Name, item.Description)
        }
    } else {
        log.Println("Stream chunk:", result.Stream)
    }
}
```
Running flows from the command line
You can run flows from the command line using the Genkit CLI tool:
```bash
genkit flow:run menuSuggestionFlow '"French"'
```
For streaming flows, you can print the streaming output to the console by adding the `-s` flag:
```bash
genkit flow:run menuSuggestionFlow '"French"' -s
```
Running a flow from the command line is useful for testing a flow, or for running flows that perform tasks needed on an ad hoc basis—for example, to run a flow that ingests a document into your vector database.
Debugging flows
One of the advantages of encapsulating AI logic within a flow is that you can test and debug the flow independently from your app using the Genkit developer UI.
The developer UI relies on the Go app continuing to run, even if the logic has completed. If you are just getting started and Genkit is not part of a broader app, add `select {}` as the last line of `main()` to prevent the app from shutting down so that you can inspect it in the UI.
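A minimal sketch of this pattern, assuming a standalone experiment that exists only to be inspected in the developer UI (and the single-value `genkit.Init()` used elsewhere on this page):

```go
package main

import (
    "context"

    "github.com/firebase/genkit/go/genkit"
    "github.com/firebase/genkit/go/plugins/googlegenai"
)

func main() {
    ctx := context.Background()
    g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.GoogleAI{}))

    // ... define your flows on g here ...
    _ = g

    select {} // Block forever so the developer UI can inspect the app.
}
```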
To start the developer UI, run the following command from your project directory:
```bash
genkit start -- go run .
```
From the Run tab of the developer UI, you can run any of the flows defined in your project.
After you’ve run a flow, you can inspect a trace of the flow invocation by either clicking View trace or looking at the Inspect tab.
Deploying flows
You can deploy your flows directly as web API endpoints, ready for you to call from your app clients. Deployment is discussed in detail on several other pages, but this section gives brief overviews of your deployment options.
net/http Server
To deploy a flow using any Go hosting platform, such as Cloud Run, define your flow using `genkit.DefineFlow()` and start a `net/http` server with the provided flow handler using `genkit.Handler()`:
```go
package main

import (
    "context"
    "log"
    "net/http"

    "github.com/firebase/genkit/go/ai"
    "github.com/firebase/genkit/go/genkit"
    "github.com/firebase/genkit/go/plugins/googlegenai"
    "github.com/firebase/genkit/go/plugins/server"
)

type MenuItem struct {
    Name        string `json:"name"`
    Description string `json:"description"`
}

func main() {
    ctx := context.Background()
    g := genkit.Init(ctx, genkit.WithPlugins(&googlegenai.GoogleAI{}))

    menuSuggestionFlow := genkit.DefineFlow(g, "menuSuggestionFlow",
        func(ctx context.Context, theme string) (MenuItem, error) {
            item, _, err := genkit.GenerateData[MenuItem](ctx, g,
                ai.WithPrompt("Invent a menu item for a %s themed restaurant.", theme),
            )
            if err != nil {
                return MenuItem{}, err
            }
            return *item, nil
        })

    mux := http.NewServeMux()
    mux.HandleFunc("POST /menuSuggestionFlow", genkit.Handler(menuSuggestionFlow))
    log.Fatal(server.Start(ctx, "127.0.0.1:3400", mux))
}
```
`server.Start()` is an optional helper function that starts the server and manages its lifecycle, including capturing interrupt signals to ease local development, but you may use your own method.
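For instance, a minimal sketch of “your own method” could use the standard library directly in place of the helper, reusing `mux` from the example above:

```go
// Instead of server.Start(), manage the server lifecycle yourself:
log.Fatal(http.ListenAndServe("127.0.0.1:3400", mux))
```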
To serve all the flows defined in your codebase, you can use `genkit.ListFlows()`:
```go
mux := http.NewServeMux()
for _, flow := range genkit.ListFlows(g) {
    mux.HandleFunc("POST /"+flow.Name(), genkit.Handler(flow))
}
log.Fatal(server.Start(ctx, "127.0.0.1:3400", mux))
```
You can call a flow endpoint with a POST request as follows:
```bash
curl -X POST "http://localhost:3400/menuSuggestionFlow" \
  -H "Content-Type: application/json" -d '{"data": "banana"}'
```
Other server frameworks
You can also use other server frameworks to deploy your flows. For example, you can use Gin with just a few lines:
```go
router := gin.Default()
for _, flow := range genkit.ListFlows(g) {
    router.POST("/"+flow.Name(), func(c *gin.Context) {
        genkit.Handler(flow)(c.Writer, c.Request)
    })
}
log.Fatal(router.Run(":3400"))
```
For information on deploying to specific platforms, see Genkit with Cloud Run.
Python

The core of your app’s AI features is generative model requests, but it’s rare that you can simply take user input, pass it to the model, and display the model output back to the user. Usually, there are pre- and post-processing steps that must accompany the model call. For example:
- Retrieving contextual information to send with the model call
- Retrieving the history of the user’s current session, for example in a chat app
- Using one model to reformat the user input in a way that’s suitable to pass to another model
- Evaluating the “safety” of a model’s output before presenting it to the user
- Combining the output of several models
Every step of this workflow must work together for any AI-related task to succeed.
In Genkit, you represent this tightly-linked logic using a construction called a flow. Flows are written just like functions, using ordinary Python code, but they add additional capabilities intended to ease the development of AI features:
- Type safety: Input and output schemas defined using Pydantic models, which provide both static and runtime type checking
- Streaming: Flows support streaming of data, such as partial LLM responses, or any custom serializable objects.
- Integration with developer UI: Debug flows independently of your application code using the developer UI. In the developer UI, you can run flows and view traces for each step of the flow.
- Simplified deployment: Deploy flows directly as web API endpoints, using Cloud Run or any platform that can host a web app.
Unlike similar features in other frameworks, Genkit’s flows are lightweight and unobtrusive, and don’t force your app to conform to any specific abstraction. All of the flow’s logic is written in standard Python, and code inside a flow doesn’t need to be flow-aware.
Defining and calling flows
In its simplest form, a flow just wraps a function. The following example wraps a function that calls `generate()`:
```python
@ai.flow()
async def menu_suggestion_flow(theme: str):
    response = await ai.generate(
        prompt=f'Invent a menu item for a {theme} themed restaurant.',
    )
    return response.text
```
Just by wrapping your `generate()` calls like this, you add some functionality: doing so lets you run the flow from the Genkit CLI and from the developer UI, and is a requirement for several of Genkit’s features, including deployment and observability (later sections discuss these topics).
Input and output schemas
One of the most important advantages Genkit flows have over directly calling a model API is type safety of both inputs and outputs. When defining flows, you can define schemas for them using Pydantic.
Here’s a refinement of the last example, which defines a flow that takes a string as input and outputs an object:
```python
from pydantic import BaseModel


class MenuItemSchema(BaseModel):
    dishname: str
    description: str


@ai.flow()
async def menu_suggestion_flow(theme: str) -> MenuItemSchema:
    response = await ai.generate(
        prompt=f'Invent a menu item for a {theme} themed restaurant.',
        output_schema=MenuItemSchema,
    )
    return response.output
```
Note that the schema of a flow does not necessarily have to line up with the schema of the `generate()` calls within the flow (in fact, a flow might not even contain `generate()` calls). Here’s a variation of the example that passes a schema to `generate()`, but uses the structured output to format a simple string, which the flow returns.
```python
@ai.flow()
async def menu_suggestion_flow(theme: str) -> str:  # Changed return type annotation
    response = await ai.generate(
        prompt=f'Invent a menu item for a {theme} themed restaurant.',
        output_schema=MenuItemSchema,
    )
    output: MenuItemSchema = response.output
    return f'**{output.dishname}**: {output.description}'
```
Calling flows
Once you’ve defined a flow, you can call it from your Python code as a regular function. The argument to the flow must conform to the input schema, if you defined one.
```python
response = await menu_suggestion_flow('bistro')
```
If you defined an output schema, the flow response will conform to it. For example, if you set the output schema to `MenuItemSchema`, the flow output will contain its properties.
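A minimal sketch of accessing those properties, reusing the schema-typed flow defined above:

```python
item = await menu_suggestion_flow('bistro')
print(item.dishname)
print(item.description)
```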
Streaming flows
Flows support streaming using an interface similar to `generate_stream()`’s streaming interface. Streaming is useful when your flow generates a large amount of output, because you can present the output to the user as it’s being generated, which improves the perceived responsiveness of your app. As a familiar example, chat-based LLM interfaces often stream their responses to the user as they are generated.
Here’s an example of a flow that supports streaming:
```python
@ai.flow()
async def menu_suggestion_flow(theme: str, ctx):
    stream, response = ai.generate_stream(
        prompt=f'Invent a menu item for a {theme} themed restaurant.',
    )
    async for chunk in stream:
        ctx.send_chunk(chunk.text)

    return {
        'theme': theme,
        'menu_item': (await response).text,
    }
```
The second parameter to your flow definition is called the “side channel”. It provides features such as request context and the `send_chunk` callback. The `send_chunk` callback takes a single parameter. Whenever data becomes available within your flow, send the data to the output stream by calling this function.
In the above example, the values streamed by the flow are directly coupled to the values streamed by the `generate_stream()` call inside the flow. Although this is often the case, it doesn’t have to be: you can output values to the stream using the callback as often as is useful for your flow.
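For example, here is a hypothetical sketch of a flow that streams its own progress messages while doing non-model work; the flow name and per-item logic are illustrative only:

```python
@ai.flow()
async def status_flow(items: list[str], ctx):
    count = 0
    for item in items:
        # Stream a progress update that has nothing to do with model output.
        ctx.send_chunk(f'processing {item}...')
        # ... do some per-item work here ...
        count += 1
    return {'count': count}
```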
Calling streaming flows
Streaming flows are also callable, but they immediately return a response object rather than a promise. The flow’s `stream` method returns a stream async iterable, which you can iterate over to consume the streaming output of the flow as it’s generated.
```python
stream, response = menu_suggestion_flow.stream('bistro')

async for chunk in stream:
    print(chunk)
```
You can also get the complete output of the flow, as you can with a non-streaming flow. The final response is a future that you can await.
```python
print(await response)
```
Note that the streaming output of a flow might not be the same type as the complete output.
Debugging flows
One of the advantages of encapsulating AI logic within a flow is that you can test and debug the flow independently from your app using the Genkit developer UI.
To start the developer UI, run the following command from your project directory:
```bash
genkit start -- python app.py
```
Update `python app.py` to match the way you normally run your app.
From the Run tab of the developer UI, you can run any of the flows defined in your project.
After you’ve run a flow, you can inspect a trace of the flow invocation by either clicking View trace or looking at the Inspect tab.
In the trace viewer, you can see details about the execution of the entire flow, as well as details for each of the individual steps within the flow.
Deploying flows
You can deploy your flows directly as web API endpoints, ready for you to call from your app clients. Deployment is discussed in detail on several other pages, but this section gives brief overviews of your deployment options.
For information on deploying to specific platforms, see Deploy with Cloud Run and Deploy with Flask.