Deploying an agent on Agent Runtime makes it available remotely to handle requests. This document explains the ways to deploy an agent based on your development workflow: from an execution object, local source files, a Dockerfile, a container image hosted in Artifact Registry, or directly through a connected Git repository.
To deploy an agent on Agent Runtime, choose between the following methods:
- Deploy from an agent object: Ideal for interactive development in environments like Colab, enabling deployment of in-memory `local_agent` objects. This method works best for agents whose structures don't contain complex, non-serializable components.
- Deploy from source files: Well-suited for automated workflows such as CI/CD pipelines and Infrastructure as Code tools like Terraform, enabling fully declarative and automated deployments. It deploys your agent directly from local source code and doesn't require a Cloud Storage bucket.
- Deploy from Dockerfile: Similar to deploying from source files: you deploy your agent directly from local source code, and you don't need a Cloud Storage bucket. This method is appropriate if you need to define and control the API server that is deployed.
- Deploy from container image: Similar to deploying from Dockerfile, except that you deploy a container image that is hosted in Artifact Registry. Use this method if you require control over the build process for the container image and lower deployment latency.
- Deploy from Developer Connect: Recommended for projects managed in a Git repository linked through Developer Connect. This method streamlines agent deployment directly from your source code and natively supports version control, team collaboration, and CI/CD pipelines. Before using this method, set up your Git repository link by following the instructions in Set up Developer Connect Git repository link.
To get started, use the following steps:
- Complete prerequisites.
- Optional: Configure your agent for deployment.
- Create an Agent Platform instance.
- Optional: Get the agent resource ID.
- Optional: List the supported operations.
- Optional: Grant the deployed agent permissions.
You can also use the Agents CLI for deployment.
Prerequisites
Before you deploy an agent, make sure you have completed the following tasks:
Optional: Configure your agent for deployment
You can make the following optional configurations for your agent:
Create an Agent Platform instance
This section describes how to create an Agent Platform instance for deploying an agent.
To deploy an agent on Agent Platform, you can choose between the following methods:
- Deploying from an agent object for interactive development.
- Deploying from Developer Connect for Git-based workflows.
- Deploying from source files or Dockerfile for file-based workflows.
- Deploying from container image for image-based workflows.
Python Object
To deploy the agent on Agent Platform, use `client.agent_engines.create` to pass in the `local_agent` object along with any optional configurations:

```python
remote_agent = client.agent_engines.create(
    agent=local_agent,
    # Optional.
    config={
        "requirements": requirements,  # Optional.
        "extra_packages": extra_packages,  # Optional.
        "gcs_dir_name": gcs_dir_name,  # Optional.
        "display_name": display_name,  # Optional.
        "description": description,  # Optional.
        "labels": labels,  # Optional.
        "env_vars": env_vars,  # Optional.
        "build_options": build_options,  # Optional.
        "identity_type": identity_type,  # Optional.
        "service_account": service_account,  # Optional.
        "min_instances": min_instances,  # Optional.
        "max_instances": max_instances,  # Optional.
        "resource_limits": resource_limits,  # Optional.
        "container_concurrency": container_concurrency,  # Optional.
        "encryption_spec": encryption_spec,  # Optional.
        "agent_framework": agent_framework,  # Optional.
    },
)
```
Deployment takes a few minutes, during which the following steps happen in the background:
- A bundle of the following artifacts is generated locally:
  - `*.pkl`: a pickle file corresponding to `local_agent`.
  - `requirements.txt`: a text file containing the package requirements.
  - `dependencies.tar.gz`: a tar file containing any extra packages.
- The bundle is uploaded to Cloud Storage (under the corresponding folder) for staging the artifacts.
- The Cloud Storage URIs for the respective artifacts are specified in the `PackageSpec`.
- The Agent Runtime service receives the request, builds containers, and starts HTTP servers on the backend.
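The pickling step above is why this method struggles with agents that contain non-serializable components. The following is a minimal sketch using the standard `pickle` module (the runtime uses a pickling library with broader support, such as cloudpickle, but the constraint is the same in spirit):

```python
import pickle


class SimpleAgent:
    """Plain-data state, so the object round-trips through pickle."""

    def __init__(self, name: str):
        self.name = name

    def query(self, text: str) -> str:
        return f"{self.name} received: {text}"


local_agent = SimpleAgent("demo")

blob = pickle.dumps(local_agent)   # roughly what gets staged as *.pkl
restored = pickle.loads(blob)      # roughly what the runtime does at startup
print(restored.query("hello"))     # demo received: hello

# Components like open sockets, file handles, or lambdas don't pickle:
try:
    pickle.dumps(lambda x: x)
except Exception as err:
    print("not picklable:", type(err).__name__)
```

If your agent holds such components (for example, an open client connection), construct them lazily at runtime or use one of the source-based deployment methods instead.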
Developer Connect
To deploy from Developer Connect on Agent Platform, use `client.agent_engines.create` and provide `developer_connect_source`, `entrypoint_module`, and `entrypoint_object` in the config dictionary, along with other optional configurations. This method lets you deploy code directly from a connected Git repository.
```python
remote_agent = client.agent_engines.create(
    config={
        "developer_connect_source": {  # Required.
            "git_repository_link": "projects/PROJECT_ID/locations/LOCATION/connections/CONNECTION_ID/gitRepositoryLinks/REPO_ID",
            "revision": "main",
            "dir": "path/to/dir",
        },
        "entrypoint_module": "agent",  # Required.
        "entrypoint_object": "root_agent",  # Required.
        "requirements_file": "requirements.txt",  # Optional.
        # Other optional configs:
        # "env_vars": {...},
        # "service_account": "...",
    },
)
```
The parameters for Developer Connect deployment are:

- `developer_connect_source` (Required, `dict`): The configuration for fetching source code. See Set up Developer Connect Git repository link for details.
  - `git_repository_link` (Required, `str`): The Developer Connect Git repository link resource name.
  - `revision` (Required, `str`): The revision to fetch (branch, tag, or commit SHA).
  - `dir` (Required, `str`): The root directory of the agent code within the repository.
- `entrypoint_module` (Required, `str`): The Python module name containing the agent entrypoint, relative to the directory specified in `developer_connect_source.dir`.
- `entrypoint_object` (Required, `str`): The name of the callable object within the `entrypoint_module` that represents the agent application (for example, `root_agent`).
- `requirements_file` (Optional, `str`): The path to a pip requirements file relative to the source root. Defaults to `requirements.txt`.
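A malformed `git_repository_link` fails only at deploy time, so a local sanity check can save a round trip. The following validator is a hypothetical helper (not part of the SDK) that checks a string against the resource-name shape shown above:

```python
import re

# Pattern of the resource name shown above:
# projects/.../locations/.../connections/.../gitRepositoryLinks/...
_LINK_RE = re.compile(
    r"^projects/[^/]+/locations/[^/]+"
    r"/connections/[^/]+/gitRepositoryLinks/[^/]+$"
)


def is_valid_repo_link(name: str) -> bool:
    """Return True if the string matches the documented resource-name shape."""
    return bool(_LINK_RE.match(name))


good = (
    "projects/my-project/locations/us-central1"
    "/connections/my-conn/gitRepositoryLinks/my-repo"
)
print(is_valid_repo_link(good))       # True
print(is_valid_repo_link("my-repo"))  # False
```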
Deployment takes a few minutes, during which the following steps happen in the background:
- The Agent Runtime service fetches the source code from the specified Git repository revision.
- The service installs dependencies from `requirements_file` (if provided).
- The service starts the agent application using the specified `entrypoint_module` and `entrypoint_object`.
Source files
To deploy from source files on Agent Platform, use `client.agent_engines.create` and provide `source_packages`, `entrypoint_module`, `entrypoint_object`, and `class_methods` in the config dictionary, along with other optional configurations. With this method, you don't need to pass an agent object or a Cloud Storage bucket.
```python
remote_agent = client.agent_engines.create(
    config={
        "source_packages": source_packages,  # Required.
        "entrypoint_module": entrypoint_module,  # Required.
        "entrypoint_object": entrypoint_object,  # Required.
        "class_methods": class_methods,  # Required.
        "requirements_file": requirements_file,  # Optional.
        "display_name": display_name,  # Optional.
        "description": description,  # Optional.
        "labels": labels,  # Optional.
        "env_vars": env_vars,  # Optional.
        "build_options": build_options,  # Optional.
        "identity_type": identity_type,  # Optional.
        "service_account": service_account,  # Optional.
        "min_instances": min_instances,  # Optional.
        "max_instances": max_instances,  # Optional.
        "resource_limits": resource_limits,  # Optional.
        "container_concurrency": container_concurrency,  # Optional.
        "encryption_spec": encryption_spec,  # Optional.
        "agent_framework": agent_framework,  # Optional.
    },
)
```
The parameters for inline source deployment are:

- `source_packages` (Required, `list[str]`): A list of local file or directory paths to include in the deployment. The total size of the files and directories in `source_packages` shouldn't exceed 8 MB.
- `entrypoint_module` (Required, `str`): The fully qualified Python module name containing the agent entrypoint (for example, `agent_dir.agent`).
- `entrypoint_object` (Required, `str`): The name of the callable object within the `entrypoint_module` that represents the agent application (for example, `root_agent`).
- `class_methods` (Required, `list[dict]`): A list of dictionaries that define the agent's exposed methods. Each dictionary includes a `name` (Required), an `api_mode` (Required), and a `parameters` field. Refer to List supported operations for more information about the methods for a custom agent. For example:

  ```python
  "class_methods": [
      {
          "name": "method_name",
          # Possible options are: "", "async", "async_stream", "stream", "bidi_stream"
          "api_mode": "",
          "parameters": {
              "type": "object",
              "properties": {
                  "param1": {
                      "type": "string",
                      "description": "Description of param1",
                  },
                  "param2": {"type": "integer"},
              },
              "required": ["param1"],
          },
      },
  ]
  ```

- `requirements_file` (Optional, `str`): The path to a pip requirements file within the paths specified in `source_packages`. Defaults to `requirements.txt` at the root directory of the packaged source.
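Because the packaged source is limited to 8 MB, a local preflight size check before calling `create` can avoid a failed deployment round trip. The helper below is hypothetical (not part of the SDK):

```python
import pathlib
import tempfile

MAX_SOURCE_BYTES = 8 * 1024 * 1024  # documented limit for source_packages


def total_source_size(paths) -> int:
    """Sum the on-disk size of every file reachable from the given paths."""
    total = 0
    for p in map(pathlib.Path, paths):
        files = [p] if p.is_file() else [f for f in p.rglob("*") if f.is_file()]
        total += sum(f.stat().st_size for f in files)
    return total


# Demo on a throwaway directory standing in for your agent source tree:
with tempfile.TemporaryDirectory() as tmp:
    pathlib.Path(tmp, "agent.py").write_text("root_agent = object()\n")
    size = total_source_size([tmp])
    within_limit = size <= MAX_SOURCE_BYTES
print(size, within_limit)
```

Run the check against the same paths you plan to pass as `source_packages`, and raise or trim the tree if the total exceeds the limit.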
Deployment takes a few minutes, during which the following steps happen in the background:
- The Agent Platform SDK creates a `tar.gz` archive of the paths specified in `source_packages`.
- This archive is encoded and sent directly to the Agent Platform API.
- The Agent Runtime service receives the archive, extracts it, installs dependencies from `requirements_file` (if provided), and starts the agent application using the specified `entrypoint_module` and `entrypoint_object`.
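Conceptually, the archiving step above resembles the following stdlib sketch (the SDK's real implementation may differ in details such as encoding and compression settings):

```python
import io
import pathlib
import tarfile
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # A toy source tree standing in for your agent directory.
    root = pathlib.Path(tmp) / "agent_directory"
    root.mkdir()
    (root / "agent.py").write_text("root_agent = object()\n")
    (root / "requirements.txt").write_text("google-cloud-aiplatform\n")

    # Bundle the source_packages paths into a tar.gz archive.
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        tar.add(root, arcname="agent_directory")

    # The runtime extracts the archive and imports the entrypoint from it.
    buf.seek(0)
    with tarfile.open(fileobj=buf, mode="r:gz") as tar:
        members = sorted(tar.getnames())

print(members)
# ['agent_directory', 'agent_directory/agent.py', 'agent_directory/requirements.txt']
```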
The following is an example of deploying an agent from source files:
```python
import vertexai

# Example file structure:
# /agent_directory
# ├── agent.py
# ├── requirements.txt

# Example agent_directory/agent.py:
# class MyAgent:
#     def ask(self, question: str) -> str:
#         return f"Answer to {question}"
#
# root_agent = MyAgent()

remote_agent = client.agent_engines.create(
    config={
        "display_name": "My Agent",
        "description": "An agent deployed from a local source.",
        "source_packages": ["agent_directory"],
        "entrypoint_module": "agent_directory.agent",
        "entrypoint_object": "root_agent",
        "requirements_file": "requirements.txt",
        "class_methods": [
            {
                "name": "ask",
                "api_mode": "",
                "parameters": {
                    "type": "object",
                    "properties": {"question": {"type": "string"}},
                    "required": ["question"],
                },
            },
        ],
        # Other optional configs:
        # "env_vars": {...},
        # "service_account": "...",
    },
)
```
Dockerfile
Deploying from a Dockerfile on Agent Platform follows a similar approach to deploying from source files. The only change is to replace `entrypoint_module`, `entrypoint_object`, and (optionally) `requirements_file` in the config with an `image_spec`.
The following is an example of deploying an agent using a Dockerfile:
```python
import vertexai

# Example file structure:
# /current_directory
# ├── agent.py
# ├── main.py
# ├── requirements.txt
# ├── Dockerfile

remote_agent = client.agent_engines.create(
    config={
        "source_packages": [
            "agent.py",
            "main.py",
            "requirements.txt",
            "Dockerfile",
        ],
        # An empty image_spec tells Agent Runtime to use the Dockerfile.
        "image_spec": {},
        # Other optional configs
    },
)
```
Container Image
To deploy from a container image, first follow the setup instructions for Bring your own container, making sure to install a version of `google-cloud-aiplatform` satisfying `>=1.144`. Next, run the following code:

```python
remote_agent = client.agent_engines.create(
    config={
        "container_spec": {
            "image_uri": "CONTAINER_IMAGE_URI",
        },
        # Other optional configs
    },
)
```

where CONTAINER_IMAGE_URI corresponds to the URI of the container image in Artifact Registry (such as `us-central1-docker.pkg.dev/my-project/my-repo/my-image:tag`).
Deployment latency depends on the total time it takes to install the required packages. Once deployed, `remote_agent` corresponds to an instance of `local_agent` that is running on Agent Platform and can be queried or deleted.

The `remote_agent` object corresponds to an `AgentEngine` class that contains the following:

- `remote_agent.api_resource`, with information about the deployed agent. You can also call `remote_agent.operation_schemas()` to return the list of operations that the `remote_agent` supports. See Supported operations for details.
- `remote_agent.api_client`, which allows for synchronous service interactions.
- `remote_agent.async_api_client`, which allows for asynchronous service interactions.
Optional: Get the agent resource ID
Each deployed agent has a unique identifier. You can run the following command to get the resource name for your deployed agent:
```python
remote_agent.api_resource.name
```
The response should look like the following string:
"projects/ PROJECT_NUMBER
/locations/ LOCATION
/reasoningEngines/ RESOURCE_ID
"
where:

- `PROJECT_NUMBER` is the number of the Google Cloud project where the deployed agent runs.
- `LOCATION` is the region where the deployed agent runs.
- `RESOURCE_ID` is the ID of the deployed agent as a `reasoningEngine` resource.
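If you need one component of the resource name on its own (for example, the `RESOURCE_ID` for a REST call), a small parsing helper works. This helper is hypothetical, not part of the SDK; it relies on the `collection/value` pairing of resource names:

```python
def parse_resource_name(name: str) -> dict:
    """Split the 'collection/value' pairs of a resource name into a dict."""
    parts = name.split("/")
    if len(parts) % 2 != 0:
        raise ValueError(f"malformed resource name: {name!r}")
    return dict(zip(parts[::2], parts[1::2]))


info = parse_resource_name(
    "projects/123456789/locations/us-central1/reasoningEngines/987654321"
)
print(info["reasoningEngines"])  # 987654321
print(info["locations"])         # us-central1
```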
Optional: List the supported operations
Each deployed agent has a list of supported operations. You can use the `AgentEngine.operation_schemas()` method to get the list of operations supported by the deployed agent:
```python
remote_agent.operation_schemas()
```
The schema for each operation is a dictionary that documents the information of a method for the agent that you can call. The set of supported operations depends on the framework you used to develop your agent:
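As a hypothetical illustration, suppose the returned schemas mirror the `class_methods` shape shown earlier on this page (`name`, `api_mode`, `parameters`); you could then group the operation names by API mode to see how the agent can be called:

```python
# Hypothetical schema list; a real agent's operation_schemas() output
# depends on the framework the agent was built with.
schemas = [
    {"name": "ask", "api_mode": "", "parameters": {"type": "object"}},
    {"name": "ask_stream", "api_mode": "stream", "parameters": {"type": "object"}},
]

by_mode: dict = {}
for schema in schemas:
    # Treat an empty api_mode as the default synchronous mode.
    by_mode.setdefault(schema["api_mode"] or "sync", []).append(schema["name"])

print(by_mode)  # {'sync': ['ask'], 'stream': ['ask_stream']}
```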
Optional: Grant the deployed agent permissions
If the deployed agent needs to be granted any additional permissions, follow the instructions in Set up the identity and permissions for your agent .
What's next
Manage deployed agents
Learn how to manage agents that have been deployed to the Agent Platform managed runtime.

