This page introduces the vector databases supported by Vertex AI RAG Engine. It also explains how to connect a vector database (vector store) to your RAG corpus.
Vector databases play a crucial role in enabling retrieval for RAG applications. They provide a specialized way to store and query vector embeddings, which are mathematical representations of text or other data that capture semantic meaning and relationships. Vector embeddings let RAG systems quickly and accurately find the most relevant information in a large knowledge base, even for complex or nuanced queries. When combined with an embedding model, a vector database helps overcome the limitations of LLMs and provides more accurate, relevant, and comprehensive responses.
Supported vector databases
When you create a RAG corpus, Vertex AI RAG Engine offers the enterprise-ready RagManagedDb as the default vector database, which requires no additional provisioning or management. RagManagedDb offers both KNN and ANN search options and lets you switch to a basic tier for quick prototyping and experimentation. To learn more about choosing a retrieval strategy on RagManagedDb or updating the tier, see Use RagManagedDb with RAG. To have Vertex AI RAG Engine automatically create and manage the vector database for you, see Create a RAG corpus.
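For orientation, the following is a minimal sketch of creating a corpus that uses the default RagManagedDb with the Vertex AI SDK for Python. The project ID, location, and display name are placeholders, and the exact module path (vertexai.rag versus vertexai.preview.rag) depends on your SDK version.

```python
import vertexai
from vertexai import rag  # in older SDK versions: from vertexai.preview import rag

# Placeholder project and region; replace with your own values.
vertexai.init(project="your-project-id", location="us-central1")

# With no vector database configuration, Vertex AI RAG Engine provisions
# and manages RagManagedDb for this corpus automatically.
rag_corpus = rag.create_corpus(display_name="my-rag-corpus")

# The corpus resource name looks like
# projects/.../locations/us-central1/ragCorpora/...
print(rag_corpus.name)
```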
In addition to the default RagManagedDb, Vertex AI RAG Engine lets you provision and use your own vector database within your RAG corpus. In this case, you are responsible for the lifecycle and scalability of your vector database.
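As a hedged illustration of the self-managed path, this sketch configures a corpus against an existing Pinecone index with the Vertex AI SDK for Python. The backend_config parameter and the RagVectorDbConfig and Pinecone class names reflect one version of the SDK and are assumptions here; the display name and index name are placeholders, and the authentication setup is covered on the Pinecone integration page.

```python
import vertexai
from vertexai import rag  # module path and class names vary across SDK versions

vertexai.init(project="your-project-id", location="us-central1")

# Assumption: the SDK exposes a Pinecone configuration class that is passed
# through RagVectorDbConfig; check the SDK reference for the exact names in
# your version. Authentication to Pinecone (an API key stored in
# Secret Manager) is configured as described on the Pinecone integration page.
rag_corpus = rag.create_corpus(
    display_name="my-pinecone-backed-corpus",
    backend_config=rag.RagVectorDbConfig(
        vector_db=rag.Pinecone(index_name="your-pinecone-index"),
    ),
)

print(rag_corpus.name)
```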
Compare vector database options
The following comparison lists the vector databases that are supported within Vertex AI RAG Engine and summarizes their advantages, ideal use cases, considerations, and supported distance measures. Each option has its own page that explains how to use it within your RAG corpus.
RagManagedDb (default)
RagManagedDb is a regionally distributed, scalable database service that offers very high consistency and high availability and can be used for vector search.
Advantages:
- No setup required.
- Good for enterprise-scale and small-scale use cases.
- Very high consistency.
- High availability.
- Low latency.
- Excellent for transactional workloads.
- CMEK enabled.
Ideal use cases:
- Generating high-volume documents.
- Building enterprise-scale RAG.
- Developing a quick proof of concept.
- Keeping provisioning and maintenance overhead low.
- Using with chat bots.
- Building RAG applications.
Considerations:
- For optimal recall, the ANN feature requires that the index be rebuilt after major changes to your data.
Supported distance measures:
- cosine
Vertex AI Vector Search
Advantages:
- Integrates with other Google Cloud services.
- Scalability and reliability are supported by Google Cloud infrastructure.
- Uses pay-as-you-go pricing.
Ideal use cases:
- Generating high-volume documents.
- Building enterprise-scale RAG.
- Managing vector database infrastructure.
- Existing Google Cloud customers or anyone looking to use multiple Google Cloud services.
Considerations:
- Updates aren't reflected immediately.
- Vendor lock-in with Google Cloud.
- Could be more expensive depending on your use case.
Supported distance measures:
- cosine
- dot-product
Vertex AI Feature Store
Advantages:
- Integrates with Vertex AI and other Google Cloud services.
- Scalability and reliability are supported by Google Cloud infrastructure.
- Leverages existing BigQuery infrastructure.
Ideal use cases:
- Generating high-volume documents.
- Building enterprise-scale RAG.
- Managing vector database infrastructure.
- Existing Google Cloud customers or customers looking to use multiple Google Cloud services.
Considerations:
- Changes are only available in the online store after a manual synchronization is performed.
- Vendor lock-in with Google Cloud.
Supported distance measures:
- cosine
- dot-product
- L2 squared
Weaviate
Advantages:
- Supports various data types and offers built-in graph capabilities.
- Open source with a vibrant community.
- Highly flexible and customizable.
- Supports diverse data types and modules for different modalities, such as text and images.
- Lets you choose among cloud providers, such as Google Cloud, AWS, and Azure.
Ideal use cases:
- Generating high-volume documents.
- Building enterprise-scale RAG.
- Managing vector database infrastructure.
- Existing Weaviate customers.
Considerations:
- Updates aren't reflected immediately.
- Can be more complex to set up and manage.
- Performance can vary depending on the configuration.
Supported distance measures:
- cosine
- dot-product
- L2 squared
- hamming
- manhattan
Pinecone
Advantages:
- Lets you get started quickly.
- Excellent scalability and performance.
- Focuses on vector search, with advanced features such as filtering and metadata search.
- Lets you choose among cloud providers, such as Google Cloud, AWS, and Azure.
Ideal use cases:
- Generating high-volume documents.
- Building enterprise-scale RAG.
- Managing vector database infrastructure.
- Existing Pinecone customers.
Considerations:
- Updates aren't reflected immediately.
- Can be more expensive than other options.
- Quotas and limits restrict scale and performance.
- Limited control over the underlying infrastructure.
Supported distance measures:
- cosine
- euclidean
- dot-product
What's next
- To create a RAG corpus, see Create a RAG corpus example.
- To list all of the RAG corpora, see List RAG corpora example.
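As a quick starting point for those steps, the following minimal sketch lists the RAG corpora in a project with the Vertex AI SDK for Python; the project ID and location are placeholders, and the module path may differ by SDK version.

```python
import vertexai
from vertexai import rag  # module path may differ by SDK version

vertexai.init(project="your-project-id", location="us-central1")

# List every RAG corpus in the project and location, including corpora that
# use RagManagedDb and corpora backed by self-managed vector databases.
for corpus in rag.list_corpora():
    print(corpus.name, corpus.display_name)
```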