The ScaNN index uses tree-quantization-based indexing, in which the index learns
a search tree together with a quantization (or hashing) function. When you run
a query, the search tree is used to prune the
search space, while quantization is used to compress the index size. This pruning
speeds up the scoring of the similarity—in other words, the distance—between
the query vector and the database vectors.
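To make this concrete, here is a minimal sketch in SQL. The table name `products`, its `embedding` column, the `cosine` operator class, and the `num_leaves` value are illustrative placeholders rather than prescriptions; check the index reference for your version for the exact syntax.

```sql
-- Assumed setup: the pgvector and AlloyDB ScaNN extensions are available.
CREATE EXTENSION IF NOT EXISTS vector;
CREATE EXTENSION IF NOT EXISTS alloydb_scann;

-- Building the index is when the search tree and quantization are learned.
CREATE INDEX products_embedding_scann ON products
  USING scann (embedding cosine)
  WITH (num_leaves = 1000);   -- placeholder partition count

-- At query time, the tree prunes partitions, and quantized codes are used
-- to score distances between the query vector and the database vectors.
SELECT id, name
FROM products
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector   -- query embedding
LIMIT 10;
```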
To achieve both a high rate of queries per second (QPS)
and high recall with your nearest-neighbor queries, you must partition
the tree of your ScaNN index in a way that is most appropriate to your data
and your queries.
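For example, here is a hedged sketch of the query-time side of that trade-off, assuming the `scann.num_leaves_to_search` parameter from the ScaNN index reference; the value is a placeholder to tune against your own recall and QPS measurements.

```sql
-- Probing more leaves of the tree raises recall but lowers QPS;
-- probing fewer leaves does the opposite.
SET scann.num_leaves_to_search = 50;   -- placeholder value

SELECT id
FROM products                          -- hypothetical table from above
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector
LIMIT 10;
```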
High-dimensional embedding models can retain much of their information at a much
lower dimensionality. For example, you can retain 90% of the information with
only 20% of the embedding's dimensions. To help speed up searches over such datasets,
the AlloyDB AI ScaNN index automatically performs dimension reduction
using [Principal Component Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis)
(PCA) on the indexed vectors, which further reduces CPU and memory usage for
the vector search. For more information, see
[`scann.enable_pca`](/alloydb/omni/16.3.0/docs/reference/scann-index-reference).
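As a sketch, and assuming `scann.enable_pca` can be set at the session level before the index is built (check the reference for the exact scope and default in your version):

```sql
-- PCA-based dimension reduction is applied when the index is built,
-- so the flag must be in effect before CREATE INDEX runs.
SET scann.enable_pca = on;             -- assumed to be the default

CREATE INDEX products_embedding_scann ON products
  USING scann (embedding cosine)
  WITH (num_leaves = 1000);            -- placeholder partition count
```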
Because dimension reduction causes a minor recall loss in the index, the
AlloyDB AI ScaNN index compensates by first performing a ranking
step over a larger number of PCAed vector candidates from the index. ScaNN
then re-ranks those candidates using the original vectors.
For more information, see
[`scann.pre_reordering_num_neighbors`](/alloydb/omni/16.3.0/docs/reference/scann-index-reference).
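For illustration, raising this parameter re-ranks a larger candidate set against the original vectors, which recovers recall at some cost in latency; the value below is a placeholder, and the session-level `SET` shown is standard PostgreSQL syntax.

```sql
-- Re-rank more PCAed candidates against the original (full-dimension)
-- vectors to win back recall; lower the value to favor latency.
SET scann.pre_reordering_num_neighbors = 200;   -- placeholder value

SELECT id
FROM products
ORDER BY embedding <=> '[0.1, 0.2, 0.3]'::vector
LIMIT 10;
```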
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eScaNN index employs tree-quantization to accelerate vector similarity scoring by pruning the search space and compressing index size.\u003c/p\u003e\n"],["\u003cp\u003ePartitioning the ScaNN index tree appropriately is essential for achieving high query-per-second rates and recall in nearest-neighbor queries.\u003c/p\u003e\n"],["\u003cp\u003eAlloyDB ScaNN automatically reduces the dimensions of indexed vectors using Principal Component Analysis (PCA) to decrease CPU and memory usage.\u003c/p\u003e\n"],["\u003cp\u003eTo offset any recall loss from dimension reduction, AlloyDB ScaNN performs an initial ranking of a larger number of PCA'ed candidates and then re-ranks them using the original vectors.\u003c/p\u003e\n"]]],[],null,["Select a documentation version: 16.3.0keyboard_arrow_down\n\n- [Current (16.8.0)](/alloydb/omni/current/docs/ai/scann-vector-query-perf-overview)\n- [16.8.0](/alloydb/omni/16.8.0/docs/ai/scann-vector-query-perf-overview)\n- [16.3.0](/alloydb/omni/16.3.0/docs/ai/scann-vector-query-perf-overview)\n- [15.12.0](/alloydb/omni/15.12.0/docs/ai/scann-vector-query-perf-overview)\n- [15.7.1](/alloydb/omni/15.7.1/docs/ai/scann-vector-query-perf-overview)\n- [15.7.0](/alloydb/omni/15.7.0/docs/ai/scann-vector-query-perf-overview)\n\n\u003cbr /\u003e\n\nThis page provides a conceptual overview of improving vector query performance using AlloyDB AI's Scalable Nearest Neighbor (ScaNN) index. For more information, see [Create indexes and query vectors](/alloydb/omni/16.3.0/docs/ai/store-index-query-vectors?resource=scann).\n\n\u003cbr /\u003e\n\nThe ScaNN index uses tree-quantization-based indexing, in which indexes learn\na search tree together with a quantization (or hashing) function. When you run\na query, the search tree is used to prune the\nsearch space, while quantization is used to compress the index size. This pruning\nspeeds up the scoring of the similarity---in other words, the distance---between\nthe query vector and the database vectors.\n\nTo achieve both a high query-per-second rate (QPS)\nand a high recall with your nearest-neighbor queries, you must partition\nthe tree of your ScaNN index in a way that is most appropriate to your data\nand your queries.\n\nHigh-dimensional embedding models can retain much of the information at much\nlower dimensionality. For example, you can retain 90% of the information with\nonly 20% of the embedding's dimensions. To help speed up such datasets,\nthe AlloyDB AI ScaNN index automatically performs dimension reduction\nusing [Principal Component Analysis](https://en.wikipedia.org/wiki/Principal_component_analysis)\n(PCA) on the indexed vectors, which further reduces CPU and memory usage for\nthe vector search. For more information, see\n[`scann.enable_pca`](/alloydb/omni/16.3.0/docs/reference/scann-index-reference).\n\nBecause dimension reduction causes minor recall loss in the index, the\nAlloyDB AI ScaNN index compensates for recall loss\nby first performing a ranking\nstep with a larger number of PCAed vector candidates from the index. 
Then,\nScaNN re-ranks the PCAed vector candidates by the original vectors.\nFor more information, see [`scann.pre_reordering_num_neighbors`](/alloydb/omni/16.3.0/docs/reference/scann-index-reference).\n\nWhat's next\n\n- Learn [best practices for tuning ScaNN indexes](/alloydb/omni/16.3.0/docs/ai/best-practices-tuning-scann).\n- [Get started with vector embeddings using AlloyDB AI](https://codelabs.developers.google.com/alloydb-ai-embedding#0).\n- Learn more about the [AlloyDB AI ScaNN index](https://cloud.google.com/blog/products/databases/understanding-the-scann-index-in-alloydb?e=48754805)."]]