Consider the following best practices and testing guidance when implementing your conversational commerce agent interface.
Implement best practices
Consider these best practices when implementing your conversational commerce agent interface:
- Visitor ID consistency: Ensure that a unique `visitor_id` is consistently sent with each request for a given end user. This is vital for accurate personalization and model training. This identifier should ideally remain consistent for an end user across sessions and signed-in or signed-out states.
- Branch management: While `default_branch` is common, ensure you are using the correct branch ID if your product catalog is structured with multiple branches.
- Search API interaction: For `SIMPLE_PRODUCT_SEARCH` and any case where `refined_search` is provided, remember to make a separate call to the core Search API (`SearchService.Search`) using the `query` from the `refined_search` field or the original query to get the actual product listings (see the sketch after this list). The Conversational API primarily focuses on the conversational experience and user intent understanding rather than directly returning product results.
- User interface design: Design your web interface to clearly present `conversational_text_response`, `followup_question`, and `refined_search` options in an intuitive manner to guide your users.
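As a hedged sketch of that follow-up Search API call, the snippet below uses the Retail API Python client. The project ID, serving config name (`default_search`), and function name are placeholders, not values from this guide:

```python
from google.cloud import retail_v2


def fetch_products(refined_query: str, visitor_id: str):
    """Fetch product listings for a refined query via the core Search API."""
    client = retail_v2.SearchServiceClient()
    request = retail_v2.SearchRequest(
        # Placeholder resource names; substitute your own project and
        # serving config.
        placement=(
            "projects/PROJECT_ID/locations/global/catalogs/default_catalog"
            "/servingConfigs/default_search"
        ),
        branch=(
            "projects/PROJECT_ID/locations/global/catalogs/default_catalog"
            "/branches/default_branch"
        ),
        query=refined_query,    # the query taken from refined_search
        visitor_id=visitor_id,  # same visitor_id used on the conversational call
        page_size=20,
    )
    # search() returns a pager; iterate over it to get SearchResult items.
    return client.search(request=request)
```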
Configure attributes for LLM generation
To ensure that the conversational commerce agent can generate responses, attributes must be set to Retrievable in your Attribute controls.
If an attribute is Indexable but not Retrievable, the agent can find the product but cannot read its details to answer user questions. The agent then often triggers a fallback response, such as "Sorry, we don't have that product."
For this reason, you should set the semantic attributes (Title, Description, Brand, Specs) to Retrievable: True so that the LLM can use them. Set the user interface or app attributes (BrandID, SkuID) to Retrievable: True if your frontend requires them for rendering. Set your backend metadata to Retrievable: False to optimize context window usage.
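These settings can also be applied programmatically through the Retail API's AttributesConfig. The sketch below uses the Python client; the attribute keys are examples only, and it assumes a client library version that exposes `retrievable_option`:

```python
from google.cloud import retail_v2

client = retail_v2.CatalogServiceClient()
ATTRIBUTES_CONFIG = (
    "projects/PROJECT_ID/locations/global/catalogs/default_catalog"
    "/attributesConfig"
)

# Example keys only; match these to your own catalog schema. Use
# replace_catalog_attribute instead if the key is already configured.
for key in ["title", "description", "brands", "attributes.specs"]:
    client.add_catalog_attribute(
        request=retail_v2.AddCatalogAttributeRequest(
            attributes_config=ATTRIBUTES_CONFIG,
            catalog_attribute=retail_v2.CatalogAttribute(
                key=key,
                retrievable_option=(
                    retail_v2.CatalogAttribute.RetrievableOption.RETRIEVABLE_ENABLED
                ),
            ),
        )
    )
```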
Plan A/B tests
While relevance is an important input metric, Vertex AI Search for commerce also takes other variables into account with the goal of optimizing for business results.
A/B readiness checklist
The checklist includes these items:
| Item | Definition | Stage |
|---|---|---|
| Event attribution scheme | Work with Google to properly segment the user events for measurement. | Pre-experiment |
| Monitoring data inputs | Ability to quickly understand when training data contains anomalies that could impact performance. | Pre-experiment |
| Event coverage | Are we instrumenting all possible outcomes associated with search or recommendations AI sessions? | Pre-experiment |
| Measurable success criteria | Documented definition of done (in measurable terms). | Pre-experiment |
| Ability to measure UX biases | Ensure consistent UX across experiment arms. | During experiment |
| Coherency between VAIS data and consumption | Verify attribution tokens, filters, order by, offset, etc., are being passed from API to UserEvents. Visitor/User IDs match between event and API requests. | During experiment |
| Approval to tune during the experiment | Plan for tuning activities, document changes, and adjust measurements and interpretation accordingly. | During experiment |
Implement proof of concept or minimum viable product
- Up-to-date and complete product catalog ingestion.
- Adherence to recommended event ingestion methods to ensure data synchronization between Google and you. Google's recommendation is for real-time event tracking, including impression data.
- Pass through necessary attributes such as experiment IDs and visitor IDs, and correctly implement search tokens where applicable (see the sketch after this list).
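A minimal sketch of a user event write that carries those fields, using the Retail API Python client; the parent path, experiment tag, and function name are placeholders:

```python
from google.cloud import retail_v2


def record_search_event(visitor_id: str, query: str, attribution_token: str):
    """Write a search user event carrying the fields the experiment needs."""
    client = retail_v2.UserEventServiceClient()
    event = retail_v2.UserEvent(
        event_type="search",
        visitor_id=visitor_id,                # must match the ID on API requests
        search_query=query,
        attribution_token=attribution_token,  # token returned by the Search API
        experiment_ids=["experiment-arm-a"],  # hypothetical experiment tag
    )
    client.write_user_event(
        request=retail_v2.WriteUserEventRequest(
            parent="projects/PROJECT_ID/locations/global/catalogs/default_catalog",
            user_event=event,
        )
    )
```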
- Verify integration.
- Test a single change at a time.
- Avoid aggressive caching.
- Ensure web interface fairness between test and control.
- Ensure traffic fairness with a traffic split based on visitor ID (see the sketch after this list).
- Ensure product data consistency.
- Apply the same business rules across test and control.
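A common way to keep the split fair and stable is to hash the visitor ID into a bucket, so a given user always lands in the same arm. A minimal sketch, assuming a 50/50 split:

```python
import hashlib


def assign_arm(visitor_id: str, test_fraction: float = 0.5) -> str:
    """Deterministically assign a visitor to an experiment arm.

    Hashing the visitor_id keeps the assignment stable across sessions,
    so the same user always sees the same arm.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform value in [0, 1]
    return "test" if bucket < test_fraction else "control"
```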
Alignment on exact definitions of the metrics tracked is critical to measuring performance accurately.
Standard metrics tracked include (see the computation sketch after this list):
- Search CTR (results relevance)
- Null search rate (intent understanding)
- Revenue per visitor / revenue per user
- Number of searches to convert
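To make the first two definitions concrete, here is a minimal sketch that computes search CTR and null search rate from a simplified event log. The `SearchEvent` shape is hypothetical; real pipelines would derive these fields from ingested user events:

```python
from dataclasses import dataclass


@dataclass
class SearchEvent:
    query: str
    result_count: int
    clicked: bool


def search_metrics(events: list[SearchEvent]) -> dict:
    """Compute search CTR and null search rate over a simple event log."""
    total = len(events)
    if total == 0:
        return {"ctr": 0.0, "null_search_rate": 0.0}
    ctr = sum(e.clicked for e in events) / total                 # clicks / searches
    null_rate = sum(e.result_count == 0 for e in events) / total  # empty results / searches
    return {"ctr": ctr, "null_search_rate": null_rate}
```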
Example experiment cadence
- Contract
- Trained model and serving configs
- Product and event data ingestion
- Compare (client) data with Commerce search telemetry and adjust accordingly
- Align on measurement baselines
- Perform offline evaluation
- Tune configurations
- A/A test to verify traffic split
- Obtain QA sign-off
- Commit to move forward with ramp

- Continue tuning/optimization
- Test incremental features
- Analyze performance across search segments
- Make any modeling/rules adjustments
- Cross-check performance
- Identify and explain anomalies
- Initiate experiment
- Share performance metrics daily
- Perform tuning
Components of a successful experiment
- Plan time to verify catalog, user event, and API consumption coherency before official launch.
- Establish quantifiable success criteria up front (ideally expressed as a change in revenue per visitor, or RPV).
- Proactively identify and explain regressions or anomalies, then fix them.
- Share measurements often, understand and document metrics definitions across experiment arms.
- Minimize UX differences between segments (common layout and visuals, just different data).
- Be mindful of merchandising / business rules (ensure they don't introduce bias).
- Measure catalog drift.
- Properly annotate experiment outcomes (by way of user events).
Roles and experiment ownership
- Event and index anomalies
- Data mapping
- Model/training adjustments
- Quality/serving anomalies
- Platform quotas/limits
- Product/client library defects
- Request augmentation (including context routing, caching, and intent processing)
- Serving configs (tuning)
- Source data enrichment
- Client performance (for example, WC threads)
- UX/API/platform/library defects
Conduct experiments in the console
- Go to the Experiments page in the Search for commerce console.
- Use the console for advanced self-service analytics for Vertex AI Search for commerce onboarding and A/B testing by applying Google's attribution methodology:
  - Monitor traffic segmentation, business metrics, and search and browse performance.
  - Apply per-search, visit-level metrics across both keyword search and browse.
  - View experiment performance as a time series with statistical significance metrics (a minimal significance sketch follows this list).
  - Use the embedded Looker platform.
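For intuition about what a statistical significance metric over two arms can look like, here is a minimal two-proportion z-test sketch, for example comparing CTR between test and control. The console applies its own attribution methodology and statistics, so this is illustrative only:

```python
import math


def two_proportion_z(successes_a: int, n_a: int,
                     successes_b: int, n_b: int) -> float:
    """Two-proportion z-score, e.g. for CTR in test vs. control arms.

    Assumes both arms have nonzero traffic; |z| > 1.96 roughly
    corresponds to significance at the 95% level.
    """
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)   # pooled proportion
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se
```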

