Builds trust in the Rogo AI platform among 6,000+ investment bankers and analysts
Reduces hallucination rates from 34.1% to 3.9% by switching to Gemini 2.5 Flash
Accelerates innovation by reducing AI modeling timelines from months to weeks
Supports 10x growth in tokens per query while reducing latency and cost
Improves performance and cost controls through provisioned throughput for 80% to 90% of peak usage
Rogo uses Gemini, Vertex AI, Dataflow, and Spanner to spark rapid innovation, completing AI modeling processes in weeks instead of months.
We needed to build out our AI solutions with greater scalability and agility to ensure that we could support our commercial growth while maintaining higher accuracy on financial tasks than other general purpose tools. Partnering with Google Cloud sets our engineering team up for success with proven technologies that enable our solutions.
Joseph Kim
VP of Engineering, Rogo
Founded in 2021 in New York City by Gabriel Stengel and John Willett, Rogo is building the leading AI solution for Wall Street.
Rogo helps investment banks and private equity firms turn manual work into automated workflows. By unifying internal and external data, Rogo delivers a highly specialized, fine-tuned model for finance that automates research, compiles analysis, and uncovers insights, all in one secure platform.
Rogo's platform consolidates a firm's proprietary, internal data — such as memos, research, and internal files — alongside external sources including SEC filings, PitchBook, S&P Global, FactSet, Preqin, and news wires. Finance professionals can then automate full workflows such as building slide decks, generating company profiles, doing competitive benchmarking, and drafting investment memos.
When Rogo evaluated AI environments to accomplish its ambitious goals, the engineering team prioritized several capabilities, including scalability, high performance, reliability, efficiency, and trustworthiness. Rogo partnered with Google Cloud to deliver on these needs.
In Google Cloud, Rogo quickly discovered a powerful platform to achieve its goals of supporting finance professionals.
Rogo's core multimodal AI workloads process large volumes of text and visual data, and the company found that Gemini outperformed models from other vendors. For example, switching to Gemini 2.5 Flash reduced hallucination rates from 34.1% to just 3.9%.
"We compared several alternative AI models and found that Gemini offers the largest effective capacity for longer context retrieval windows and document sizes," says Joseph Kim, VP of Engineering at Rogo.
Rogo's engineering team uses both Gemini 2.5 Flash and Pro models. Kim appreciates the flexibility of the controlled thinking budget in 2.5 Flash to manage how much the model "thinks" before generating a response. Currently, Rogo uses Gemini to retrieve information and process very large quantities of semantic and visual data.
Rogo also relies on the provisioned throughput capability of Vertex AI, which reserves high throughput capacity for the company's business-critical generative AI workloads in exchange for an advance commitment. Rogo uses this feature for 80% to 90% of its peak usage to ensure high performance and better control costs.
For example, when Rogo onboards a new client, it can give Google Cloud a one-week lead time for a doubling of its generative AI workloads to support excellent user experiences. "There are very few AI model providers out there that can accommodate such demanding, short-turnaround requests," says Kim.
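To illustrate the 80% to 90% provisioned-throughput split described above, here is a minimal sizing sketch. The function name and the sample numbers are hypothetical, not Rogo's actual workload figures; it simply models reserving most of peak demand as provisioned throughput and letting the remainder fall to on-demand capacity.

```python
# Hypothetical sizing sketch: reserve provisioned throughput for ~85% of
# peak demand and route the remainder to on-demand (pay-as-you-go) capacity.
# Numbers are illustrative, not Rogo's actual workload figures.

def split_capacity(peak_tokens_per_sec: float, reserved_fraction: float = 0.85):
    """Return (provisioned, on_demand_at_peak) token throughput."""
    provisioned = peak_tokens_per_sec * reserved_fraction
    on_demand_at_peak = peak_tokens_per_sec - provisioned
    return provisioned, on_demand_at_peak

provisioned, overflow = split_capacity(100_000)  # e.g. 100k tokens/sec at peak
print(provisioned, overflow)  # 85000.0 15000.0
```

Reserving most (but not all) of peak demand keeps costs predictable for steady traffic while leaving bursts to on-demand capacity.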
The Rogo engineering team uses Dataflow with Apache Beam for massive data processing workloads. The team can quickly experiment with enhancements and new features for Rogo's core platform, such as rebuilding all of its search indices from scratch with slightly different configurations to optimize results.
"AI modeling processes that used to take weeks or months now only take a few hours with Apache Beam and Dataflow," Kim says. "We've dramatically accelerated the pace of experimentation by reducing some pipelines from a one-month runtime to just three hours."
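The heart of a reindexing job like this, tokenizing every document and grouping postings into an inverted index, is a map/group operation that Beam distributes across Dataflow workers. Here is a single-process sketch of that core logic; the document names and tokenizer are illustrative, since Rogo's actual pipelines and schema are not public.

```python
from collections import defaultdict

# Illustrative single-process version of a reindexing pipeline's core logic.
# In Dataflow, the "map" (tokenize) and "group" (build postings) steps below
# would run as parallel Beam transforms over many workers.

def tokenize(text: str) -> list[str]:
    return text.lower().split()

def build_inverted_index(docs: dict[str, str]) -> dict[str, set[str]]:
    """Map each token to the set of document IDs containing it."""
    index: dict[str, set[str]] = defaultdict(set)
    for doc_id, text in docs.items():          # map step
        for token in tokenize(text):
            index[token].add(doc_id)           # group step
    return dict(index)

docs = {
    "memo-1": "Quarterly revenue rose on strong deal flow",
    "filing-2": "Revenue guidance revised in the quarterly filing",
}
index = build_inverted_index(docs)
print(sorted(index["revenue"]))  # ['filing-2', 'memo-1']
```

Because each document is tokenized independently and postings are merged by key, the whole rebuild parallelizes cleanly, which is what lets a month-long pipeline collapse to hours on Dataflow.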
Rogo chose Spanner as its globally distributed database to enable seamless search workflows alongside vector retrieval workflows while maintaining atomicity, consistency, isolation, and durability (ACID) guarantees.
"Spanner is a gem of technology that more in the engineering community would benefit from," Kim says. "It's one of the few databases that scale writes horizontally with the number of nodes. It's not just a single-threaded, single-node, vertically scaled system. Our goal was to have everything in one system and Spanner does that."
Rogo also leverages retrieval augmented generation (RAG). Kim notes that every information retrieval system needs both token-based retrieval and embedding-based retrieval capabilities. In the past, his team had to write complex logic to compensate for asynchronous search indexes and manage complicated resource allocation.
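A common way to merge those two retrieval signals into one ranking is reciprocal rank fusion (RRF). The sketch below shows the standard technique with hypothetical document IDs; the source does not say which fusion method Rogo actually uses.

```python
# Reciprocal rank fusion (RRF): merge token-based and embedding-based result
# lists into one ranking. A standard hybrid-retrieval technique; the source
# does not specify which fusion method Rogo uses in production.

def rrf(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse several ranked doc-ID lists; higher fused score ranks first."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

token_hits = ["10-K-2023", "memo-7", "pitchbook-3"]   # keyword search results
vector_hits = ["memo-7", "news-12", "10-K-2023"]      # embedding search results
print(rrf([token_hits, vector_hits]))
# ['memo-7', '10-K-2023', 'news-12', 'pitchbook-3']
```

Documents that rank well in both lists (here, the hypothetical "memo-7") rise to the top, which is exactly the behavior a hybrid retrieval system wants.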
Spanner supports high-quality embedding search and token-based retrieval directly through the same interface and using the same Spanner SQL queries that Rogo uses for regular CRUD operations. When Rogo engineers want to store data, they can write it once to Spanner, which automatically handles all of the various indices.
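The write-once pattern can be pictured with a schema sketch. The table and column names below are hypothetical, not Rogo's schema; they illustrate how a single Spanner table can carry both a full-text token index (via a generated `TOKENLIST` column) and a vector embedding column, so one write feeds both retrieval paths.

```sql
-- Hypothetical schema: one table serving both token and embedding retrieval.
CREATE TABLE Documents (
  DocId     STRING(36) NOT NULL,
  Body      STRING(MAX),
  -- Generated token index maintained automatically on write.
  BodyTokens TOKENLIST AS (TOKENIZE_FULLTEXT(Body)) HIDDEN,
  -- Embedding vector for semantic retrieval (dimension is illustrative).
  Embedding ARRAY<FLOAT32>(vector_length=>768),
) PRIMARY KEY (DocId);

CREATE SEARCH INDEX DocumentsSearchIndex ON Documents(BodyTokens);

-- Token-based retrieval:
--   SELECT DocId FROM Documents
--   WHERE SEARCH(BodyTokens, @query)
--   ORDER BY SCORE(BodyTokens, @query) DESC LIMIT 10;
-- Embedding-based retrieval:
--   SELECT DocId FROM Documents
--   ORDER BY COSINE_DISTANCE(Embedding, @query_embedding) LIMIT 10;
```

With both indices maintained by the database, the asynchronous-index bookkeeping Kim describes disappears: a single insert keeps keyword and vector search consistent.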
Kim also praised Spanner's recent launch of token-based retrieval with automatic synonym expansion and full support for Chinese, Japanese, and Korean (CJK) languages, including inverse text normalization (ITN) support, which helps Rogo better serve global clients.
Pleased with its results, Rogo has committed to Spanner as its primary solution for information storage, as well as for retrieval for both hybrid search embeddings and token search.
Innovation at Rogo is never bottlenecked by a lack of good ideas. Now, with Google Cloud, we’re no longer constrained by throughput and capacity. We can test and execute more ideas to drive faster product innovation and business growth.
Joseph Kim
VP of Engineering, Rogo
The integrated Google Cloud solution gives Rogo better observability and manageability along with seamless autoscaling. In addition, many of Rogo's clients have confided that they simply trust Google more than other AI vendors, with greater respect for its security features, regulatory preparedness, and established processes.
Rogo is also part of the Google for Startups Cloud Program, which provides valuable Google Cloud credits and other services. As an AI-first, scale-tier startup, Rogo qualified for benefits including up to $350,000 in Cloud credits, $12,000 in Google Cloud Enhanced Support credits, and invitations to exclusive webinars and live Q&As with Google Cloud AI product managers, engineers, and developer advocates.
Since joining the ranks of Series B+ startups, Rogo has gained additional access to tailored credits and discounts for dedicated Google Cloud customer engineers and learning resources, positioning the company for continued success as a financial services AI innovator.
Rogo is a secure AI platform with full workflow automation for bankers, investors, and financial institutions, breaking open the bottleneck of iterative, manual analyses to deliver real insights, fast.
Industry: Financial Services
Location: United States
Products: Google Cloud, Dataflow, Gemini, Spanner, Vertex AI