Starting April 29, 2025, Gemini 1.5 Pro and Gemini 1.5 Flash models are not available in projects that have no prior usage of these models, including new projects. For details, see Model versions and lifecycle.
Llama 4 Scout 17B-16E is a multimodal model that uses the Mixture-of-Experts
(MoE) architecture and early fusion, delivering state-of-the-art results for its
size class.
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-05 UTC."],[],[],null,["# Llama 4 Scout 17B-16E is a multmodal model that uses the Mixture-of-Experts\n(MoE) architecture and early fusion, delivering state-of-the-art results for its\nsize class.\n\n\n[Try in Vertex AI](https://console.cloud.google.com/vertex-ai/generative/multimodal/create/text?model=llama-4-scout-17b-16e-instruct-maas)\n\n\n[View model card in Model Garden](https://console.cloud.google.com/vertex-ai/publishers/meta/model-garden/llama-4-maverick-17b-128e-instruct-maas)"]]