Generate videos with Veo on Vertex AI from an image
You can use Veo on Vertex AI to generate new videos from an image and text prompt.
Supported interfaces include the Google Cloud console and the Vertex AI
API.
For more information about writing effective text prompts for video generation,
see the Veo prompt guide.
Before you begin
Sign in to your Google Cloud account. If you're new to
Google Cloud, create an account to evaluate how our products perform in
real-world scenarios. New customers also get $300 in free credits to
run, test, and deploy workloads.
In the Google Cloud console, on the project selector page,
select or create a Google Cloud project.
Image generated using Imagen on Vertex AI from the prompt: A Crochet
elephant in intricate patterns walking on the savanna
You can generate novel videos using only an image as an input, or an image and
descriptive text as the inputs. The following samples show you basic
instructions to generate videos from an image and text.
Console
In the Google Cloud console, go to the Vertex AI Studio > Media
Studio page.
REST

Use the following command to send a video generation request. This
request begins a long-running operation and stores output to a
Cloud Storage bucket that you specify.
Before using any of the request data,
make the following replacements:
PROJECT_ID: A string
representing your Google Cloud project ID.
MODEL_ID: A string
representing the model ID to use. The following are accepted values:
veo-2.0-generate-001 (GA)
veo-3.0-generate-preview (Preview)
TEXT_PROMPT: The
text prompt used to guide video generation.
INPUT_IMAGE: A
base64-encoded string that represents the input image. For best quality, we
recommend that the input image's resolution be 720p (1280 x 720 pixels) or
higher, and have an aspect ratio of either 16:9 or 9:16. Images of other
aspect ratios or sizes may be resized or centrally cropped when the image is
uploaded.
MIME_TYPE: A string
representing the MIME type of the input image. Only images of the
following MIME types are supported:
"image/jpeg"
"image/png"
OUTPUT_STORAGE_URI: Optional: A
string representing the Cloud Storage bucket to store the output videos.
If not provided, video bytes are returned in the response. For example: "gs://video-bucket/output/".
RESPONSE_COUNT:
The number of video files to generate. The accepted range of values is 1-4.
DURATION: An integer
representing the length, in seconds, of the generated video files. The
following are the accepted values for each model:
Veo 2 models: 5-8
Veo 3 models: 8
Additional optional parameters
Use the following optional variables depending on your use
case. Add some or all of the following parameters in the"parameters": {}object.
ASPECT_RATIO:
Optional: A string value that describes the aspect ratio of the generated
videos. You can use the following values:
"16:9"for landscape
"9:16"for portrait
The default value is"16:9"
NEGATIVE_PROMPT: Optional: A string
value that describes content that you want to prevent the model from
generating.
PERSON_SAFETY_SETTING:
Optional: A string value that controls the safety setting for generating
people or faces. You can use the following values:
"allow_adult": Only allow generation of adult people and
faces.
"disallow": Doesn't generate people or faces.
The default value is"allow_adult".
RESOLUTION:
Optional: A string value that controls the resolution of the generated
video. Supported by Veo 3 models only. You can use the following
values:
"720p"
"1080p"
The default value is"720p".
RESPONSE_COUNT:
Optional. An integer value that describes the number of videos to generate.
The accepted range of values is 1-4.
SEED_NUMBER:
Optional. A uint32 value that the model uses to generate deterministic
videos. Specifying a seed number with your request without changing other
parameters guides the model to produce the same videos. The accepted range
of values is 0-4294967295.
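As an illustration only, the following Python fragment sketches how these optional values might be collected into the "parameters" object before the request body is serialized. The key names (aspectRatio, negativePrompt, personGeneration, resolution, sampleCount, seed) are assumed mappings of the placeholders above, not confirmed field names, so verify them against the Veo API reference for your model version.

# Hypothetical "parameters" object with the optional fields added.
# Key names are assumed mappings of the placeholders described above.
parameters = {
    "storageUri": "gs://video-bucket/output/",        # OUTPUT_STORAGE_URI
    "durationSeconds": 8,                              # DURATION
    "aspectRatio": "16:9",                             # ASPECT_RATIO
    "negativePrompt": "blurry, low-quality footage",   # NEGATIVE_PROMPT
    "personGeneration": "allow_adult",                 # PERSON_SAFETY_SETTING
    "resolution": "720p",                              # RESOLUTION (Veo 3 models only)
    "sampleCount": 2,                                  # RESPONSE_COUNT
    "seed": 42,                                        # SEED_NUMBER
}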
HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:predictLongRunning
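For illustration, here's a minimal Python sketch of sending this request with the placeholders above filled in. The input.jpg file name and the example prompt are placeholders, and the request body keys (instances, prompt, image.bytesBase64Encoded, image.mimeType, storageUri, sampleCount, durationSeconds) are assumptions made for this sketch, so confirm the exact body schema against the model's API reference.

import base64

import google.auth
import google.auth.transport.requests
import requests

PROJECT_ID = "your-project-id"        # replace with your project ID
MODEL_ID = "veo-2.0-generate-001"     # or another supported model ID

# Obtain an access token through Application Default Credentials.
credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

# Base64-encode the input image (INPUT_IMAGE).
with open("input.jpg", "rb") as f:
    input_image = base64.b64encode(f.read()).decode("utf-8")

body = {
    "instances": [
        {
            "prompt": "A crochet elephant walking on the savanna",  # TEXT_PROMPT
            "image": {
                "bytesBase64Encoded": input_image,  # INPUT_IMAGE
                "mimeType": "image/jpeg",           # MIME_TYPE
            },
        }
    ],
    "parameters": {
        "storageUri": "gs://video-bucket/output/",  # OUTPUT_STORAGE_URI
        "sampleCount": 2,                           # RESPONSE_COUNT
        "durationSeconds": 8,                       # DURATION
    },
}

url = (
    f"https://us-central1-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/us-central1/publishers/google/models/{MODEL_ID}:predictLongRunning"
)
response = requests.post(
    url, json=body, headers={"Authorization": f"Bearer {credentials.token}"}
)
response.raise_for_status()
# The JSON response includes the full operation name, for example:
# projects/.../locations/us-central1/.../operations/OPERATION_ID
print(response.json()["name"])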
This request returns a full operation name with a unique operation ID. Use this
full operation name to poll the status of the video generation request.
OPERATION_ID: The unique operation ID returned in the original generate video
request.
HTTP method and URL:
POST https://us-central1-aiplatform.googleapis.com/v1/projects/PROJECT_ID/locations/us-central1/publishers/google/models/MODEL_ID:fetchPredictOperation
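As a sketch under the same assumptions as the request example above, you could poll the operation from Python as follows. The operationName request body field is assumed from the fetchPredictOperation pattern and should be verified against the API reference.

import time

import google.auth
import google.auth.transport.requests
import requests

PROJECT_ID = "your-project-id"      # replace with your project ID
MODEL_ID = "veo-2.0-generate-001"   # model used in the original request
# Full operation name returned by the predictLongRunning call.
OPERATION_NAME = (
    f"projects/{PROJECT_ID}/locations/us-central1/publishers/google/models/"
    f"{MODEL_ID}/operations/OPERATION_ID"
)

credentials, _ = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
credentials.refresh(google.auth.transport.requests.Request())

url = (
    f"https://us-central1-aiplatform.googleapis.com/v1/projects/{PROJECT_ID}"
    f"/locations/us-central1/publishers/google/models/{MODEL_ID}:fetchPredictOperation"
)

# Poll until the long-running operation reports that it is done.
while True:
    response = requests.post(
        url,
        json={"operationName": OPERATION_NAME},  # assumed request body field
        headers={"Authorization": f"Bearer {credentials.token}"},
    )
    operation = response.json()
    if operation.get("done"):
        break
    time.sleep(15)

# When complete, the operation response lists the generated videos
# (or they are written to the Cloud Storage bucket you specified).
print(operation.get("response"))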
Gen AI SDK for Python

Set environment variables to use the Gen AI SDK with Vertex AI:
# Replace the `GOOGLE_CLOUD_PROJECT` and `GOOGLE_CLOUD_LOCATION` values
# with appropriate values for your project.
export GOOGLE_CLOUD_PROJECT=GOOGLE_CLOUD_PROJECT
export GOOGLE_CLOUD_LOCATION=global
export GOOGLE_GENAI_USE_VERTEXAI=True
import time

from google import genai
from google.genai.types import GenerateVideosConfig, Image

client = genai.Client()

# TODO(developer): Update and un-comment below line
# output_gcs_uri = "gs://your-bucket/your-prefix"

operation = client.models.generate_videos(
    model="veo-3.0-generate-preview",
    prompt="Extreme close-up of a cluster of vibrant wildflowers swaying gently in a sun-drenched meadow.",
    image=Image(
        gcs_uri="gs://cloud-samples-data/generative-ai/image/flowers.png",
        mime_type="image/png",
    ),
    config=GenerateVideosConfig(
        aspect_ratio="16:9",
        output_gcs_uri=output_gcs_uri,
    ),
)

while not operation.done:
    time.sleep(15)
    operation = client.operations.get(operation)
    print(operation)

if operation.response:
    print(operation.result.generated_videos[0].video.uri)

# Example response:
# gs://your-bucket/your-prefix
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-05 UTC."],[],[],null,[]]