Generate text from multimodal prompt

This sample demonstrates how to generate text from a multimodal prompt using the Gemini model. The prompt interleaves three images with two text labels: each label names the city and landmark shown in the image before it. Following that pattern, the model generates a matching description for the third image, which shows Christ the Redeemer in Rio de Janeiro.

Code sample

C#

Before trying this sample, follow the C# setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI C# API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

using Google.Api.Gax.Grpc;
using Google.Cloud.AIPlatform.V1;
using Google.Protobuf;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

public class MultimodalMultiImage
{
    public async Task<string> GenerateContent(
        string projectId = "your-project-id",
        string location = "us-central1",
        string publisher = "google",
        string model = "gemini-1.5-flash-001"
    )
    {
        var predictionServiceClient = new PredictionServiceClientBuilder
        {
            Endpoint = $"{location}-aiplatform.googleapis.com"
        }.Build();

        ByteString colosseum = await ReadImageFileAsync(
            "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark1.png");

        ByteString forbiddenCity = await ReadImageFileAsync(
            "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark2.png");

        ByteString christRedeemer = await ReadImageFileAsync(
            "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark3.png");

        var generateContentRequest = new GenerateContentRequest
        {
            Model = $"projects/{projectId}/locations/{location}/publishers/{publisher}/models/{model}",
            Contents =
            {
                new Content
                {
                    Role = "USER",
                    Parts =
                    {
                        new Part { InlineData = new() { MimeType = "image/png", Data = colosseum }},
                        new Part { Text = "city: Rome, Landmark: the Colosseum" },
                        new Part { InlineData = new() { MimeType = "image/png", Data = forbiddenCity }},
                        new Part { Text = "city: Beijing, Landmark: Forbidden City"},
                        new Part { InlineData = new() { MimeType = "image/png", Data = christRedeemer }}
                    }
                }
            }
        };

        using PredictionServiceClient.StreamGenerateContentStream response = predictionServiceClient.StreamGenerateContent(generateContentRequest);

        StringBuilder fullText = new();

        AsyncResponseStream<GenerateContentResponse> responseStream = response.GetResponseStream();
        await foreach (GenerateContentResponse responseItem in responseStream)
        {
            fullText.Append(responseItem.Candidates[0].Content.Parts[0].Text);
        }
        return fullText.ToString();
    }

    private static async Task<ByteString> ReadImageFileAsync(string url)
    {
        using HttpClient client = new();
        using var response = await client.GetAsync(url);
        byte[] imageBytes = await response.Content.ReadAsByteArrayAsync();
        return ByteString.CopyFrom(imageBytes);
    }
} 
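
This C# sample calls StreamGenerateContent and concatenates the streamed chunks into one string. If you don't need incremental output, the same client also exposes a non-streaming GenerateContentAsync method that returns the complete response in a single call.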

Go

Before trying this sample, follow the Go setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Go API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import (
	"context"
	"fmt"
	"io"
	"mime"
	"path/filepath"

	"cloud.google.com/go/vertexai/genai"
)

// generateMultimodalContent shows how to generate text from a multimodal prompt using the Gemini model,
// writing the response to the provided io.Writer.
func generateMultimodalContent(w io.Writer, projectID, location, modelName string) error {
	// projectID := "your-project-id"
	// location := "us-central1"
	// modelName := "gemini-1.5-flash-001"
	ctx := context.Background()

	// create prompt image parts
	colosseum := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext("landmark1.png")),
		FileURI:  "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png",
	}
	forbiddenCity := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext("landmark2.png")),
		FileURI:  "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark2.png",
	}
	newImage := genai.FileData{
		MIMEType: mime.TypeByExtension(filepath.Ext("landmark3.png")),
		FileURI:  "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark3.png",
	}
	// create a multimodal (multipart) prompt
	prompt := []genai.Part{
		colosseum,
		genai.Text("city: Rome, Landmark: the Colosseum "),
		forbiddenCity,
		genai.Text("city: Beijing, Landmark: the Forbidden City "),
		newImage,
	}

	// generate the response
	client, err := genai.NewClient(ctx, projectID, location)
	if err != nil {
		return fmt.Errorf("unable to create client: %w", err)
	}
	defer client.Close()

	model := client.GenerativeModel(modelName)

	res, err := model.GenerateContent(ctx, prompt...)
	if err != nil {
		return fmt.Errorf("unable to generate contents: %w", err)
	}

	fmt.Fprintf(w, "generated response: %s\n", res.Candidates[0].Content.Parts[0])
	return nil
} 
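
Unlike the C# sample, which downloads the image bytes and sends them inline, the Go sample references the images by their Cloud Storage URIs through genai.FileData, so the image data never passes through the client machine.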

Java

Before trying this sample, follow the Java setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Java API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

import com.google.cloud.vertexai.VertexAI;
import com.google.cloud.vertexai.api.Content;
import com.google.cloud.vertexai.api.GenerateContentResponse;
import com.google.cloud.vertexai.generativeai.ContentMaker;
import com.google.cloud.vertexai.generativeai.GenerativeModel;
import com.google.cloud.vertexai.generativeai.PartMaker;
import com.google.cloud.vertexai.generativeai.ResponseHandler;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class MultimodalMultiImage {

  public static void main(String[] args) throws IOException {
    // TODO(developer): Replace these variables before running the sample.
    String projectId = "your-google-cloud-project-id";
    String location = "us-central1";
    String modelName = "gemini-1.5-flash-001";

    multimodalMultiImage(projectId, location, modelName);
  }

  // Generates content from multiple input images.
  public static void multimodalMultiImage(String projectId, String location, String modelName)
      throws IOException {
    // Initialize client that will be used to send requests. This client only needs
    // to be created once, and can be reused for multiple requests.
    try (VertexAI vertexAI = new VertexAI(projectId, location)) {
      GenerativeModel model = new GenerativeModel(modelName, vertexAI);

      Content content = ContentMaker.fromMultiModalData(
          PartMaker.fromMimeTypeAndData("image/png", readImageFile(
              "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark1.png")),
          "city: Rome, Landmark: the Colosseum",
          PartMaker.fromMimeTypeAndData("image/png", readImageFile(
              "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark2.png")),
          "city: Beijing, Landmark: Forbidden City",
          PartMaker.fromMimeTypeAndData("image/png", readImageFile(
              "https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark3.png"))
      );

      GenerateContentResponse response = model.generateContent(content);

      String output = ResponseHandler.getText(response);
      System.out.println(output);
    }
  }

  // Reads the image data from the given URL.
  public static byte[] readImageFile(String url) throws IOException {
    URL urlObj = new URL(url);
    HttpURLConnection connection = (HttpURLConnection) urlObj.openConnection();
    connection.setRequestMethod("GET");

    int responseCode = connection.getResponseCode();

    if (responseCode == HttpURLConnection.HTTP_OK) {
      // Use try-with-resources so the streams are closed even if reading fails.
      try (InputStream inputStream = connection.getInputStream();
          ByteArrayOutputStream outputStream = new ByteArrayOutputStream()) {
        byte[] buffer = new byte[1024];
        int bytesRead;
        while ((bytesRead = inputStream.read(buffer)) != -1) {
          outputStream.write(buffer, 0, bytesRead);
        }
        return outputStream.toByteArray();
      }
    } else {
      throw new RuntimeException("Error fetching file: " + responseCode);
    }
  }
} 
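
All three images are fetched over HTTP and sent inline as raw bytes, so the request payload grows with image size and is subject to the API's overall request size limit; for larger files, referencing a Cloud Storage URI (as the Go and Python samples do) keeps the payload small.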

Node.js

Before trying this sample, follow the Node.js setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Node.js API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.

const {VertexAI} = require('@google-cloud/vertexai');
const axios = require('axios');

async function getBase64(url) {
  const image = await axios.get(url, {responseType: 'arraybuffer'});
  return Buffer.from(image.data).toString('base64');
}

/**
 * TODO(developer): Update these variables before running the sample.
 */
async function sendMultiModalPromptWithImage(
  projectId = 'PROJECT_ID',
  location = 'us-central1',
  model = 'gemini-1.5-flash-001'
) {
  // For images, the SDK supports base64 strings
  const landmarkImage1 = await getBase64(
    'https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark1.png'
  );
  const landmarkImage2 = await getBase64(
    'https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark2.png'
  );
  const landmarkImage3 = await getBase64(
    'https://storage.googleapis.com/cloud-samples-data/vertex-ai/llm/prompts/landmark3.png'
  );

  // Initialize Vertex with your Cloud project and location
  const vertexAI = new VertexAI({project: projectId, location: location});

  const generativeVisionModel = vertexAI.getGenerativeModel({
    model: model,
  });

  // Pass multimodal prompt
  const request = {
    contents: [
      {
        role: 'user',
        parts: [
          {
            inlineData: {
              data: landmarkImage1,
              mimeType: 'image/png',
            },
          },
          {
            text: 'city: Rome, Landmark: the Colosseum',
          },
          {
            inlineData: {
              data: landmarkImage2,
              mimeType: 'image/png',
            },
          },
          {
            text: 'city: Beijing, Landmark: Forbidden City',
          },
          {
            inlineData: {
              data: landmarkImage3,
              mimeType: 'image/png',
            },
          },
        ],
      },
    ],
  };

  // Send the content generation request
  const response = await generativeVisionModel.generateContent(request);
  // Wait for the response to complete
  const aggregatedResponse = await response.response;
  // Select the text from the response
  const fullTextResponse =
    aggregatedResponse.candidates[0].content.parts[0].text;

  console.log(fullTextResponse);
} 
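
The Node.js sample fetches each image over HTTP and embeds it as base64 inlineData. For images that already live in Cloud Storage, you can avoid the download by passing a fileData part with the gs:// URI and MIME type instead, as the Go and Python samples do.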

Python

Before trying this sample, follow the Python setup instructions in the Vertex AI quickstart using client libraries. For more information, see the Vertex AI Python API reference documentation.

To authenticate to Vertex AI, set up Application Default Credentials. For more information, see Set up authentication for a local development environment.
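
If you want to confirm that credentials resolve before running the sample, a minimal sketch using the google-auth library (a dependency of the Vertex AI SDK) looks like this; note that the detected project may be None depending on how the credentials were created:

import google.auth

# Resolve Application Default Credentials; raises DefaultCredentialsError
# if no credentials are configured in the environment.
credentials, project_id = google.auth.default()
print(f"ADC resolved; detected project: {project_id}")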

import vertexai

from vertexai.generative_models import GenerativeModel, Part

# TODO(developer): Update the project ID and location below.
PROJECT_ID = "your-project-id"
vertexai.init(project=PROJECT_ID, location="us-central1")

# Load images from Cloud Storage URI
image_file1 = Part.from_uri(
    "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark1.png",
    mime_type="image/png",
)
image_file2 = Part.from_uri(
    "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark2.png",
    mime_type="image/png",
)
image_file3 = Part.from_uri(
    "gs://cloud-samples-data/vertex-ai/llm/prompts/landmark3.png",
    mime_type="image/png",
)

model = GenerativeModel("gemini-1.5-flash-001")
response = model.generate_content(
    [
        image_file1,
        "city: Rome, Landmark: the Colosseum",
        image_file2,
        "city: Beijing, Landmark: Forbidden City",
        image_file3,
    ]
)
print(response.text) 
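
To print output as it is produced instead of waiting for the full response, generate_content also accepts stream=True. A minimal sketch, reusing the model and image parts defined above:

# Stream the response chunk by chunk rather than waiting for the full text.
responses = model.generate_content(
    [
        image_file1,
        "city: Rome, Landmark: the Colosseum",
        image_file2,
        "city: Beijing, Landmark: Forbidden City",
        image_file3,
    ],
    stream=True,  # yields partial responses as they are generated
)
for chunk in responses:
    print(chunk.text, end="")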

What's next

To search and filter code samples for other Google Cloud products, see the Google Cloud sample browser.