Use continuous tuning for Gemini models

Continuous tuning lets you continue tuning an already tuned model or model checkpoint by adding more epochs or training examples. Using an already tuned model or checkpoint as a base model allows for more efficient tuning experimentation.

You can use continuous tuning for the following purposes:

  • To tune with more data if an existing tuned model is underfitting.
  • To boost performance or keep the model up to date with new data.
  • To further customize an existing tuned model.

The following Gemini models support continuous tuning:

For detailed information about Gemini model versions, see Google models and Model versions and lifecycle.

Configure continuous tuning

When creating a continuous tuning job, note the following:

  • Continuous tuning is supported in the Google Gen AI SDK. It isn't supported in the Vertex AI SDK for Python.
  • You must provide a model resource name:

    • In the Google Cloud console, the model resource name appears on the Vertex AI Tuning page, in the Tuning details > Model Name field.
    • The model resource name uses the following format:
      projects/{project}/locations/{location}/models/{modelId}@{version_id}
    • {version_id} is optional and can be either the generated version ID or a user-provided version alias. If it's omitted, the default version is used.

  • If you're using a checkpoint as a base model and don't specify a checkpoint ID, the default checkpoint is used. For more information, see Use checkpoints in supervised fine-tuning for Gemini models. In the Google Cloud console, you can find the default checkpoint as follows:

    1. Go to the Model Registry page.
    2. Click the Model Name for the model.
    3. Click View all versions.
    4. Click the desired version to view a list of checkpoints. The default checkpoint is indicated by the word default next to the checkpoint ID.
  • By default, a new model version is created under the same parent model as the pre-tuned model. If you supply a new tuned model display name, a new model is created.

  • Only models that were tuned with supervised tuning on or after July 11, 2025 can be used as base models for continuous tuning.

  • If you're using customer-managed encryption keys (CMEK), your continuous tuning job must use the same CMEK that was used in the tuning job for the pre-tuned model.
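To illustrate the model resource name format described above, the helper below assembles one from its parts. This is a hypothetical sketch, not part of any SDK; the project number, location, and model ID are placeholder values.

```python
def model_resource_name(project, location, model_id, version_id=None):
    """Assemble a Vertex AI model resource name.

    version_id is optional and may be a generated version ID or a
    user-provided version alias; when it's omitted, the default
    version is used.
    """
    name = f"projects/{project}/locations/{location}/models/{model_id}"
    if version_id is not None:
        name = f"{name}@{version_id}"
    return name

# Placeholder values for illustration only.
print(model_resource_name("123456789012", "us-central1", "1234567890", "1"))
# → projects/123456789012/locations/us-central1/models/1234567890@1
```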

Console

To configure continuous tuning for a pre-tuned model by using the Google Cloud console, perform the following steps:

  1. In the Vertex AI section of the Google Cloud console, go to the Vertex AI Studio page.

    Go to Vertex AI Studio

  2. Click Create tuned model.

  3. Under Model details, configure the following:

    1. Choose Tune a pre-tuned model.
    2. In the Pre-tuned model field, choose the name of your pre-tuned model.
    3. If the model has at least one checkpoint, the Checkpoint drop-down field appears. Choose the desired checkpoint.
  4. Click Continue.

Google Gen AI SDK

The following example shows how to configure continuous tuning by using the Google Gen AI SDK.

import time

from google import genai
from google.genai.types import CreateTuningJobConfig, HttpOptions, TuningDataset

# TODO(developer): Update and un-comment the lines below.
# tuned_model_name = "projects/123456789012/locations/us-central1/models/1234567890@1"
# checkpoint_id = "1"

client = genai.Client(http_options=HttpOptions(api_version="v1beta1"))

training_dataset = TuningDataset(
    gcs_uri="gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_train_data.jsonl",
)
validation_dataset = TuningDataset(
    gcs_uri="gs://cloud-samples-data/ai-platform/generative_ai/gemini/text/sft_validation_data.jsonl",
)

tuning_job = client.tunings.tune(
    base_model=tuned_model_name,  # Note: using a tuned model as the base model
    training_dataset=training_dataset,
    config=CreateTuningJobConfig(
        tuned_model_display_name="Example tuning job",
        validation_dataset=validation_dataset,
        pre_tuned_model_checkpoint_id=checkpoint_id,
    ),
)

# Poll until the tuning job leaves the pending/running states.
running_states = set([
    "JOB_STATE_PENDING",
    "JOB_STATE_RUNNING",
])

while tuning_job.state in running_states:
    print(tuning_job.state)
    tuning_job = client.tunings.get(name=tuning_job.name)
    time.sleep(60)

print(tuning_job.tuned_model.model)
print(tuning_job.tuned_model.endpoint)
print(tuning_job.experiment)
# Example response:
# projects/123456789012/locations/us-central1/models/1234567890@2
# projects/123456789012/locations/us-central1/endpoints/123456789012345
# projects/123456789012/locations/us-central1/metadataStores/default/contexts/tuning-experiment-2025010112345678

if tuning_job.tuned_model.checkpoints:
    for i, checkpoint in enumerate(tuning_job.tuned_model.checkpoints):
        print(f"Checkpoint {i + 1}: ", checkpoint)
    # Example response:
    # Checkpoint 1:  checkpoint_id='1' epoch=1 step=10 endpoint='projects/123456789012/locations/us-central1/endpoints/123456789000000'
    # Checkpoint 2:  checkpoint_id='2' epoch=2 step=20 endpoint='projects/123456789012/locations/us-central1/endpoints/123456789012345'
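For reference, each line of the training and validation JSONL files read by the job above is a single JSON object. The sketch below shows one plausible supervised fine-tuning example; the exact schema (the contents/role/parts structure) is an assumption based on the Gemini chat format, so verify it against the dataset-format documentation for supervised tuning.

```python
import json

# One illustrative supervised fine-tuning example. The role/parts
# structure is an assumption; check the official dataset-format docs.
example = {
    "contents": [
        {"role": "user", "parts": [{"text": "Why is the sky blue?"}]},
        {"role": "model", "parts": [{"text": "Because of Rayleigh scattering."}]},
    ]
}

# Each dataset line is one such object serialized as compact JSON.
line = json.dumps(example)
print(line)
```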