Understand and use safety settings

You can use safety settings to adjust the likelihood of getting responses that may be considered harmful. By default, safety settings block content with a medium or high probability of being unsafe across all dimensions.

Safety settings for Gemini models

Learn more about safety settings for Gemini models in the Gemini Developer API documentation.

Swift

You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:

import FirebaseAILogic

// Specify the safety settings as part of creating the `GenerativeModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: [
    SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
  ]
)

// ...

Example with multiple safety settings:

import FirebaseAILogic

let harassmentSafety = SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
let hateSpeechSafety = SafetySetting(harmCategory: .hateSpeech, threshold: .blockMediumAndAbove)

// Specify the safety settings as part of creating the `GenerativeModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: [harassmentSafety, hateSpeechSafety]
)

// ...

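Safety settings apply to every request made with the configured model. As a rough Swift sketch (assuming the model created above and an async context), you can check whether the prompt or the response was blocked by inspecting the prompt feedback and the first candidate's finish reason:

// Sketch: send a prompt and check whether safety settings blocked anything.
// Assumes `model` was created with safety settings as shown above.
do {
  let response = try await model.generateContent("Write a short story.")
  if let blockReason = response.promptFeedback?.blockReason {
    // The prompt itself was blocked before any content was generated.
    print("Prompt blocked: \(blockReason)")
  } else if response.candidates.first?.finishReason == .safety {
    // Generation stopped because the response crossed a configured threshold.
    print("Response blocked for safety.")
  } else {
    print(response.text ?? "No text in response.")
  }
} catch {
  print("Generation failed: \(error)")
}
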
Kotlin

You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:

import com.google.firebase.ai.type.HarmBlockThreshold
import com.google.firebase.ai.type.HarmCategory
import com.google.firebase.ai.type.SafetySetting

// Specify the safety settings as part of creating the `GenerativeModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "GEMINI_MODEL_NAME",
    safetySettings = listOf(
        SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.ONLY_HIGH)
    )
)

// ...

Example with multiple safety settings:

import com.google.firebase.ai.type.HarmBlockThreshold
import com.google.firebase.ai.type.HarmCategory
import com.google.firebase.ai.type.SafetySetting

val harassmentSafety = SafetySetting(HarmCategory.HARASSMENT, HarmBlockThreshold.ONLY_HIGH)
val hateSpeechSafety = SafetySetting(HarmCategory.HATE_SPEECH, HarmBlockThreshold.MEDIUM_AND_ABOVE)

// Specify the safety settings as part of creating the `GenerativeModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).generativeModel(
    modelName = "GEMINI_MODEL_NAME",
    safetySettings = listOf(harassmentSafety, hateSpeechSafety)
)

// ...

Java

You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:

SafetySetting harassmentSafety = new SafetySetting(HarmCategory.HARASSMENT,
        HarmBlockThreshold.ONLY_HIGH);

// Specify the safety settings as part of creating the `GenerativeModel` instance
GenerativeModelFutures model = GenerativeModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .generativeModel(
                        /* modelName */ "GEMINI_MODEL_NAME",
                        /* generationConfig is optional */ null,
                        Collections.singletonList(harassmentSafety)
                )
);

// ...

Example with multiple safety settings:

SafetySetting harassmentSafety = new SafetySetting(HarmCategory.HARASSMENT,
        HarmBlockThreshold.ONLY_HIGH);
SafetySetting hateSpeechSafety = new SafetySetting(HarmCategory.HATE_SPEECH,
        HarmBlockThreshold.MEDIUM_AND_ABOVE);

// Specify the safety settings as part of creating the `GenerativeModel` instance
GenerativeModelFutures model = GenerativeModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .generativeModel(
                        /* modelName */ "GEMINI_MODEL_NAME",
                        /* generationConfig is optional */ null,
                        List.of(harassmentSafety, hateSpeechSafety)
                )
);

// ...

Web

You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:

import { HarmBlockThreshold, HarmCategory, getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// ...

const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, { model: "GEMINI_MODEL_NAME", safetySettings });

// ...

Example with multiple safety settings:

import { HarmBlockThreshold, HarmCategory, getAI, getGenerativeModel, GoogleAIBackend } from "firebase/ai";

// ...

const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

const safetySettings = [
  {
    category: HarmCategory.HARM_CATEGORY_HARASSMENT,
    threshold: HarmBlockThreshold.BLOCK_ONLY_HIGH,
  },
  {
    category: HarmCategory.HARM_CATEGORY_HATE_SPEECH,
    threshold: HarmBlockThreshold.BLOCK_MEDIUM_AND_ABOVE,
  },
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
const model = getGenerativeModel(ai, { model: "GEMINI_MODEL_NAME", safetySettings });

// ...

Dart

You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:

// ...

final safetySettings = [
  SafetySetting(HarmCategory.harassment, HarmBlockThreshold.high)
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  safetySettings: safetySettings,
);

// ...

Example with multiple safety settings:

// ...

final safetySettings = [
  SafetySetting(HarmCategory.harassment, HarmBlockThreshold.high),
  SafetySetting(HarmCategory.hateSpeech, HarmBlockThreshold.high),
];

// Specify the safety settings as part of creating the `GenerativeModel` instance
final model = FirebaseAI.googleAI().generativeModel(
  model: 'GEMINI_MODEL_NAME',
  safetySettings: safetySettings,
);

// ...

Unity

You configure SafetySettings when you create a GenerativeModel instance.

Example with one safety setting:

// ...

// Specify the safety settings as part of creating the `GenerativeModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetGenerativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: new SafetySetting[] {
    new SafetySetting(HarmCategory.Harassment,
        SafetySetting.HarmBlockThreshold.OnlyHigh)
  }
);

// ...

Example with multiple safety settings:

// ...

var harassmentSafety = new SafetySetting(HarmCategory.Harassment,
    SafetySetting.HarmBlockThreshold.OnlyHigh);
var hateSpeechSafety = new SafetySetting(HarmCategory.HateSpeech,
    SafetySetting.HarmBlockThreshold.MediumAndAbove);

// Specify the safety settings as part of creating the `GenerativeModel` instance
var ai = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI());
var model = ai.GetGenerativeModel(
  modelName: "GEMINI_MODEL_NAME",
  safetySettings: new SafetySetting[] { harassmentSafety, hateSpeechSafety }
);

// ...

Safety settings for Imagen models

Learn about all the supported safety settings and their available values for Imagen models in the Google Cloud documentation.

Swift

You configure ImagenSafetySettings when you create an ImagenModel instance.

import FirebaseAILogic

// Specify the safety settings as part of creating the `ImagenModel` instance
let model = FirebaseAI.firebaseAI(backend: .googleAI()).imagenModel(
  modelName: "IMAGEN_MODEL_NAME",
  safetySettings: ImagenSafetySettings(
    safetyFilterLevel: .blockLowAndAbove,
    personFilterLevel: .allowAdult
  )
)

// ...

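The settings apply to every image request made with the configured model. As a rough Swift sketch (assuming the model created above and an async context), you can check whether the filters removed any images by inspecting the response's filteredReason:

// Sketch: generate images and check whether the safety filters removed any.
// Assumes `model` was created with `ImagenSafetySettings` as shown above.
do {
  let response = try await model.generateImages(prompt: "A watercolor lighthouse at dawn")
  if let filteredReason = response.filteredReason {
    // Some or all images were filtered out by the configured safety settings.
    print("Images filtered: \(filteredReason)")
  }
  print("Generated \(response.images.count) image(s).")
} catch {
  print("Image generation failed: \(error)")
}
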
Kotlin

You configure ImagenSafetySettings when you create an ImagenModel instance.

// Specify the safety settings as part of creating the `ImagenModel` instance
val model = Firebase.ai(backend = GenerativeBackend.googleAI()).imagenModel(
    modelName = "IMAGEN_MODEL_NAME",
    safetySettings = ImagenSafetySettings(
        safetyFilterLevel = ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
        personFilterLevel = ImagenPersonFilterLevel.BLOCK_ALL
    )
)

// ...

Java

You configure ImagenSafetySettings when you create an ImagenModel instance.

// Specify the safety settings as part of creating the `ImagenModel` instance
ImagenModelFutures model = ImagenModelFutures.from(
        FirebaseAI.getInstance(GenerativeBackend.googleAI())
                .imagenModel(
                        /* modelName */ "IMAGEN_MODEL_NAME",
                        /* imageGenerationConfig */ null
                )
);

// ...

Web

You configure ImagenSafetySettings when you create an ImagenModel instance.

// ...

const ai = getAI(firebaseApp, { backend: new GoogleAIBackend() });

// Specify the safety settings as part of creating the `ImagenModel` instance
const model = getImagenModel(
  ai,
  {
    model: "IMAGEN_MODEL_NAME",
    safetySettings: {
      safetyFilterLevel: ImagenSafetyFilterLevel.BLOCK_LOW_AND_ABOVE,
      personFilterLevel: ImagenPersonFilterLevel.ALLOW_ADULT,
    }
  }
);

// ...

Dart

You configure ImagenSafetySettings when you create an ImagenModel instance.

// ...

// Specify the safety settings as part of creating the `ImagenModel` instance
final model = FirebaseAI.googleAI().imagenModel(
  model: 'IMAGEN_MODEL_NAME',
  safetySettings: ImagenSafetySettings(
    ImagenSafetyFilterLevel.blockLowAndAbove,
    ImagenPersonFilterLevel.allowAdult,
  ),
);

// ...

Unity

You configure ImagenSafetySettings when you create an ImagenModel instance.

using Firebase.AI;

// Specify the safety settings as part of creating the `ImagenModel` instance
var model = FirebaseAI.GetInstance(FirebaseAI.Backend.GoogleAI()).GetImagenModel(
  modelName: "IMAGEN_MODEL_NAME",
  safetySettings: new ImagenSafetySettings(
    safetyFilterLevel: ImagenSafetySettings.SafetyFilterLevel.BlockLowAndAbove,
    personFilterLevel: ImagenSafetySettings.PersonFilterLevel.AllowAdult
  )
);

// ...

Other options to control content generation

  • Learn more about prompt design so that you can influence the model to generate output specific to your needs.
  • Configure model parameters to control how the model generates a response. For Gemini models, these parameters include max output tokens, temperature, topK, and topP (see the combined sketch after this list). For Imagen models, they include aspect ratio, person generation, watermarking, and more.
  • Set system instructions to steer the behavior of the model. This feature acts like a preamble that the model receives before any further instructions from the end user.
  • Pass a response schema along with the prompt to specify the format of the output. This feature is most commonly used when generating JSON output, but it can also be used for classification tasks (for example, when you want the model to use specific labels or tags).
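
As a rough illustration of the model parameters and system instructions mentioned above, here is a minimal Swift sketch. The model name, parameter values, and instruction text are placeholders rather than recommendations, and the safety settings are the same ones used earlier on this page.

import FirebaseAILogic

// Sketch: combine model parameters, a system instruction, and safety settings.
// The parameter values and instruction text are illustrative placeholders.
let config = GenerationConfig(
  temperature: 0.7,
  topP: 0.95,
  topK: 40,
  maxOutputTokens: 1024
)

let model = FirebaseAI.firebaseAI(backend: .googleAI()).generativeModel(
  modelName: "GEMINI_MODEL_NAME",
  generationConfig: config,
  safetySettings: [
    SafetySetting(harmCategory: .harassment, threshold: .blockOnlyHigh)
  ],
  systemInstruction: ModelContent(parts: "You are a friendly writing assistant.")
)
// ...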