Model Armor screens prompts and responses for your LLM applications. The monitoring dashboard provides you with the data, metrics, and visualizations generated from this screening process. Use these insights to understand how Model Armor protects your AI applications and identifies attempted prompt injections and other malicious activity.
Before you begin
Required permissions
To get the permissions that you need to access the monitoring dashboard, ask your administrator to grant you the IAM role on the project. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the permissions required to access the monitoring dashboard. To see the exact permissions that are required, expand the Required permissions section:
The following permissions are required to access the monitoring dashboard:
- monitoring.monitoredResourceDescriptors.list
- monitoring.metricDescriptors.list
You might also be able to get these permissions with custom roles or other predefined roles.
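As a minimal sketch, you can verify that a caller's granted permissions cover the two dashboard requirements before directing them to the console. In practice you would obtain the granted list through an IAM `testIamPermissions`-style check; here the list is passed in directly, so this is an illustration of the comparison only, not a call to a real API:

```python
# Permissions the monitoring dashboard requires, as listed above.
REQUIRED_PERMISSIONS = {
    "monitoring.monitoredResourceDescriptors.list",
    "monitoring.metricDescriptors.list",
}

def missing_permissions(granted):
    """Return the required dashboard permissions absent from `granted`, sorted."""
    return sorted(REQUIRED_PERMISSIONS - set(granted))

# A caller holding only one of the two permissions:
print(missing_permissions(["monitoring.metricDescriptors.list"]))
# → ['monitoring.monitoredResourceDescriptors.list']
```

If the returned list is empty, the caller has everything the dashboard needs; otherwise it names exactly what your administrator still has to grant.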
Access the monitoring dashboard
1. In the Google Cloud console, go to the Model Armor page.
2. Verify that you are viewing the project that you activated Model Armor on.
3. Go to the Monitoring tab.
From this page, you can do the following:
- View interactions for the selected date and time range.
- Filter interactions based on the associated templates or floor settings, locations, integration points, and input types.
- Download data as a PNG or CSV file.
- Track violations over time with a trend chart.
- View templates used across the project.
- Inspect related logs to see the logs for the selected date and time. To do that, click More chart options > Inspect related logs.

To inspect related logs, you must enable logging in templates and floor settings, and you must have IAM permissions to view logs.
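When you move from the dashboard's selected window to the logs themselves, Cloud Logging narrows entries with a filter expression whose `timestamp>=` / `timestamp<=` comparisons take RFC 3339 timestamps. The sketch below builds such a filter for a time window; `RESOURCE_TYPE` is a placeholder, not a real value — substitute the resource type that your Model Armor logs are written under:

```python
from datetime import datetime, timedelta, timezone

def related_logs_filter(end=None, window_minutes=60,
                        resource_type="RESOURCE_TYPE"):
    """Build a Cloud Logging filter string for a selected time window.

    `resource_type` is a placeholder; replace it with the resource type
    your Model Armor logs use. The timestamp comparisons follow standard
    Cloud Logging filter syntax.
    """
    end = end or datetime.now(timezone.utc)
    start = end - timedelta(minutes=window_minutes)
    return (
        f'resource.type="{resource_type}"'
        f' AND timestamp>="{start.isoformat()}"'
        f' AND timestamp<="{end.isoformat()}"'
    )

# Filter for the hour ending at noon UTC on 2025-01-01:
print(related_logs_filter(end=datetime(2025, 1, 1, 12, 0, tzinfo=timezone.utc)))
```

The resulting string can be pasted into the Logs Explorer query box or passed to a log-listing API call that accepts a filter.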
Key metrics
The key metrics on the monitoring dashboard include the following:
- Total interactions scanned: The total volume of prompts and responses analyzed by Model Armor.
- Flagged interactions: The number of interactions that violated at least one configured policy in your Model Armor template or floor settings.
- Blocked interactions: The number of interactions blocked (if you have configured Model Armor in inspect-and-block mode).
- Violations by category:
  - All detectors: Content flagged across all detectors.
  - Sensitive data violation: Presence of personally identifiable information (PII), financial data, or custom data types that you have defined.
  - Responsible AI: Content that violates safety filters such as hate speech, dangerous content, harassment, or sexually explicit content.
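The first three key metrics relate to each other as simple ratios: flagged interactions and blocked interactions are each a subset of the total scanned. As a sketch with hypothetical counts (the numbers below are illustrative, not real dashboard output), you could derive the rates like this:

```python
def summarize_interactions(total_scanned, flagged, blocked):
    """Compute dashboard-style ratios from the three key metric counts."""
    if total_scanned == 0:
        # No traffic scanned yet; avoid division by zero.
        return {"flagged_rate": 0.0, "blocked_rate": 0.0}
    return {
        "flagged_rate": flagged / total_scanned,   # share that violated a policy
        "blocked_rate": blocked / total_scanned,   # share actually blocked
    }

# Hypothetical counts: 10,000 scanned, 250 flagged, 40 blocked.
print(summarize_interactions(total_scanned=10_000, flagged=250, blocked=40))
# → {'flagged_rate': 0.025, 'blocked_rate': 0.004}
```

A flagged rate well above the blocked rate usually means most templates run in inspect-only mode; the gap closes as you move policies to inspect-and-block.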
What's next
- Learn about Model Armor.
- Learn about Model Armor templates.
- Learn about Model Armor floor settings.
- Learn about Model Armor endpoints.
- Sanitize prompts and responses.
- Learn about Model Armor audit logging.
- Configure logging for Model Armor.
- Troubleshoot Model Armor issues.