Collect Vectra XDR logs
This document explains how to ingest Vectra XDR logs into Google Security Operations using a Google Cloud Storage V2 feed.
Vectra XDR is an extended detection and response platform that correlates threats across network, identity, cloud, and SaaS environments. The Vectra AI Platform REST API v3 provides programmatic access to detection, scoring, lockdown, audit, and health logs.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance
- A GCP project with Cloud Storage API enabled
- Permissions to create and manage GCS buckets
- Permissions to manage IAM policies on GCS buckets
- Permissions to create Cloud Run services, Pub/Sub topics, and Cloud Scheduler jobs
- Privileged access to the Vectra AI Platform with administrator permissions
- An API client with Client ID and Client Secret configured in the Vectra AI Platform
Create Google Cloud Storage bucket
- Go to the Google Cloud Console.
- Select your project or create a new one.
- In the navigation menu, go to Cloud Storage > Buckets.
- Click Create bucket.
- Provide the following configuration details:

  | Setting | Value |
  |---|---|
  | Name your bucket | Enter a globally unique name (for example, vectra-xdr-logs) |
  | Location type | Choose based on your needs (Region, Dual-region, Multi-region) |
  | Location | Select the location (for example, us-central1) |
  | Storage class | Standard (recommended for frequently accessed logs) |
  | Access control | Uniform (recommended) |
  | Protection tools | Optional: Enable object versioning or retention policy |

- Click Create.
Collect Vectra XDR API credentials
Create API client
- Sign in to your Vectra AI Platform instance (for example, https://your-tenant.vectra.ai).
- Go to Manage > API Clients.
- Click Add API Client.
- Enter a name for the API client (for example, Google SecOps Integration).
- Select the required role (for example, Read-Only or a custom role with detection and audit access).
- Click Generate Credentials.
- Copy and save the following details in a secure location:
  - Client ID: The API client ID
  - Client Secret: The API client secret
Determine API base URL
The Vectra API base URL is your tenant URL:
| Format | Example |
|---|---|
| Tenant URL | https://your-tenant.vectra.ai |
Test API access
- Test your credentials before proceeding with the integration:

  ```bash
  # Replace with your actual credentials
  VECTRA_CLIENT_ID="your-client-id"
  VECTRA_CLIENT_SECRET="your-client-secret"
  VECTRA_BASE_URL="https://your-tenant.vectra.ai"

  # Get an access token using HTTP Basic Auth
  TOKEN=$(curl -s -X POST "${VECTRA_BASE_URL}/oauth2/token" \
    -u "${VECTRA_CLIENT_ID}:${VECTRA_CLIENT_SECRET}" \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -d "grant_type=client_credentials" \
    | jq -r '.access_token')

  # Test API access - list detection events
  curl -s -X GET "${VECTRA_BASE_URL}/api/v3.4/events/detections/?limit=1" \
    -H "Authorization: Bearer ${TOKEN}"
  ```
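If you prefer Python over curl, the same token request can be sketched with only the standard library. The `build_token_request` helper below is hypothetical (not part of the integration); it mirrors the Basic-auth construction used by the curl test above, and you can send the result with any HTTP client:

```python
import base64

def build_token_request(base_url: str, client_id: str, client_secret: str):
    """Build the URL, headers, and body for the client-credentials token request."""
    credentials = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    url = f"{base_url.rstrip('/')}/oauth2/token"
    headers = {
        "Authorization": f"Basic {credentials}",
        "Content-Type": "application/x-www-form-urlencoded",
    }
    return url, headers, "grant_type=client_credentials"

url, headers, body = build_token_request(
    "https://your-tenant.vectra.ai", "your-client-id", "your-client-secret"
)
print(url)  # https://your-tenant.vectra.ai/oauth2/token
```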
Create service account for Cloud Run function
The Cloud Run function needs a service account with permission to write to the GCS bucket and to be invoked by Pub/Sub.
Create service account
- In the GCP Console, go to IAM & Admin > Service Accounts.
- Click Create Service Account.
- Provide the following configuration details:
  - Service account name: Enter vectra-xdr-logs-collector-sa
  - Service account description: Enter Service account for Cloud Run function to collect Vectra XDR logs
- Click Create and Continue.
- In the Grant this service account access to project section, add the following roles:
- Click Select a role.
- Search for and select Storage Object Admin.
- Click + Add another role.
- Search for and select Cloud Run Invoker.
- Click + Add another role.
- Search for and select Cloud Functions Invoker.
- Click Continue.
- Click Done.
These roles are required for:
- Storage Object Admin: Write logs to GCS bucket and manage state files
- Cloud Run Invoker: Allow Pub/Sub to invoke the function
- Cloud Functions Invoker: Allow function invocation
Grant IAM permissions on GCS bucket
Grant the service account write permissions on the GCS bucket:
- Go to Cloud Storage > Buckets.
- Click your bucket name (for example, vectra-xdr-logs).
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
  - Add principals: Enter the service account email (for example, vectra-xdr-logs-collector-sa@PROJECT_ID.iam.gserviceaccount.com)
  - Assign roles: Select Storage Object Admin
- Click Save.
Create Pub/Sub topic
Create a Pub/Sub topic that Cloud Scheduler will publish to and the Cloud Run function will subscribe to.
- In the GCP Console, go to Pub/Sub > Topics.
- Click Create topic.
- Provide the following configuration details:
  - Topic ID: Enter vectra-xdr-logs-trigger
  - Leave other settings as default
- Click Create.
Create Cloud Run function to collect logs
The Cloud Run function will be triggered by Pub/Sub messages from Cloud Scheduler to fetch logs from the Vectra AI Platform REST API and write them to GCS.
- In the GCP Console, go to Cloud Run.
- Click Create service.
- Select Function (use an inline editor to create a function).
- In the Configure section, provide the following configuration details:

  | Setting | Value |
  |---|---|
  | Service name | vectra-xdr-logs-collector |
  | Region | Select a region matching your GCS bucket (for example, us-central1) |
  | Runtime | Select Python 3.12 or later |

- In the Trigger (optional) section:
- Click + Add trigger.
- Select Cloud Pub/Sub.
- In Select a Cloud Pub/Sub topic, choose the topic vectra-xdr-logs-trigger.
- Click Save.
- In the Authentication section:
- Select Require authentication.
- Check Identity and Access Management (IAM).
- Scroll to and expand Containers, Networking, Security.
- Go to the Security tab:
  - Service account: Select the service account vectra-xdr-logs-collector-sa.
- Go to the Containers tab:
  - Click Variables & Secrets.
- Click + Add variable for each environment variable:

  | Variable Name | Example Value | Description |
  |---|---|---|
  | GCS_BUCKET | vectra-xdr-logs | GCS bucket name |
  | GCS_PREFIX | vectra | Prefix for log files |
  | STATE_KEY | vectra/state.json | State file path |
  | VECTRA_CLIENT_ID | your-client-id | Vectra API client ID |
  | VECTRA_CLIENT_SECRET | your-client-secret | Vectra API client secret |
  | VECTRA_BASE_URL | https://your-tenant.vectra.ai | Vectra tenant URL |
  | MAX_RECORDS | 5000 | Max records per run |
  | BATCH_SIZE | 500 | Events per API request (max 1000) |
  | LOOKBACK_HOURS | 24 | Initial lookback period |
- In the Variables & Secrets section, scroll to Requests:
  - Request timeout: Enter 600 seconds (10 minutes)
- Go to the Settings tab:
  - In the Resources section:
    - Memory: Select 512 MiB or higher
    - CPU: Select 1
- In the Revision scaling section:
  - Minimum number of instances: Enter 0
  - Maximum number of instances: Enter 100 (or adjust based on expected load)
- Click Create.
- Wait for the service to be created (1-2 minutes).
- After the service is created, the inline code editor opens automatically.
Add function code
- Enter main in the Entry point field.
- In the inline code editor, create two files:
- First file: main.py:

  ```python
  import functions_framework
  from google.cloud import storage
  import base64
  import json
  import os
  import urllib3
  from datetime import datetime, timezone, timedelta
  import time

  # Initialize HTTP client with timeouts
  http = urllib3.PoolManager(
      timeout=urllib3.Timeout(connect=5.0, read=30.0),
      retries=False,
  )

  # Initialize Storage client
  storage_client = storage.Client()

  # Environment variables
  GCS_BUCKET = os.environ.get('GCS_BUCKET')
  GCS_PREFIX = os.environ.get('GCS_PREFIX', 'vectra')
  STATE_KEY = os.environ.get('STATE_KEY', 'vectra/state.json')
  VECTRA_CLIENT_ID = os.environ.get('VECTRA_CLIENT_ID')
  VECTRA_CLIENT_SECRET = os.environ.get('VECTRA_CLIENT_SECRET')
  VECTRA_BASE_URL = os.environ.get('VECTRA_BASE_URL')
  MAX_RECORDS = int(os.environ.get('MAX_RECORDS', '5000'))
  BATCH_SIZE = int(os.environ.get('BATCH_SIZE', '500'))
  LOOKBACK_HOURS = int(os.environ.get('LOOKBACK_HOURS', '24'))


  def get_access_token():
      """
      Obtain OAuth2 access token using HTTP Basic Auth
      with client credentials grant per Vectra API docs.
      """
      base_url = VECTRA_BASE_URL.rstrip('/')
      token_url = f"{base_url}/oauth2/token"
      credentials = base64.b64encode(
          f"{VECTRA_CLIENT_ID}:{VECTRA_CLIENT_SECRET}".encode()
      ).decode()
      headers = {
          'Content-Type': 'application/x-www-form-urlencoded',
          'Authorization': f'Basic {credentials}',
          'Accept': 'application/json'
      }
      body = 'grant_type=client_credentials'
      backoff = 1.0
      for attempt in range(3):
          response = http.request('POST', token_url, body=body, headers=headers)
          if response.status == 429:
              retry_after = int(response.headers.get('Retry-After', str(int(backoff))))
              print(f"Rate limited (429) on token request. Retrying after {retry_after}s...")
              time.sleep(retry_after)
              backoff = min(backoff * 2, 30.0)
              continue
          if response.status != 200:
              raise RuntimeError(
                  f"Failed to get access token: {response.status} - {response.data.decode('utf-8')}"
              )
          data = json.loads(response.data.decode('utf-8'))
          return data['access_token']
      raise RuntimeError("Failed to get access token after 3 retries")


  @functions_framework.cloud_event
  def main(cloud_event):
      """
      Cloud Run function triggered by Pub/Sub to fetch Vectra XDR
      detection and audit event logs and write to GCS.
      Uses the Vectra REST API v3.4 Events endpoints with
      checkpoint-based pagination.
      """
      if not all([GCS_BUCKET, VECTRA_CLIENT_ID, VECTRA_CLIENT_SECRET, VECTRA_BASE_URL]):
          print('Error: Missing required environment variables')
          return

      try:
          bucket = storage_client.bucket(GCS_BUCKET)

          # Load state (stores per-endpoint checkpoints)
          state = load_state(bucket, STATE_KEY)
          now = datetime.now(timezone.utc)

          token = get_access_token()

          # Fetch detection events and audit events
          all_records = []
          new_state = dict(state)

          for event_type in ['detections', 'audits']:
              checkpoint = state.get(f'{event_type}_checkpoint')
              last_time = state.get(f'{event_type}_last_time')

              # On first run, use lookback window
              start_time = None
              if not checkpoint and not last_time:
                  start_time = now - timedelta(hours=LOOKBACK_HOURS)

              records, next_checkpoint = fetch_events(
                  token=token,
                  event_type=event_type,
                  checkpoint=checkpoint,
                  start_time=start_time,
                  batch_size=BATCH_SIZE,
                  max_records=MAX_RECORDS,
              )

              all_records.extend(records)

              # Save checkpoint for next run
              if next_checkpoint is not None:
                  new_state[f'{event_type}_checkpoint'] = next_checkpoint
              new_state[f'{event_type}_last_time'] = now.isoformat()

          if not all_records:
              print("No new event records found.")
              save_state(bucket, STATE_KEY, new_state)
              return

          # Write to GCS as NDJSON
          timestamp = now.strftime('%Y%m%d_%H%M%S')
          object_key = f"{GCS_PREFIX}/logs_{timestamp}.ndjson"
          blob = bucket.blob(object_key)
          ndjson = '\n'.join(json.dumps(record, ensure_ascii=False) for record in all_records) + '\n'
          blob.upload_from_string(ndjson, content_type='application/x-ndjson')
          print(f"Wrote {len(all_records)} records to gs://{GCS_BUCKET}/{object_key}")

          save_state(bucket, STATE_KEY, new_state)
          print(f"Successfully processed {len(all_records)} records")

      except Exception as e:
          print(f'Error processing logs: {str(e)}')
          raise


  def load_state(bucket, key):
      """Load state from GCS."""
      try:
          blob = bucket.blob(key)
          if blob.exists():
              return json.loads(blob.download_as_text())
      except Exception as e:
          print(f"Warning: Could not load state: {e}")
      return {}


  def save_state(bucket, key, state: dict):
      """Save state to GCS."""
      try:
          blob = bucket.blob(key)
          blob.upload_from_string(
              json.dumps(state, indent=2),
              content_type='application/json'
          )
          print(f"Saved state: {state}")
      except Exception as e:
          print(f"Warning: Could not save state: {e}")


  def fetch_events(token: str, event_type: str, checkpoint: int = None,
                   start_time: datetime = None, batch_size: int = 500,
                   max_records: int = 5000):
      """
      Fetch events from Vectra AI Platform REST API v3.4 using
      checkpoint-based pagination.

      Endpoints:
      - /api/v3.4/events/detections/
      - /api/v3.4/events/audits/

      Args:
          token: OAuth2 access token
          event_type: 'detections' or 'audits'
          checkpoint: Resume from this event ID (from previous run)
          start_time: Only used on first run when no checkpoint exists
          batch_size: Number of events per request (max 1000)
          max_records: Maximum total events to fetch per run

      Returns:
          Tuple of (events list, next_checkpoint int or None)
      """
      base_url = VECTRA_BASE_URL.rstrip('/')
      endpoint = f"{base_url}/api/v3.4/events/{event_type}/"
      headers = {
          'Authorization': f'Bearer {token}',
          'Accept': 'application/json',
          'User-Agent': 'GoogleSecOps-VectraXDRCollector/1.0'
      }

      records = []
      batch_num = 0
      backoff = 1.0
      next_checkpoint = checkpoint

      while True:
          batch_num += 1
          if len(records) >= max_records:
              print(f"Reached max_records limit ({max_records}) for {event_type}")
              break

          # Build query parameters
          params = {
              'limit': min(batch_size, max_records - len(records)),
              'ordering': 'event_timestamp',
          }
          if next_checkpoint is not None:
              params['from'] = next_checkpoint
          elif start_time is not None:
              params['event_timestamp_gte'] = start_time.strftime('%Y-%m-%dT%H:%M:%SZ')

          # Include INFO detections (excluded by default in v3.4)
          if event_type == 'detections':
              params['include_info_category'] = 'true'

          query_string = '&'.join(f'{k}={v}' for k, v in params.items())
          url = f"{endpoint}?{query_string}"

          try:
              response = http.request('GET', url, headers=headers)

              if response.status == 429:
                  retry_after = int(response.headers.get('Retry-After', str(int(backoff))))
                  print(f"Rate limited (429). Retrying after {retry_after}s...")
                  time.sleep(retry_after)
                  backoff = min(backoff * 2, 30.0)
                  continue
              backoff = 1.0

              if response.status != 200:
                  print(f"HTTP {response.status}: {response.data.decode('utf-8')}")
                  break

              data = json.loads(response.data.decode('utf-8'))
              events = data.get('events', [])
              remaining = data.get('remaining_count', 0)
              batch_checkpoint = data.get('next_checkpoint')

              if not events:
                  print(f"No more events for {event_type}")
                  break

              print(f"{event_type} batch {batch_num}: {len(events)} events, {remaining} remaining")

              # Tag events with source type
              for event in events:
                  event['_vectra_event_type'] = event_type

              records.extend(events)

              # Update checkpoint for next batch/run
              if batch_checkpoint is not None:
                  next_checkpoint = batch_checkpoint

              # No more events remaining
              if remaining == 0:
                  print(f"All events fetched for {event_type}")
                  break

          except Exception as e:
              print(f"Error fetching {event_type} events: {e}")
              break

      print(f"Retrieved {len(records)} total {event_type} events from {batch_num} batches")
      return records, next_checkpoint
  ```

- Second file: requirements.txt:

  ```
  functions-framework==3.*
  google-cloud-storage==2.*
  urllib3>=2.0.0
  ```
- Click Deploy to save and deploy the function.
- Wait for deployment to complete (2-3 minutes).
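The function's `save_state` and `load_state` helpers persist per-endpoint checkpoints to the object named by STATE_KEY, which is what lets each run resume where the previous one stopped. As an illustration (the checkpoint values below are hypothetical), the round-trip can be simulated locally by swapping the GCS blob for a file:

```python
import json
from pathlib import Path

def save_state(path: Path, state: dict) -> None:
    # Mirrors save_state() in main.py, writing to a local file instead of a GCS blob
    path.write_text(json.dumps(state, indent=2))

def load_state(path: Path) -> dict:
    # Mirrors load_state() in main.py: a missing state file means "first run"
    if path.exists():
        return json.loads(path.read_text())
    return {}

state_file = Path("vectra_state_demo.json")
state_file.unlink(missing_ok=True)

# First run: no checkpoint, so main() falls back to the LOOKBACK_HOURS window
assert load_state(state_file) == {}

# After a successful run, per-endpoint checkpoints are recorded
# (the values below are hypothetical)
save_state(state_file, {
    "detections_checkpoint": 12345,
    "detections_last_time": "2025-01-01T00:00:00+00:00",
    "audits_checkpoint": 678,
    "audits_last_time": "2025-01-01T00:00:00+00:00",
})

# The next run resumes from these checkpoints via the "from" query parameter
assert load_state(state_file)["detections_checkpoint"] == 12345
```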
Create Cloud Scheduler job
Cloud Scheduler will publish messages to the Pub/Sub topic at regular intervals, triggering the Cloud Run function.
- In the GCP Console, go to Cloud Scheduler.
- Click Create Job.
- Provide the following configuration details:

  | Setting | Value |
  |---|---|
  | Name | vectra-xdr-logs-collector-hourly |
  | Region | Select the same region as the Cloud Run function |
  | Frequency | `0 * * * *` (every hour, on the hour) |
  | Timezone | Select timezone (UTC recommended) |
  | Target type | Pub/Sub |
  | Topic | Select the topic vectra-xdr-logs-trigger |
  | Message body | `{}` (empty JSON object) |
Click Create.
Schedule frequency options
Choose frequency based on log volume and latency requirements:
| Frequency | Cron Expression | Use Case |
|---|---|---|
| Every 5 minutes | `*/5 * * * *` | High-volume, low-latency |
| Every 15 minutes | `*/15 * * * *` | Medium volume |
| Every hour | `0 * * * *` | Standard (recommended) |
| Every 6 hours | `0 */6 * * *` | Low volume, batch processing |
| Daily | `0 0 * * *` | Historical data collection |
Test the integration
- In the Cloud Scheduler console, find your job.
- Click Force run to trigger the job manually.
- Wait a few seconds.
- Go to Cloud Run > Services.
- Click vectra-xdr-logs-collector.
- Click the Logs tab.
- Verify the function executed successfully. Look for:

  ```
  detections batch 1: X events, Y remaining
  All events fetched for detections
  audits batch 1: X events, Y remaining
  All events fetched for audits
  Wrote X records to gs://vectra-xdr-logs/vectra/logs_YYYYMMDD_HHMMSS.ndjson
  Successfully processed X records
  ```
- Go to Cloud Storage > Buckets.
- Click your bucket name (vectra-xdr-logs).
- Navigate to the vectra/ folder.
- Verify that a new .ndjson file was created with the current timestamp.
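To spot-check a downloaded file, remember that each line of the NDJSON output is one JSON event that the function tagged with `_vectra_event_type`. A minimal local check (the sample content below is illustrative, not real Vectra data):

```python
import json

# Sample lines shaped like the collector's output; field values are illustrative
sample = (
    '{"id": 1, "_vectra_event_type": "detections"}\n'
    '{"id": 2, "_vectra_event_type": "audits"}\n'
)

# Parse each non-empty line as a standalone JSON object
events = [json.loads(line) for line in sample.splitlines() if line.strip()]
types = {event["_vectra_event_type"] for event in events}
print(f"{len(events)} events, types: {sorted(types)}")  # 2 events, types: ['audits', 'detections']
```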
If you see errors in the logs:
- HTTP 401: Check client credentials in environment variables
- HTTP 403: Verify API client has required permissions in Vectra AI Platform
- HTTP 429: Rate limiting - function will automatically retry with backoff
- Missing environment variables: Check all required variables are set
Configure a feed in Google SecOps to ingest Vectra XDR logs
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- Click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, Vectra XDR Logs).
- Select Google Cloud Storage V2 as the Source type.
- Select Vectra XDR as the Log type.
- Click Get Service Account. A unique service account email is displayed, for example: chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com
- Copy this email address.
- Click Next.
- Specify values for the following input parameters:
  - Storage bucket URL: Enter the GCS bucket URI with the prefix path: gs://vectra-xdr-logs/vectra/
    - Replace vectra-xdr-logs with your GCS bucket name.
    - Replace vectra with the optional prefix/folder path where logs are stored (leave empty for root).
  - Source deletion option: Select the deletion option according to your preference:
    - Never: Never deletes any files after transfers (recommended for testing).
    - Delete transferred files: Deletes files after successful transfer.
    - Delete transferred files and empty directories: Deletes files and empty directories after successful transfer.
  - Maximum File Age: Include files modified in the last number of days (default is 180 days).
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
Grant IAM permissions to the Google SecOps service account
The Google SecOps service account needs the Storage Object Viewer role on your GCS bucket.
- Go to Cloud Storage > Buckets.
- Click your bucket name.
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
- Add principals: Paste the Google SecOps service account email
- Assign roles: Select Storage Object Viewer
- Click Save.
UDM mapping table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| log_type_lable, id_label, triaged_label, detail_label, properties_label, event_object_label | additional.fields | Merged with various labels based on log type |
| msg, d_type_vname | extensions.auth.auth_details | Set to msg for USER_LOGIN, d_type_vname for other user logins |
| d_type_vname, category, type, msg | metadata.event_type | Set to GENERIC_EVENT initially; overridden to specific types based on log type and conditions |
| d_type_vname, category, type, event_action | metadata.product_event_type | Set to d_type_vname in Detection, category in Scoring, type in Lockdown, event_action in Audit |
| detection_id, id | metadata.product_log_id | Set to detection_id in Detection, id in Lockdown and Audit |
| version, system.version.vectra_version | metadata.product_version | Set to version in Audit, system.version.vectra_version in Health |
| _sensor.serial_number | observer.asset_id | Value copied directly |
| _sensor.name | observer.hostname | Value copied directly |
| _sensor.ip_address | observer.ip | Value copied directly |
| _sensor.location | observer.location.name | Value copied directly |
| entity_id | principal.asset_id | Value copied directly, prefixed with "Vectra:" if not colon-separated |
| entity_uid | principal.hostname | Value copied directly if matches pattern |
| src_ip, source_ip | principal.ip | Set from src_ip in Detection if valid IP, source_ip in Audit if valid IP |
| var_entity_uid, var_username, var_api_client_id, var_user_type, var_entity_name | principal.user.attribute.labels | Merged with various var_ based on log type |
| user_role | principal.user.attribute.roles | Value copied directly |
| entity_uid, name, entity_name, username | principal.user.email_addresses | Set from entity_uid in Detection if email pattern, name in Scoring if email pattern, entity_name in Lockdown if email pattern, username in Audit if email pattern |
| locked_by | principal.user.user_display_name | Value copied directly |
| entity_id, user_id | principal.user.userid | Set from entity_id in Detection/Scoring/Lockdown, user_id in Audit |
| result_status | security_result.action | Set to ALLOW if success, FAIL if failure |
| result_status | security_result.action_details | Value copied directly |
| mitre | security_result.attack_details.techniques | Merged from mitre array |
| category | security_result.category | Mapped to category_temp based on regex matches for COMMAND & CONTROL, BOTNET ACTIVITY, etc. |
| category | security_result.category_details | Value copied directly |
| certainty | security_result.confidence_score | Converted to float |
| urgency_reason, msg | security_result.description | Set to urgency_reason in Scoring, msg in Audit |
| var_type, var_detection_type, var_triaged, var_id, var_active_detection_types, var_breadth_contrib, var_attack_rating, var_importance, var_last_detection_id, var_last_detection_type, var_last_detection_url, var_urgency_score, var_velocity_contrib, unlock_event_timestamp_label, var_detection_updated_at, var_sensor_connectivity_sensors_error, var_connectivity_sensors_affected_metadata_hours, var_connectivity_sensors_status, var_connectivity_updated_at, var_cpu_idle_percent, var_cpu_nice_percent, var_cpu_system_percent, var_cpu_updated_at, var_cpu_user_percent, var_disk_disk_utilization_free_bytes, var_disk_raid_disks_missing_output_label, var_disk_raid_disks_missing_error, var_disk_raid_disks_missing_status, var_disk_degraded_raid_volume_output, var_disk_degraded_raid_volume_error, var_disk_degraded_raid_volume_status, var_disk_disk_raid_error, var_disk_disk_raid_status, var_disk_disk_raid_output, var_disk_disk_utilization_total_bytes, var_disk_disk_utilization_usage_percent, var_disk_disk_utilization_used_bytes, var_disk_updated_at, var_hostid_artifact_counts_arsenic, var_hostid_artifact_counts_carbon_black, var_hostid_artifact_counts_cb_cloud, var_hostid_artifact_counts_clear_state, var_hostid_artifact_counts_cookie, var_hostid_artifact_counts_crowdstrike, var_hostid_artifact_counts_cybereason, var_hostid_artifact_counts_dhcp, var_hostid_artifact_counts_dns, var_hostid_artifact_counts_end_time, var_hostid_artifact_counts_fireeye, var_hostid_artifact_counts_generic_edr, var_hostid_artifact_counts_idle_end, var_hostid_artifact_counts_idle_start, var_hostid_artifact_counts_invalid, var_hostid_artifact_counts_kerberos_user, var_hostid_artifact_counts_kerberos, var_hostid_artifact_counts_mdns, var_hostid_artifact_counts_netbios, var_hostid_artifact_counts_proxy_ip, var_hostid_artifact_counts_rdns, var_hostid_artifact_counts_sentinelone, var_hostid_artifact_counts_split, var_hostid_artifact_counts_src_port, var_hostid_artifact_counts_static_ip, var_hostid_artifact_counts_TestEDR, var_hostid_artifact_counts_total, var_hostid_artifact_counts_uagent, var_hostid_artifact_counts_vmachine_info, var_hostid_artifact_counts_windows_defender, var_hostid_artifact_counts_zpa_user, var_hostid_ip_always_percent, var_hostid_ip_never_percent, var_hostid_ip_sometimes_percent, var_hostid_updated_at, var_dimm_stat_dimm, var_dimm_stat_status, var_memory_free_bytes, var_memory_total_bytes, var_memory_updated_at, var_memory_usage_percent, var_memory_used_bytes, network_interfaces_brain_label, var_network_interfaces_sensors_w4ftj0a8_eth0_link, var_network_traffic_brain_aggregated_peak_traffic_mbps, var_network_traffic_brain_interface_peak_traffic_label, var_network_traffic_sensors_edr_sensor_aggregated_peak_traffic_mbps, var_sensor_interface_peak_traffic_eth0_peak_traffic_mbps, var_network_updated_at, var_network_vlans_count, var_network_vlans_vlan_ids, var_power_error, var_power_status, var_power_updated_at, var_power_power_supply, var_sensors_headend_uri, var_sensors_id, var_sensors_last_seen, var_sensors_luid, var_sensors_mode, var_sensors_original_version, var_sensors_product_name, var_sensors_public_key, var_sensors_ssh_tunnel_port, var_sensors_status, var_sensors_update_count, var_sensors_version, var_trafficdrop_sensors_error, var_trafficdrop_sensors_ip_address, var_trafficdrop_sensors_luid, var_trafficdrop_sensors_name, var_trafficdrop_sensors_output_end, var_trafficdrop_sensors_output_interface_cutoff, var_trafficdrop_sensors_output_start, var_trafficdrop_sensors_output_interface_baseline, var_trafficdrop_sensors_output_interface_name, var_trafficdrop_sensors_output_interface_traffic, var_trafficdrop_sensors_serial_number, var_trafficdrop_sensors_status, var_trafficdrop_updated_at | security_result.detection_fields | Merged with various var_ and labels based on log type |
| is_prioritized | security_result.priority | Set to HIGH if true, LOW if false |
| is_prioritized | security_result.priority_details | Value copied directly |
| threat | security_result.risk_score | Converted to float |
| severity | security_result.severity | Set to HIGH if matches (?i)high, MEDIUM if matches (?i)medium, LOW if matches (?i)low |
| severity | security_result.severity_details | Value copied directly |
| detection_href, url | security_result.url_back_to_product | Set to detection_href in Detection, url in Scoring |
| entity_id | target.asset_id | Value copied directly, prefixed with "Vectra:" if not colon-separated |
| entity_name, name | target.hostname | Set from entity_name in Lockdown, name in Scoring if not IP |
| target_ip_temp | target.ip | Value copied directly |
| lock_event_timestamp | target.user.account_lockout_time | Converted from string to timestamp |
| var_entity_name | target.user.attribute.labels | Merged with var_entity_name |
| name, entity_name | target.user.email_addresses | Set from name in Scoring if email pattern, entity_name in Lockdown if email pattern |
| entity_id | target.user.userid | Value copied directly |
| | metadata.product_name | Set to "XDR" |
| | metadata.vendor_name | Set to "Vectra" |
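For reference, the severity logic described in the table (case-insensitive matches mapped to HIGH, MEDIUM, or LOW) behaves like this hypothetical re-implementation; the UNKNOWN_SEVERITY fallback for unmatched values is an assumption, not documented parser behavior:

```python
import re

def map_severity(severity) -> str:
    # Case-insensitive mapping from the UDM table; checked in order of precedence.
    for pattern, udm_value in [
        (r"(?i)high", "HIGH"),
        (r"(?i)medium", "MEDIUM"),
        (r"(?i)low", "LOW"),
    ]:
        if re.search(pattern, severity or ""):
            return udm_value
    # Fallback for unmatched or empty values (assumption)
    return "UNKNOWN_SEVERITY"

print(map_severity("High"))    # HIGH
print(map_severity("medium"))  # MEDIUM
```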
Need more help? Get answers from Community members and Google SecOps professionals.

