Collect Symantec WSS logs

This document explains how to ingest Symantec Web Security Service (WSS) logs to Google Security Operations using Amazon S3. The parser first attempts to parse the log message as JSON. If that fails, it uses a series of increasingly specific grok patterns to extract fields from the raw text, ultimately mapping the extracted data to the Unified Data Model (UDM).

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance.
  • Privileged access to Symantec Web Security Service.
  • Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, and EventBridge).

Collect Symantec WSS prerequisites (IDs, API keys, org IDs, tokens)

  1. Sign in to the Symantec Web Security Service Portal as an administrator.
  2. Go to Account > API Credentials.
  3. Click Add.
  4. Provide the following configuration details:
    • API Name: Enter a descriptive name (for example, Google SecOps Integration).
    • Description: Enter a description for the API credentials.
  5. Click Save and copy the generated API credentials securely.
  6. Record your WSS Portal URL and Sync API endpoint.
  7. Copy and save in a secure location the following details:
    • WSS_API_USERNAME.
    • WSS_API_PASSWORD.
    • WSS_SYNC_URL.
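The credentials recorded above are sent to the Sync API as HTTP headers, and the time window as millisecond epoch parameters. A minimal sketch of how such a request is formed (the endpoint and header names match the Lambda code later in this document; the credential values here are placeholders):

```python
from urllib.request import Request

# Placeholders -- substitute the values you recorded above.
WSS_SYNC_URL = "https://portal.threatpulse.com/reportpod/logs/sync"
WSS_API_USERNAME = "example-user"
WSS_API_PASSWORD = "example-password"

def build_sync_request(start_ts: float, end_ts: float) -> Request:
    """Build a WSS Sync API request for a [start_ts, end_ts) window in seconds."""
    # The Sync API expects millisecond epoch timestamps.
    start_ms, end_ms = int(start_ts * 1000), int(end_ts * 1000)
    url = f"{WSS_SYNC_URL}?startDate={start_ms}&endDate={end_ms}&token=none"
    req = Request(url, method="GET")
    # Credentials travel as custom headers, not HTTP basic auth.
    req.add_header("X-APIUsername", WSS_API_USERNAME)
    req.add_header("X-APIPassword", WSS_API_PASSWORD)
    return req

req = build_sync_request(1700000000, 1700003600)
print(req.full_url)
```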

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket
  2. Save the bucket Name and Region for future reference (for example, symantec-wss-logs).
  3. Create a User following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select the Security credentials tab.
  6. Click Create access key in the Access keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: Add a description tag.
  10. Click Create access key.
  11. Click Download CSV file to save the Access Key and Secret Access Key for future reference.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for the AmazonS3FullAccess policy.
  18. Select the policy.
  19. Click Next.
  20. Click Add permissions.

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies.
  2. Click Create policy > JSON tab.
  3. Copy and paste the following policy.
  4. Policy JSON (replace symantec-wss-logs if you entered a different bucket name):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowPutObjects",
          "Effect": "Allow",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::symantec-wss-logs/*"
        },
        {
          "Sid": "AllowGetStateObject",
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::symantec-wss-logs/symantec/wss/state.json"
        }
      ]
    }
  5. Click Next > Create policy.

  6. Go to IAM > Roles > Create role > AWS service > Lambda.

  7. Attach the newly created policy.

  8. Name the role SymantecWssToS3Role and click Create role.

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:

    Setting Value
    Name symantec_wss_to_s3
    Runtime Python 3.13
    Architecture x86_64
    Execution role SymantecWssToS3Role
  4. After the function is created, open the Code tab, delete the stub, and paste the following code (symantec_wss_to_s3.py).

    #!/usr/bin/env python3
    # Lambda: Pull Symantec WSS logs and store raw payloads to S3
    # - Time window via millisecond timestamps for WSS Sync API.
    # - Preserves vendor-native format (CSV/JSON/ZIP).
    # - Retries with exponential backoff; unique S3 keys to avoid overwrites.
    import os, json, time, uuid
    from urllib.request import Request, urlopen
    from urllib.error import URLError, HTTPError
    import boto3

    S3_BUCKET = os.environ["S3_BUCKET"]
    S3_PREFIX = os.environ.get("S3_PREFIX", "symantec/wss/")
    STATE_KEY = os.environ.get("STATE_KEY", "symantec/wss/state.json")
    WINDOW_SEC = int(os.environ.get("WINDOW_SECONDS", "3600"))  # default 1h
    HTTP_TIMEOUT = int(os.environ.get("HTTP_TIMEOUT", "60"))
    WSS_SYNC_URL = os.environ.get("WSS_SYNC_URL", "https://portal.threatpulse.com/reportpod/logs/sync")
    API_USERNAME = os.environ["WSS_API_USERNAME"]
    API_PASSWORD = os.environ["WSS_API_PASSWORD"]
    TOKEN_PARAM = os.environ.get("WSS_TOKEN_PARAM", "none")
    MAX_RETRIES = int(os.environ.get("MAX_RETRIES", "3"))
    USER_AGENT = os.environ.get("USER_AGENT", "symantec-wss-to-s3/1.0")

    s3 = boto3.client("s3")

    def _load_state():
        try:
            obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
            return json.loads(obj["Body"].read())
        except Exception:
            return {}

    def _save_state(st):
        s3.put_object(
            Bucket=S3_BUCKET,
            Key=STATE_KEY,
            Body=json.dumps(st, separators=(",", ":")).encode("utf-8"),
            ContentType="application/json",
        )

    def _ms_timestamp(ts: float) -> int:
        """Convert Unix timestamp to milliseconds for WSS API"""
        return int(ts * 1000)

    def _fetch_wss_logs(start_ms: int, end_ms: int) -> tuple[bytes, str, str]:
        # WSS Sync API parameters
        params = f"startDate={start_ms}&endDate={end_ms}&token={TOKEN_PARAM}"
        url = f"{WSS_SYNC_URL}?{params}"
        attempt = 0
        while True:
            req = Request(url, method="GET")
            req.add_header("User-Agent", USER_AGENT)
            req.add_header("X-APIUsername", API_USERNAME)
            req.add_header("X-APIPassword", API_PASSWORD)
            try:
                with urlopen(req, timeout=HTTP_TIMEOUT) as r:
                    blob = r.read()
                    content_type = r.headers.get("Content-Type", "application/octet-stream")
                    content_encoding = r.headers.get("Content-Encoding", "")
                    return blob, content_type, content_encoding
            except (HTTPError, URLError) as e:
                attempt += 1
                print(f"HTTP error on attempt {attempt}: {e}")
                if attempt > MAX_RETRIES:
                    raise
                # exponential backoff with jitter
                time.sleep(min(60, 2 ** attempt) + (time.time() % 1))

    def _determine_extension(content_type: str, content_encoding: str) -> str:
        """Determine file extension based on content type and encoding"""
        if "zip" in content_type.lower():
            return ".zip"
        if "gzip" in content_type.lower() or content_encoding.lower() == "gzip":
            return ".gz"
        if "json" in content_type.lower():
            return ".json"
        if "csv" in content_type.lower():
            return ".csv"
        return ".bin"

    def _put_wss_data(blob: bytes, content_type: str, content_encoding: str, from_ts: float, to_ts: float) -> str:
        # Create unique S3 key for WSS data
        ts_path = time.strftime("%Y/%m/%d", time.gmtime(to_ts))
        uniq = f"{int(time.time() * 1e6)}_{uuid.uuid4().hex[:8]}"
        ext = _determine_extension(content_type, content_encoding)
        key = f"{S3_PREFIX}{ts_path}/symantec_wss_{int(from_ts)}_{int(to_ts)}_{uniq}{ext}"
        s3.put_object(
            Bucket=S3_BUCKET,
            Key=key,
            Body=blob,
            ContentType=content_type,
            Metadata={
                "source": "symantec-wss",
                "from_timestamp": str(int(from_ts)),
                "to_timestamp": str(int(to_ts)),
                "content_encoding": content_encoding,
            },
        )
        return key

    def lambda_handler(event=None, context=None):
        st = _load_state()
        now = time.time()
        from_ts = float(st.get("last_to_ts") or (now - WINDOW_SEC))
        to_ts = now
        # Convert to milliseconds for WSS API
        start_ms = _ms_timestamp(from_ts)
        end_ms = _ms_timestamp(to_ts)
        print(f"Fetching Symantec WSS logs from {start_ms} to {end_ms}")
        blob, content_type, content_encoding = _fetch_wss_logs(start_ms, end_ms)
        print(f"Retrieved {len(blob)} bytes with content-type: {content_type}")
        if content_encoding:
            print(f"Content encoding: {content_encoding}")
        key = _put_wss_data(blob, content_type, content_encoding, from_ts, to_ts)
        st["last_to_ts"] = to_ts
        st["last_successful_run"] = now
        _save_state(st)
        return {
            "statusCode": 200,
            "body": {
                "success": True,
                "s3_key": key,
                "content_type": content_type,
                "content_encoding": content_encoding,
                "from_timestamp": from_ts,
                "to_timestamp": to_ts,
                "bytes_retrieved": len(blob),
            },
        }

    if __name__ == "__main__":
        print(lambda_handler())
  5. Go to Configuration > Environment variables.

  6. Click Edit > Add new environment variable.

  7. Enter the environment variables provided in the following table, replacing the example values with your values.

    Environment variables

    Key Example value
    S3_BUCKET symantec-wss-logs
    S3_PREFIX symantec/wss/
    STATE_KEY symantec/wss/state.json
    WINDOW_SECONDS 3600
    HTTP_TIMEOUT 60
    MAX_RETRIES 3
    USER_AGENT symantec-wss-to-s3/1.0
    WSS_SYNC_URL https://portal.threatpulse.com/reportpod/logs/sync
    WSS_API_USERNAME your-api-username (from step 2)
    WSS_API_PASSWORD your-api-password (from step 2)
    WSS_TOKEN_PARAM none
  8. After the function is created, stay on its page (or open Lambda > Functions > your-function).

  9. Select the Configuration tab.

  10. In the General configuration panel, click Edit.

  11. Change Timeout to 5 minutes (300 seconds), and click Save.
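With these settings, each run writes one object under a date-partitioned key. A short sketch of the key layout the Lambda produces, using the same formatting logic (the timestamps here are illustrative):

```python
import time
import uuid

S3_PREFIX = "symantec/wss/"

def example_key(from_ts: float, to_ts: float, ext: str = ".zip") -> str:
    # The date partition comes from the window end, in UTC.
    ts_path = time.strftime("%Y/%m/%d", time.gmtime(to_ts))
    # Microsecond timestamp plus a short UUID keeps keys unique across runs.
    uniq = f"{int(time.time() * 1e6)}_{uuid.uuid4().hex[:8]}"
    return f"{S3_PREFIX}{ts_path}/symantec_wss_{int(from_ts)}_{int(to_ts)}_{uniq}{ext}"

key = example_key(1700000000, 1700003600)
print(key)  # symantec/wss/2023/11/14/symantec_wss_1700000000_1700003600_<uniq>.zip
```

Because every key is unique, re-running the function never overwrites a previous pull; the state object in `symantec/wss/state.json` is the only key that gets updated in place.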

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (1 hour).
    • Target: Your Lambda function symantec_wss_to_s3.
    • Name: symantec-wss-1h.
  3. Click Create schedule.
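The same schedule can also be created through the EventBridge Scheduler API. A hedged boto3 sketch with placeholder ARNs (note that, unlike the console flow above, the API requires you to supply an invocation role that allows Scheduler to call your Lambda):

```python
# Placeholder ARNs -- substitute your own account, region, and role.
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:symantec_wss_to_s3"
SCHEDULER_ROLE_ARN = "arn:aws:iam::123456789012:role/scheduler-invoke-role"

schedule_params = {
    "Name": "symantec-wss-1h",
    "ScheduleExpression": "rate(1 hour)",
    # OFF disables the flexible invocation window (fire exactly on schedule).
    "FlexibleTimeWindow": {"Mode": "OFF"},
    "Target": {"Arn": LAMBDA_ARN, "RoleArn": SCHEDULER_ROLE_ARN},
}

# Create with: boto3.client("scheduler").create_schedule(**schedule_params)
print(schedule_params["ScheduleExpression"])
```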

(Optional) Create read-only IAM user and keys for Google SecOps

  1. In the AWS Console, go to IAM > Users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: Enter secops-reader .
    • Access type: Select Access key – Programmatic access.
  4. Click Create user.
  5. Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. JSON:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::symantec-wss-logs/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::symantec-wss-logs"
        }
      ]
    }
  7. Name the policy secops-reader-policy.

  8. Click Create policy > search/select > Next > Add permissions.

  9. Create an access key for secops-reader: Security credentials > Access keys.

  10. Click Create access key.

  11. Download the CSV file. (You'll paste these values into the feed configuration.)

Configure a feed in Google SecOps to ingest Symantec WSS logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Symantec WSS logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Symantec WSS as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://symantec-wss-logs/symantec/wss/
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.

UDM mapping table

Log field UDM mapping Logic
category_id
read_only_udm.metadata.product_event_type If category_id is 1, then read_only_udm.metadata.product_event_type is set to Security . If category_id is 5, then read_only_udm.metadata.product_event_type is set to Policy
collector_device_ip
read_only_udm.principal.ip, read_only_udm.principal.asset.ip Value of collector_device_ip field
connection.bytes_download
read_only_udm.network.received_bytes Value of connection.bytes_download field converted to integer
connection.bytes_upload
read_only_udm.network.sent_bytes Value of connection.bytes_upload field converted to integer
connection.dst_ip
read_only_udm.target.ip Value of connection.dst_ip field
connection.dst_location.country
read_only_udm.target.location.country_or_region Value of connection.dst_location.country field
connection.dst_name
read_only_udm.target.hostname Value of connection.dst_name field
connection.dst_port
read_only_udm.target.port Value of connection.dst_port field converted to integer
connection.http_status
read_only_udm.network.http.response_code Value of connection.http_status field converted to integer
connection.http_user_agent
read_only_udm.network.http.user_agent Value of connection.http_user_agent field
connection.src_ip
read_only_udm.principal.ip, read_only_udm.src.ip Value of connection.src_ip field. If src_ip or collector_device_ip is not empty, then it is mapped to read_only_udm.src.ip
connection.tls.version
read_only_udm.network.tls.version_protocol Value of connection.tls.version field
connection.url.host
read_only_udm.target.hostname Value of connection.url.host field
connection.url.method
read_only_udm.network.http.method Value of connection.url.method field
connection.url.path
read_only_udm.target.url Value of connection.url.path field
connection.url.text
read_only_udm.target.url Value of connection.url.text field
cs_connection_negotiated_cipher
read_only_udm.network.tls.cipher Value of cs_connection_negotiated_cipher field
cs_icap_status
read_only_udm.security_result.description Value of cs_icap_status field
device_id
read_only_udm.target.resource.id, read_only_udm.target.resource.product_object_id Value of device_id field
device_ip
read_only_udm.intermediary.ip, read_only_udm.intermediary.asset.ip Value of device_ip field
device_time
read_only_udm.metadata.collected_timestamp, read_only_udm.metadata.event_timestamp Value of device_time field converted to string. If when is empty, then it is mapped to read_only_udm.metadata.event_timestamp
hostname
read_only_udm.principal.hostname, read_only_udm.principal.asset.hostname Value of hostname field
log_time
read_only_udm.metadata.event_timestamp Value of log_time field converted to timestamp. If when and device_time are empty, then it is mapped to read_only_udm.metadata.event_timestamp
msg_desc
read_only_udm.metadata.description Value of msg_desc field
os_details
read_only_udm.target.asset.platform_software.platform, read_only_udm.target.asset.platform_software.platform_version Value of os_details field. If os_details is not empty, then it is parsed to extract os_name and os_ver. If os_name contains Windows , then read_only_udm.target.asset.platform_software.platform is set to WINDOWS . os_ver is mapped to read_only_udm.target.asset.platform_software.platform_version
product_data.cs(Referer)
read_only_udm.network.http.referral_url Value of product_data.cs(Referer) field
product_data.r-supplier-country
read_only_udm.principal.location.country_or_region Value of product_data.r-supplier-country field
product_data.s-supplier-ip
read_only_udm.intermediary.ip, read_only_udm.intermediary.asset.ip Value of product_data.s-supplier-ip field
product_data.x-bluecoat-application-name
read_only_udm.target.application Value of product_data.x-bluecoat-application-name field
product_data.x-bluecoat-transaction-uuid
read_only_udm.metadata.product_log_id Value of product_data.x-bluecoat-transaction-uuid field
product_data.x-client-agent-sw
read_only_udm.observer.platform_version Value of product_data.x-client-agent-sw field
product_data.x-client-agent-type
read_only_udm.observer.application Value of product_data.x-client-agent-type field
product_data.x-client-device-id
read_only_udm.target.resource.type, read_only_udm.target.resource.id, read_only_udm.target.resource.product_object_id If not empty, read_only_udm.target.resource.type is set to DEVICE . Value of product_data.x-client-device-id field is mapped to read_only_udm.target.resource.id and read_only_udm.target.resource.product_object_id
product_data.x-client-device-name
read_only_udm.src.hostname, read_only_udm.src.asset.hostname Value of product_data.x-client-device-name field
product_data.x-cs-client-ip-country
read_only_udm.target.location.country_or_region Value of product_data.x-cs-client-ip-country field
product_data.x-cs-connection-negotiated-cipher
read_only_udm.network.tls.cipher Value of product_data.x-cs-connection-negotiated-cipher field
product_data.x-cs-connection-negotiated-ssl-version
read_only_udm.network.tls.version_protocol Value of product_data.x-cs-connection-negotiated-ssl-version field
product_data.x-exception-id
read_only_udm.security_result.summary Value of product_data.x-exception-id field
product_data.x-rs-certificate-hostname
read_only_udm.network.tls.client.server_name Value of product_data.x-rs-certificate-hostname field
product_data.x-rs-certificate-hostname-categories
read_only_udm.security_result.category_details Value of product_data.x-rs-certificate-hostname-categories field
product_data.x-rs-certificate-observed-errors
read_only_udm.network.tls.server.certificate.issuer Value of product_data.x-rs-certificate-observed-errors field
product_data.x-rs-certificate-validate-status
read_only_udm.network.tls.server.certificate.subject Value of product_data.x-rs-certificate-validate-status field
product_name
read_only_udm.metadata.product_name Value of product_name field
product_ver
read_only_udm.metadata.product_version Value of product_ver field
proxy_connection.src_ip
read_only_udm.intermediary.ip, read_only_udm.intermediary.asset.ip Value of proxy_connection.src_ip field
received_bytes
read_only_udm.network.received_bytes Value of received_bytes field converted to integer
ref_uid
read_only_udm.metadata.product_log_id Value of ref_uid field
s_action
read_only_udm.metadata.description Value of s_action field
sent_bytes
read_only_udm.network.sent_bytes Value of sent_bytes field converted to integer
severity_id
read_only_udm.security_result.severity If severity_id is 1 or 2, then read_only_udm.security_result.severity is set to LOW . If severity_id is 3 or 4, then read_only_udm.security_result.severity is set to MEDIUM . If severity_id is 5 or 6, then read_only_udm.security_result.severity is set to HIGH
supplier_country
read_only_udm.principal.location.country_or_region Value of supplier_country field
target_ip
read_only_udm.target.ip, read_only_udm.target.asset.ip Value of target_ip field
user.full_name
read_only_udm.principal.user.user_display_name Value of user.full_name field
user.name
read_only_udm.principal.user.user_display_name Value of user.name field
user_name
read_only_udm.principal.user.user_display_name Value of user_name field
uuid
read_only_udm.metadata.product_log_id Value of uuid field
when
read_only_udm.metadata.event_timestamp Value of when field converted to timestamp
read_only_udm.metadata.event_type
Set to NETWORK_UNCATEGORIZED if hostname is empty and connection.dst_ip is not empty. Set to SCAN_NETWORK if hostname is not empty. Set to NETWORK_CONNECTION if has_principal and has_target are true . Set to STATUS_UPDATE if has_principal is true and has_target is false . Set to GENERIC_EVENT if has_principal and has_target are false
read_only_udm.metadata.log_type
Always set to SYMANTEC_WSS
read_only_udm.metadata.vendor_name
Always set to SYMANTEC
read_only_udm.security_result.action
Set to ALLOW if product_data.sc-filter_result is OBSERVED or PROXIED . Set to BLOCK if product_data.sc-filter_result is DENIED
read_only_udm.security_result.action_details
Value of product_data.sc-filter_result field
read_only_udm.target.resource.type
Set to DEVICE if product_data.x-client-device-id is not empty

Need more help? Get answers from Community members and Google SecOps professionals.
