Collect Delinea SSO logs

This document explains how to ingest Delinea (formerly Centrify) Single Sign-On (SSO) logs into Google Security Operations using Amazon S3. The parser handles both JSON and syslog formats, extracting key-value pairs, timestamps, and other relevant fields and mapping them to the Unified Data Model (UDM), with specific logic for login failures, user agents, severity levels, authentication mechanisms, and various event types. For failure events, it prioritizes FailUserName over NormalizedUser when populating the target email address.
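
For example, the following minimal sketch (illustrative only, not the actual Google SecOps parser implementation) shows the precedence described above for failure events:

    # Illustrative sketch only (not the actual parser): FailUserName takes
    # priority over NormalizedUser when populating the target user email.
    def target_email(raw_event):
        return raw_event.get("FailUserName") or raw_event.get("NormalizedUser")

    print(target_email({"FailUserName": "bob@example.com", "NormalizedUser": "alice@example.com"}))
    # Prints bob@example.com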

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance.
  • Privileged access to the Delinea (Centrify) SSO tenant.
  • Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).

Collect Delinea (Centrify) SSO prerequisites (IDs, API keys, org IDs, tokens)

  1. Sign in to the Delinea Admin Portal.
  2. Go to Apps > Add Apps.
  3. Search for OAuth2 Client and click Add.
  4. Click Yes in the Add Web App dialog.
  5. Click Close in the Add Web Apps dialog.
  6. On the Application Configuration page, configure the following:
    • General tab:
      • Application ID: Enter a unique identifier (for example, secops-oauth-client)
      • Application Name: Enter a descriptive name (for example, SecOps Data Export)
      • Application Description: Enter a description (for example, OAuth client for exporting audit events to SecOps)
    • Trust tab:
      • Application is Confidential: Check this option
      • Client ID Type: Select Confidential
      • Issued Client ID: Copy and save this value
      • Issued Client Secret: Copy and save this value
    • Tokens tab:
      • Auth methods: Select Client Creds
      • Token Type: Select Jwt RS256
    • Scope tab:
      • Add scope siem with the description SIEM Integration Access.
      • Add scope redrock/query with the description Query API Access.
  7. Click Save to create the OAuth client.
  8. Go to Core Services > Users > Add User.
  9. Configure the service user:
    • Login Name: Enter the Client ID from step 6.
    • Email Address: Enter a valid email (required field).
    • Display Name: Enter a descriptive name (for example, SecOps Service User).
    • Password and Confirm Password: Enter the Client Secret from step 6.
    • Status: Select Is OAuth confidential client.
  10. Click Create User.
  11. Go to Access > Roles and assign the service user to a role with appropriate permissions to query audit events.
  12. Copy and save the following details in a secure location (you can validate them with the token-request sketch after this list):
    • Tenant URL: Your Centrify tenant URL (for example, https://yourtenant.my.centrify.com)
    • Client ID: From step 6
    • Client Secret: From step 6
    • OAuth Application ID: From the Application Configuration
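
Optionally, validate the OAuth client before building the pipeline. The following is a minimal sketch that requests a token with the client-credentials flow, using the same endpoint, headers, and scopes as the Lambda function later in this guide; the placeholder tenant URL, application ID, client ID, and client secret are assumptions that you replace with the values saved in step 12.

    import base64
    import requests

    # Replace these placeholders with the values saved in step 12 (assumptions for illustration).
    TENANT_URL = "https://yourtenant.my.centrify.com"
    OAUTH_APP_ID = "secops-oauth-client"
    CLIENT_ID = "your-client-id"
    CLIENT_SECRET = "your-client-secret"

    # Client-credentials token request against the tenant's OAuth2 endpoint,
    # mirroring the headers and scopes used by the Lambda function below.
    credentials = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
    response = requests.post(
        f"{TENANT_URL}/oauth2/token/{OAUTH_APP_ID}",
        headers={
            "Authorization": f"Basic {credentials}",
            "X-CENTRIFY-NATIVE-CLIENT": "True",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        data={"grant_type": "client_credentials", "scope": "siem redrock/query"},
        timeout=30,
    )
    response.raise_for_status()
    print("Access token obtained:", bool(response.json().get("access_token")))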

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket.
  2. Save the bucket Name and Region for future reference (for example, delinea-centrify-logs-bucket).
  3. Create a User following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: Add a description tag.
  10. Click Create access key.
  11. Click Download .CSV file to save the Access Key and Secret Access Key for future reference.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for the AmazonS3FullAccess policy.
  18. Select the policy.
  19. Click Next.
  20. Click Add permissions.
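
Optionally, confirm that the access key can reach the bucket before continuing. The following is a minimal boto3 sketch, assuming the example bucket name from step 2 and the access key pair downloaded in step 11; the region and test object key are illustrative values.

    import boto3

    # Assumptions for illustration: the example bucket name from step 2 and the
    # access key pair downloaded in step 11. Replace with your own values.
    session = boto3.Session(
        aws_access_key_id="YOUR_ACCESS_KEY_ID",
        aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
        region_name="us-east-1",  # use the region you chose for the bucket
    )
    s3 = session.client("s3")

    # Write and read back a small test object to confirm the credentials work.
    s3.put_object(
        Bucket="delinea-centrify-logs-bucket",
        Key="centrify-sso-logs/connectivity-test.txt",
        Body=b"ok",
    )
    obj = s3.get_object(
        Bucket="delinea-centrify-logs-bucket",
        Key="centrify-sso-logs/connectivity-test.txt",
    )
    print(obj["Body"].read())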

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies.
  2. Click Create policy > JSON tab.
  3. Copy and paste the following policy.
  4. Policy JSON (replace delinea-centrify-logs-bucket if you entered a different bucket name):

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowPutObjects",
          "Effect": "Allow",
          "Action": "s3:PutObject",
          "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket/*"
        },
        {
          "Sid": "AllowGetStateObject",
          "Effect": "Allow",
          "Action": "s3:GetObject",
          "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket/centrify-sso-logs/state.json"
        }
      ]
    }

  5. Click Next > Create policy.

  6. Go to IAM > Roles.

  7. Click Create role > AWS service > Lambda.

  8. Attach the newly created policy and the managed policy AWSLambdaBasicExecutionRole (for CloudWatch logging).

  9. Name the role CentrifySSOLogExportRole and click Create role.
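
If you prefer to script this step, the following is a minimal boto3 sketch. It creates the same policy document as step 4, a trust relationship that lets Lambda assume the role, and the role itself, then attaches both policies. The customer-managed policy name CentrifySSOLogExportPolicy is an assumption (the console flow above doesn't prescribe one), and the bucket name is the example used throughout this guide.

    import json
    import boto3

    iam = boto3.client("iam")

    # Same policy document as step 4 (example bucket name assumed).
    policy_doc = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowPutObjects", "Effect": "Allow", "Action": "s3:PutObject",
             "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket/*"},
            {"Sid": "AllowGetStateObject", "Effect": "Allow", "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket/centrify-sso-logs/state.json"},
        ],
    }

    # Trust policy that lets the Lambda service assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Principal": {"Service": "lambda.amazonaws.com"},
                       "Action": "sts:AssumeRole"}],
    }

    policy = iam.create_policy(PolicyName="CentrifySSOLogExportPolicy",
                               PolicyDocument=json.dumps(policy_doc))
    iam.create_role(RoleName="CentrifySSOLogExportRole",
                    AssumeRolePolicyDocument=json.dumps(trust_policy))
    iam.attach_role_policy(RoleName="CentrifySSOLogExportRole",
                           PolicyArn=policy["Policy"]["Arn"])
    iam.attach_role_policy(RoleName="CentrifySSOLogExportRole",
                           PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole")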

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:

    • Name: CentrifySSOLogExport
    • Runtime: Python 3.13
    • Architecture: x86_64
    • Execution role: CentrifySSOLogExportRole
  4. After the function is created, open the Code tab, delete the stub, and paste the following code (CentrifySSOLogExport.py).

    import json
    import boto3
    import requests
    import base64
    from datetime import datetime, timedelta
    import os
    from typing import Dict, List, Optional


    def lambda_handler(event, context):
        """
        Lambda function to fetch Delinea Centrify SSO audit events and store them in S3
        """
        # Environment variables
        S3_BUCKET = os.environ['S3_BUCKET']
        S3_PREFIX = os.environ['S3_PREFIX']
        STATE_KEY = os.environ['STATE_KEY']

        # Centrify API credentials
        TENANT_URL = os.environ['TENANT_URL']
        CLIENT_ID = os.environ['CLIENT_ID']
        CLIENT_SECRET = os.environ['CLIENT_SECRET']
        OAUTH_APP_ID = os.environ['OAUTH_APP_ID']

        # Optional parameters
        PAGE_SIZE = int(os.environ.get('PAGE_SIZE', '1000'))
        MAX_PAGES = int(os.environ.get('MAX_PAGES', '10'))

        s3_client = boto3.client('s3')

        try:
            # Get last execution state
            last_timestamp = get_last_state(s3_client, S3_BUCKET, STATE_KEY)

            # Get OAuth access token
            access_token = get_oauth_token(TENANT_URL, CLIENT_ID, CLIENT_SECRET, OAUTH_APP_ID)

            # Fetch audit events
            events = fetch_audit_events(TENANT_URL, access_token, last_timestamp, PAGE_SIZE, MAX_PAGES)

            if events:
                # Store events in S3
                current_timestamp = datetime.utcnow()
                filename = f"{S3_PREFIX}centrify-sso-events-{current_timestamp.strftime('%Y%m%d_%H%M%S')}.json"
                store_events_to_s3(s3_client, S3_BUCKET, filename, events)

                # Update state with latest timestamp
                latest_timestamp = get_latest_event_timestamp(events)
                update_state(s3_client, S3_BUCKET, STATE_KEY, latest_timestamp)

                print(f"Successfully processed {len(events)} events and stored to {filename}")
            else:
                print("No new events found")

            return {
                'statusCode': 200,
                'body': json.dumps(f'Successfully processed {len(events) if events else 0} events')
            }

        except Exception as e:
            print(f"Error processing Centrify SSO logs: {str(e)}")
            return {
                'statusCode': 500,
                'body': json.dumps(f'Error: {str(e)}')
            }


    def get_oauth_token(tenant_url: str, client_id: str, client_secret: str, oauth_app_id: str) -> str:
        """
        Get OAuth access token using client credentials flow
        """
        # Create basic auth token
        credentials = f"{client_id}:{client_secret}"
        basic_auth = base64.b64encode(credentials.encode()).decode()

        token_url = f"{tenant_url}/oauth2/token/{oauth_app_id}"

        headers = {
            'Authorization': f'Basic {basic_auth}',
            'X-CENTRIFY-NATIVE-CLIENT': 'True',
            'Content-Type': 'application/x-www-form-urlencoded'
        }

        data = {
            'grant_type': 'client_credentials',
            'scope': 'siem redrock/query'
        }

        response = requests.post(token_url, headers=headers, data=data)
        response.raise_for_status()

        token_data = response.json()
        return token_data['access_token']


    def fetch_audit_events(tenant_url: str, access_token: str, last_timestamp: str, page_size: int, max_pages: int) -> List[Dict]:
        """
        Fetch audit events from Centrify using the Redrock/query API
        """
        query_url = f"{tenant_url}/Redrock/query"

        headers = {
            'Authorization': f'Bearer {access_token}',
            'X-CENTRIFY-NATIVE-CLIENT': 'True',
            'Content-Type': 'application/json'
        }

        # Build SQL query with timestamp filter
        if last_timestamp:
            sql_query = f"Select * from Event where WhenOccurred > '{last_timestamp}' ORDER BY WhenOccurred ASC"
        else:
            # First run - get events from last 24 hours
            sql_query = "Select * from Event where WhenOccurred > datefunc('now', '-1') ORDER BY WhenOccurred ASC"

        payload = {
            "Script": sql_query,
            "args": {
                "PageSize": page_size,
                "Limit": page_size * max_pages,
                "Caching": -1
            }
        }

        response = requests.post(query_url, headers=headers, json=payload)
        response.raise_for_status()

        response_data = response.json()

        if not response_data.get('success', False):
            raise Exception(f"API query failed: {response_data.get('Message', 'Unknown error')}")

        # Parse the response
        result = response_data.get('Result', {})
        columns = {col['Name']: i for i, col in enumerate(result.get('Columns', []))}
        raw_results = result.get('Results', [])

        events = []
        for raw_event in raw_results:
            event = {}
            row_data = raw_event.get('Row', {})

            # Map column names to values
            for col_name, col_index in columns.items():
                if col_name in row_data and row_data[col_name] is not None:
                    event[col_name] = row_data[col_name]

            # Add metadata
            event['_source'] = 'centrify_sso'
            event['_collected_at'] = datetime.utcnow().isoformat() + 'Z'

            events.append(event)

        return events


    def get_last_state(s3_client, bucket: str, state_key: str) -> Optional[str]:
        """
        Get the last processed timestamp from S3 state file
        """
        try:
            response = s3_client.get_object(Bucket=bucket, Key=state_key)
            state_data = json.loads(response['Body'].read().decode('utf-8'))
            return state_data.get('last_timestamp')
        except s3_client.exceptions.NoSuchKey:
            print("No previous state found, starting from 24 hours ago")
            return None
        except Exception as e:
            print(f"Error reading state: {e}")
            return None


    def update_state(s3_client, bucket: str, state_key: str, timestamp: str):
        """
        Update the state file with the latest processed timestamp
        """
        state_data = {
            'last_timestamp': timestamp,
            'updated_at': datetime.utcnow().isoformat() + 'Z'
        }

        s3_client.put_object(
            Bucket=bucket,
            Key=state_key,
            Body=json.dumps(state_data),
            ContentType='application/json'
        )


    def store_events_to_s3(s3_client, bucket: str, key: str, events: List[Dict]):
        """
        Store events as JSONL (one JSON object per line) in S3
        """
        # Convert to JSONL format (one JSON object per line)
        jsonl_content = '\n'.join(json.dumps(event, default=str) for event in events)

        s3_client.put_object(
            Bucket=bucket,
            Key=key,
            Body=jsonl_content,
            ContentType='application/x-ndjson'
        )


    def get_latest_event_timestamp(events: List[Dict]) -> str:
        """
        Get the latest timestamp from the events for state tracking
        """
        if not events:
            return datetime.utcnow().isoformat() + 'Z'

        latest = None
        for event in events:
            when_occurred = event.get('WhenOccurred')
            if when_occurred:
                if latest is None or when_occurred > latest:
                    latest = when_occurred

        return latest or datetime.utcnow().isoformat() + 'Z'

  5. Go to Configuration > Environment variables.

  6. Click Edit > Add new environment variable.

  7. Enter the following environment variables, replacing the example values with your values:

    • S3_BUCKET: delinea-centrify-logs-bucket
    • S3_PREFIX: centrify-sso-logs/
    • STATE_KEY: centrify-sso-logs/state.json
    • TENANT_URL: https://yourtenant.my.centrify.com
    • CLIENT_ID: your-client-id
    • CLIENT_SECRET: your-client-secret
    • OAUTH_APP_ID: your-oauth-application-id
    • OAUTH_SCOPE: siem
    • PAGE_SIZE: 1000
    • MAX_PAGES: 10
  8. After the function is created, stay on its page (or open Lambda > Functions > your-function).

  9. Select the Configuration tab.

  10. In the General configuration panel, click Edit.

  11. Change Timeout to 5 minutes (300 seconds) and click Save.
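
Optionally, invoke the function once before scheduling it to confirm the environment variables and permissions are correct. The following is a minimal boto3 sketch, assuming the function name CentrifySSOLogExport from the steps above and credentials allowed to call lambda:InvokeFunction; you can equally use the Test button in the Lambda console.

    import json
    import boto3

    # Invoke the function once synchronously and print its response.
    lambda_client = boto3.client("lambda")
    response = lambda_client.invoke(
        FunctionName="CentrifySSOLogExport",
        InvocationType="RequestResponse",
        Payload=json.dumps({}),
    )
    print(response["StatusCode"])
    print(response["Payload"].read().decode("utf-8"))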

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (1 hour).
    • Target: your Lambda function CentrifySSOLogExport.
    • Name: CentrifySSOLogExport-1h.
  3. Click Create schedule.
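
If you prefer to create the schedule programmatically, the following is a minimal sketch using the EventBridge Scheduler API. The Lambda ARN, account ID, region, and the scheduler execution role ARN are assumptions to replace with your values; the role must allow EventBridge Scheduler to invoke the target function.

    import boto3

    scheduler = boto3.client("scheduler")

    # Assumptions for illustration: your region, account ID, and a scheduler
    # execution role that is permitted to call lambda:InvokeFunction.
    lambda_arn = "arn:aws:lambda:us-east-1:123456789012:function:CentrifySSOLogExport"
    scheduler_role_arn = "arn:aws:iam::123456789012:role/CentrifySSOSchedulerRole"

    scheduler.create_schedule(
        Name="CentrifySSOLogExport-1h",
        ScheduleExpression="rate(1 hour)",
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={"Arn": lambda_arn, "RoleArn": scheduler_role_arn},
    )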

(Optional) Create read-only IAM user & keys for Google SecOps

  1. In the AWS Console, go to IAM > Users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: Enter secops-reader.
    • Access type: Select Access key – Programmatic access.
  4. Click Create user.
  5. Attach a minimal read policy (custom): go to Users > secops-reader > Permissions.
  6. Click Add permissions > Attach policies directly.
  7. Select Create policy.
  8. JSON:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::delinea-centrify-logs-bucket"
        }
      ]
    }

  9. Name the policy secops-reader-policy.

  10. Click Create policy, then search for and select secops-reader-policy, and click Next.

  11. Click Add permissions.

  12. Create an access key for secops-reader: go to Security credentials > Access keys.

  13. Click Create access key.

  14. Download the .CSV file (you'll paste these values into the feed).
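
To confirm the read-only credentials work before configuring the feed, you can list the exported objects. The following is a minimal boto3 sketch, assuming the example bucket and prefix used throughout this guide and the secops-reader key pair you just downloaded.

    import boto3

    # Assumptions for illustration: the secops-reader access key pair downloaded
    # above and the example bucket/prefix used throughout this guide.
    session = boto3.Session(
        aws_access_key_id="SECOPS_READER_ACCESS_KEY_ID",
        aws_secret_access_key="SECOPS_READER_SECRET_ACCESS_KEY",
        region_name="us-east-1",
    )
    s3 = session.client("s3")

    # List exported log files; the feed needs the same ListBucket/GetObject access.
    listing = s3.list_objects_v2(Bucket="delinea-centrify-logs-bucket", Prefix="centrify-sso-logs/")
    for item in listing.get("Contents", []):
        print(item["Key"], item["Size"])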

Configure a feed in Google SecOps to ingest Delinea (Centrify) SSO logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Delinea Centrify SSO logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Centrify as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://delinea-centrify-logs-bucket/centrify-sso-logs/
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.

UDM mapping table

Log field | UDM mapping | Logic
AccountID | security_result.detection_fields.value | The value of AccountID from the raw log is assigned to a security_result.detection_fields object with key Account ID.
ApplicationName | target.application | The value of ApplicationName from the raw log is assigned to the target.application field.
AuthorityFQDN | target.asset.network_domain | The value of AuthorityFQDN from the raw log is assigned to the target.asset.network_domain field.
AuthorityID | target.asset.asset_id | The value of AuthorityID from the raw log is assigned to the target.asset.asset_id field, prefixed with "AuthorityID:".
AzDeploymentId | security_result.detection_fields.value | The value of AzDeploymentId from the raw log is assigned to a security_result.detection_fields object with key AzDeploymentId.
AzRoleId | additional.fields.value.string_value | The value of AzRoleId from the raw log is assigned to an additional.fields object with key AzRole Id.
AzRoleName | target.user.attribute.roles.name | The value of AzRoleName from the raw log is assigned to the target.user.attribute.roles.name field.
ComputerFQDN | principal.asset.network_domain | The value of ComputerFQDN from the raw log is assigned to the principal.asset.network_domain field.
ComputerID | principal.asset.asset_id | The value of ComputerID from the raw log is assigned to the principal.asset.asset_id field, prefixed with "ComputerId:".
ComputerName | about.hostname | The value of ComputerName from the raw log is assigned to the about.hostname field.
CredentialId | security_result.detection_fields.value | The value of CredentialId from the raw log is assigned to a security_result.detection_fields object with key Credential Id.
DirectoryServiceName | security_result.detection_fields.value | The value of DirectoryServiceName from the raw log is assigned to a security_result.detection_fields object with key Directory Service Name.
DirectoryServiceNameLocalized | security_result.detection_fields.value | The value of DirectoryServiceNameLocalized from the raw log is assigned to a security_result.detection_fields object with key Directory Service Name Localized.
DirectoryServiceUuid | security_result.detection_fields.value | The value of DirectoryServiceUuid from the raw log is assigned to a security_result.detection_fields object with key Directory Service Uuid.
EventMessage | security_result.summary | The value of EventMessage from the raw log is assigned to the security_result.summary field.
EventType | metadata.product_event_type | The value of EventType from the raw log is assigned to the metadata.product_event_type field. It is also used to determine the metadata.event_type.
FailReason | security_result.summary | The value of FailReason from the raw log is assigned to the security_result.summary field when present.
FailUserName | target.user.email_addresses | The value of FailUserName from the raw log is assigned to the target.user.email_addresses field when present.
FromIPAddress | principal.ip | The value of FromIPAddress from the raw log is assigned to the principal.ip field.
ID | security_result.detection_fields.value | The value of ID from the raw log is assigned to a security_result.detection_fields object with key ID.
InternalTrackingID | metadata.product_log_id | The value of InternalTrackingID from the raw log is assigned to the metadata.product_log_id field.
JumpType | additional.fields.value.string_value | The value of JumpType from the raw log is assigned to an additional.fields object with key Jump Type.
NormalizedUser | target.user.email_addresses | The value of NormalizedUser from the raw log is assigned to the target.user.email_addresses field.
OperationMode | additional.fields.value.string_value | The value of OperationMode from the raw log is assigned to an additional.fields object with key Operation Mode.
ProxyId | security_result.detection_fields.value | The value of ProxyId from the raw log is assigned to a security_result.detection_fields object with key Proxy Id.
RequestUserAgent | network.http.user_agent | The value of RequestUserAgent from the raw log is assigned to the network.http.user_agent field.
SessionGuid | network.session_id | The value of SessionGuid from the raw log is assigned to the network.session_id field.
Tenant | additional.fields.value.string_value | The value of Tenant from the raw log is assigned to an additional.fields object with key Tenant.
ThreadType | additional.fields.value.string_value | The value of ThreadType from the raw log is assigned to an additional.fields object with key Thread Type.
UserType | principal.user.attribute.roles.name | The value of UserType from the raw log is assigned to the principal.user.attribute.roles.name field.
WhenOccurred | metadata.event_timestamp | The value of WhenOccurred from the raw log is parsed and assigned to the metadata.event_timestamp field. This field also populates the top-level timestamp field.

In addition, the parser sets several values that are not taken from a single raw log field:

  • Hardcoded value "SSO".
  • Determined by the EventType field. Defaults to STATUS_UPDATE if EventType is not present or doesn't match any specific criteria. Can be USER_LOGIN, USER_CREATION, USER_RESOURCE_ACCESS, USER_LOGOUT, or USER_CHANGE_PASSWORD.
  • Hardcoded value "CENTRIFY_SSO".
  • Hardcoded value "SSO".
  • Hardcoded value "Centrify".
  • If the message field contains a session ID, it is extracted and used. Otherwise defaults to "1".
  • Extracted from the host field if available, which comes from the syslog header.
  • Extracted from the pid field if available, which comes from the syslog header.
  • If UserGuid is present, its value is used. Otherwise, if the message field contains a user ID, it is extracted and used.
  • Set to "ALLOW" if Level is "Info", and "BLOCK" if FailReason is present.
  • Set to "AUTH_VIOLATION" if FailReason is present.
  • Determined by the Level field. Set to "INFORMATIONAL" if Level is "Info", "MEDIUM" if Level is "Warning", and "ERROR" if Level is "Error".

Need more help? Get answers from Community members and Google SecOps professionals.
