Collect CrowdStrike FileVantage logs

This document explains how to ingest CrowdStrike FileVantage logs into Google Security Operations by using Amazon S3.

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance.
  • Privileged access to CrowdStrike Falcon Console.
  • Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, and EventBridge).

Collect CrowdStrike FileVantage prerequisites (API credentials)

  1. Sign in to the CrowdStrike Falcon Console.
  2. Go to Support and resources > API clients and keys.
  3. Click Add new API client.
  4. Provide the following configuration details:
    • Client name: Enter a descriptive name (for example, Google SecOps FileVantage Integration).
    • Description: Enter a brief description of the integration purpose.
    • API scopes: Select Falcon FileVantage:read.
  5. Click Add to complete the process.
  6. Copy and save the following details in a secure location (you can verify them with the sketch after this list):
    • Client ID
    • Client Secret
    • Base URL (determines your cloud region)
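
Before wiring these values into AWS, you can optionally confirm that they work. The following minimal Python sketch requests an OAuth token from the Falcon API with urllib3, mirroring what the Lambda function later in this guide does; the placeholder client ID, client secret, and base URL are assumptions you must replace with your own values.

    import json
    import urllib3
    from urllib.parse import urlencode

    # Placeholders: replace with the values saved from the Falcon console.
    CLIENT_ID = "<your-client-id>"
    CLIENT_SECRET = "<your-client-secret>"
    BASE_URL = "https://api.crowdstrike.com"  # use the base URL for your cloud region

    http = urllib3.PoolManager()
    response = http.request(
        "POST",
        f"{BASE_URL}/oauth2/token",
        body=urlencode({
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "grant_type": "client_credentials",
        }),
        headers={
            "Content-Type": "application/x-www-form-urlencoded",
            "Accept": "application/json",
        },
    )
    # A 200 or 201 status with an access_token in the body indicates the credentials are valid.
    print(response.status)
    print("access_token" in json.loads(response.data.decode("utf-8")))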

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket (a scripted alternative is sketched after this list).
  2. Save the bucket Name and Region for future reference (for example, crowdstrike-filevantage-logs).
  3. Create a User following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: Add a description tag.
  10. Click Create access key.
  11. Click Download .CSV file to save the Access Key and Secret Access Key for future reference.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for the AmazonS3FullAccess policy.
  18. Select the policy.
  19. Click Next.
  20. Click Add permissions.
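
If you prefer to create the bucket with the AWS SDK instead of the console, the following boto3 sketch shows one way to do it; the bucket name comes from the example above and the region is an assumption you should replace.

    import boto3

    BUCKET = "crowdstrike-filevantage-logs"  # example bucket name from this guide
    REGION = "us-east-1"                     # replace with your region

    s3 = boto3.client("s3", region_name=REGION)
    if REGION == "us-east-1":
        # us-east-1 rejects an explicit LocationConstraint
        s3.create_bucket(Bucket=BUCKET)
    else:
        s3.create_bucket(
            Bucket=BUCKET,
            CreateBucketConfiguration={"LocationConstraint": REGION},
        )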

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies.
  2. Click Create policy > JSON tab.
  3. Copy and paste the following policy.
  4. Policy JSON (replace crowdstrike-filevantage-logs if you entered a different bucket name):

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowPutObjects",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::crowdstrike-filevantage-logs/*"
          },
          {
            "Sid": "AllowGetStateObject",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::crowdstrike-filevantage-logs/filevantage/state.json"
          }
        ]
      }
  5. Click Next > Create policy.

  6. Go to IAM > Roles > Create role > AWS service > Lambda.

  7. Attach the newly created policy.

  8. Name the role CrowdStrikeFileVantageRole and click Create role. (A scripted alternative is sketched after this list.)
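
If you manage IAM from code rather than the console, a minimal boto3 sketch that creates the same policy and role is shown below. The policy document mirrors the JSON above; the policy name is illustrative, and the bucket name should match yours.

    import json
    import boto3

    iam = boto3.client("iam")
    BUCKET = "crowdstrike-filevantage-logs"  # replace if you used a different bucket name

    policy_document = {
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowPutObjects", "Effect": "Allow",
             "Action": "s3:PutObject", "Resource": f"arn:aws:s3:::{BUCKET}/*"},
            {"Sid": "AllowGetStateObject", "Effect": "Allow",
             "Action": "s3:GetObject", "Resource": f"arn:aws:s3:::{BUCKET}/filevantage/state.json"},
        ],
    }
    policy = iam.create_policy(
        PolicyName="CrowdStrikeFileVantagePolicy",  # illustrative name
        PolicyDocument=json.dumps(policy_document),
    )

    # Trust policy that allows the Lambda service to assume the role.
    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Principal": {"Service": "lambda.amazonaws.com"},
                       "Action": "sts:AssumeRole"}],
    }
    iam.create_role(
        RoleName="CrowdStrikeFileVantageRole",
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )
    iam.attach_role_policy(
        RoleName="CrowdStrikeFileVantageRole",
        PolicyArn=policy["Policy"]["Arn"],
    )

If you script the role this way, consider also attaching the AWSLambdaBasicExecutionRole managed policy so the function can write its logs to CloudWatch.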

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:

    Setting         Value
    Name            crowdstrike-filevantage-logs
    Runtime         Python 3.13
    Architecture    x86_64
    Execution role  CrowdStrikeFileVantageRole
  4. After the function is created, open the Code tab, delete the stub, and paste the following code (crowdstrike-filevantage-logs.py).

      import os
      import json
      import boto3
      import urllib3
      from datetime import datetime, timezone
      from urllib.parse import urlencode


      def lambda_handler(event, context):
          """
          Lambda function to fetch CrowdStrike FileVantage logs and store them in S3
          """
          # Environment variables
          s3_bucket = os.environ['S3_BUCKET']
          s3_prefix = os.environ['S3_PREFIX']
          state_key = os.environ['STATE_KEY']
          client_id = os.environ['FALCON_CLIENT_ID']
          client_secret = os.environ['FALCON_CLIENT_SECRET']
          base_url = os.environ['FALCON_BASE_URL']

          # Initialize clients
          s3_client = boto3.client('s3')
          http = urllib3.PoolManager()

          try:
              # Get OAuth token
              token_url = f"{base_url}/oauth2/token"
              token_headers = {
                  'Content-Type': 'application/x-www-form-urlencoded',
                  'Accept': 'application/json'
              }
              token_data = urlencode({
                  'client_id': client_id,
                  'client_secret': client_secret,
                  'grant_type': 'client_credentials'
              })

              token_response = http.request('POST', token_url, body=token_data, headers=token_headers)

              if token_response.status not in (200, 201):
                  print(f"Failed to get OAuth token: {token_response.status}")
                  return {'statusCode': 500, 'body': 'Authentication failed'}

              token_data = json.loads(token_response.data.decode('utf-8'))
              access_token = token_data['access_token']

              # Get last checkpoint
              last_timestamp = get_last_checkpoint(s3_client, s3_bucket, state_key)

              # Fetch file changes
              changes_url = f"{base_url}/filevantage/queries/changes/v1"
              headers = {
                  'Authorization': f'Bearer {access_token}',
                  'Accept': 'application/json'
              }

              # Build query parameters
              params = {
                  'limit': 500,
                  'sort': 'action_timestamp.asc'
              }
              if last_timestamp:
                  params['filter'] = f"action_timestamp:>'{last_timestamp}'"

              query_url = f"{changes_url}?{urlencode(params)}"
              response = http.request('GET', query_url, headers=headers)

              if response.status != 200:
                  print(f"Failed to query changes: {response.status}")
                  return {'statusCode': 500, 'body': 'Failed to fetch changes'}

              response_data = json.loads(response.data.decode('utf-8'))
              change_ids = response_data.get('resources', [])

              if not change_ids:
                  print("No new changes found")
                  return {'statusCode': 200, 'body': 'No new changes'}

              # Get detailed change information
              details_url = f"{base_url}/filevantage/entities/changes/v1"
              batch_size = 100
              all_changes = []
              latest_timestamp = last_timestamp

              for i in range(0, len(change_ids), batch_size):
                  batch_ids = change_ids[i:i + batch_size]
                  details_params = {'ids': batch_ids}
                  details_query_url = f"{details_url}?{urlencode(details_params, doseq=True)}"
                  details_response = http.request('GET', details_query_url, headers=headers)

                  if details_response.status == 200:
                      details_data = json.loads(details_response.data.decode('utf-8'))
                      changes = details_data.get('resources', [])
                      all_changes.extend(changes)

                      # Track latest timestamp
                      for change in changes:
                          change_time = change.get('action_timestamp')
                          if change_time and (not latest_timestamp or change_time > latest_timestamp):
                              latest_timestamp = change_time

              if all_changes:
                  # Store logs in S3
                  timestamp = datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')
                  s3_key = f"{s3_prefix}filevantage_changes_{timestamp}.json"

                  s3_client.put_object(
                      Bucket=s3_bucket,
                      Key=s3_key,
                      Body='\n'.join(json.dumps(change) for change in all_changes),
                      ContentType='application/json'
                  )

                  # Update checkpoint
                  save_checkpoint(s3_client, s3_bucket, state_key, latest_timestamp)

                  print(f"Stored {len(all_changes)} changes in S3: {s3_key}")

              return {'statusCode': 200, 'body': f'Processed {len(all_changes)} changes'}

          except Exception as e:
              print(f"Error: {str(e)}")
              return {'statusCode': 500, 'body': f'Error: {str(e)}'}


      def get_last_checkpoint(s3_client, bucket, key):
          """Get the last processed timestamp from S3 state file"""
          try:
              response = s3_client.get_object(Bucket=bucket, Key=key)
              state = json.loads(response['Body'].read().decode('utf-8'))
              return state.get('last_timestamp')
          except s3_client.exceptions.NoSuchKey:
              return None
          except Exception as e:
              print(f"Error reading checkpoint: {e}")
              return None


      def save_checkpoint(s3_client, bucket, key, timestamp):
          """Save the last processed timestamp to S3 state file"""
          try:
              state = {
                  'last_timestamp': timestamp,
                  'updated_at': datetime.now(timezone.utc).isoformat()
              }
              s3_client.put_object(
                  Bucket=bucket,
                  Key=key,
                  Body=json.dumps(state),
                  ContentType='application/json'
              )
          except Exception as e:
              print(f"Error saving checkpoint: {e}")
  5. Go to Configuration > Environment variables.

  6. Click Edit > Add new environment variable.

  7. Enter the environment variables provided in the following table, replacing the example values with your values.

    Environment variables

    Key                      Example value
    S3_BUCKET                crowdstrike-filevantage-logs
    S3_PREFIX                filevantage/
    STATE_KEY                filevantage/state.json
    FALCON_CLIENT_ID         <your-client-id>
    FALCON_CLIENT_SECRET     <your-client-secret>
    FALCON_BASE_URL          https://api.crowdstrike.com (US-1), https://api.us-2.crowdstrike.com (US-2), or https://api.eu-1.crowdstrike.com (EU-1)
  8. After the function is created, stay on its page (or open Lambda > Functions > your-function).

  9. Select the Configuration tab.

  10. In the General configuration panel, click Edit.

  11. Change Timeout to 5 minutes (300 seconds) and click Save. (A manual test invocation is sketched after this list.)
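
Before scheduling the function, you can invoke it once manually to confirm the configuration. The boto3 sketch below assumes your local AWS credentials are allowed to call Lambda and that the function keeps the name used in this guide.

    import json
    import boto3

    lambda_client = boto3.client("lambda")
    response = lambda_client.invoke(
        FunctionName="crowdstrike-filevantage-logs",
        InvocationType="RequestResponse",
        Payload=json.dumps({}),  # the handler ignores the event payload
    )
    print(json.loads(response["Payload"].read()))
    # Expect a body such as 'No new changes' on the first run, or
    # 'Processed N changes' once FileVantage has recorded file changes.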

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (1 hour).
    • Target: your Lambda function crowdstrike-filevantage-logs.
    • Name: crowdstrike-filevantage-logs-1h.
  3. Click Create schedule. (A scripted alternative is sketched after this list.)
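
As an alternative to the console, the following boto3 sketch creates an equivalent hourly schedule. Both ARNs are assumptions: EventBridge Scheduler needs the ARN of your Lambda function and an execution role it can assume that permits lambda:InvokeFunction on that function.

    import boto3

    scheduler = boto3.client("scheduler")
    scheduler.create_schedule(
        Name="crowdstrike-filevantage-logs-1h",
        ScheduleExpression="rate(1 hour)",
        FlexibleTimeWindow={"Mode": "OFF"},
        Target={
            # Replace both ARNs with your own values.
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:crowdstrike-filevantage-logs",
            "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeSchedulerInvokeLambdaRole",
        },
    )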

(Optional) Create read-only IAM user and keys for Google SecOps

  1. Go to AWS Console > IAM > Users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: Enter secops-reader.
    • Access type: Select Access key – Programmatic access.
  4. Click Create user.
  5. Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. JSON:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::crowdstrike-filevantage-logs/*"
          },
          {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::crowdstrike-filevantage-logs"
          }
        ]
      }
  7. Name the policy secops-reader-policy.

  8. Click Create policy > search/select > Next > Add permissions.

  9. Create an access key for secops-reader: go to Security credentials > Access keys.

  10. Click Create access key.

  11. Download the .CSV file. (You'll paste these values into the feed; a quick verification is sketched after this list.)
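
Before configuring the feed, you can optionally confirm that the new keys can read the bucket. This minimal sketch, which assumes the bucket and prefix names used throughout this guide, lists the objects the feed will ingest.

    import boto3

    s3 = boto3.client(
        "s3",
        aws_access_key_id="<secops-reader access key>",      # from the downloaded CSV
        aws_secret_access_key="<secops-reader secret key>",  # from the downloaded CSV
    )
    response = s3.list_objects_v2(
        Bucket="crowdstrike-filevantage-logs",
        Prefix="filevantage/",
    )
    for obj in response.get("Contents", []):
        print(obj["Key"], obj["Size"])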

Configure a feed in Google SecOps to ingest CrowdStrike FileVantage logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, CrowdStrike FileVantage logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select CrowdStrike Filevantage as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://crowdstrike-filevantage-logs/filevantage/
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.

Need more help? Get answers from Community members and Google SecOps professionals.
