Collect CrowdStrike FileVantage logs
This document explains how to ingest CrowdStrike FileVantage logs into Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance.
- Privileged access to CrowdStrike Falcon Console.
- Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).
Collect CrowdStrike FileVantage prerequisites (API credentials)
- Sign in to the CrowdStrike Falcon Console.
- Go to Support and resources > API clients and keys.
- Click Add new API client.
- Provide the following configuration details:
  - Client name: Enter a descriptive name (for example, Google SecOps FileVantage Integration).
  - Description: Enter a brief description of the integration purpose.
  - API scopes: Select Falcon FileVantage:read.
- Click Add to complete the process.
- Copy and save the following details in a secure location:
  - Client ID
  - Client Secret
  - Base URL (determines your cloud region)
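Before moving on, you can verify the credentials by exchanging them for an OAuth2 bearer token, which is the same call the Lambda function later in this guide makes. The following is a minimal sketch; the helper name is ours, and the base URL shown assumes the US-1 region.

```python
from urllib.parse import urlencode

def build_token_request(base_url, client_id, client_secret):
    """Build the URL, headers, and body for CrowdStrike's OAuth2
    client-credentials token endpoint."""
    url = f"{base_url}/oauth2/token"
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Accept': 'application/json',
    }
    body = urlencode({
        'client_id': client_id,
        'client_secret': client_secret,
        'grant_type': 'client_credentials',
    })
    return url, headers, body

# Placeholder values; substitute the Client ID, Client Secret, and
# Base URL you saved above.
url, headers, body = build_token_request(
    'https://api.crowdstrike.com', '<your-client-id>', '<your-client-secret>')
# Send with any HTTP client, for example:
#   import json, urllib3
#   resp = urllib3.PoolManager().request('POST', url, body=body, headers=headers)
#   token = json.loads(resp.data)['access_token']
```

A 200 response with an `access_token` field confirms the client ID, secret, and region are correct.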
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, crowdstrike-filevantage-logs).
- Create a User following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download .CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for the AmazonS3FullAccess policy.
- Select the policy.
- Click Next.
- Click Add permissions.
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies.
- Click Create policy > JSON tab.
- Copy and paste the following policy JSON (replace crowdstrike-filevantage-logs if you entered a different bucket name):

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowPutObjects",
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::crowdstrike-filevantage-logs/*"
      },
      {
        "Sid": "AllowGetStateObject",
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::crowdstrike-filevantage-logs/filevantage/state.json"
      }
    ]
  }
  ```

- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role CrowdStrikeFileVantageRole and click Create role.
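If you provision AWS resources with a script rather than the console, the same policy document can be generated for any bucket name. This is a small sketch (the helper name and the default state-key argument are ours; the statements match the JSON above):

```python
import json

def upload_policy(bucket, state_key='filevantage/state.json'):
    """Return the IAM policy document that lets the Lambda function
    write objects and read back its checkpoint file."""
    return {
        'Version': '2012-10-17',
        'Statement': [
            {
                'Sid': 'AllowPutObjects',
                'Effect': 'Allow',
                'Action': 's3:PutObject',
                'Resource': f'arn:aws:s3:::{bucket}/*',
            },
            {
                'Sid': 'AllowGetStateObject',
                'Effect': 'Allow',
                'Action': 's3:GetObject',
                'Resource': f'arn:aws:s3:::{bucket}/{state_key}',
            },
        ],
    }

doc = json.dumps(upload_policy('crowdstrike-filevantage-logs'), indent=2)
# Pass `doc` as the policy document, for example to
# boto3's iam.create_policy(PolicyName=..., PolicyDocument=doc).
```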
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

  | Setting | Value |
  | --- | --- |
  | Name | crowdstrike-filevantage-logs |
  | Runtime | Python 3.13 |
  | Architecture | x86_64 |
  | Execution role | CrowdStrikeFileVantageRole |
- After the function is created, open the Code tab, delete the stub, and paste the following code (crowdstrike-filevantage-logs.py):

  ```python
  import os
  import json
  import boto3
  import urllib3
  from datetime import datetime, timezone
  from urllib.parse import urlencode


  def lambda_handler(event, context):
      """Lambda function to fetch CrowdStrike FileVantage logs and store them in S3"""
      # Environment variables
      s3_bucket = os.environ['S3_BUCKET']
      s3_prefix = os.environ['S3_PREFIX']
      state_key = os.environ['STATE_KEY']
      client_id = os.environ['FALCON_CLIENT_ID']
      client_secret = os.environ['FALCON_CLIENT_SECRET']
      base_url = os.environ['FALCON_BASE_URL']

      # Initialize clients
      s3_client = boto3.client('s3')
      http = urllib3.PoolManager()

      try:
          # Get OAuth token
          token_url = f"{base_url}/oauth2/token"
          token_headers = {
              'Content-Type': 'application/x-www-form-urlencoded',
              'Accept': 'application/json'
          }
          token_data = urlencode({
              'client_id': client_id,
              'client_secret': client_secret,
              'grant_type': 'client_credentials'
          })

          token_response = http.request('POST', token_url, body=token_data, headers=token_headers)

          if token_response.status != 200:
              print(f"Failed to get OAuth token: {token_response.status}")
              return {'statusCode': 500, 'body': 'Authentication failed'}

          token_data = json.loads(token_response.data.decode('utf-8'))
          access_token = token_data['access_token']

          # Get last checkpoint
          last_timestamp = get_last_checkpoint(s3_client, s3_bucket, state_key)

          # Fetch file changes
          changes_url = f"{base_url}/filevantage/queries/changes/v1"
          headers = {
              'Authorization': f'Bearer {access_token}',
              'Accept': 'application/json'
          }

          # Build query parameters
          params = {
              'limit': 500,
              'sort': 'action_timestamp.asc'
          }
          if last_timestamp:
              params['filter'] = f"action_timestamp:>'{last_timestamp}'"

          query_url = f"{changes_url}?{urlencode(params)}"
          response = http.request('GET', query_url, headers=headers)

          if response.status != 200:
              print(f"Failed to query changes: {response.status}")
              return {'statusCode': 500, 'body': 'Failed to fetch changes'}

          response_data = json.loads(response.data.decode('utf-8'))
          change_ids = response_data.get('resources', [])

          if not change_ids:
              print("No new changes found")
              return {'statusCode': 200, 'body': 'No new changes'}

          # Get detailed change information
          details_url = f"{base_url}/filevantage/entities/changes/v1"
          batch_size = 100
          all_changes = []
          latest_timestamp = last_timestamp

          for i in range(0, len(change_ids), batch_size):
              batch_ids = change_ids[i:i + batch_size]
              details_params = {'ids': batch_ids}
              details_query_url = f"{details_url}?{urlencode(details_params, doseq=True)}"

              details_response = http.request('GET', details_query_url, headers=headers)

              if details_response.status == 200:
                  details_data = json.loads(details_response.data.decode('utf-8'))
                  changes = details_data.get('resources', [])
                  all_changes.extend(changes)

                  # Track latest timestamp
                  for change in changes:
                      change_time = change.get('action_timestamp')
                      if change_time and (not latest_timestamp or change_time > latest_timestamp):
                          latest_timestamp = change_time

          if all_changes:
              # Store logs in S3
              timestamp = datetime.now(timezone.utc).strftime('%Y%m%d_%H%M%S')
              s3_key = f"{s3_prefix}filevantage_changes_{timestamp}.json"

              s3_client.put_object(
                  Bucket=s3_bucket,
                  Key=s3_key,
                  Body='\n'.join(json.dumps(change) for change in all_changes),
                  ContentType='application/json'
              )

              # Update checkpoint
              save_checkpoint(s3_client, s3_bucket, state_key, latest_timestamp)

              print(f"Stored {len(all_changes)} changes in S3: {s3_key}")
              return {'statusCode': 200, 'body': f'Processed {len(all_changes)} changes'}

      except Exception as e:
          print(f"Error: {str(e)}")
          return {'statusCode': 500, 'body': f'Error: {str(e)}'}


  def get_last_checkpoint(s3_client, bucket, key):
      """Get the last processed timestamp from the S3 state file"""
      try:
          response = s3_client.get_object(Bucket=bucket, Key=key)
          state = json.loads(response['Body'].read().decode('utf-8'))
          return state.get('last_timestamp')
      except s3_client.exceptions.NoSuchKey:
          return None
      except Exception as e:
          print(f"Error reading checkpoint: {e}")
          return None


  def save_checkpoint(s3_client, bucket, key, timestamp):
      """Save the last processed timestamp to the S3 state file"""
      try:
          state = {
              'last_timestamp': timestamp,
              'updated_at': datetime.now(timezone.utc).isoformat()
          }
          s3_client.put_object(
              Bucket=bucket,
              Key=key,
              Body=json.dumps(state),
              ContentType='application/json'
          )
      except Exception as e:
          print(f"Error saving checkpoint: {e}")
  ```
- Go to Configuration > Environment variables.
- Click Edit > Add new environment variable.
- Enter the environment variables provided in the following table, replacing the example values with your values.

  Environment variables

  | Key | Example value |
  | --- | --- |
  | S3_BUCKET | crowdstrike-filevantage-logs |
  | S3_PREFIX | filevantage/ |
  | STATE_KEY | filevantage/state.json |
  | FALCON_CLIENT_ID | <your-client-id> |
  | FALCON_CLIENT_SECRET | <your-client-secret> |
  | FALCON_BASE_URL | https://api.crowdstrike.com (US-1), https://api.us-2.crowdstrike.com (US-2), https://api.eu-1.crowdstrike.com (EU-1) |

- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
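Two details in the function above carry the incremental collection: the FQL filter built from the saved checkpoint, and the tracking of the newest action_timestamp across each batch of change records. They can be exercised in isolation; the function names below are ours, extracted from the Lambda code for illustration:

```python
def build_changes_params(last_timestamp, limit=500):
    """Query parameters for /filevantage/queries/changes/v1.
    Ascending sort ensures the saved checkpoint only moves forward."""
    params = {'limit': limit, 'sort': 'action_timestamp.asc'}
    if last_timestamp:
        # FQL filter: only changes strictly newer than the checkpoint
        params['filter'] = f"action_timestamp:>'{last_timestamp}'"
    return params

def newest_timestamp(changes, current=None):
    """Return the latest action_timestamp seen across change records,
    starting from the current checkpoint (ISO-8601 strings sort lexically)."""
    latest = current
    for change in changes:
        t = change.get('action_timestamp')
        if t and (latest is None or t > latest):
            latest = t
    return latest
```

On the first run there is no state file yet, so `build_changes_params(None)` omits the filter and the function collects from the oldest available change.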
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: Your Lambda function crowdstrike-filevantage-logs.
  - Name: crowdstrike-filevantage-logs-1h.
- Click Create schedule.
(Optional) Create read-only IAM user and keys for Google SecOps
- Go to AWS Console > IAM > Users.
- Click Add users.
- Provide the following configuration details:
  - User: Enter secops-reader.
  - Access type: Select Access key – Programmatic access.
- Click Create user.
- Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- JSON:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::crowdstrike-filevantage-logs/*"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::crowdstrike-filevantage-logs"
      }
    ]
  }
  ```

- Name = secops-reader-policy.
- Click Create policy > search/select > Next > Add permissions.
- Create an access key for secops-reader: Security credentials > Access keys.
- Click Create access key.
- Download the .CSV file (you'll paste these values into the feed).
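As with the upload policy earlier, the read-only policy can also be generated when scripting the setup. A sketch under the same assumptions (the helper name is ours; the statements match the JSON above):

```python
import json

def reader_policy(bucket):
    """Return the minimal read-only policy for the Google SecOps feed:
    GetObject on objects plus ListBucket on the bucket itself."""
    return {
        'Version': '2012-10-17',
        'Statement': [
            {
                'Effect': 'Allow',
                'Action': ['s3:GetObject'],
                'Resource': f'arn:aws:s3:::{bucket}/*',
            },
            {
                'Effect': 'Allow',
                'Action': ['s3:ListBucket'],
                'Resource': f'arn:aws:s3:::{bucket}',
            },
        ],
    }

print(json.dumps(reader_policy('crowdstrike-filevantage-logs'), indent=2))
```

Note that ListBucket applies to the bucket ARN itself (no `/*` suffix), while GetObject applies to the objects inside it; mixing the two up is a common cause of feed authorization errors.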
Configure a feed in Google SecOps to ingest CrowdStrike FileVantage logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, CrowdStrike FileVantage logs).
- Select Amazon S3 V2 as the Source type.
- Select CrowdStrike Filevantage as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://crowdstrike-filevantage-logs/filevantage/
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified in the last number of days. Default is 180 days.
  - Access Key ID: The user access key with access to the S3 bucket.
  - Secret Access Key: The user secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
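The S3 URI in the feed must point at the same bucket and prefix the Lambda function writes to. If you script the setup, a tiny helper (ours) keeps the two in sync; the trailing slash matches the URI shown in this guide:

```python
def feed_s3_uri(bucket, prefix):
    """Build the feed's S3 URI from the Lambda's S3_BUCKET and
    S3_PREFIX settings, normalizing the trailing slash."""
    if prefix and not prefix.endswith('/'):
        prefix += '/'
    return f"s3://{bucket}/{prefix}"

print(feed_s3_uri('crowdstrike-filevantage-logs', 'filevantage'))
# s3://crowdstrike-filevantage-logs/filevantage/
```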
Need more help? Get answers from Community members and Google SecOps professionals.

