Collect Swimlane Platform logs
This document explains how to ingest Swimlane Platform logs to Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance
- Privileged access to Swimlane (Account Admin capable of generating a Personal Access Token)
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Collect Swimlane Platform prerequisites (IDs, API keys, org IDs, tokens)
- Sign in to the Swimlane Platform as an Account Admin.
- Go to Profile Options.
- Click Profile to open the profile editor.
- Navigate to the Personal Access Token section.
- Click Generate token to create a new Personal Access Token.
- Copy the token immediately and store it securely (it won't be shown again).
- Record the following details for the integration (a verification sketch follows this list):
  - Personal Access Token (PAT): Used in the `Private-Token` header for API calls.
  - Account ID: Required for the Audit Log API path `/api/public/audit/account/{ACCOUNT_ID}/auditlogs`. Contact your Swimlane administrator if you don't know your Account ID.
  - Base URL: Your Swimlane domain (for example, `https://eu.swimlane.app`, `https://us.swimlane.app`).
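To confirm the token, Account ID, and base URL before building the pipeline, you can call the Audit Log API directly. The following is a minimal sketch, assuming the endpoint path and `Private-Token` header described above; the placeholder values are yours to replace:

```python
import json
import urllib.parse
import urllib.request

# Placeholder values -- replace with the details recorded above.
BASE_URL = "https://eu.swimlane.app"
ACCOUNT_ID = "YOUR_ACCOUNT_ID"
PAT_TOKEN = "YOUR_PERSONAL_ACCESS_TOKEN"

# Request a single audit log entry to verify connectivity and credentials.
params = urllib.parse.urlencode({"pageNumber": "1", "pageSize": "1"})
url = f"{BASE_URL}/api/public/audit/account/{ACCOUNT_ID}/auditlogs?{params}"
req = urllib.request.Request(
    url,
    headers={"Accept": "application/json", "Private-Token": PAT_TOKEN},
)
with urllib.request.urlopen(req, timeout=30) as resp:
    print(json.dumps(json.loads(resp.read()), indent=2))
```

A 200 response with a JSON body confirms the token and Account ID are valid; a 401 or 403 usually points to a bad token or insufficient permissions.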
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, `swimlane-audit`).
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use (you can verify these credentials with the sketch at the end of this procedure).
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
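If you want to confirm the new access key works before wiring up the rest of the pipeline, a quick boto3 check such as the following can help. This is a sketch, not part of the official setup; the bucket name assumes the `swimlane-audit` example used throughout this guide:

```python
import boto3

# Credentials from the CSV downloaded above (sketch only -- prefer
# environment variables or a shared credentials file in practice).
s3 = boto3.client(
    "s3",
    aws_access_key_id="YOUR_ACCESS_KEY_ID",
    aws_secret_access_key="YOUR_SECRET_ACCESS_KEY",
)

# Write, then remove, a small test object under the audit prefix.
s3.put_object(
    Bucket="swimlane-audit",
    Key="swimlane/audit/connectivity-test.txt",
    Body=b"test",
)
s3.delete_object(Bucket="swimlane-audit", Key="swimlane/audit/connectivity-test.txt")
print("S3 write access confirmed")
```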
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies > Create policy > JSON tab.
- Enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutSwimlaneAuditObjects",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::swimlane-audit/swimlane/audit/*"
    },
    {
      "Sid": "AllowStateReadWrite",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::swimlane-audit/swimlane/audit/state.json"
    }
  ]
}
```

- Replace `swimlane-audit` if you entered a different bucket name.
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy and the AWS managed policy:
  - The custom policy created above
  - `service-role/AWSLambdaBasicExecutionRole` (CloudWatch Logs)
- Name the role `WriteSwimlaneAuditToS3Role` and click Create role. If you prefer to script these steps, see the sketch after this list.
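For scripted setups, the console steps above can be reproduced with boto3 along the lines of the following sketch. The policy name `WriteSwimlaneAuditToS3Policy` is an illustrative choice, not something this guide mandates:

```python
import json
import boto3

iam = boto3.client("iam")

# The upload policy shown above.
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowPutSwimlaneAuditObjects",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::swimlane-audit/swimlane/audit/*",
        },
        {
            "Sid": "AllowStateReadWrite",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::swimlane-audit/swimlane/audit/state.json",
        },
    ],
}

# Trust policy letting Lambda assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

policy = iam.create_policy(
    PolicyName="WriteSwimlaneAuditToS3Policy",  # hypothetical name
    PolicyDocument=json.dumps(policy_doc),
)
iam.create_role(
    RoleName="WriteSwimlaneAuditToS3Role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.attach_role_policy(
    RoleName="WriteSwimlaneAuditToS3Role",
    PolicyArn=policy["Policy"]["Arn"],
)
iam.attach_role_policy(
    RoleName="WriteSwimlaneAuditToS3Role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```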
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

| Setting | Value |
| --- | --- |
| Name | `swimlane_audit_to_s3` |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | `WriteSwimlaneAuditToS3Role` |
- After the function is created, open the Code tab, delete the stub, and enter the following code (`swimlane_audit_to_s3.py`):

```python
#!/usr/bin/env python3
import os, json, gzip, io, uuid, datetime as dt, urllib.parse, urllib.request
import boto3

# ---- Environment ----
S3_BUCKET = os.environ["S3_BUCKET"]
S3_PREFIX = os.environ.get("S3_PREFIX", "swimlane/audit/")
STATE_KEY = os.environ.get("STATE_KEY", S3_PREFIX + "state.json")
BASE_URL = os.environ["SWIMLANE_BASE_URL"].rstrip("/")  # e.g., https://eu.swimlane.app
ACCOUNT_ID = os.environ["SWIMLANE_ACCOUNT_ID"]
TENANT_LIST = os.environ.get("SWIMLANE_TENANT_LIST", "")  # comma-separated; optional
INCLUDE_ACCOUNT = os.environ.get("INCLUDE_ACCOUNT", "true").lower() == "true"
PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "100"))  # max 100
WINDOW_MINUTES = int(os.environ.get("WINDOW_MINUTES", "15"))  # time range per run
PAT_TOKEN = os.environ["SWIMLANE_PAT_TOKEN"]  # Personal Access Token
TIMEOUT = int(os.environ.get("TIMEOUT", "30"))

AUDIT_URL = f"{BASE_URL}/api/public/audit/account/{ACCOUNT_ID}/auditlogs"

s3 = boto3.client("s3")

# ---- Helpers ----
def _http(req: urllib.request.Request):
    return urllib.request.urlopen(req, timeout=TIMEOUT)

def _now():
    return dt.datetime.utcnow()

def get_state() -> dict:
    """Read the persisted collection state from S3; empty dict on first run."""
    try:
        obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
        return json.loads(obj["Body"].read())
    except Exception:
        return {}

def put_state(state: dict) -> None:
    state["updated_at"] = _now().isoformat() + "Z"
    s3.put_object(Bucket=S3_BUCKET, Key=STATE_KEY, Body=json.dumps(state).encode())

def build_url(from_dt: dt.datetime, to_dt: dt.datetime, page: int) -> str:
    params = {
        "pageNumber": str(page),
        "pageSize": str(PAGE_SIZE),
        "includeAccount": str(INCLUDE_ACCOUNT).lower(),
        "fromdate": from_dt.replace(microsecond=0).isoformat() + "Z",
        "todate": to_dt.replace(microsecond=0).isoformat() + "Z",
    }
    if TENANT_LIST:
        params["tenantList"] = TENANT_LIST
    return AUDIT_URL + "?" + urllib.parse.urlencode(params)

def fetch_page(url: str) -> dict:
    headers = {
        "Accept": "application/json",
        "Private-Token": PAT_TOKEN,
    }
    req = urllib.request.Request(url, headers=headers)
    with _http(req) as r:
        return json.loads(r.read())

def write_chunk(items: list[dict], ts: dt.datetime) -> str:
    """Write one page of audit logs to S3 as gzipped NDJSON."""
    key = f"{S3_PREFIX}{ts:%Y/%m/%d}/swimlane-audit-{uuid.uuid4()}.json.gz"
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="w") as gz:
        for rec in items:
            gz.write((json.dumps(rec) + "\n").encode())
    buf.seek(0)
    s3.upload_fileobj(buf, S3_BUCKET, key)
    return key

def lambda_handler(event=None, context=None):
    state = get_state()

    # Determine the collection window: resume from the last run if possible.
    to_dt = _now()
    from_dt = to_dt - dt.timedelta(minutes=WINDOW_MINUTES)
    if (prev := state.get("last_to_dt")):
        try:
            # Strip the timezone so the value stays naive UTC, matching _now().
            from_dt = dt.datetime.fromisoformat(prev.replace("Z", "+00:00")).replace(tzinfo=None)
        except Exception:
            pass

    page = int(state.get("page", 1))
    total_written = 0

    while True:
        url = build_url(from_dt, to_dt, page)
        resp = fetch_page(url)
        items = resp.get("auditlogs", []) or []
        if items:
            write_chunk(items, _now())
            total_written += len(items)
        next_path = resp.get("next")
        if not next_path:
            break
        page += 1
        state["page"] = page

    # Advance the state window for the next run.
    state["last_to_dt"] = to_dt.replace(microsecond=0).isoformat() + "Z"
    state["page"] = 1
    put_state(state)

    return {
        "ok": True,
        "written": total_written,
        "from": from_dt.isoformat() + "Z",
        "to": to_dt.isoformat() + "Z",
    }

if __name__ == "__main__":
    print(lambda_handler())
```
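Before deploying, you can smoke-test the collector locally. The harness below is a hypothetical sketch, assuming the code above is saved as `swimlane_audit_to_s3.py` in the current directory and your shell has AWS credentials that can write to the bucket; note that it fetches a real window of audit logs and writes real objects to S3:

```python
# local_smoke_test.py -- hypothetical local harness, not part of the deployment
import os

# The module reads os.environ at import time, so set variables first.
os.environ.update({
    "S3_BUCKET": "swimlane-audit",
    "SWIMLANE_BASE_URL": "https://eu.swimlane.app",
    "SWIMLANE_ACCOUNT_ID": "YOUR_ACCOUNT_ID",
    "SWIMLANE_PAT_TOKEN": "YOUR_PERSONAL_ACCESS_TOKEN",
    "WINDOW_MINUTES": "15",
})

import swimlane_audit_to_s3

# One collection run; prints the window and the number of records written.
print(swimlane_audit_to_s3.lambda_handler())
```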
- Go to Configuration > Environment variables.
- Click Edit > Add new environment variable.
- Enter the following environment variables, replacing the example values with your own:

| Key | Example value |
| --- | --- |
| `S3_BUCKET` | `swimlane-audit` |
| `S3_PREFIX` | `swimlane/audit/` |
| `STATE_KEY` | `swimlane/audit/state.json` |
| `SWIMLANE_BASE_URL` | `https://eu.swimlane.app` |
| `SWIMLANE_ACCOUNT_ID` | `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx` |
| `SWIMLANE_TENANT_LIST` | `tenantA,tenantB` (optional) |
| `INCLUDE_ACCOUNT` | `true` |
| `PAGE_SIZE` | `100` |
| `WINDOW_MINUTES` | `15` |
| `SWIMLANE_PAT_TOKEN` | `<your-personal-access-token>` |
| `TIMEOUT` | `30` |
- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save. (These settings can also be applied programmatically, as sketched below.)
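The environment variables and timeout can also be applied with a single boto3 call; the following sketch mirrors the console steps above, using the same example values:

```python
import boto3

lambda_client = boto3.client("lambda")

# Apply the timeout and environment variables in one update.
lambda_client.update_function_configuration(
    FunctionName="swimlane_audit_to_s3",
    Timeout=300,  # 5 minutes
    Environment={
        "Variables": {
            "S3_BUCKET": "swimlane-audit",
            "S3_PREFIX": "swimlane/audit/",
            "STATE_KEY": "swimlane/audit/state.json",
            "SWIMLANE_BASE_URL": "https://eu.swimlane.app",
            "SWIMLANE_ACCOUNT_ID": "YOUR_ACCOUNT_ID",
            "SWIMLANE_PAT_TOKEN": "YOUR_PERSONAL_ACCESS_TOKEN",
            "INCLUDE_ACCOUNT": "true",
            "PAGE_SIZE": "100",
            "WINDOW_MINUTES": "15",
            "TIMEOUT": "30",
        }
    },
)
```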
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (`15 min`)
  - Target: Your Lambda function `swimlane_audit_to_s3`
  - Name: `swimlane-audit-schedule-15min`
- Click Create schedule. (A scripted equivalent is sketched below.)
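For scripted setups, the equivalent EventBridge Scheduler call looks roughly like the following. The Lambda ARN and the scheduler execution role (which must be allowed to invoke the function) are account-specific assumptions you must supply:

```python
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="swimlane-audit-schedule-15min",
    ScheduleExpression="rate(15 minutes)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        # Replace with your function ARN and a role that EventBridge
        # Scheduler can assume to invoke it (both are assumptions here).
        "Arn": "arn:aws:lambda:REGION:ACCOUNT_ID:function:swimlane_audit_to_s3",
        "RoleArn": "arn:aws:iam::ACCOUNT_ID:role/scheduler-invoke-role",
    },
)
```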
Optional: Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users > Add users.
- Click Add users.
- Provide the following configuration details:
  - User: `secops-reader`
  - Access type: Access key — Programmatic access
- Click Create user.
- Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- In the JSON editor, enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::swimlane-audit/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::swimlane-audit"
    }
  ]
}
```

- Set the name to `secops-reader-policy`.
- Go to Create policy > search/select > Next > Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV file (these values are entered into the feed; you can verify them with the sketch below).
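Before entering these credentials into the feed, you can confirm the reader can list and fetch objects. A minimal sketch, assuming the `swimlane-audit` bucket from this guide:

```python
import boto3

# Credentials from the secops-reader CSV (sketch only).
s3 = boto3.client(
    "s3",
    aws_access_key_id="SECOPS_READER_ACCESS_KEY_ID",
    aws_secret_access_key="SECOPS_READER_SECRET_ACCESS_KEY",
)

# List a few objects under the audit prefix to confirm read access.
resp = s3.list_objects_v2(Bucket="swimlane-audit", Prefix="swimlane/audit/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```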
Configure a feed in Google SecOps to ingest Swimlane Platform logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, `Swimlane Platform logs`).
- Select Amazon S3 V2 as the Source type.
- Select Swimlane Platform as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: `s3://swimlane-audit/swimlane/audit/`
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified in the last number of days. The default is 180 days.
  - Access Key ID: The user access key with access to the S3 bucket.
  - Secret Access Key: The user secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.

