Collect Swimlane Platform logs

This document explains how to ingest Swimlane Platform logs to Google Security Operations using Amazon S3.

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance
  • Privileged access to Swimlane (an Account Admin able to generate a Personal Access Token)
  • Privileged access to AWS (S3, IAM, Lambda, EventBridge)

Collect Swimlane Platform prerequisites (IDs, API keys, org IDs, tokens)

  1. Sign in to the Swimlane Platform as an Account Admin.
  2. Go to Profile Options.
  3. Click Profile to open the profile editor.
  4. Navigate to the Personal Access Token section.
  5. Click Generate token to create a new Personal Access Token.
  6. Copy the token immediately and store it securely (it won't be shown again).
  7. Record the following details for the integration:
    • Personal Access Token (PAT): Used in the Private-Token header for API calls.
    • Account ID: Required for the Audit Log API path /api/public/audit/account/{ACCOUNT_ID}/auditlogs. Contact your Swimlane administrator if you don't know your Account ID.
    • Base URL: Your Swimlane domain (for example, https://eu.swimlane.app or https://us.swimlane.app).
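Before wiring up AWS, you can confirm that the PAT, Account ID, and Base URL recorded above work by calling the Audit Log API directly. The following is an illustrative sketch — `build_audit_request` is our helper name, not part of any Swimlane SDK — that builds the same request the Lambda function later in this document sends:

```python
import urllib.parse
import urllib.request

def build_audit_request(base_url: str, account_id: str, pat: str,
                        page_number: int = 1, page_size: int = 100) -> urllib.request.Request:
    """Build a GET request for the Swimlane Audit Log API endpoint."""
    url = (f"{base_url.rstrip('/')}/api/public/audit/account/{account_id}/auditlogs"
           + "?" + urllib.parse.urlencode({"pageNumber": page_number,
                                           "pageSize": page_size}))
    # The PAT goes in the Private-Token header, as noted above.
    return urllib.request.Request(url, headers={"Accept": "application/json",
                                                "Private-Token": pat})

req = build_audit_request("https://eu.swimlane.app", "YOUR-ACCOUNT-ID", "YOUR-PAT")
# urllib.request.urlopen(req, timeout=30) would return the first page of audit logs.
print(req.full_url)
```

A 200 response with a JSON body confirms the token and Account ID are valid; a 401 usually means the PAT is wrong or expired.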

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket
  2. Save the bucket Name and Region for future reference (for example, swimlane-audit).
  3. Create a user following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: add a description tag.
  10. Click Create access key.
  11. Click Download CSV file to save the Access Key and Secret Access Key for later use.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for and select the AmazonS3FullAccess policy.
  18. Click Next.
  19. Click Add permissions.

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies > Create policy > JSON tab.
  2. Enter the following policy:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowPutSwimlaneAuditObjects",
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::swimlane-audit/swimlane/audit/*"
          },
          {
            "Sid": "AllowStateReadWrite",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::swimlane-audit/swimlane/audit/state.json"
          }
        ]
      }

    • Replace swimlane-audit if you entered a different bucket name.
  3. Click Next > Create policy.

  4. Go to IAM > Roles > Create role > AWS service > Lambda.

  5. Attach the newly created policy and the AWS managed policy:

    • The custom policy created above
    • service-role/AWSLambdaBasicExecutionRole (CloudWatch Logs)
  6. Name the role WriteSwimlaneAuditToS3Role and click Create role.
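Before attaching the policy, you can sanity-check it locally — for example, that every statement targets your bucket and prefix and nothing broader. A minimal sketch using only the standard library (the inlined policy mirrors the one above; adjust `bucket` if you used a different name):

```python
import json

# The custom policy from the step above, inlined for a quick local check.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "AllowPutSwimlaneAuditObjects", "Effect": "Allow",
     "Action": ["s3:PutObject"],
     "Resource": "arn:aws:s3:::swimlane-audit/swimlane/audit/*"},
    {"Sid": "AllowStateReadWrite", "Effect": "Allow",
     "Action": ["s3:GetObject", "s3:PutObject"],
     "Resource": "arn:aws:s3:::swimlane-audit/swimlane/audit/state.json"}
  ]
}
""")

bucket = "swimlane-audit"  # change if you used a different bucket name
for stmt in policy["Statement"]:
    # Every resource should live under the expected bucket and prefix.
    assert stmt["Resource"].startswith(f"arn:aws:s3:::{bucket}/swimlane/audit/"), stmt["Resource"]
print("policy scope OK")
```

This catches the common mistake of creating the policy against one bucket name and the Lambda against another.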

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:

    Setting Value
    Name swimlane_audit_to_s3
    Runtime Python 3.13
    Architecture x86_64
    Execution role WriteSwimlaneAuditToS3Role
  4. After the function is created, open the Code tab, delete the stub, and enter the following code (swimlane_audit_to_s3.py):

      #!/usr/bin/env python3
      import os, json, gzip, io, uuid, datetime as dt, urllib.parse, urllib.request
      import boto3

      # ---- Environment ----
      S3_BUCKET = os.environ["S3_BUCKET"]
      S3_PREFIX = os.environ.get("S3_PREFIX", "swimlane/audit/")
      STATE_KEY = os.environ.get("STATE_KEY", S3_PREFIX + "state.json")
      BASE_URL = os.environ["SWIMLANE_BASE_URL"].rstrip("/")  # e.g., https://eu.swimlane.app
      ACCOUNT_ID = os.environ["SWIMLANE_ACCOUNT_ID"]
      TENANT_LIST = os.environ.get("SWIMLANE_TENANT_LIST", "")  # comma-separated; optional
      INCLUDE_ACCOUNT = os.environ.get("INCLUDE_ACCOUNT", "true").lower() == "true"
      PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "100"))  # max 100
      WINDOW_MINUTES = int(os.environ.get("WINDOW_MINUTES", "15"))  # time range per run
      PAT_TOKEN = os.environ["SWIMLANE_PAT_TOKEN"]  # Personal Access Token
      TIMEOUT = int(os.environ.get("TIMEOUT", "30"))

      AUDIT_URL = f"{BASE_URL}/api/public/audit/account/{ACCOUNT_ID}/auditlogs"

      s3 = boto3.client("s3")

      # ---- Helpers ----
      def _http(req: urllib.request.Request):
          return urllib.request.urlopen(req, timeout=TIMEOUT)

      def _now():
          return dt.datetime.utcnow()

      def get_state() -> dict:
          try:
              obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
              return json.loads(obj["Body"].read())
          except Exception:
              return {}

      def put_state(state: dict) -> None:
          state["updated_at"] = _now().isoformat() + "Z"
          s3.put_object(Bucket=S3_BUCKET, Key=STATE_KEY, Body=json.dumps(state).encode())

      def build_url(from_dt: dt.datetime, to_dt: dt.datetime, page: int) -> str:
          params = {
              "pageNumber": str(page),
              "pageSize": str(PAGE_SIZE),
              "includeAccount": str(INCLUDE_ACCOUNT).lower(),
              "fromdate": from_dt.replace(microsecond=0).isoformat() + "Z",
              "todate": to_dt.replace(microsecond=0).isoformat() + "Z",
          }
          if TENANT_LIST:
              params["tenantList"] = TENANT_LIST
          return AUDIT_URL + "?" + urllib.parse.urlencode(params)

      def fetch_page(url: str) -> dict:
          headers = {
              "Accept": "application/json",
              "Private-Token": PAT_TOKEN,
          }
          req = urllib.request.Request(url, headers=headers)
          with _http(req) as r:
              return json.loads(r.read())

      def write_chunk(items: list[dict], ts: dt.datetime) -> str:
          key = f"{S3_PREFIX}{ts:%Y/%m/%d}/swimlane-audit-{uuid.uuid4()}.json.gz"
          buf = io.BytesIO()
          with gzip.GzipFile(fileobj=buf, mode="w") as gz:
              for rec in items:
                  gz.write((json.dumps(rec) + "\n").encode())
          buf.seek(0)
          s3.upload_fileobj(buf, S3_BUCKET, key)
          return key

      def lambda_handler(event=None, context=None):
          state = get_state()
          # determine window
          to_dt = _now()
          from_dt = to_dt - dt.timedelta(minutes=WINDOW_MINUTES)
          if (prev := state.get("last_to_dt")):
              try:
                  from_dt = dt.datetime.fromisoformat(prev.replace("Z", "+00:00"))
              except Exception:
                  pass
          page = int(state.get("page", 1))
          total_written = 0
          while True:
              url = build_url(from_dt, to_dt, page)
              resp = fetch_page(url)
              items = resp.get("auditlogs", []) or []
              if items:
                  write_chunk(items, _now())
                  total_written += len(items)
              next_path = resp.get("next")
              if not next_path:
                  break
              page += 1
              state["page"] = page
          # advance state window
          state["last_to_dt"] = to_dt.replace(microsecond=0).isoformat() + "Z"
          state["page"] = 1
          put_state(state)
          return {"ok": True, "written": total_written,
                  "from": from_dt.isoformat() + "Z",
                  "to": to_dt.isoformat() + "Z"}

      if __name__ == "__main__":
          print(lambda_handler())

  5. Go to Configuration > Environment variables.

  6. Click Edit > Add new environment variable.

  7. Enter the following environment variables, replacing the example values with your own.

    Key Example value
    S3_BUCKET swimlane-audit
    S3_PREFIX swimlane/audit/
    STATE_KEY swimlane/audit/state.json
    SWIMLANE_BASE_URL https://eu.swimlane.app
    SWIMLANE_ACCOUNT_ID xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
    SWIMLANE_TENANT_LIST tenantA,tenantB (optional)
    INCLUDE_ACCOUNT true
    PAGE_SIZE 100
    WINDOW_MINUTES 15
    SWIMLANE_PAT_TOKEN <your-personal-access-token>
    TIMEOUT 30
  8. After the function is created, stay on its page (or open Lambda > Functions > your-function).

  9. Select the Configuration tab.

  10. In the General configuration panel, click Edit.

  11. Change Timeout to 5 minutes (300 seconds) and click Save.
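The WINDOW_MINUTES and state handling in the function above can be exercised in isolation, with no AWS calls, to see how the query window advances between runs. A sketch using only the standard library (`next_window` is our illustrative name for the logic inside `lambda_handler`):

```python
import datetime as dt

def next_window(state: dict, now: dt.datetime, window_minutes: int = 15):
    """Mirror the Lambda's window logic: resume from last_to_dt when present,
    otherwise fall back to the last window_minutes."""
    to_dt = now
    from_dt = to_dt - dt.timedelta(minutes=window_minutes)
    if (prev := state.get("last_to_dt")):
        from_dt = dt.datetime.fromisoformat(prev.replace("Z", "+00:00"))
    return from_dt, to_dt

now = dt.datetime(2024, 1, 1, 12, 0, tzinfo=dt.timezone.utc)
first = next_window({}, now)  # cold start: query the last 15 minutes
resumed = next_window({"last_to_dt": "2024-01-01T11:50:00Z"}, now)  # warm start
print(first[0].isoformat(), "->", first[1].isoformat())
```

Because each run writes its `to_dt` back as `last_to_dt`, consecutive runs produce contiguous windows with no gaps, even if a run is delayed.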

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (15 minutes)
    • Target: Your Lambda function swimlane_audit_to_s3
    • Name: swimlane-audit-schedule-15min
  3. Click Create schedule.
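If you prefer to create the schedule programmatically, the same settings map onto the EventBridge Scheduler CreateSchedule API. A hedged sketch of the parameters — the function and role ARNs below are placeholders you must replace, and the invoke role is one you create separately with permission to call the Lambda:

```python
# Parameters for EventBridge Scheduler's CreateSchedule call. Once the
# placeholder ARNs are replaced, pass them as:
#   boto3.client("scheduler").create_schedule(**schedule_params)
schedule_params = {
    "Name": "swimlane-audit-schedule-15min",
    "ScheduleExpression": "rate(15 minutes)",   # matches the 15-minute window
    "FlexibleTimeWindow": {"Mode": "OFF"},      # fire exactly on schedule
    "Target": {
        "Arn": "arn:aws:lambda:REGION:ACCOUNT:function:swimlane_audit_to_s3",  # placeholder
        "RoleArn": "arn:aws:iam::ACCOUNT:role/YourSchedulerInvokeRole",        # placeholder
    },
}
print(schedule_params["ScheduleExpression"])
```

Keeping the rate equal to WINDOW_MINUTES means each invocation covers roughly one window; the state file smooths over any drift.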

Optional: Create read-only IAM user & keys for Google SecOps

  1. In the AWS Console, go to IAM > Users > Add users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: secops-reader
    • Access type: Access key — Programmatic access
  4. Click Create user.
  5. Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. In the JSON editor, enter the following policy:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::swimlane-audit/*"
          },
          {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::swimlane-audit"
          }
        ]
      }

  7. Set the name to secops-reader-policy.

  8. Go to Create policy > search/select > Next > Add permissions.

  9. Go to Security credentials > Access keys > Create access key.

  10. Download the CSV file (these values are entered into the feed).
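A common mistake with read-only S3 policies is attaching s3:ListBucket to the object ARN (or s3:GetObject to the bucket ARN), which silently breaks listing or reads. This quick local check of the policy above catches that mix-up before you hand the keys to the feed:

```python
import json

# The secops-reader policy from the step above, inlined for a local check.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["s3:GetObject"],
     "Resource": "arn:aws:s3:::swimlane-audit/*"},
    {"Effect": "Allow", "Action": ["s3:ListBucket"],
     "Resource": "arn:aws:s3:::swimlane-audit"}
  ]
}
""")

for stmt in policy["Statement"]:
    for action in stmt["Action"]:
        # Object-level actions need a /* resource; bucket-level actions must not.
        if action == "s3:GetObject":
            assert stmt["Resource"].endswith("/*"), "GetObject needs an object ARN (bucket/*)"
        if action == "s3:ListBucket":
            assert not stmt["Resource"].endswith("/*"), "ListBucket applies to the bucket ARN"
print("reader policy OK")
```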

Configure a feed in Google SecOps to ingest Swimlane Platform logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Swimlane Platform logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Swimlane Platform as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://swimlane-audit/swimlane/audit/
    • Source deletion options: Select the deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default 180 Days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.

Need more help? Get answers from Community members and Google SecOps professionals.
