Collect Slack audit logs

This document explains how to ingest Slack audit logs to Google Security Operations using Amazon S3. The parser first normalizes boolean values and clears predefined fields. It then parses the message field as JSON, dropping messages that are not valid JSON. Depending on which fields are present (date_create or user_id), the parser applies different logic to map raw log fields to the UDM, including metadata, principal, network, target, and about information, and constructs a security result.
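
The message-handling step described above can be illustrated with a small sketch. This is illustrative Python only, not the actual parser (which is a Google SecOps parser configuration); the function name is our own:

```python
import json

def parse_message(message):
    """Illustrative only: treat the raw "message" field as JSON and
    drop (return None for) anything that is not valid JSON."""
    try:
        return json.loads(message)
    except (json.JSONDecodeError, TypeError):
        return None

print(parse_message('{"action": "user_login"}'))  # {'action': 'user_login'}
print(parse_message("not json"))                  # None
```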

Before you begin

Make sure you have the following prerequisites:

  • Google SecOps instance
  • Privileged access to the Slack Enterprise Grid tenant and Admin Console
  • Privileged access to AWS (S3, IAM, Lambda, EventBridge)

Collect Slack prerequisites (App ID, OAuth Token, Organization ID)

  1. Sign in to the Slack Admin Console.
  2. Go to https://api.slack.com/apps and click Create New App > From scratch.
  3. Enter a unique App Name and select your Slack workspace.
  4. Click Create App.
  5. Go to OAuth & Permissions in the left sidebar.
  6. In the Scopes section, add the following User Token Scope: auditlogs:read.
  7. Click Install to Workspace > Allow.
  8. Once installed, go to Org Level Apps.
  9. Click Install to Organization.
  10. Authorize the app with an Organization Owner/Admin account.
  11. Copy and securely save the User OAuth Token that starts with xoxp- (this is your SLACK_AUDIT_TOKEN).
  12. Note the Organization ID, which can be found in the Slack Admin Console under Settings & Permissions > Organization settings.
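
Before wiring up AWS, you can sanity-check the token with a single authenticated request against the Audit Logs API. The helper below only builds the request (the helper name is our own); the commented call requires network access and a valid token:

```python
import urllib.request

BASE_URL = "https://api.slack.com/audit/v1/logs"

def build_audit_request(token, limit=1):
    """Build an authenticated GET request for the Slack Audit Logs API."""
    req = urllib.request.Request(f"{BASE_URL}?limit={limit}", method="GET")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

# Requires network access and a valid xoxp- token:
# with urllib.request.urlopen(build_audit_request("xoxp-...")) as r:
#     print(r.status)  # 200 confirms the token and the auditlogs:read scope
```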

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket
  2. Save the bucket Name and Region for future reference (for example, slack-audit-logs).
  3. Create a user following this user guide: Creating an IAM user.
  4. Select the created user.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: add a description tag.
  10. Click Create access key.
  11. Click Download CSV file to save the Access Key and Secret Access Key for later use.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for and select the AmazonS3FullAccess policy.
  18. Click Next.
  19. Click Add permissions.

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies > Create policy > JSON tab.
  2. Enter the following policy:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowPutObjects",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::slack-audit-logs/*"
          },
          {
            "Sid": "AllowGetStateObject",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::slack-audit-logs/slack/audit/state.json"
          }
        ]
      }
    • Replace slack-audit-logs if you entered a different bucket name.
  3. Click Next > Create policy.

  4. Go to IAM > Roles > Create role > AWS service > Lambda.

  5. Attach the newly created policy.

  6. Name the role SlackAuditToS3Role and click Create role.
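
If you provision AWS with scripts rather than the console, the same policy document can be generated instead of pasted. This is a sketch; the function name is our own, and the bucket name is a parameter:

```python
import json

def make_lambda_s3_policy(bucket, state_key="slack/audit/state.json"):
    """Build the Lambda upload policy shown above for a given bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowPutObjects",
                "Effect": "Allow",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            },
            {
                "Sid": "AllowGetStateObject",
                "Effect": "Allow",
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/{state_key}",
            },
        ],
    }

print(json.dumps(make_lambda_s3_policy("slack-audit-logs"), indent=2))
```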

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:
     Setting          Value
     Name             slack_audit_to_s3
     Runtime          Python 3.13
     Architecture     x86_64
     Execution role   SlackAuditToS3Role
  1. After the function is created, open the Code tab, delete the stub, and enter the following code ( slack_audit_to_s3.py ):

    #!/usr/bin/env python3
    # Lambda: Pull Slack Audit Logs (Enterprise Grid) to S3 (no transform)
    import os, json, time, urllib.parse
    from urllib.request import Request, urlopen
    from urllib.error import HTTPError, URLError
    import boto3

    BASE_URL = "https://api.slack.com/audit/v1/logs"
    TOKEN = os.environ["SLACK_AUDIT_TOKEN"]  # org-level user token with auditlogs:read
    BUCKET = os.environ["S3_BUCKET"]
    PREFIX = os.environ.get("S3_PREFIX", "slack/audit/")
    STATE_KEY = os.environ.get("STATE_KEY", "slack/audit/state.json")
    LIMIT = int(os.environ.get("LIMIT", "200"))  # Slack recommends <= 200
    MAX_PAGES = int(os.environ.get("MAX_PAGES", "20"))
    LOOKBACK_SEC = int(os.environ.get("LOOKBACK_SECONDS", "3600"))  # First-run window
    HTTP_TIMEOUT = int(os.environ.get("HTTP_TIMEOUT", "60"))
    HTTP_RETRIES = int(os.environ.get("HTTP_RETRIES", "3"))
    RETRY_AFTER_DEFAULT = int(os.environ.get("RETRY_AFTER_DEFAULT", "2"))
    # Optional server-side filters (comma-separated "action" values), empty means no filter
    ACTIONS = os.environ.get("ACTIONS", "").strip()

    s3 = boto3.client("s3")

    def _get_state() -> dict:
        try:
            obj = s3.get_object(Bucket=BUCKET, Key=STATE_KEY)
            st = json.loads(obj["Body"].read() or b"{}")
            return {"cursor": st.get("cursor")}
        except Exception:
            return {"cursor": None}

    def _put_state(state: dict) -> None:
        body = json.dumps(state, separators=(",", ":")).encode("utf-8")
        s3.put_object(Bucket=BUCKET, Key=STATE_KEY, Body=body,
                      ContentType="application/json")

    def _http_get(params: dict) -> dict:
        qs = urllib.parse.urlencode(params, doseq=True)
        url = f"{BASE_URL}?{qs}" if qs else BASE_URL
        req = Request(url, method="GET")
        req.add_header("Authorization", f"Bearer {TOKEN}")
        req.add_header("Accept", "application/json")
        attempt = 0
        while True:
            try:
                with urlopen(req, timeout=HTTP_TIMEOUT) as r:
                    return json.loads(r.read().decode("utf-8"))
            except HTTPError as e:
                # Respect Retry-After on 429/5xx
                if e.code in (429, 500, 502, 503, 504) and attempt < HTTP_RETRIES:
                    retry_after = 0
                    try:
                        retry_after = int(e.headers.get("Retry-After", RETRY_AFTER_DEFAULT))
                    except Exception:
                        retry_after = RETRY_AFTER_DEFAULT
                    time.sleep(max(1, retry_after))
                    attempt += 1
                    continue
                # Re-raise other HTTP errors
                raise
            except URLError:
                if attempt < HTTP_RETRIES:
                    time.sleep(RETRY_AFTER_DEFAULT)
                    attempt += 1
                    continue
                raise

    def _write_page(payload: dict, page_idx: int) -> str:
        ts = time.strftime("%Y/%m/%d/%H%M%S", time.gmtime())
        # rstrip avoids a double slash when PREFIX ends with "/"
        key = f"{PREFIX.rstrip('/')}/{ts}-slack-audit-p{page_idx:05d}.json"
        body = json.dumps(payload, separators=(",", ":")).encode("utf-8")
        s3.put_object(Bucket=BUCKET, Key=key, Body=body,
                      ContentType="application/json")
        return key

    def lambda_handler(event=None, context=None):
        state = _get_state()
        cursor = state.get("cursor")
        params = {"limit": LIMIT}
        if ACTIONS:
            params["action"] = [a.strip() for a in ACTIONS.split(",") if a.strip()]
        if cursor:
            params["cursor"] = cursor
        else:
            # First run (or reset): fetch a recent window by time
            params["oldest"] = int(time.time()) - LOOKBACK_SEC
        pages = 0
        total = 0
        last_cursor = None
        while pages < MAX_PAGES:
            data = _http_get(params)
            _write_page(data, pages)
            entries = data.get("entries") or []
            total += len(entries)
            # Cursor for next page
            meta = data.get("response_metadata") or {}
            next_cursor = meta.get("next_cursor") or data.get("next_cursor")
            if next_cursor:
                params = {"limit": LIMIT, "cursor": next_cursor}
                if ACTIONS:
                    params["action"] = [a.strip() for a in ACTIONS.split(",") if a.strip()]
                last_cursor = next_cursor
                pages += 1
                continue
            break
        if last_cursor:
            _put_state({"cursor": last_cursor})
        return {
            "ok": True,
            "pages": pages + (1 if total or last_cursor else 0),
            "entries": total,
            "cursor": last_cursor,
        }

    if __name__ == "__main__":
        print(lambda_handler())
  2. Go to Configuration > Environment variables > Edit > Add new environment variable.

  3. Enter the following environment variables, replacing the example values with your own:

     Key                      Example value
     S3_BUCKET                slack-audit-logs
     S3_PREFIX                slack/audit/
     STATE_KEY                slack/audit/state.json
     SLACK_AUDIT_TOKEN        xoxp-*** (org-level user token with auditlogs:read)
     LIMIT                    200
     MAX_PAGES                20
     LOOKBACK_SECONDS         3600
     HTTP_TIMEOUT             60
     HTTP_RETRIES             3
     RETRY_AFTER_DEFAULT      2
     ACTIONS (optional, CSV)  user_login,app_installed
  4. After the function is created, stay on its page (or open Lambda > Functions > your-function).

  5. Select the Configuration tab.

  6. In the General configuration panel, click Edit.

  7. Change Timeout to 5 minutes (300 seconds) and click Save.
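
If you prefer to configure the function from a script, the variable table above maps directly onto the Environment argument of Lambda's update_function_configuration API. A sketch (the helper name is our own; values are the examples from the table):

```python
def lambda_environment(bucket, token):
    """Environment payload mirroring the variable table above, suitable
    for boto3's update_function_configuration(Environment=...)."""
    return {
        "Variables": {
            "S3_BUCKET": bucket,
            "S3_PREFIX": "slack/audit/",
            "STATE_KEY": "slack/audit/state.json",
            "SLACK_AUDIT_TOKEN": token,
            "LIMIT": "200",
            "MAX_PAGES": "20",
            "LOOKBACK_SECONDS": "3600",
            "HTTP_TIMEOUT": "60",
            "HTTP_RETRIES": "3",
            "RETRY_AFTER_DEFAULT": "2",
        }
    }

# Requires AWS credentials and network access:
# import boto3
# boto3.client("lambda").update_function_configuration(
#     FunctionName="slack_audit_to_s3",
#     Environment=lambda_environment("slack-audit-logs", "xoxp-..."),
# )
```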

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (1 hour).
    • Target: Your Lambda function slack_audit_to_s3 .
    • Name: slack-audit-1h .
  3. Click Create schedule.
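
The same schedule can be expressed as an EventBridge Scheduler payload. This is a sketch: the ARNs are placeholders you must replace, and the scheduler needs an execution role that is allowed to invoke the function:

```python
schedule = {
    "Name": "slack-audit-1h",
    "ScheduleExpression": "rate(1 hour)",
    "FlexibleTimeWindow": {"Mode": "OFF"},
    "Target": {
        # Placeholders: substitute your region, account ID, and role name
        "Arn": "arn:aws:lambda:REGION:ACCOUNT_ID:function:slack_audit_to_s3",
        "RoleArn": "arn:aws:iam::ACCOUNT_ID:role/SchedulerInvokeRole",
    },
}

# Requires AWS credentials and network access:
# import boto3
# boto3.client("scheduler").create_schedule(**schedule)
print(schedule["ScheduleExpression"])  # rate(1 hour)
```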

Optional: Create read-only IAM user & keys for Google SecOps

  1. In the AWS Console, go to IAM > Users > Add users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: secops-reader .
    • Access type: Access key — Programmatic access.
  4. Click Create user.
  5. Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. In the JSON editor, enter the following policy:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::slack-audit-logs/*"
          },
          {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::slack-audit-logs"
          }
        ]
      }
    
  7. Set the name to secops-reader-policy .

  8. Click Create policy, then return to the Add permissions screen, search for and select secops-reader-policy, and click Next > Add permissions.

  9. Go to Security credentials > Access keys > Create access key.

  10. Download the CSV file (these values are entered into the feed).
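
Before configuring the feed, you can verify that the reader credentials can actually see the data. The list call is commented out because it requires network access and the downloaded keys; the small helper (our own name) just composes the S3 URI used in the next section:

```python
def feed_s3_uri(bucket, prefix):
    """Compose the S3 URI the Google SecOps feed expects."""
    return f"s3://{bucket}/{prefix if prefix.endswith('/') else prefix + '/'}"

print(feed_s3_uri("slack-audit-logs", "slack/audit"))  # s3://slack-audit-logs/slack/audit/

# With the secops-reader keys (requires network access):
# import boto3
# s3 = boto3.client("s3", aws_access_key_id="...", aws_secret_access_key="...")
# resp = s3.list_objects_v2(Bucket="slack-audit-logs", Prefix="slack/audit/", MaxKeys=5)
# print([o["Key"] for o in resp.get("Contents", [])])
```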

Configure a feed in Google SecOps to ingest Slack Audit Logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Slack Audit Logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Slack Audit as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://slack-audit-logs/slack/audit/
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace .
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.

UDM Mapping Table

Log Field | UDM Mapping | Logic
action | metadata.product_event_type | Directly mapped from the action field in the raw log.
actor.type | principal.labels.value | Directly mapped from the actor.type field, with the key actor.type added.
actor.user.email | principal.user.email_addresses | Directly mapped from the actor.user.email field.
actor.user.id | principal.user.product_object_id | Directly mapped from the actor.user.id field.
actor.user.id | principal.user.userid | Directly mapped from the actor.user.id field.
actor.user.name | principal.user.user_display_name | Directly mapped from the actor.user.name field.
actor.user.team | principal.user.group_identifiers | Directly mapped from the actor.user.team field.
context.ip_address | principal.ip | Directly mapped from the context.ip_address field.
context.location.domain | about.resource.attribute.labels.value | Directly mapped from the context.location.domain field, with the key context.location.domain added.
context.location.id | about.resource.id | Directly mapped from the context.location.id field.
context.location.name | about.resource.name | Directly mapped from the context.location.name field.
context.location.name | about.resource.attribute.labels.value | Directly mapped from the context.location.name field, with the key context.location.name added.
context.location.type | about.resource.resource_subtype | Directly mapped from the context.location.type field.
context.session_id | network.session_id | Directly mapped from the context.session_id field.
context.ua | network.http.user_agent | Directly mapped from the context.ua field.
context.ua | network.http.parsed_user_agent | Parsed user agent information derived from the context.ua field using the parseduseragent filter.
country | principal.location.country_or_region | Directly mapped from the country field.
date_create | metadata.event_timestamp.seconds | The epoch timestamp from the date_create field is converted to a timestamp object.
details.inviter.email | target.user.email_addresses | Directly mapped from the details.inviter.email field.
details.inviter.id | target.user.product_object_id | Directly mapped from the details.inviter.id field.
details.inviter.name | target.user.user_display_name | Directly mapped from the details.inviter.name field.
details.inviter.team | target.user.group_identifiers | Directly mapped from the details.inviter.team field.
details.reason | security_result.description | Directly mapped from the details.reason field, or if it's an array, concatenated with commas.
details.type | about.resource.attribute.labels.value | Directly mapped from the details.type field, with the key details.type added.
details.type | security_result.summary | Directly mapped from the details.type field.
entity.app.id | target.resource.id | Directly mapped from the entity.app.id field.
entity.app.name | target.resource.name | Directly mapped from the entity.app.name field.
entity.channel.id | target.resource.id | Directly mapped from the entity.channel.id field.
entity.channel.name | target.resource.name | Directly mapped from the entity.channel.name field.
entity.channel.privacy | target.resource.attribute.labels.value | Directly mapped from the entity.channel.privacy field, with the key entity.channel.privacy added.
entity.file.filetype | target.resource.attribute.labels.value | Directly mapped from the entity.file.filetype field, with the key entity.file.filetype added.
entity.file.id | target.resource.id | Directly mapped from the entity.file.id field.
entity.file.name | target.resource.name | Directly mapped from the entity.file.name field.
entity.file.title | target.resource.attribute.labels.value | Directly mapped from the entity.file.title field, with the key entity.file.title added.
entity.huddle.date_end | about.resource.attribute.labels.value | Directly mapped from the entity.huddle.date_end field, with the key entity.huddle.date_end added.
entity.huddle.date_start | about.resource.attribute.labels.value | Directly mapped from the entity.huddle.date_start field, with the key entity.huddle.date_start added.
entity.huddle.id | about.resource.attribute.labels.value | Directly mapped from the entity.huddle.id field, with the key entity.huddle.id added.
entity.huddle.participants.0 | about.resource.attribute.labels.value | Directly mapped from the entity.huddle.participants.0 field, with the key entity.huddle.participants.0 added.
entity.huddle.participants.1 | about.resource.attribute.labels.value | Directly mapped from the entity.huddle.participants.1 field, with the key entity.huddle.participants.1 added.
entity.type | target.resource.resource_subtype | Directly mapped from the entity.type field.
entity.user.email | target.user.email_addresses | Directly mapped from the entity.user.email field.
entity.user.id | target.user.product_object_id | Directly mapped from the entity.user.id field.
entity.user.name | target.user.user_display_name | Directly mapped from the entity.user.name field.
entity.user.team | target.user.group_identifiers | Directly mapped from the entity.user.team field.
entity.workflow.id | target.resource.id | Directly mapped from the entity.workflow.id field.
entity.workflow.name | target.resource.name | Directly mapped from the entity.workflow.name field.
id | metadata.product_log_id | Directly mapped from the id field.
ip | principal.ip | Directly mapped from the ip field.
user_agent | network.http.user_agent | Directly mapped from the user_agent field.
user_id | principal.user.product_object_id | Directly mapped from the user_id field.
username | principal.user.product_object_id | Directly mapped from the username field.

In addition, the parser sets several UDM values by logic rather than by mapping a single raw field:

  • Determined by logic based on the action field: defaults to USER_COMMUNICATION, but changes to other values such as USER_CREATION, USER_LOGIN, USER_LOGOUT, USER_RESOURCE_ACCESS, USER_RESOURCE_UPDATE_PERMISSIONS, or USER_CHANGE_PERMISSIONS based on the value of action.
  • Hardcoded to "SLACK_AUDIT".
  • Set to "Enterprise Grid" if date_create exists; otherwise set to "Audit Logs" if user_id exists.
  • Hardcoded to "Slack".
  • Hardcoded to "REMOTE".
  • Set to "SSO" if action contains "user_login" or "user_logout"; otherwise set to "MACHINE".
  • Defaults to "ALLOW", but set to "BLOCK" if action is "user_login_failed".
  • Set to "Slack" if date_create exists; otherwise set to "SLACK" if user_id exists.

Need more help? Get answers from Community members and Google SecOps professionals.
