Collect Zoom operation logs

Supported in:

This document explains how to ingest Zoom operation logs to Google Security Operations using Amazon S3. The parser transforms the raw logs into a unified data model (UDM). It extracts fields from the raw log message, performs data cleaning and normalization, and maps the extracted information to corresponding UDM fields, ultimately enriching the data for analysis and correlation within a SIEM system.

Before you begin

Make sure you have the following prerequisites:

  • Google SecOps instance
  • Privileged access to Zoom
  • Privileged access to AWS (S3, IAM, Lambda, and EventBridge)

Collect Zoom Operation Logs prerequisites (IDs, API keys, org IDs, tokens)

  1. Sign in to Zoom App Marketplace.
  2. Go to Develop > Build App > Server-to-Server OAuth.
  3. Create the app and add the following scope: report:read:operation_logs:admin (or report:read:admin).
  4. In App Credentials, copy and save the following details in a secure location:
    • Account ID.
    • Client ID.
    • Client Secret.
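The Server-to-Server OAuth token request these credentials feed can be sketched locally. This is a sketch only: the credential values below are placeholders for the ones saved above, and no request is actually sent.

```python
import base64
import urllib.parse

# Placeholder credentials -- substitute the Account ID, Client ID, and
# Client Secret saved from App Credentials.
ACCOUNT_ID = "your-account-id"
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

# Server-to-Server OAuth uses the account_credentials grant; the app
# authenticates with HTTP Basic auth built from "client_id:client_secret".
basic = base64.b64encode(f"{CLIENT_ID}:{CLIENT_SECRET}".encode()).decode()
headers = {
    "Authorization": f"Basic {basic}",
    "Content-Type": "application/x-www-form-urlencoded",
}
body = urllib.parse.urlencode({
    "grant_type": "account_credentials",
    "account_id": ACCOUNT_ID,
})
# POSTing this body to https://zoom.us/oauth/token returns JSON containing
# an "access_token", which is then presented as a Bearer token to the
# report endpoint.
print(body)
```

The Lambda function later in this guide performs exactly this exchange at the start of each run.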

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket
  2. Save the bucket Name and Region for future reference (for example, zoom-operation-logs).
  3. Create a user following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: add a description tag.
  10. Click Create access key.
  11. Click Download CSV file to save the Access Key and Secret Access Key for later use.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for and select the AmazonS3FullAccess policy.
  18. Click Next.
  19. Click Add permissions.
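For reference, the collector Lambda created later in this guide writes gzipped NDJSON objects into this bucket under a date-partitioned prefix. A sketch of the resulting key layout (the prefix and file-name pattern mirror the Lambda code below):

```python
import datetime as dt
import uuid

# Objects land under a date-partitioned prefix, e.g.
# zoom/operationlogs/2024/05/01/zoom-operationlogs-<uuid>.json.gz
S3_PREFIX = "zoom/operationlogs/"

def object_key(ts: dt.datetime) -> str:
    # %Y/%m/%d partitions by day; the UUID keeps object names unique.
    return f"{S3_PREFIX}{ts:%Y/%m/%d}/zoom-operationlogs-{uuid.uuid4()}.json.gz"

key = object_key(dt.datetime(2024, 5, 1, 12, 0, 0))
print(key)
```

Day-level partitioning keeps listing cheap and lets the feed's Maximum File Age filter work on recent prefixes.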

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies > Create policy > JSON tab.
  2. Enter the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Sid": "AllowPutZoomOperationLogs",
          "Effect": "Allow",
          "Action": ["s3:PutObject"],
          "Resource": "arn:aws:s3:::zoom-operation-logs/zoom/operationlogs/*"
        },
        {
          "Sid": "AllowStateReadWrite",
          "Effect": "Allow",
          "Action": ["s3:GetObject", "s3:PutObject"],
          "Resource": "arn:aws:s3:::zoom-operation-logs/zoom/operationlogs/state.json"
        }
      ]
    }
    • Replace zoom-operation-logs if you entered a different bucket name.
  3. Click Next > Create policy.

  4. Go to IAM > Roles > Create role > AWS service > Lambda.

  5. Attach the newly created policy.

  6. Name the role WriteZoomOperationLogsToS3Role and click Create role.
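If you used a different bucket name, the policy can be generated rather than edited by hand. A minimal sketch (the default bucket and prefix are the example values used above; `build_policy` is an illustrative helper, not an AWS API):

```python
import json

def build_policy(bucket: str, prefix: str = "zoom/operationlogs/") -> str:
    """Render the S3 write policy for an arbitrary bucket/prefix."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                # Lets the Lambda write log objects under the prefix.
                "Sid": "AllowPutZoomOperationLogs",
                "Effect": "Allow",
                "Action": ["s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}*",
            },
            {
                # Lets the Lambda read and rewrite its pagination state file.
                "Sid": "AllowStateReadWrite",
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{bucket}/{prefix}state.json",
            },
        ],
    }
    return json.dumps(policy, indent=2)

doc = build_policy("zoom-operation-logs")
print(doc)
```

Paste the printed JSON into the policy editor in place of the example above.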

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:
     Setting          Value
     Name             zoom_operationlogs_to_s3
     Runtime          Python 3.13
     Architecture     x86_64
     Execution role   WriteZoomOperationLogsToS3Role

  1. After the function is created, open the Code tab, delete the stub, and enter the following code (zoom_operationlogs_to_s3.py):

    #!/usr/bin/env python3
    import os, json, gzip, io, uuid, datetime as dt, base64, urllib.parse, urllib.request

    import boto3

    # ---- Environment ----
    S3_BUCKET = os.environ["S3_BUCKET"]
    S3_PREFIX = os.environ.get("S3_PREFIX", "zoom/operationlogs/")
    STATE_KEY = os.environ.get("STATE_KEY", S3_PREFIX + "state.json")
    ZOOM_ACCOUNT_ID = os.environ["ZOOM_ACCOUNT_ID"]
    ZOOM_CLIENT_ID = os.environ["ZOOM_CLIENT_ID"]
    ZOOM_CLIENT_SECRET = os.environ["ZOOM_CLIENT_SECRET"]
    PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "300"))  # API default 30; max may vary
    TIMEOUT = int(os.environ.get("TIMEOUT", "30"))

    TOKEN_URL = "https://zoom.us/oauth/token"
    REPORT_URL = "https://api.zoom.us/v2/report/operationlogs"

    s3 = boto3.client("s3")

    # ---- Helpers ----
    def _http(req: urllib.request.Request):
        return urllib.request.urlopen(req, timeout=TIMEOUT)

    def get_token() -> str:
        params = urllib.parse.urlencode({
            "grant_type": "account_credentials",
            "account_id": ZOOM_ACCOUNT_ID,
        }).encode()
        basic = base64.b64encode(f"{ZOOM_CLIENT_ID}:{ZOOM_CLIENT_SECRET}".encode()).decode()
        req = urllib.request.Request(
            TOKEN_URL,
            data=params,
            headers={
                "Authorization": f"Basic {basic}",
                "Content-Type": "application/x-www-form-urlencoded",
                "Accept": "application/json",
                "Host": "zoom.us",
            },
            method="POST",
        )
        with _http(req) as r:
            body = json.loads(r.read())
        return body["access_token"]

    def get_state() -> dict:
        try:
            obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
            return json.loads(obj["Body"].read())
        except Exception:
            # initial state: start today
            today = dt.date.today().isoformat()
            return {"cursor_date": today, "next_page_token": None}

    def put_state(state: dict):
        state["updated_at"] = dt.datetime.utcnow().isoformat() + "Z"
        s3.put_object(Bucket=S3_BUCKET, Key=STATE_KEY, Body=json.dumps(state).encode())

    def write_chunk(items: list[dict], ts: dt.datetime) -> str:
        key = f"{S3_PREFIX}{ts:%Y/%m/%d}/zoom-operationlogs-{uuid.uuid4()}.json.gz"
        buf = io.BytesIO()
        with gzip.GzipFile(fileobj=buf, mode="w") as gz:
            for rec in items:
                gz.write((json.dumps(rec) + "\n").encode())
        buf.seek(0)
        s3.upload_fileobj(buf, S3_BUCKET, key)
        return key

    def fetch_page(token: str, from_date: str, to_date: str, next_page_token: str | None) -> dict:
        q = {
            "from": from_date,
            "to": to_date,
            "page_size": str(PAGE_SIZE),
        }
        if next_page_token:
            q["next_page_token"] = next_page_token
        url = REPORT_URL + "?" + urllib.parse.urlencode(q)
        req = urllib.request.Request(url, headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        })
        with _http(req) as r:
            return json.loads(r.read())

    def lambda_handler(event=None, context=None):
        token = get_token()
        state = get_state()
        cursor_date = state.get("cursor_date")  # YYYY-MM-DD
        # API requires from/to in yyyy-mm-dd, max one month per request
        from_date = cursor_date
        to_date = cursor_date
        total_written = 0
        next_token = state.get("next_page_token")

        while True:
            page = fetch_page(token, from_date, to_date, next_token)
            items = page.get("operation_logs", []) or []
            if items:
                write_chunk(items, dt.datetime.utcnow())
                total_written += len(items)
            next_token = page.get("next_page_token")
            if not next_token:
                break

        # Advance to next day if we've finished this date
        today = dt.date.today().isoformat()
        if cursor_date < today:
            nxt = (dt.datetime.fromisoformat(cursor_date) + dt.timedelta(days=1)).date().isoformat()
            state["cursor_date"] = nxt
            state["next_page_token"] = None
        else:
            # stay on today; continue later with next_page_token=None
            state["next_page_token"] = None

        put_state(state)
        return {"ok": True, "written": total_written, "date": from_date}

    if __name__ == "__main__":
        print(lambda_handler())
  2. Go to Configuration > Environment variables > Edit > Add new environment variable.

  3. Enter the following environment variables, replacing with your values:

     Key                  Example value
     S3_BUCKET            zoom-operation-logs
     S3_PREFIX            zoom/operationlogs/
     STATE_KEY            zoom/operationlogs/state.json
     ZOOM_ACCOUNT_ID      <your-zoom-account-id>
     ZOOM_CLIENT_ID       <your-zoom-client-id>
     ZOOM_CLIENT_SECRET   <your-zoom-client-secret>
     PAGE_SIZE            300
     TIMEOUT              30
  4. After the function is created, stay on its page (or open Lambda > Functions > your-function).

  5. Select the Configuration tab.

  6. In the General configuration panel, click Edit.

  7. Change Timeout to 5 minutes (300 seconds) and click Save.
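The function's catch-up behavior can be verified locally: the state cursor advances one day per run until it reaches today, then stays pinned to today. A standalone sketch of that bookkeeping (extracted from the handler's date logic, with the dates as example values):

```python
import datetime as dt

def advance_cursor(cursor_date: str, today: str) -> str:
    """Move the cursor one day forward while it lags behind today;
    otherwise keep it on today so the next run re-polls the current date."""
    if cursor_date < today:  # ISO dates compare correctly as strings
        nxt = (dt.datetime.fromisoformat(cursor_date) + dt.timedelta(days=1)).date()
        return nxt.isoformat()
    return cursor_date

# A backlogged cursor steps forward one day per invocation...
print(advance_cursor("2024-04-28", "2024-05-01"))  # 2024-04-29
# ...and a caught-up cursor stays where it is.
print(advance_cursor("2024-05-01", "2024-05-01"))  # 2024-05-01
```

This is why a newly deployed function starting in the past converges on the current date after a bounded number of scheduled runs.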

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler.
  2. Click Create schedule.
  3. Provide the following configuration details:
    • Recurring schedule: Rate (15 min).
    • Target: Your Lambda function zoom_operationlogs_to_s3.
    • Name: zoom-operationlogs-schedule-15min.
  4. Click Create schedule.
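A quick arithmetic check on this cadence: a 15-minute rate yields 96 invocations per day, and the 5-minute timeout configured earlier is shorter than the schedule interval, so runs cannot overlap even when an invocation uses its full time budget.

```python
rate_minutes = 15
timeout_seconds = 300  # Lambda timeout configured above

invocations_per_day = 24 * 60 // rate_minutes
print(invocations_per_day)  # 96

# Timeout < interval => at most one run in flight at any moment.
assert timeout_seconds < rate_minutes * 60
```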

Optional: Create read-only IAM user & keys for Google SecOps

  1. In the AWS Console, go to IAM > Users > Add users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: secops-reader.
    • Access type: Access key — Programmatic access.
  4. Click Create user.
  5. Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. In the JSON editor, enter the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::zoom-operation-logs/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::zoom-operation-logs"
        }
      ]
    }
  7. Set the name to secops-reader-policy.

  8. Go to Create policy > search/select > Next > Add permissions.

  9. Go to Security credentials > Access keys > Create access key.

  10. Download the CSV file (these values are entered into the feed).
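The two statements in the reader policy cover different resource types: object reads (s3:GetObject) match the `/*` ARN, while bucket listing (s3:ListBucket) matches the bucket ARN itself, which is why both statements are needed. A sketch of that distinction, using `fnmatch` as a simplified stand-in for IAM's wildcard evaluation:

```python
from fnmatch import fnmatch

OBJECT_ARN_PATTERN = "arn:aws:s3:::zoom-operation-logs/*"
BUCKET_ARN = "arn:aws:s3:::zoom-operation-logs"

# s3:GetObject acts on object ARNs, so the /* pattern matches them.
obj = "arn:aws:s3:::zoom-operation-logs/zoom/operationlogs/2024/05/01/x.json.gz"
print(fnmatch(obj, OBJECT_ARN_PATTERN))        # True

# s3:ListBucket acts on the bucket ARN, which the /* pattern does NOT match,
# hence the second statement with the bare bucket ARN.
print(fnmatch(BUCKET_ARN, OBJECT_ARN_PATTERN))  # False
```

Dropping either statement would leave the feed unable to list objects or unable to download them.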

Configure a feed in Google SecOps to ingest Zoom Operation Logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Zoom Operation Logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Zoom Operation Logs as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://zoom-operation-logs/zoom/operationlogs/
    • Source deletion options: Select the deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.
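The feed reads the gzipped NDJSON objects the Lambda writes. A local round-trip sketch of that format (one JSON record per line, gzip-compressed; the sample records are invented for illustration):

```python
import gzip
import io
import json

records = [
    {"action": "Update", "operator": "admin@example.com"},
    {"action": "SignIn", "operator": "user@example.com"},
]

# Write: one JSON document per line, then gzip -- the same shape
# write_chunk() uploads to S3.
buf = io.BytesIO()
with gzip.GzipFile(fileobj=buf, mode="w") as gz:
    for rec in records:
        gz.write((json.dumps(rec) + "\n").encode())

# Read back, as the feed would after downloading the object.
buf.seek(0)
lines = gzip.GzipFile(fileobj=buf).read().decode().splitlines()
parsed = [json.loads(line) for line in lines]
print(parsed[0]["action"])
```

Each line becomes one raw log entry for the parser, which then maps it to UDM as described below.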

UDM Mapping Table

Each entry lists the raw log field, the UDM field it maps to, and the mapping logic.

  • action → metadata.product_event_type: The raw log field "action" is mapped to this UDM field.
  • category_type → additional.fields.key: The raw log field "category_type" is mapped to this UDM field.
  • category_type → additional.fields.value.string_value: The raw log field "category_type" is mapped to this UDM field.
  • Department → target.user.department: The raw log field "Department" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • Description → target.user.role_description: The raw log field "Description" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • Display Name → target.user.user_display_name: The raw log field "Display Name" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • Email Address → target.user.email_addresses: The raw log field "Email Address" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • First Name → target.user.first_name: The raw log field "First Name" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • Job Title → target.user.title: The raw log field "Job Title" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • Last Name → target.user.last_name: The raw log field "Last Name" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • Location → target.location.name: The raw log field "Location" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • operation_detail → metadata.description: The raw log field "operation_detail" is mapped to this UDM field.
  • operator → principal.user.email_addresses: The raw log field "operator" is mapped to this UDM field if it matches an email regex.
  • operator → principal.user.userid: The raw log field "operator" is mapped to this UDM field if it doesn't match an email regex.
  • Room Name → target.user.attribute.labels.value: The raw log field "Room Name" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • Role Name → target.user.attribute.roles.name: The raw log field "Role Name" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • time → metadata.event_timestamp.seconds: The raw log field "time" is parsed and mapped to this UDM field.
  • Type → target.user.attribute.labels.value: The raw log field "Type" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • User Role → target.user.attribute.roles.name: The raw log field "User Role" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • User Type → target.user.attribute.labels.value: The raw log field "User Type" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • metadata.log_type: The value "ZOOM_OPERATION_LOGS" is assigned to this UDM field.
  • metadata.vendor_name: The value "ZOOM" is assigned to this UDM field.
  • metadata.product_name: The value "ZOOM_OPERATION_LOGS" is assigned to this UDM field.
  • metadata.event_type: The value is determined based on the following logic:
    1. If the "event_type" field is not empty, its value is used.
    2. If the "operator", "email", or "email2" fields are not empty, the value is set to "USER_UNCATEGORIZED".
    3. Otherwise, the value is set to "GENERIC_EVENT".
  • json_data → about.user.attribute.labels.value: The raw log field "json_data" (extracted from the "operation_detail" field) is parsed as JSON. The "assistant" and "options" fields from each element of the parsed JSON array are mapped to the "value" field of the "labels" array in the UDM.
  • json_data → about.user.userid: The raw log field "json_data" (extracted from the "operation_detail" field) is parsed as JSON. The "userId" field from each element of the parsed JSON array (except the first one) is mapped to the "userid" field of the "about.user" object in the UDM.
  • json_data → target.user.attribute.labels.value: The raw log field "json_data" (extracted from the "operation_detail" field) is parsed as JSON. The "assistant" and "options" fields from the first element of the parsed JSON array are mapped to the "value" field of the "labels" array in the UDM.
  • json_data → target.user.userid: The raw log field "json_data" (extracted from the "operation_detail" field) is parsed as JSON. The "userId" field from the first element of the parsed JSON array is mapped to the "userid" field of the "target.user" object in the UDM.
  • email → target.user.email_addresses: The raw log field "email" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • email2 → target.user.email_addresses: The raw log field "email2" (extracted from the "operation_detail" field) is mapped to this UDM field.
  • role → target.user.attribute.roles.name: The raw log field "role" (extracted from the "operation_detail" field) is mapped to this UDM field.
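The operator mapping above hinges on an email check; a sketch of that branch (the regex here is an illustrative stand-in, not the parser's actual pattern, and `map_operator` is a hypothetical helper):

```python
import re

# Illustrative email pattern; the parser's real regex may differ.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def map_operator(operator: str) -> dict:
    """Route "operator" to principal.user.email_addresses when it looks
    like an email address, otherwise to principal.user.userid."""
    if EMAIL_RE.match(operator):
        return {"principal.user.email_addresses": [operator]}
    return {"principal.user.userid": operator}

print(map_operator("admin@example.com"))
print(map_operator("AbC123xyz"))
```

The same either-or pattern (email regex match deciding between an email field and an ID field) is common across UDM parsers.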

Need more help? Get answers from Community members and Google SecOps professionals.
