Collect Rippling activity logs

Supported in:

This document explains how to ingest Rippling activity logs to Google Security Operations using Amazon S3.

Before you begin

Make sure you have the following prerequisites:

  • A Google SecOps instance.
  • Privileged access to Rippling (API token with access to Company Activity).
  • Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).

Get Rippling prerequisites

  1. Sign in to Rippling Admin.
  2. Open Search > API Tokens.
    Alternative path: Settings > Company Settings > API Tokens.
  3. Click Create API token.
  4. Provide the following configuration details:
    • Name: Provide a unique and meaningful name (for example, Google SecOps S3 Export)
    • API version: Base API (v1)
    • Scopes/Permissions: Enable company:activity:read (required for Company Activity).
  5. Click Create and save the token value in a secure location. (You'll use it as a bearer token.)
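Before wiring up AWS, it can be worth confirming the token is accepted. The following sketch is hypothetical (the endpoint URL is the same default the Lambda function uses; adjust it if your tenant differs) and builds the bearer header exactly as the Lambda function will:

```python
#!/usr/bin/env python3
"""Hypothetical smoke test for the Rippling API token."""
import json
import urllib.request

# Assumed default endpoint; override if your tenant uses a different base URL.
ACTIVITY_URL = "https://api.rippling.com/platform/api/company_activity?limit=1"

def build_request(url: str, token: str) -> urllib.request.Request:
    """Attach the bearer token the same way the Lambda function does."""
    req = urllib.request.Request(url)
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

def fetch_sample(token: str) -> dict:
    """Request one activity event; raises urllib.error.HTTPError on a bad token."""
    with urllib.request.urlopen(build_request(ACTIVITY_URL, token), timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Calling `fetch_sample("<your-token>")` from a Python shell should return JSON; an HTTP 401 or 403 means the token or the company:activity:read scope is wrong.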

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket
  2. Save the bucket Name and Region for future reference (for example, rippling-activity-logs).
  3. Create a user following this user guide: Creating an IAM user.
  4. Select the created user.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: Add a description tag.
  10. Click Create access key.
  11. Click Download CSV file to save the Access Key and Secret Access Key for future reference.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Attach policies directly.
  16. Search for the AmazonS3FullAccess policy.
  17. Select the policy.
  18. Click Next.
  19. Click Add permissions.

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies.
  2. Click Create policy > JSON tab.
  3. Copy and paste the following policy.
  4. Policy JSON (replace the values if you entered a different bucket or prefix):

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowPutObjects",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::rippling-activity-logs/*"
          },
          {
            "Sid": "AllowGetStateObject",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::rippling-activity-logs/rippling/activity/state.json"
          }
        ]
      }
      ```
  5. Click Next > Create policy.

  6. Go to IAM > Roles > Create role > AWS service > Lambda.

  7. Attach the newly created policy.

  8. Name the role WriteRipplingToS3Role and click Create role.
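When you choose AWS service > Lambda in step 6, the console attaches a trust policy like the following automatically. It is shown here only for reference, in case you create the role by another method (for example, the AWS CLI):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "lambda.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```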

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:

    • Name: rippling_activity_to_s3
    • Runtime: Python 3.13
    • Architecture: x86_64
    • Execution role: WriteRipplingToS3Role
  4. After the function is created, open the Code tab, delete the stub, and paste the following code (rippling_activity_to_s3.py).

      ```python
      #!/usr/bin/env python3
      # Lambda: Pull Rippling Company Activity logs to S3 (raw JSON, no transforms)
      import os
      import json
      import time
      import urllib.parse
      from urllib.request import Request, urlopen
      from datetime import datetime, timezone, timedelta

      import boto3

      API_TOKEN = os.environ["RIPPLING_API_TOKEN"]
      ACTIVITY_URL = os.environ.get("RIPPLING_ACTIVITY_URL", "https://api.rippling.com/platform/api/company_activity")
      S3_BUCKET = os.environ["S3_BUCKET"]
      S3_PREFIX = os.environ.get("S3_PREFIX", "rippling/activity/")
      STATE_KEY = os.environ.get("STATE_KEY", "rippling/activity/state.json")
      LIMIT = int(os.environ.get("LIMIT", "1000"))
      MAX_PAGES = int(os.environ.get("MAX_PAGES", "10"))
      LOOKBACK_MINUTES = int(os.environ.get("LOOKBACK_MINUTES", "60"))
      END_LAG_SECONDS = int(os.environ.get("END_LAG_SECONDS", "120"))

      s3 = boto3.client("s3")


      def _headers():
          return {"Authorization": f"Bearer {API_TOKEN}", "Accept": "application/json"}


      def _get_state():
          # Read the checkpoint object; fall back to an empty state on first run.
          try:
              obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
              j = json.loads(obj["Body"].read())
              return {"since": j.get("since"), "next": j.get("next")}
          except Exception:
              return {"since": None, "next": None}


      def _put_state(since_iso, next_cursor):
          body = json.dumps({"since": since_iso, "next": next_cursor}, separators=(",", ":")).encode("utf-8")
          s3.put_object(Bucket=S3_BUCKET, Key=STATE_KEY, Body=body)


      def _get(url):
          req = Request(url, method="GET")
          for k, v in _headers().items():
              req.add_header(k, v)
          with urlopen(req, timeout=60) as r:
              return json.loads(r.read().decode("utf-8"))


      def _build_url(base, params):
          qs = urllib.parse.urlencode(params)
          return f"{base}?{qs}" if qs else base


      def _parse_iso(ts):
          if ts.endswith("Z"):
              ts = ts[:-1] + "+00:00"
          return datetime.fromisoformat(ts)


      def _iso_from_epoch(sec):
          return datetime.fromtimestamp(sec, tz=timezone.utc).replace(microsecond=0).isoformat().replace("+00:00", "Z")


      def _write(payload, run_ts_iso, page_index, source="company_activity"):
          # Keys look like: rippling/activity/YYYY/MM/DD/<run-timestamp>-page00000-company_activity.json
          day_path = _parse_iso(run_ts_iso).strftime("%Y/%m/%d")
          key = (
              f"{S3_PREFIX.strip('/')}/{day_path}/"
              f"{run_ts_iso.replace(':', '').replace('-', '')}-page{page_index:05d}-{source}.json"
          )
          s3.put_object(Bucket=S3_BUCKET, Key=key, Body=json.dumps(payload, separators=(",", ":")).encode("utf-8"))
          return key


      def lambda_handler(event=None, context=None):
          state = _get_state()
          # Stop slightly short of "now" so late-arriving events are not missed.
          run_end = datetime.now(timezone.utc) - timedelta(seconds=END_LAG_SECONDS)
          end_iso = run_end.replace(microsecond=0).isoformat().replace("+00:00", "Z")

          since_iso = state["since"]
          next_cursor = state["next"]
          if since_iso is None:
              since_iso = _iso_from_epoch(time.time() - LOOKBACK_MINUTES * 60)
          else:
              try:
                  # Advance one second past the last seen event to avoid duplicates.
                  since_iso = (_parse_iso(since_iso) + timedelta(seconds=1)).replace(microsecond=0).isoformat().replace("+00:00", "Z")
              except Exception:
                  since_iso = _iso_from_epoch(time.time() - LOOKBACK_MINUTES * 60)

          run_ts_iso = end_iso
          pages = 0
          total = 0
          newest_ts = None
          pending_next = None

          while pages < MAX_PAGES:
              params = {"limit": str(LIMIT)}
              if next_cursor:
                  params["next"] = next_cursor
              else:
                  params["startDate"] = since_iso
                  params["endDate"] = end_iso
              url = _build_url(ACTIVITY_URL, params)
              data = _get(url)
              _write(data, run_ts_iso, pages)

              events = data.get("events") or data.get("items") or data.get("data") or []
              total += len(events) if isinstance(events, list) else 0
              if isinstance(events, list):
                  for ev in events:
                      t = ev.get("timestamp") or ev.get("time") or ev.get("event_time")
                      if isinstance(t, str):
                          try:
                              dt_ts = _parse_iso(t)
                              if newest_ts is None or dt_ts > newest_ts:
                                  newest_ts = dt_ts
                          except Exception:
                              pass

              nxt = data.get("next") or data.get("next_cursor") or None
              pages += 1
              if nxt:
                  next_cursor = nxt
                  pending_next = nxt
                  continue
              pending_next = None
              break

          new_since_iso = (newest_ts or run_end).replace(microsecond=0).isoformat().replace("+00:00", "Z")
          _put_state(new_since_iso, pending_next)
          return {"ok": True, "pages": pages, "events": total, "since": new_since_iso, "next": pending_next}
      ```
  5. Go to Configuration > Environment variables.

  6. Click Edit > Add new environment variable.

  7. Enter the environment variables provided in the following table, replacing the example values with your values.

    Environment variables

    • S3_BUCKET: rippling-activity-logs
    • S3_PREFIX: rippling/activity/
    • STATE_KEY: rippling/activity/state.json
    • RIPPLING_API_TOKEN: your-api-token
    • RIPPLING_ACTIVITY_URL: https://api.rippling.com/platform/api/company_activity
    • LIMIT: 1000
    • MAX_PAGES: 10
    • LOOKBACK_MINUTES: 60
    • END_LAG_SECONDS: 120
  8. After the function is created, stay on its page (or open Lambda > Functions > your-function).

  9. Select the Configuration tab.

  10. In the General configuration panel, click Edit.

  11. Change Timeout to 5 minutes (300 seconds) and click Save.
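Of the variables above, only S3_BUCKET and RIPPLING_API_TOKEN are required by the function code; the rest fall back to the defaults shown in the table. A minimal sketch of the fallback pattern the code uses:

```python
# Sketch of the fallback pattern the Lambda code uses for optional settings.
import os

def int_env(name: str, default: str) -> int:
    # Read the variable if set, otherwise use the documented default, then cast.
    return int(os.environ.get(name, default))

os.environ.pop("MAX_PAGES", None)
print(int_env("MAX_PAGES", "10"))   # prints 10 (unset, default applies)
os.environ["MAX_PAGES"] = "25"
print(int_env("MAX_PAGES", "10"))   # prints 25 (explicit value wins)
```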

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (1 hour).
    • Target: Your Lambda function rippling_activity_to_s3.
    • Name: rippling-activity-logs-1h.
  3. Click Create schedule.

(Optional) Create read-only IAM user & keys for Google SecOps

  1. In the AWS Console, go to IAM > Users > Add users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: Enter secops-reader .
    • Access type: Select Access key — Programmatic access.
  4. Click Create user.
  5. Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. JSON:

      ```json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::rippling-activity-logs/*"
          },
          {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::rippling-activity-logs"
          }
        ]
      }
      ```
    
  7. Name the policy secops-reader-policy.

  8. Click Create policy > search/select > Next > Add permissions.

  9. Create access key for secops-reader : Security credentials > Access keys.

  10. Click Create access key.

  11. Download the CSV file. (You'll paste these values into the feed.)

Configure a feed in Google SecOps to ingest Rippling Activity Logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Rippling Activity Logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Rippling Activity Logs as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://rippling-activity-logs/rippling/activity/
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Include files modified in the last number of days. Default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: rippling.activity
    • Optional: Ingestion labels: Add the ingestion label.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.
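As a sanity check for the S3 URI above, this sketch mirrors the object-key format the Lambda function writes (the same logic as `_write` in rippling_activity_to_s3.py), so you can confirm the feed's prefix matches:

```python
# Mirrors the S3 key layout produced by rippling_activity_to_s3.py.
from datetime import datetime

def object_key(prefix: str, run_ts_iso: str, page_index: int, source: str = "company_activity") -> str:
    # Keys are partitioned by the run date: <prefix>/YYYY/MM/DD/<compact-timestamp>-pageNNNNN-<source>.json
    day_path = datetime.fromisoformat(run_ts_iso.replace("Z", "+00:00")).strftime("%Y/%m/%d")
    stamp = run_ts_iso.replace(":", "").replace("-", "")
    return f"{prefix.strip('/')}/{day_path}/{stamp}-page{page_index:05d}-{source}.json"

print(object_key("rippling/activity/", "2024-06-01T12:00:00Z", 0))
# prints rippling/activity/2024/06/01/20240601T120000Z-page00000-company_activity.json
```

Every key starts with `rippling/activity/`, which is why the feed's S3 URI is `s3://rippling-activity-logs/rippling/activity/`.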

Need more help? Get answers from Community members and Google SecOps professionals.
