Collect Rippling activity logs
This document explains how to ingest Rippling activity logs to Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance.
- Privileged access to Rippling (an API token with access to Company Activity).
- Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, and EventBridge).
Get Rippling prerequisites
- Sign in to the Rippling Admin console.
- Go to Search > API Tokens (alternative path: Settings > Company Settings > API Tokens).
- Click Create API token.
- Provide the following configuration details:
  - Name: Enter a unique and meaningful name (for example, `Google SecOps S3 Export`).
  - API version: Select Base API (v1).
  - Scopes/Permissions: Enable `company:activity:read` (required for Company Activity).
- Click Create and save the token value in a secure location (you'll use it as a bearer token).
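Optional: before moving on to AWS, you can sanity-check the token against the Company Activity endpoint. The following is a minimal sketch using the same endpoint and bearer header as the Lambda function later in this guide; the `limit` parameter and the token placeholder are illustrative.

```python
#!/usr/bin/env python3
"""Quick sanity check for a Rippling API token (sketch).

Uses the Company Activity endpoint and bearer header from the Lambda
function later in this guide; the query parameters are illustrative.
"""
import json
import urllib.parse
from urllib.request import Request, urlopen

API_TOKEN = "your-api-token"  # replace with the token you just created
URL = "https://api.rippling.com/platform/api/company_activity"

params = urllib.parse.urlencode({"limit": "5"})
req = Request(f"{URL}?{params}")
req.add_header("Authorization", f"Bearer {API_TOKEN}")
req.add_header("Accept", "application/json")

with urlopen(req, timeout=30) as resp:
    # A 200 response with a JSON body means the token and scope are valid.
    print(resp.status)
    print(json.dumps(json.loads(resp.read()), indent=2)[:2000])
```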
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, `rippling-activity-logs`).
- Create a User following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for the AmazonS3FullAccess policy.
- Select the policy.
- Click Next.
- Click Add permissions.
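To confirm the new keys can write to the bucket before continuing, you can run a short boto3 check. This is a sketch; the key placeholders, region, and test object name are illustrative.

```python
#!/usr/bin/env python3
"""Verify the new IAM user's keys can write to the bucket (sketch).

The credentials, region, and test key below are placeholders; replace
them with your own values.
"""
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",     # from the downloaded CSV
    aws_secret_access_key="...",     # from the downloaded CSV
    region_name="us-east-1",         # your bucket's region
)
s3 = session.client("s3")

# Upload a small marker object, then read it back.
s3.put_object(Bucket="rippling-activity-logs", Key="connectivity-test.txt", Body=b"ok")
obj = s3.get_object(Bucket="rippling-activity-logs", Key="connectivity-test.txt")
print(obj["Body"].read())  # expected: b'ok'
```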
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies.
- Click Create policy > JSON tab.
- Copy and paste the following policy.
- Policy JSON (replace the bucket name and prefix if you entered different values):

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowPutObjects",
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::rippling-activity-logs/*"
      },
      {
        "Sid": "AllowGetStateObject",
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::rippling-activity-logs/rippling/activity/state.json"
      }
    ]
  }
  ```

- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role `WriteRipplingToS3Role` and click Create role.
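If you prefer scripting the IAM setup instead of the console steps above, a minimal boto3 sketch follows. The policy document matches the JSON above; the policy name is illustrative, and the trust policy lets the Lambda service assume the role.

```python
#!/usr/bin/env python3
"""Create the Lambda execution policy and role with boto3 (sketch).

Mirrors the console steps above; the policy name and bucket ARN are the
examples used in this guide.
"""
import json
import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutObjects", "Effect": "Allow", "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::rippling-activity-logs/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow", "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::rippling-activity-logs/rippling/activity/state.json"},
    ],
}
policy = iam.create_policy(
    PolicyName="rippling-activity-to-s3-policy",  # illustrative name
    PolicyDocument=json.dumps(policy_doc),
)

# Trust policy that lets Lambda assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
iam.create_role(RoleName="WriteRipplingToS3Role",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(RoleName="WriteRipplingToS3Role",
                       PolicyArn=policy["Policy"]["Arn"])
```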
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

  | Setting | Value |
  | --- | --- |
  | Name | `rippling_activity_to_s3` |
  | Runtime | Python 3.13 |
  | Architecture | x86_64 |
  | Execution role | `WriteRipplingToS3Role` |

- After the function is created, open the Code tab, delete the stub, and paste the following code (`rippling_activity_to_s3.py`):

  ```python
  #!/usr/bin/env python3
  # Lambda: Pull Rippling Company Activity logs to S3 (raw JSON, no transforms)
  import os
  import json
  import time
  import urllib.parse
  from urllib.request import Request, urlopen
  from datetime import datetime, timezone, timedelta

  import boto3

  API_TOKEN = os.environ["RIPPLING_API_TOKEN"]
  ACTIVITY_URL = os.environ.get(
      "RIPPLING_ACTIVITY_URL",
      "https://api.rippling.com/platform/api/company_activity",
  )
  S3_BUCKET = os.environ["S3_BUCKET"]
  S3_PREFIX = os.environ.get("S3_PREFIX", "rippling/activity/")
  STATE_KEY = os.environ.get("STATE_KEY", "rippling/activity/state.json")
  LIMIT = int(os.environ.get("LIMIT", "1000"))
  MAX_PAGES = int(os.environ.get("MAX_PAGES", "10"))
  LOOKBACK_MINUTES = int(os.environ.get("LOOKBACK_MINUTES", "60"))
  END_LAG_SECONDS = int(os.environ.get("END_LAG_SECONDS", "120"))

  s3 = boto3.client("s3")


  def _headers():
      return {"Authorization": f"Bearer {API_TOKEN}", "Accept": "application/json"}


  def _get_state():
      # Load the checkpoint (last "since" timestamp and pagination cursor) from S3.
      try:
          obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
          j = json.loads(obj["Body"].read())
          return {"since": j.get("since"), "next": j.get("next")}
      except Exception:
          return {"since": None, "next": None}


  def _put_state(since_iso, next_cursor):
      body = json.dumps({"since": since_iso, "next": next_cursor}, separators=(",", ":")).encode("utf-8")
      s3.put_object(Bucket=S3_BUCKET, Key=STATE_KEY, Body=body)


  def _get(url):
      req = Request(url, method="GET")
      for k, v in _headers().items():
          req.add_header(k, v)
      with urlopen(req, timeout=60) as r:
          return json.loads(r.read().decode("utf-8"))


  def _build_url(base, params):
      qs = urllib.parse.urlencode(params)
      return f"{base}?{qs}" if qs else base


  def _parse_iso(ts):
      if ts.endswith("Z"):
          ts = ts[:-1] + "+00:00"
      return datetime.fromisoformat(ts)


  def _iso_from_epoch(sec):
      return datetime.fromtimestamp(sec, tz=timezone.utc).replace(microsecond=0).isoformat().replace("+00:00", "Z")


  def _write(payload, run_ts_iso, page_index, source="company_activity"):
      # Write one raw API page to <prefix>/YYYY/MM/DD/<run-ts>-pageNNNNN-<source>.json
      day_path = _parse_iso(run_ts_iso).strftime("%Y/%m/%d")
      key = (
          f"{S3_PREFIX.strip('/')}/{day_path}/"
          f"{run_ts_iso.replace(':', '').replace('-', '')}-page{page_index:05d}-{source}.json"
      )
      s3.put_object(Bucket=S3_BUCKET, Key=key, Body=json.dumps(payload, separators=(",", ":")).encode("utf-8"))
      return key


  def lambda_handler(event=None, context=None):
      state = _get_state()
      # Stop slightly short of "now" so late-arriving events are not missed.
      run_end = datetime.now(timezone.utc) - timedelta(seconds=END_LAG_SECONDS)
      end_iso = run_end.replace(microsecond=0).isoformat().replace("+00:00", "Z")

      since_iso = state["since"]
      next_cursor = state["next"]
      if since_iso is None:
          since_iso = _iso_from_epoch(time.time() - LOOKBACK_MINUTES * 60)
      else:
          try:
              since_iso = (_parse_iso(since_iso) + timedelta(seconds=1)).replace(microsecond=0).isoformat().replace("+00:00", "Z")
          except Exception:
              since_iso = _iso_from_epoch(time.time() - LOOKBACK_MINUTES * 60)

      run_ts_iso = end_iso
      pages = 0
      total = 0
      newest_ts = None
      pending_next = None

      while pages < MAX_PAGES:
          params = {"limit": str(LIMIT)}
          if next_cursor:
              params["next"] = next_cursor
          else:
              params["startDate"] = since_iso
              params["endDate"] = end_iso
          url = _build_url(ACTIVITY_URL, params)
          data = _get(url)
          _write(data, run_ts_iso, pages)

          events = data.get("events") or data.get("items") or data.get("data") or []
          total += len(events) if isinstance(events, list) else 0

          # Track the newest event timestamp so the checkpoint can advance.
          if isinstance(events, list):
              for ev in events:
                  t = ev.get("timestamp") or ev.get("time") or ev.get("event_time")
                  if isinstance(t, str):
                      try:
                          dt_ts = _parse_iso(t)
                          if newest_ts is None or dt_ts > newest_ts:
                              newest_ts = dt_ts
                      except Exception:
                          pass

          nxt = data.get("next") or data.get("next_cursor") or None
          pages += 1
          if nxt:
              next_cursor = nxt
              pending_next = nxt
              continue
          else:
              pending_next = None
              break

      new_since_iso = (newest_ts or run_end).replace(microsecond=0).isoformat().replace("+00:00", "Z")
      _put_state(new_since_iso, pending_next)
      return {"ok": True, "pages": pages, "events": total, "since": new_since_iso, "next": pending_next}
  ```
- Go to Configuration > Environment variables.
- Click Edit > Add new environment variable.
- Enter the environment variables provided in the following table, replacing the example values with your values.

  | Key | Example value |
  | --- | --- |
  | `S3_BUCKET` | `rippling-activity-logs` |
  | `S3_PREFIX` | `rippling/activity/` |
  | `STATE_KEY` | `rippling/activity/state.json` |
  | `RIPPLING_API_TOKEN` | `your-api-token` |
  | `RIPPLING_ACTIVITY_URL` | `https://api.rippling.com/platform/api/company_activity` |
  | `LIMIT` | `1000` |
  | `MAX_PAGES` | `10` |
  | `LOOKBACK_MINUTES` | `60` |
  | `END_LAG_SECONDS` | `120` |
- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
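Before scheduling the function, you can smoke-test it locally. Because the module reads its configuration from environment variables at import time, they must be set before the import. This sketch assumes `rippling_activity_to_s3.py` sits next to the test script and that your local AWS credentials can write to the bucket.

```python
#!/usr/bin/env python3
"""Local smoke test for the Lambda handler (sketch).

Set the env vars before importing, since the module reads them at
import time. Assumes local AWS credentials with write access to the
bucket.
"""
import os

os.environ["RIPPLING_API_TOKEN"] = "your-api-token"
os.environ["S3_BUCKET"] = "rippling-activity-logs"
os.environ["LOOKBACK_MINUTES"] = "15"  # keep the first pull small

import rippling_activity_to_s3 as fn  # import after env vars are set

result = fn.lambda_handler()
print(result)  # e.g. {'ok': True, 'pages': 1, 'events': 42, ...}
```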
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (`1 hour`).
  - Target: Your Lambda function `rippling_activity_to_s3`.
  - Name: `rippling-activity-logs-1h`.
- Click Create schedule.
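The same schedule can be created with the EventBridge Scheduler API. In this sketch, the Lambda ARN and the scheduler execution role (which needs `lambda:InvokeFunction` on the target) are placeholders.

```python
#!/usr/bin/env python3
"""Create the hourly EventBridge schedule with boto3 (sketch).

The Lambda ARN and the scheduler execution role ARN are placeholders;
the role must allow lambda:InvokeFunction on the target function.
"""
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="rippling-activity-logs-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:us-east-1:123456789012:function:rippling_activity_to_s3",
        "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role",
    },
)
```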
(Optional) Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users.
- Click Add users.
- Provide the following configuration details:
  - User: Enter `secops-reader`.
  - Access type: Select Access key — Programmatic access.
- Click Create user.
- Attach a minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- JSON:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::rippling-activity-logs/*"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::rippling-activity-logs"
      }
    ]
  }
  ```

- Name the policy `secops-reader-policy`.
- Click Create policy > search/select > Next > Add permissions.
- Create an access key for `secops-reader`: Security credentials > Access keys.
- Click Create access key.
- Download the CSV file (you'll paste these values into the feed).
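To confirm the reader credentials work before configuring the feed, you can list and fetch from the feed prefix. This sketch uses the example bucket and prefix from this guide; the key placeholders are illustrative.

```python
#!/usr/bin/env python3
"""Verify the secops-reader keys before configuring the feed (sketch).

Lists the feed prefix and prints object keys; bucket and prefix are the
examples used in this guide.
"""
import boto3

session = boto3.Session(
    aws_access_key_id="AKIA...",   # secops-reader access key
    aws_secret_access_key="...",   # secops-reader secret key
)
s3 = session.client("s3")

resp = s3.list_objects_v2(Bucket="rippling-activity-logs",
                          Prefix="rippling/activity/", MaxKeys=5)
for item in resp.get("Contents", []):
    print(item["Key"])

# A put_object call here should fail with AccessDenied, confirming the
# policy is read-only.
```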
Configure a feed in Google SecOps to ingest Rippling Activity Logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, `Rippling Activity Logs`).
- Select Amazon S3 V2 as the Source type.
- Select Rippling Activity Logs as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: `s3://rippling-activity-logs/rippling/activity/`
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified within the last number of days. The default is 180 days.
  - Access Key ID: The user access key with access to the S3 bucket.
  - Secret Access Key: The user secret key with access to the S3 bucket.
  - Asset namespace: `rippling.activity`
  - Optional: Ingestion labels: Add an ingestion label.
- Click Next.
- Review your new feed configuration on the Finalize screen, and then click Submit.