Collect SailPoint IAM logs
This document explains how to ingest SailPoint IAM logs to Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance
- Privileged access to SailPoint Identity Security Cloud tenant or API
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Collect SailPoint IAM prerequisites (IDs, API keys, org IDs, tokens)
- Sign in to the SailPoint Identity Security Cloud Admin Console as an administrator.
- Go to Global > Security Settings > API Management.
- Click Create API Client.
- Choose Client Credentials as the grant type.
- Provide the following configuration details:
  - Name: Enter a descriptive name (for example, Chronicle Export API).
  - Description: Enter a description for the API client.
  - Scopes: Select sp:scopes:all (or appropriate read scopes for audit events).
- Click Create and copy the generated API credentials securely.
- Record your SailPoint tenant base URL (for example, https://tenant.api.identitynow.com).
- Copy and save the following details in a secure location:
  - IDN_CLIENT_ID
  - IDN_CLIENT_SECRET
  - IDN_BASE
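Before wiring up AWS, you can confirm the API client works by requesting a token directly. The following is a minimal sketch using Python's standard library, assuming the same /oauth/token endpoint and client_credentials grant that the Lambda function later in this guide uses; replace the placeholder values with the credentials you just recorded.

```python
#!/usr/bin/env python3
"""Minimal sketch: verify the SailPoint API client credentials.

Assumes the {IDN_BASE}/oauth/token endpoint with the client_credentials
grant, matching the Lambda function below. Placeholder values must be
replaced with your own.
"""
import json
import urllib.parse
from urllib.request import Request, urlopen

IDN_BASE = "https://tenant.api.identitynow.com"  # your tenant base URL
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

data = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
}).encode("utf-8")

req = Request(f"{IDN_BASE.rstrip('/')}/oauth/token", data=data, method="POST")
req.add_header("Content-Type", "application/x-www-form-urlencoded")

with urlopen(req, timeout=60) as resp:
    token = json.loads(resp.read())

# A successful response contains an access_token; anything else means the
# client ID, secret, or scopes need another look.
print("token received:", "access_token" in token)
```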
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, sailpoint-iam-logs).
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
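Before moving on, it can help to confirm that the downloaded key pair can actually reach the bucket. The following is a minimal boto3 sketch, assuming the bucket name sailpoint-iam-logs from this guide; the test object key is illustrative only.

```python
#!/usr/bin/env python3
"""Minimal sketch: confirm the new access key can write to the bucket.

Assumes the bucket name from this guide (sailpoint-iam-logs); the marker
object key below is illustrative.
"""
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",   # from the downloaded CSV
    aws_secret_access_key="...",   # from the downloaded CSV
    region_name="us-east-1",       # the bucket's region
)

# Write and read back a small marker object, then clean it up.
s3.put_object(Bucket="sailpoint-iam-logs", Key="sailpoint/iam/_writetest", Body=b"ok")
body = s3.get_object(Bucket="sailpoint-iam-logs", Key="sailpoint/iam/_writetest")["Body"].read()
s3.delete_object(Bucket="sailpoint-iam-logs", Key="sailpoint/iam/_writetest")
print("write/read OK:", body == b"ok")
```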
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies > Create policy > JSON tab.
- Copy and paste the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::sailpoint-iam-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sailpoint-iam-logs/sailpoint/iam/state.json"
    }
  ]
}
```

- Replace sailpoint-iam-logs if you entered a different bucket name.
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role SailPointIamToS3Role and click Create role.
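If you prefer to script this step, the same policy and role can be created with boto3. The following is a minimal sketch, assuming the policy document above is used verbatim; the policy name SailPointIamToS3Policy is illustrative, and the trust policy is the standard one that lets Lambda assume the role.

```python
#!/usr/bin/env python3
"""Minimal sketch: create the upload policy and Lambda execution role.

The policy document and role name (SailPointIamToS3Role) come from the
steps above; SailPointIamToS3Policy is an illustrative policy name.
"""
import json
import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutObjects", "Effect": "Allow",
         "Action": "s3:PutObject",
         "Resource": "arn:aws:s3:::sailpoint-iam-logs/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::sailpoint-iam-logs/sailpoint/iam/state.json"},
    ],
}

# Standard trust policy: Lambda must be allowed to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}

policy = iam.create_policy(PolicyName="SailPointIamToS3Policy",
                           PolicyDocument=json.dumps(policy_doc))
iam.create_role(RoleName="SailPointIamToS3Role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="SailPointIamToS3Role",
                       PolicyArn=policy["Policy"]["Arn"])
```

For CloudWatch logging, you would typically also attach the AWS managed policy AWSLambdaBasicExecutionRole to this role.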
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

| Setting | Value |
|---|---|
| Name | sailpoint_iam_to_s3 |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | SailPointIamToS3Role |
- After the function is created, open the Code tab, delete the stub, and enter the following code (sailpoint_iam_to_s3.py):

```python
#!/usr/bin/env python3
# Lambda: Pull SailPoint Identity Security Cloud audit events and store raw JSONL payloads to S3
# - Uses /v3/search API with pagination for audit events.
# - Preserves vendor-native JSON format for identity events.
# - Retries with exponential backoff; unique S3 keys to avoid overwrites.
# - Outputs JSONL format (one event per line) for optimal Chronicle ingestion.

import os
import json
import time
import uuid
import urllib.parse
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

import boto3

S3_BUCKET = os.environ["S3_BUCKET"]
S3_PREFIX = os.environ.get("S3_PREFIX", "sailpoint/iam/")
STATE_KEY = os.environ.get("STATE_KEY", "sailpoint/iam/state.json")
WINDOW_SEC = int(os.environ.get("WINDOW_SECONDS", "3600"))  # default 1h
HTTP_TIMEOUT = int(os.environ.get("HTTP_TIMEOUT", "60"))

IDN_BASE = os.environ["IDN_BASE"]  # e.g. https://tenant.api.identitynow.com
CLIENT_ID = os.environ["IDN_CLIENT_ID"]
CLIENT_SECRET = os.environ["IDN_CLIENT_SECRET"]
SCOPE = os.environ.get("IDN_SCOPE", "sp:scopes:all")
PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "250"))
MAX_PAGES = int(os.environ.get("MAX_PAGES", "20"))
MAX_RETRIES = int(os.environ.get("MAX_RETRIES", "3"))
USER_AGENT = os.environ.get("USER_AGENT", "sailpoint-iam-to-s3/1.0")

s3 = boto3.client("s3")


def _load_state():
    try:
        obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
        return json.loads(obj["Body"].read())
    except Exception:
        return {}


def _save_state(st):
    s3.put_object(
        Bucket=S3_BUCKET,
        Key=STATE_KEY,
        Body=json.dumps(st, separators=(",", ":")).encode("utf-8"),
        ContentType="application/json",
    )


def _iso(ts: float) -> str:
    return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(ts))


def _get_oauth_token() -> str:
    """Get OAuth2 access token using Client Credentials flow"""
    token_url = f"{IDN_BASE.rstrip('/')}/oauth/token"
    data = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": SCOPE,
    }).encode("utf-8")

    req = Request(token_url, data=data, method="POST")
    req.add_header("Content-Type", "application/x-www-form-urlencoded")
    req.add_header("User-Agent", USER_AGENT)

    with urlopen(req, timeout=HTTP_TIMEOUT) as r:
        response = json.loads(r.read())
        return response["access_token"]


def _search_events(access_token: str, created_from: str, search_after: list = None) -> list:
    r"""Search for audit events using SailPoint's /v3/search API

    IMPORTANT: SailPoint requires colons in ISO8601 timestamps to be escaped
    with backslashes. Example: 2024-01-15T10:30:00Z must be sent as
    2024-01-15T10\:30\:00Z
    Reference: https://developer.sailpoint.com/discuss/t/datetime-searches/6609
    """
    search_url = f"{IDN_BASE.rstrip('/')}/v3/search"

    # Escape colons in timestamp for SailPoint search query
    # SailPoint requires: created:>=2024-01-15T10\:30\:00Z (colons must be escaped)
    escaped_timestamp = created_from.replace(":", "\\:")
    query_str = f"created:>={escaped_timestamp}"

    payload = {
        "indices": ["events"],
        "query": {"query": query_str},
        "sort": ["created", "+id"],
        "limit": PAGE_SIZE,
    }
    if search_after:
        payload["searchAfter"] = search_after

    attempt = 0
    while True:
        req = Request(search_url, data=json.dumps(payload).encode("utf-8"), method="POST")
        req.add_header("Content-Type", "application/json")
        req.add_header("Accept", "application/json")
        req.add_header("Authorization", f"Bearer {access_token}")
        req.add_header("User-Agent", USER_AGENT)

        try:
            with urlopen(req, timeout=HTTP_TIMEOUT) as r:
                response = json.loads(r.read())
                # Handle different response formats
                if isinstance(response, list):
                    return response
                return response.get("results", response.get("data", []))
        except (HTTPError, URLError) as e:
            attempt += 1
            print(f"HTTP error on attempt {attempt}: {e}")
            if attempt > MAX_RETRIES:
                raise
            # exponential backoff with jitter
            time.sleep(min(60, 2 ** attempt) + (time.time() % 1))


def _put_events_data(events: list, from_ts: float, to_ts: float, page_num: int) -> str:
    """Write events to S3 in JSONL format (one JSON object per line)

    JSONL format is preferred for Chronicle ingestion as it allows:
    - Line-by-line processing
    - Better error recovery
    - Lower memory footprint
    """
    # Create unique S3 key for events data
    ts_path = time.strftime("%Y/%m/%d", time.gmtime(to_ts))
    uniq = f"{int(time.time() * 1e6)}_{uuid.uuid4().hex[:8]}"
    key = f"{S3_PREFIX}{ts_path}/sailpoint_iam_{int(from_ts)}_{int(to_ts)}_p{page_num:03d}_{uniq}.jsonl"

    # Convert events list to JSONL format (one JSON object per line)
    jsonl_lines = [json.dumps(event, separators=(",", ":")) for event in events]
    jsonl_content = "\n".join(jsonl_lines)

    s3.put_object(
        Bucket=S3_BUCKET,
        Key=key,
        Body=jsonl_content.encode("utf-8"),
        ContentType="application/x-ndjson",  # JSONL MIME type
        Metadata={
            "source": "sailpoint-iam",
            "from_timestamp": str(int(from_ts)),
            "to_timestamp": str(int(to_ts)),
            "page_number": str(page_num),
            "events_count": str(len(events)),
            "format": "jsonl",
        },
    )
    return key


def _get_item_id(item: dict) -> str:
    """Extract ID from event item, trying multiple possible fields"""
    for field in ("id", "uuid", "eventId", "_id"):
        if field in item and item[field]:
            return str(item[field])
    return ""


def lambda_handler(event=None, context=None):
    st = _load_state()
    now = time.time()
    from_ts = float(st.get("last_to_ts") or (now - WINDOW_SEC))
    to_ts = now

    # Get OAuth token
    access_token = _get_oauth_token()
    created_from = _iso(from_ts)
    print(f"Fetching SailPoint IAM events from: {created_from}")

    # Handle pagination state
    last_created = st.get("last_created")
    last_id = st.get("last_id")
    search_after = [last_created, last_id] if (last_created and last_id) else None

    pages = 0
    total_events = 0
    written_keys = []
    newest_created = last_created or created_from
    newest_id = last_id or ""

    while pages < MAX_PAGES:
        events = _search_events(access_token, created_from, search_after)
        if not events:
            break

        # Write page to S3 in JSONL format
        key = _put_events_data(events, from_ts, to_ts, pages + 1)
        written_keys.append(key)
        total_events += len(events)

        # Update pagination state from last item
        last_event = events[-1]
        last_event_created = last_event.get("created") or last_event.get("metadata", {}).get("created")
        last_event_id = _get_item_id(last_event)

        if last_event_created:
            newest_created = last_event_created
        if last_event_id:
            newest_id = last_event_id

        search_after = [newest_created, newest_id]
        pages += 1

        # If we got less than page size, we're done
        if len(events) < PAGE_SIZE:
            break

    print(f"Successfully retrieved {total_events} events across {pages} pages")

    # Save state for next run
    st["last_to_ts"] = to_ts
    st["last_created"] = newest_created
    st["last_id"] = newest_id
    st["last_successful_run"] = now
    _save_state(st)

    return {
        "statusCode": 200,
        "body": {
            "success": True,
            "pages": pages,
            "total_events": total_events,
            "s3_keys": written_keys,
            "from_timestamp": from_ts,
            "to_timestamp": to_ts,
            "last_created": newest_created,
            "last_id": newest_id,
            "format": "jsonl",
        },
    }


if __name__ == "__main__":
    print(lambda_handler())
```
- Go to Configuration > Environment variables > Edit > Add new environment variable.
- Enter the following environment variables, replacing the example values with your own:

| Key | Example value |
|---|---|
| S3_BUCKET | sailpoint-iam-logs |
| S3_PREFIX | sailpoint/iam/ |
| STATE_KEY | sailpoint/iam/state.json |
| WINDOW_SECONDS | 3600 |
| HTTP_TIMEOUT | 60 |
| MAX_RETRIES | 3 |
| USER_AGENT | sailpoint-iam-to-s3/1.0 |
| IDN_BASE | https://tenant.api.identitynow.com |
| IDN_CLIENT_ID | your-client-id (from step 2) |
| IDN_CLIENT_SECRET | your-client-secret (from step 2) |
| IDN_SCOPE | sp:scopes:all |
| PAGE_SIZE | 250 |
| MAX_PAGES | 20 |
- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
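The timeout and environment variables can also be applied, and the function test-run, from a script. The following is a minimal boto3 sketch, assuming the function name and example values above; note that update_function_configuration replaces the entire variable set, so pass every key the function needs.

```python
#!/usr/bin/env python3
"""Minimal sketch: apply the timeout and environment variables with boto3,
then run a one-off test invocation. Values come from the table above."""
import json
import boto3

lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="sailpoint_iam_to_s3",
    Timeout=300,  # 5 minutes, as configured above
    Environment={"Variables": {
        "S3_BUCKET": "sailpoint-iam-logs",
        "S3_PREFIX": "sailpoint/iam/",
        "STATE_KEY": "sailpoint/iam/state.json",
        "WINDOW_SECONDS": "3600",
        "HTTP_TIMEOUT": "60",
        "MAX_RETRIES": "3",
        "USER_AGENT": "sailpoint-iam-to-s3/1.0",
        "IDN_BASE": "https://tenant.api.identitynow.com",
        "IDN_CLIENT_ID": "your-client-id",
        "IDN_CLIENT_SECRET": "your-client-secret",
        "IDN_SCOPE": "sp:scopes:all",
        "PAGE_SIZE": "250",
        "MAX_PAGES": "20",
    }},
)

# Synchronous test run; the handler's return value comes back in the payload.
resp = lam.invoke(FunctionName="sailpoint_iam_to_s3", InvocationType="RequestResponse")
print(json.loads(resp["Payload"].read()))
```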
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: Your Lambda function sailpoint_iam_to_s3.
  - Name: sailpoint-iam-1h.
- Click Create schedule.
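The same schedule can be created programmatically. The following is a minimal sketch using boto3's EventBridge Scheduler client, assuming the schedule name and rate above; LAMBDA_ARN and SCHEDULER_ROLE_ARN are placeholders, and the role is a hypothetical one that EventBridge Scheduler can assume with lambda:InvokeFunction permission on the target function.

```python
#!/usr/bin/env python3
"""Minimal sketch: create the hourly schedule with boto3.

The schedule name, rate, and target function come from the steps above;
both ARNs below are placeholders to replace with your own.
"""
import boto3

scheduler = boto3.client("scheduler")

LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:sailpoint_iam_to_s3"  # replace
SCHEDULER_ROLE_ARN = "arn:aws:iam::123456789012:role/scheduler-invoke-role"        # replace

scheduler.create_schedule(
    Name="sailpoint-iam-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={"Arn": LAMBDA_ARN, "RoleArn": SCHEDULER_ROLE_ARN},
)
```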
Optional: Create read-only IAM user & keys for Google SecOps
- Go to AWS Console > IAM > Users > Add users.
- Click Add users.
- Provide the following configuration details:
  - User: Enter secops-reader.
  - Access type: Select Access key – Programmatic access.
- Click Create user.
- Attach a minimal read policy (custom): go to Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- In the JSON editor, enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::sailpoint-iam-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::sailpoint-iam-logs"
    }
  ]
}
```

- Set the name to secops-reader-policy.
Go to Create policy > search/select > Next > Add permissions.
-
Go to Security credentials > Access keys > Create access key.
-
Download the CSV(these values are entered into the feed).
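Before entering the key into the feed, you can confirm the secops-reader credentials are read-only. The following is a minimal boto3 sketch, assuming the bucket and prefix used throughout this guide; the deny-test object key is illustrative.

```python
#!/usr/bin/env python3
"""Minimal sketch: confirm the secops-reader key can read but not write.

Assumes the sailpoint-iam-logs bucket and sailpoint/iam/ prefix from this
guide; an AccessDenied error on the write is the expected outcome.
"""
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",   # from the secops-reader CSV
    aws_secret_access_key="...",
)

# List and read should succeed under secops-reader-policy.
listing = s3.list_objects_v2(Bucket="sailpoint-iam-logs", Prefix="sailpoint/iam/", MaxKeys=5)
print("objects visible:", listing.get("KeyCount", 0))

# Writes should be denied; an AccessDenied error here means the policy works.
try:
    s3.put_object(Bucket="sailpoint-iam-logs", Key="sailpoint/iam/_denytest", Body=b"x")
    print("WARNING: put_object succeeded; the policy is broader than intended")
except ClientError as e:
    print("write blocked as expected:", e.response["Error"]["Code"])
```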
Configure a feed in Google SecOps to ingest SailPoint IAM logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- On the next page, click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, SailPoint IAM logs).
- Select Amazon S3 V2 as the Source type.
- Select SailPoint IAM as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://sailpoint-iam-logs/sailpoint/iam/
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Default 180 Days.
  - Access Key ID: The user access key with access to the S3 bucket.
  - Secret Access Key: The user secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
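Once the Lambda function has run at least once, you can spot-check an exported object against the S3 URI you configured. The following is a minimal boto3 sketch, assuming the bucket and prefix from this guide; it prints the first event of the newest JSONL file so you can confirm one JSON object per line.

```python
#!/usr/bin/env python3
"""Minimal sketch: spot-check one exported object before the feed pulls it.

Assumes the sailpoint-iam-logs bucket and sailpoint/iam/ prefix used in
this guide.
"""
import json
import boto3

s3 = boto3.client("s3")

resp = s3.list_objects_v2(Bucket="sailpoint-iam-logs", Prefix="sailpoint/iam/")
objs = [o for o in resp.get("Contents", []) if o["Key"].endswith(".jsonl")]
latest = max(objs, key=lambda o: o["LastModified"])

body = s3.get_object(Bucket="sailpoint-iam-logs", Key=latest["Key"])["Body"].read()
first_line = body.decode("utf-8").splitlines()[0]
print(latest["Key"])
print(json.dumps(json.loads(first_line), indent=2)[:500])
```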
UDM Mapping Table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| action | metadata.description | The value of the action field from the raw log. |
| actor.name | principal.user.user_display_name | The value of the actor.name field from the raw log. |
| attributes.accountName | principal.user.group_identifiers | The value of the attributes.accountName field from the raw log. |
| attributes.appId | target.asset_id | "App ID: " concatenated with the value of the attributes.appId field from the raw log. |
| attributes.attributeName | additional.fields[0].value.string_value | The value of the attributes.attributeName field from the raw log, placed within an additional.fields object. The key is set to "Attribute Name". |
| attributes.attributeValue | additional.fields[1].value.string_value | The value of the attributes.attributeValue field from the raw log, placed within an additional.fields object. The key is set to "Attribute Value". |
| attributes.cloudAppName | target.application | The value of the attributes.cloudAppName field from the raw log. |
| attributes.hostName | target.hostname, target.asset.hostname | The value of the attributes.hostName field from the raw log. |
| attributes.interface | additional.fields[2].value.string_value | The value of the attributes.interface field from the raw log, placed within an additional.fields object. The key is set to "Interface". |
| attributes.operation | security_result.action_details | The value of the attributes.operation field from the raw log. |
| attributes.previousValue | additional.fields[3].value.string_value | The value of the attributes.previousValue field from the raw log, placed within an additional.fields object. The key is set to "Previous Value". |
| attributes.provisioningResult | security_result.detection_fields.value | The value of the attributes.provisioningResult field from the raw log, placed within a security_result.detection_fields object. The key is set to "Provisioning Result". |
| attributes.sourceId | principal.labels[0].value | The value of the attributes.sourceId field from the raw log, placed within a principal.labels object. The key is set to "Source Id". |
| attributes.sourceName | principal.labels[1].value | The value of the attributes.sourceName field from the raw log, placed within a principal.labels object. The key is set to "Source Name". |
| auditClassName | metadata.product_event_type | The value of the auditClassName field from the raw log. |
| created | metadata.event_timestamp.seconds, metadata.event_timestamp.nanos | The value of the created field from the raw log, converted to a timestamp if instant.epochSecond is not present. |
| id | metadata.product_log_id | The value of the id field from the raw log. |
| instant.epochSecond | metadata.event_timestamp.seconds | The value of the instant.epochSecond field from the raw log, used for the timestamp. |
| ipAddress | principal.asset.ip, principal.ip | The value of the ipAddress field from the raw log. |
| interface | additional.fields[0].value.string_value | The value of the interface field from the raw log, placed within an additional.fields object. The key is set to "interface". |
| loggerName | intermediary.application | The value of the loggerName field from the raw log. |
| message | metadata.description, security_result.description | Used for various purposes, including setting the description in metadata and security_result, and extracting XML content. |
| name | security_result.description | The value of the name field from the raw log. |
| operation | target.resource.attribute.labels[0].value, metadata.product_event_type | The value of the operation field from the raw log, placed within a target.resource.attribute.labels object. The key is set to "operation". Also used for metadata.product_event_type. |
| org | principal.administrative_domain | The value of the org field from the raw log. |
| pod | principal.location.name | The value of the pod field from the raw log. |
| referenceClass | additional.fields[1].value.string_value | The value of the referenceClass field from the raw log, placed within an additional.fields object. The key is set to "referenceClass". |
| referenceId | additional.fields[2].value.string_value | The value of the referenceId field from the raw log, placed within an additional.fields object. The key is set to "referenceId". |
| sailPointObjectName | additional.fields[3].value.string_value | The value of the sailPointObjectName field from the raw log, placed within an additional.fields object. The key is set to "sailPointObjectName". |
| serverHost | principal.hostname, principal.asset.hostname | The value of the serverHost field from the raw log. |
| stack | additional.fields[4].value.string_value | The value of the stack field from the raw log, placed within an additional.fields object. The key is set to "Stack". |
| status | security_result.severity_details | The value of the status field from the raw log. |
| target | additional.fields[4].value.string_value | The value of the target field from the raw log, placed within an additional.fields object. The key is set to "target". |
| target.name | principal.user.userid | The value of the target.name field from the raw log. |
| technicalName | security_result.summary | The value of the technicalName field from the raw log. |
| thrown.cause.message | xml_body, detailed_message | The value of the thrown.cause.message field from the raw log, used to extract XML content. |
| thrown.message | xml_body, detailed_message | The value of the thrown.message field from the raw log, used to extract XML content. |
| trackingNumber | additional.fields[5].value.string_value | The value of the trackingNumber field from the raw log, placed within an additional.fields object. The key is set to "Tracking Number". |
| type | metadata.product_event_type | The value of the type field from the raw log. |
| _version | metadata.product_version | The value of the _version field from the raw log. |
| N/A | metadata.event_timestamp | Derived from the instant.epochSecond or created fields. |
| N/A | metadata.event_type | Determined by parser logic based on various fields, including has_principal_user, has_target_application, technicalName, and action. Default value is "GENERIC_EVENT". |
| N/A | metadata.log_type | Set to "SAILPOINT_IAM". |
| N/A | metadata.product_name | Set to "IAM". |
| N/A | metadata.vendor_name | Set to "SAILPOINT". |
| N/A | extensions.auth.type | Set to "AUTHTYPE_UNSPECIFIED" in certain conditions. |
| N/A | target.resource.attribute.labels[0].key | Set to "operation". |
Need more help? Get answers from Community members and Google SecOps professionals.

