Collect Harness IO audit logs
This document explains how to ingest Harness IO audit logs into Google Security Operations (Google SecOps) using Amazon S3.
Before you begin
- Google SecOps instance
- Privileged access to Harness (API key and account ID)
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Get Harness API key and account ID for a personal account
- Sign in to the Harness web UI.
- Go to your User Profile > My API Keys.
- Select API Key.
- Enter a Name for the API key.
- Click Save.
- Select Token under your new API key.
- Enter a Name for the token.
- Click Generate Token.
- Copy and save the token in a secure location.
- Copy and save your Account ID (it appears in the Harness URL and in Account Settings).
Optional: Get Harness API key and account ID for a service account
- Sign in to the Harness web UI.
- Create a service account:
  - Go to Account Settings > Access Control.
  - Select Service Accounts > select the service account for which you want to create an API key.
- Under API Keys, select API Key.
- Enter a Name for the API key.
- Click Save.
- Select Token under the new API key.
- Enter a Name for the token.
- Click Generate Token.
- Copy and save the token in a secure location.
- Copy and save your Account ID (it appears in the Harness URL and in Account Settings).
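Before wiring up AWS, you can sanity-check the token and account ID against the same `listV2` endpoint the Lambda function below uses. A minimal sketch (the two placeholder values are yours to fill in; an HTTP 401 or 403 response means the key or account ID is wrong):

```python
#!/usr/bin/env python3
"""Smoke-test a Harness API key against the audit listV2 endpoint."""
import json
import time
import urllib.parse
from urllib.request import Request, urlopen

ACCOUNT_ID = "YOUR_ACCOUNT_ID"  # placeholder: the Account ID you saved
API_KEY = "YOUR_API_TOKEN"      # placeholder: the token you saved

# Ask for a single audit event from the last hour.
now_ms = int(time.time() * 1000)
query = urllib.parse.urlencode({"accountIdentifier": ACCOUNT_ID, "pageSize": 1})
body = json.dumps({"startTime": now_ms - 3_600_000, "endTime": now_ms}).encode("utf-8")

req = Request(
    f"https://app.harness.io/audit/api/audits/listV2?{query}",
    data=body,
    method="POST",
)
req.add_header("x-api-key", API_KEY)
req.add_header("Content-Type", "application/json")

with urlopen(req, timeout=30) as resp:
    print(resp.status)  # 200 means the token and account ID work
```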
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, `harness-io`).
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
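If you prefer to script the bucket creation instead of using the console, a minimal boto3 sketch follows; the bucket name `harness-io` and region `us-east-1` are example values, not requirements:

```python
import boto3

BUCKET = "harness-io"   # example bucket name from this guide
REGION = "us-east-1"    # assumed region; use your own

s3 = boto3.client("s3", region_name=REGION)

# us-east-1 must not be passed as a LocationConstraint;
# every other region requires one.
if REGION == "us-east-1":
    s3.create_bucket(Bucket=BUCKET)
else:
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )
print(f"Created s3://{BUCKET}")
```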
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies > Create policy > JSON tab.
- Enter the following policy (replace `harness-io` if you entered a different bucket name):

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "AllowPutHarnessObjects",
        "Effect": "Allow",
        "Action": "s3:PutObject",
        "Resource": "arn:aws:s3:::harness-io/*"
      },
      {
        "Sid": "AllowGetStateObject",
        "Effect": "Allow",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::harness-io/harness/audit/state.json"
      }
    ]
  }
  ```

- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role `WriteHarnessToS3Role` and click Create role.
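The policy and role can also be created programmatically. A sketch with boto3, assuming the same bucket name; the policy name `WriteHarnessToS3Policy` is illustrative, and attaching `AWSLambdaBasicExecutionRole` (for CloudWatch logging, which the console role wizard normally adds for you) is an optional extra:

```python
import json
import boto3

iam = boto3.client("iam")

# Same document as the policy above ("harness-io" is the example bucket).
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutHarnessObjects", "Effect": "Allow",
         "Action": "s3:PutObject", "Resource": "arn:aws:s3:::harness-io/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::harness-io/harness/audit/state.json"},
    ],
}
policy = iam.create_policy(
    PolicyName="WriteHarnessToS3Policy",  # hypothetical name
    PolicyDocument=json.dumps(policy_doc),
)

# Trust policy that lets Lambda assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
iam.create_role(RoleName="WriteHarnessToS3Role",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(RoleName="WriteHarnessToS3Role",
                       PolicyArn=policy["Policy"]["Arn"])
iam.attach_role_policy(
    RoleName="WriteHarnessToS3Role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole",
)
```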
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

  | Setting | Value |
  | --- | --- |
  | Name | `harness_io_to_s3` |
  | Runtime | Python 3.13 |
  | Architecture | x86_64 |
  | Execution role | `WriteHarnessToS3Role` |
- After the function is created, open the Code tab, delete the stub, and enter the following code (`harness_io_to_s3.py`):

```python
#!/usr/bin/env python3
import os, json, time, urllib.parse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError
import boto3

API_BASE = os.environ.get("HARNESS_API_BASE", "https://app.harness.io").rstrip("/")
ACCOUNT_ID = os.environ["HARNESS_ACCOUNT_ID"]
API_KEY = os.environ["HARNESS_API_KEY"]  # x-api-key token
BUCKET = os.environ["S3_BUCKET"]
PREFIX = os.environ.get("S3_PREFIX", "harness/audit/").strip("/")
STATE_KEY = os.environ.get("STATE_KEY", "harness/audit/state.json")
PAGE_SIZE = min(int(os.environ.get("PAGE_SIZE", "100")), 100)  # API maximum is 100
START_MINUTES_BACK = int(os.environ.get("START_MINUTES_BACK", "60"))

s3 = boto3.client("s3")
HDRS = {
    "x-api-key": API_KEY,
    "Content-Type": "application/json",
    "Accept": "application/json",
}


def _read_state():
    """Load the last checkpoint (since timestamp and page token) from S3."""
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=STATE_KEY)
        j = json.loads(obj["Body"].read())
        return j.get("since"), j.get("pageToken")
    except Exception:
        return None, None


def _write_state(since_ms: int, page_token: str | None):
    body = json.dumps({"since": since_ms, "pageToken": page_token}).encode("utf-8")
    s3.put_object(Bucket=BUCKET, Key=STATE_KEY, Body=body, ContentType="application/json")


def _http_post(path: str, body: dict, query: dict, timeout: int = 60, max_retries: int = 5) -> dict:
    """POST to the Harness API with exponential backoff on 429/5xx and network errors."""
    qs = urllib.parse.urlencode(query)
    url = f"{API_BASE}{path}?{qs}"
    data = json.dumps(body).encode("utf-8")
    attempt, backoff = 0, 1.0
    while True:
        req = Request(url, data=data, method="POST")
        for k, v in HDRS.items():
            req.add_header(k, v)
        try:
            with urlopen(req, timeout=timeout) as r:
                return json.loads(r.read().decode("utf-8"))
        except HTTPError as e:
            if (e.code == 429 or 500 <= e.code <= 599) and attempt < max_retries:
                time.sleep(backoff)
                attempt += 1
                backoff *= 2
                continue
            raise
        except URLError:
            if attempt < max_retries:
                time.sleep(backoff)
                attempt += 1
                backoff *= 2
                continue
            raise


def _write_page(obj: dict, now: float, page_index: int) -> str:
    """Store one raw API response page as a JSON object in S3."""
    ts = time.strftime("%Y/%m/%d/%H%M%S", time.gmtime(now))
    key = f"{PREFIX}/{ts}-page{page_index:05d}.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(obj, separators=(",", ":")).encode("utf-8"),
        ContentType="application/json",
    )
    return key


def fetch_and_store():
    now_s = time.time()
    since_ms, page_token = _read_state()
    if since_ms is None:
        since_ms = int((now_s - START_MINUTES_BACK * 60) * 1000)
    until_ms = int(now_s * 1000)

    page_index = 0
    total = 0
    while True:
        body = {"startTime": since_ms, "endTime": until_ms}
        query = {"accountIdentifier": ACCOUNT_ID, "pageSize": PAGE_SIZE}
        if page_token:
            query["pageToken"] = page_token
        else:
            query["pageIndex"] = page_index

        data = _http_post("/audit/api/audits/listV2", body, query)
        _write_page(data, now_s, page_index)

        # The list of audit entries may live under different keys depending on the response shape.
        entries = []
        for key in ("data", "content", "response", "resource", "resources", "items"):
            if isinstance(data.get(key), list):
                entries = data[key]
                break
        total += len(entries) if isinstance(entries, list) else 0

        next_token = (
            data.get("pageToken")
            or (isinstance(data.get("meta"), dict) and data["meta"].get("pageToken"))
            or (isinstance(data.get("metadata"), dict) and data["metadata"].get("pageToken"))
        )
        if next_token:
            page_token = next_token
            page_index += 1
            continue
        if len(entries) < PAGE_SIZE:
            break
        page_index += 1

    # Checkpoint the window end so the next run starts where this one stopped.
    _write_state(until_ms, None)
    return {"pages": page_index + 1, "objects_estimate": total}


def lambda_handler(event=None, context=None):
    return fetch_and_store()


if __name__ == "__main__":
    print(lambda_handler())
```
- Go to Configuration > Environment variables > Edit > Add new environment variable.
- Enter the following environment variables, replacing the example values with yours:

  | Key | Example |
  | --- | --- |
  | `S3_BUCKET` | `harness-io` |
  | `S3_PREFIX` | `harness/audit/` |
  | `STATE_KEY` | `harness/audit/state.json` |
  | `HARNESS_ACCOUNT_ID` | `123456789` |
  | `HARNESS_API_KEY` | `harness_xxx_token` |
  | `HARNESS_API_BASE` | `https://app.harness.io` |
  | `PAGE_SIZE` | `100` |
  | `START_MINUTES_BACK` | `60` |
- After the function is created, stay on its page (or open Lambda > Functions > your function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
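The same environment variables and timeout can be applied in one call with boto3, if you prefer scripting over the console. A sketch using the example values from the table above (replace them with yours):

```python
import boto3

lam = boto3.client("lambda")

lam.update_function_configuration(
    FunctionName="harness_io_to_s3",
    Timeout=300,  # 5 minutes
    Environment={"Variables": {
        "S3_BUCKET": "harness-io",
        "S3_PREFIX": "harness/audit/",
        "STATE_KEY": "harness/audit/state.json",
        "HARNESS_ACCOUNT_ID": "123456789",
        "HARNESS_API_KEY": "harness_xxx_token",
        "HARNESS_API_BASE": "https://app.harness.io",
        "PAGE_SIZE": "100",
        "START_MINUTES_BACK": "60",
    }},
)
```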
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (`1 hour`).
  - Target: your Lambda function.
  - Name: `harness-io-1h`.
- Click Create schedule.
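The schedule can also be created with the EventBridge Scheduler API. A sketch; `LAMBDA_ARN` and `SCHEDULER_ROLE_ARN` are placeholders, and the role must allow EventBridge Scheduler to call `lambda:InvokeFunction` on the target function:

```python
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="harness-io-1h",
    ScheduleExpression="rate(1 hour)",   # hourly, matching the console setup
    FlexibleTimeWindow={"Mode": "OFF"},  # fire exactly on schedule
    Target={
        "Arn": "LAMBDA_ARN",             # placeholder: your function's ARN
        "RoleArn": "SCHEDULER_ROLE_ARN", # placeholder: role with lambda:InvokeFunction
    },
)
```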
Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users, then click Add users.
- Provide the following configuration details:
  - User: Enter a unique name (for example, `secops-reader`).
  - Access type: Select Access key - Programmatic access.
- Click Create user.
- Attach a minimal read policy (custom): go to Users > select `secops-reader` > Permissions > Add permissions > Attach policies directly > Create policy.
- In the JSON editor, enter the following policy:

  ```json
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::<your-bucket>/*"
      },
      {
        "Effect": "Allow",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::<your-bucket>"
      }
    ]
  }
  ```

- Set the name to `secops-reader-policy`.
- Go to Create policy > search/select > Next > Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV file (these values are entered into the feed).
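Before configuring the feed, you can verify that the new keys really grant read access. A minimal sketch, assuming the example bucket, prefix, and region from this guide, with placeholder credentials from the downloaded CSV:

```python
import boto3

# Credentials from the CSV downloaded in the previous step (placeholders).
s3 = boto3.client(
    "s3",
    region_name="us-east-1",        # the bucket's region
    aws_access_key_id="AKIA...",    # Access key ID
    aws_secret_access_key="...",    # Secret access key
)

# List a few objects under the audit prefix to confirm s3:ListBucket works.
resp = s3.list_objects_v2(Bucket="harness-io", Prefix="harness/audit/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"])

# Fetch one object to confirm s3:GetObject works too.
if resp.get("Contents"):
    first = resp["Contents"][0]["Key"]
    s3.get_object(Bucket="harness-io", Key=first)
    print(f"Read OK: {first}")
```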
Configure a feed in Google SecOps to ingest Harness IO logs
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- In the Feed name field, enter a name for the feed (for example, `Harness IO`).
- Select Amazon S3 V2 as the Source type.
- Select Harness IO as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: `s3://harness-io/harness/audit/`
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Default 180 Days.
  - Access Key ID: User access key with access to the S3 bucket.
  - Secret Access Key: User secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.