Collect Aware audit logs
This document explains how to ingest Aware audit logs to Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- Google SecOps instance
- Privileged access to the Aware tenant
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Collect Aware prerequisites (IDs, API keys, org IDs, tokens)
- Sign in to the Aware Admin Console.
- Go to System Settings > Integrations > API Tokens.
- Click + API Token and grant the Audit Logs Read-only permission.
- Copy and save the following details in a secure location:
  - API Token
  - API Base URL: https://api.aware.work/external/system/auditlogs/v1
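
You can optionally verify the token before building the pipeline. The following is a minimal sketch, assuming the same endpoint, X-Aware-Api-Key header, filter/limit/offset query parameters, and response shape that the Lambda function later in this guide uses; the token and dates are placeholders:

```python
import json
import urllib.parse
import urllib.request

API_TOKEN = "<your-aware-api-token>"  # placeholder
ENDPOINT = "https://api.aware.work/external/system/auditlogs/v1"

# Query one day of audit logs; adjust the placeholder dates as needed.
params = urllib.parse.urlencode({
    "filter": "startDate:2025-01-01,endDate:2025-01-01",
    "limit": "10",
    "offset": "1",
})
req = urllib.request.Request(f"{ENDPOINT}?{params}")
req.add_header("X-Aware-Api-Key", API_TOKEN)

with urllib.request.urlopen(req, timeout=30) as resp:
    payload = json.loads(resp.read().decode("utf-8"))

# Records are expected under value.auditLogData (the shape the poller reads).
print(len((payload.get("value") or {}).get("auditLogData") or []))
```

A successful response with a populated auditLogData array confirms the token and permission are configured correctly.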
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, aware-audit-logs).
- Create a user following this user guide: Creating an IAM user.
- Select the created user.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
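
To confirm the downloaded keys work before moving on, you can run a quick check with boto3. This is a sketch with placeholder credentials, assuming the bucket name used in this guide:

```python
import boto3

# Placeholder credentials from the downloaded CSV file.
session = boto3.Session(
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
    region_name="<region>",
)
s3 = session.client("s3")

# head_bucket raises a ClientError if the bucket is missing or inaccessible.
s3.head_bucket(Bucket="aware-audit-logs")
print("Bucket reachable")
```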
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies > Create policy > JSON tab.
- Enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::aware-audit-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::aware-audit-logs/aware/state.json"
    }
  ]
}
```

- Replace aware-audit-logs if you entered a different bucket name.
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role AwareAuditLambdaRole and click Create role.
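
If you prefer to script this step, the following boto3 sketch creates the same policy and role; the policy name is hypothetical, and the trust policy is the standard one that lets the Lambda service assume the role:

```python
import json
import boto3

iam = boto3.client("iam")

# The PutObject/GetObject policy shown above, created via the API.
# "AwareAuditWritePolicy" is a hypothetical name; choose any you like.
policy = iam.create_policy(
    PolicyName="AwareAuditWritePolicy",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Sid": "AllowPutObjects", "Effect": "Allow",
             "Action": "s3:PutObject",
             "Resource": "arn:aws:s3:::aware-audit-logs/*"},
            {"Sid": "AllowGetStateObject", "Effect": "Allow",
             "Action": "s3:GetObject",
             "Resource": "arn:aws:s3:::aware-audit-logs/aware/state.json"},
        ],
    }),
)

# Standard trust policy so Lambda can assume the role.
iam.create_role(
    RoleName="AwareAuditLambdaRole",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow",
                       "Principal": {"Service": "lambda.amazonaws.com"},
                       "Action": "sts:AssumeRole"}],
    }),
)

iam.attach_role_policy(RoleName="AwareAuditLambdaRole",
                       PolicyArn=policy["Policy"]["Arn"])
```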
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:
| Setting | Value |
|---|---|
| Name | aware-audit-poller |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | AwareAuditLambdaRole |
- After the function is created, open the Code tab, delete the stub, and enter the following code (aware-audit-poller.py):

```python
import boto3, gzip, io, json, os, time, urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone
from botocore.exceptions import ClientError

AWARE_ENDPOINT = "https://api.aware.work/external/system/auditlogs/v1"
API_TOKEN = os.environ["AWARE_API_TOKEN"]
BUCKET = os.environ["S3_BUCKET"]
PREFIX = os.environ.get("S3_PREFIX", "aware/audit/")
STATE_KEY = os.environ.get("STATE_KEY", "aware/state.json")
MAX_PER_PAGE = int(os.environ.get("MAX_PER_PAGE", "500"))

s3 = boto3.client("s3")


def _load_state():
    # Read the checkpoint object; an empty state means "start from yesterday".
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=STATE_KEY)
        return json.loads(obj["Body"].read().decode("utf-8"))
    except ClientError as e:
        if e.response.get("Error", {}).get("Code") == "NoSuchKey":
            return {}
        raise


def _save_state(state):
    s3.put_object(Bucket=BUCKET, Key=STATE_KEY,
                  Body=json.dumps(state).encode("utf-8"))


def handler(event, context):
    now = datetime.now(tz=timezone.utc)
    state = _load_state()
    start_date = (
        datetime.fromisoformat(state["last_date"]).date()
        if "last_date" in state
        else (now - timedelta(days=1)).date()
    )
    end_date = now.date()
    total = 0
    day = start_date
    while day <= end_date:
        day_str = day.strftime("%Y-%m-%d")
        params = {"filter": f"startDate:{day_str},endDate:{day_str}",
                  "limit": str(MAX_PER_PAGE)}
        offset = 1
        out = io.BytesIO()
        gz = gzip.GzipFile(filename="aware_audit.jsonl", mode="wb", fileobj=out)
        wrote_any = False
        # Page through the day's audit logs until the API returns no more items.
        while True:
            q = urllib.parse.urlencode({**params, "offset": str(offset)})
            req = urllib.request.Request(f"{AWARE_ENDPOINT}?{q}")
            req.add_header("X-Aware-Api-Key", API_TOKEN)
            with urllib.request.urlopen(req, timeout=30) as resp:
                payload = json.loads(resp.read().decode("utf-8"))
            items = (payload.get("value") or {}).get("auditLogData") or []
            if not items:
                break
            for item in items:
                # One compact JSON record per line (JSONL), gzip-compressed.
                gz.write((json.dumps(item, separators=(",", ":")) + "\n").encode("utf-8"))
                total += 1
                wrote_any = True
            offset += 1
            time.sleep(0.2)
        gz.close()
        if wrote_any:
            key = f"{PREFIX}{day.strftime('%Y/%m/%d')}/aware_audit_{now.strftime('%Y%m%d_%H%M%S')}.jsonl.gz"
            s3.put_object(
                Bucket=BUCKET,
                Key=key,
                Body=out.getvalue(),
                ContentType="application/json",
                ContentEncoding="gzip",
            )
        # Checkpoint after each day so a rerun resumes where it left off.
        _save_state({"last_date": day.isoformat()})
        day += timedelta(days=1)
    return {"status": "ok", "written": total}
```
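
To smoke-test the handler locally before deploying, you can load the file and invoke it directly. This is a sketch, assuming local AWS credentials with access to the bucket and the placeholder values below:

```python
import importlib.util
import os

# The module reads its configuration at import time, so set the
# environment variables first (placeholder values shown).
os.environ.update({
    "AWARE_API_TOKEN": "<your-aware-api-token>",
    "S3_BUCKET": "aware-audit-logs",
    "S3_PREFIX": "aware/audit/",
    "STATE_KEY": "aware/state.json",
    "MAX_PER_PAGE": "500",
})

# The file name contains hyphens, so load it via importlib instead of import.
spec = importlib.util.spec_from_file_location("poller", "aware-audit-poller.py")
poller = importlib.util.module_from_spec(spec)
spec.loader.exec_module(poller)

print(poller.handler({}, None))  # for example: {'status': 'ok', 'written': 123}
```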
- Go to Configuration > Environment variables > Edit > Add new environment variable.
- Enter the following environment variables, replacing with your values:

| Key | Example value |
|---|---|
| S3_BUCKET | aware-audit-logs |
| S3_PREFIX | aware/audit/ |
| STATE_KEY | aware/state.json |
| AWARE_API_TOKEN | <your-aware-api-token> |
| MAX_PER_PAGE | 500 |
- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: Your Lambda function aware-audit-poller.
  - Name: aware-audit-poller-1h.
- Click Create schedule.
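
The same schedule can be created with the EventBridge Scheduler API. In this sketch the Lambda ARN and the scheduler execution role ARN are placeholders; the execution role must allow EventBridge Scheduler to invoke the function:

```python
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="aware-audit-poller-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        # Placeholder ARNs: substitute your region, account, and role.
        "Arn": "arn:aws:lambda:<region>:<account-id>:function:aware-audit-poller",
        "RoleArn": "arn:aws:iam::<account-id>:role/<scheduler-invoke-role>",
    },
)
```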
Optional: Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users > Add users.
- Click Add users.
- Provide the following configuration details:
  - User: secops-reader.
  - Access type: Access key — Programmatic access.
- Click Create user.
- Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- In the JSON editor, enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::aware-audit-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::aware-audit-logs"
    }
  ]
}
```

- Set the name to secops-reader-policy.
- Go to Create policy > search/select > Next > Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV file (these values are entered into the feed).
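
Before configuring the feed, you can verify that the reader credentials can list and read objects, which is what the feed will do. A sketch with placeholder keys from the secops-reader CSV:

```python
import boto3

# Placeholder credentials for the secops-reader user.
s3 = boto3.client(
    "s3",
    aws_access_key_id="<ACCESS_KEY_ID>",
    aws_secret_access_key="<SECRET_ACCESS_KEY>",
)

# List a few objects under the audit log prefix.
resp = s3.list_objects_v2(Bucket="aware-audit-logs",
                          Prefix="aware/audit/", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```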
Configure a feed in Google SecOps to ingest Aware Audit logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Aware Audit logs).
- Select Amazon S3 V2 as the Source type.
- Select Aware Audit as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://aware-audit-logs/aware/audit/
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified in the last number of days. The default is 180 days.
  - Access Key ID: The user access key with access to the S3 bucket.
  - Secret Access Key: The user secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.