Collect Tines audit logs
This document explains how to ingest Tines Audit Logs to Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance.
- Privileged access to Tines.
- Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).
Get the Tines URL
- In your browser, open the Tines UI for your tenant.
- Copy the domain from the address bar; you'll use it as TINES_BASE_URL.
- Format: https://<tenant-domain> (for example, https://<tenant-domain>.tines.com).
Create a Tines Service API key (recommended) or Personal API key
Values to save for later steps:
- TINES_BASE_URL: for example, https://<domain>.tines.com
- TINES_API_KEY: the token you create in the following steps
Option 1 - Service API key (recommended)
- Go to the Navigation menu > API keys.
- Click + New key.
- Select Service API key.
- Enter a descriptive name (for example, SecOps Audit Logs).
- Click Create.
- Copy the generated token immediately and save it securely; you'll use it as TINES_API_KEY.
Option 2 - Personal API key (if Service keys are not available)
- Go to the Navigation menu > API keys.
- Click + New key.
- Select Personal API key.
- Enter a descriptive name.
- Click Create.
- Copy the generated token and save it securely.
Grant the Audit Log Read permission
- Sign in as a Tenant Owner (or ask a Tenant Owner to perform these steps).
- Go to Settings > Admin > User administration (or click your team name in the upper left menu and select Users).
- Find the service account user associated with your Service API key (it will have the same name as your API key).
- If using a Personal API key, find your own user account instead.
- Click the user to open their profile.
- In the Tenant permissions section, enable AUDIT_LOG_READ.
- Click Save.
(Optional) Verify API access
- Test the endpoint using curl or any HTTP client:

```
curl -X GET "https://<tenant-domain>/api/v1/audit_logs?per_page=1" \
  -H "Authorization: Bearer <TINES_API_KEY>" \
  -H "Content-Type: application/json"
```

- You should receive a JSON response with audit log entries.
- You can also verify audit logs exist by navigating to Settings > Monitoring > Audit logs in the UI (requires the AUDIT_LOG_READ permission).
Configure AWS S3 bucket
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, tines-audit-logs).
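If you prefer to script this step, the following is a minimal boto3 sketch rather than part of the official procedure; the bucket name tines-audit-logs and the region shown are examples, so replace them with your own values (buckets in us-east-1 must omit CreateBucketConfiguration).

```python
# Sketch: create the S3 bucket with boto3 (example name and region; adjust to your account).
import boto3

BUCKET = "tines-audit-logs"  # example bucket name used throughout this guide
REGION = "us-east-2"         # example region; replace with your own

s3 = boto3.client("s3", region_name=REGION)

# For us-east-1, call create_bucket without CreateBucketConfiguration.
s3.create_bucket(
    Bucket=BUCKET,
    CreateBucketConfiguration={"LocationConstraint": REGION},
)
print(f"Created s3://{BUCKET} in {REGION}")
```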
Configure the IAM policy and role for Lambda S3 uploads
- In the AWS console, go to IAM > Policies > Create policy > JSON tab.
- Copy and paste the following policy.
- Policy JSON (replace tines-audit-logs if you entered a different bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::tines-audit-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::tines-audit-logs/tines/audit/state.json"
    }
  ]
}
```

- Click Next > Create policy.
- Name the policy TinesLambdaS3Policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the TinesLambdaS3Policy you just created.
- Name the role TinesAuditToS3Role and click Create role.
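As an alternative to the console steps above, the same policy and role can be created with boto3. This is a sketch, not part of the official procedure; it assumes the standard Lambda trust policy (the lambda.amazonaws.com service principal) and uses the names from this guide.

```python
# Sketch: create TinesLambdaS3Policy and TinesAuditToS3Role programmatically.
import json
import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutObjects", "Effect": "Allow",
         "Action": "s3:PutObject", "Resource": "arn:aws:s3:::tines-audit-logs/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::tines-audit-logs/tines/audit/state.json"},
    ],
}

# Trust policy so the Lambda service can assume the role (assumption: standard Lambda principal).
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}

policy = iam.create_policy(PolicyName="TinesLambdaS3Policy",
                           PolicyDocument=json.dumps(policy_doc))
iam.create_role(RoleName="TinesAuditToS3Role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
iam.attach_role_policy(RoleName="TinesAuditToS3Role",
                       PolicyArn=policy["Policy"]["Arn"])
```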
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

| Setting | Value |
|---|---|
| Name | tines_audit_to_s3 |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | TinesAuditToS3Role |
- After the function is created, open the Code tab, delete the stub, and paste the following code (tines_audit_to_s3.py):

```python
#!/usr/bin/env python3
# Lambda: Pull Tines Audit Logs to S3 (no transform)

import os, json, time, urllib.parse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError
import boto3

S3_BUCKET = os.environ["S3_BUCKET"]
S3_PREFIX = os.environ.get("S3_PREFIX", "tines/audit/")
STATE_KEY = os.environ.get("STATE_KEY", "tines/audit/state.json")
LOOKBACK_SEC = int(os.environ.get("LOOKBACK_SECONDS", "3600"))  # default 1h
PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "500"))  # Max is 500 for Tines
MAX_PAGES = int(os.environ.get("MAX_PAGES", "20"))
TIMEOUT = int(os.environ.get("HTTP_TIMEOUT", "60"))
HTTP_RETRIES = int(os.environ.get("HTTP_RETRIES", "3"))
TINES_BASE_URL = os.environ["TINES_BASE_URL"]
TINES_API_KEY = os.environ["TINES_API_KEY"]

s3 = boto3.client("s3")


def _iso(ts: float) -> str:
    return time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime(ts))


def _load_state() -> dict:
    try:
        obj = s3.get_object(Bucket=S3_BUCKET, Key=STATE_KEY)
        b = obj["Body"].read()
        return json.loads(b) if b else {}
    except Exception:
        return {}


def _save_state(st: dict) -> None:
    s3.put_object(
        Bucket=S3_BUCKET,
        Key=STATE_KEY,
        Body=json.dumps(st, separators=(",", ":")).encode("utf-8"),
        ContentType="application/json",
    )


def _req(url: str) -> dict:
    attempt = 0
    while True:
        try:
            req = Request(url, method="GET")
            req.add_header("Authorization", f"Bearer {TINES_API_KEY}")
            req.add_header("Accept", "application/json")
            req.add_header("Content-Type", "application/json")
            with urlopen(req, timeout=TIMEOUT) as r:
                data = r.read()
            return json.loads(data.decode("utf-8"))
        except HTTPError as e:
            if e.code in (429, 500, 502, 503, 504) and attempt < HTTP_RETRIES:
                retry_after = 1 + attempt
                try:
                    retry_after = int(e.headers.get("Retry-After", retry_after))
                except Exception:
                    pass
                time.sleep(max(1, retry_after))
                attempt += 1
                continue
            raise
        except URLError:
            if attempt < HTTP_RETRIES:
                time.sleep(1 + attempt)
                attempt += 1
                continue
            raise


def _write(payload, page: int) -> str:
    ts = time.gmtime()
    key = f"{S3_PREFIX}{time.strftime('%Y/%m/%d/%H%M%S', ts)}-tines-audit-{page:05d}.json"
    s3.put_object(
        Bucket=S3_BUCKET,
        Key=key,
        Body=json.dumps(payload, separators=(",", ":")).encode("utf-8"),
        ContentType="application/json",
    )
    return key


def _extract_items(payload) -> list:
    if isinstance(payload, list):
        return payload
    if isinstance(payload, dict):
        audit_logs = payload.get("audit_logs")
        if isinstance(audit_logs, list):
            return audit_logs
    return []


def _extract_newest_ts(items: list, current: str | None) -> str | None:
    newest = current
    for it in items:
        # Use created_at as the timestamp field
        t = it.get("created_at")
        if isinstance(t, str) and (newest is None or t > newest):
            newest = t
    return newest


def lambda_handler(event=None, context=None):
    st = _load_state()
    since = st.get("since") or _iso(time.time() - LOOKBACK_SEC)

    page = 1
    pages = 0
    total = 0
    newest_ts = since

    while pages < MAX_PAGES:
        # Build URL with query parameters
        # Note: Tines audit logs API uses 'after' parameter for filtering
        base_url = f"{TINES_BASE_URL.rstrip('/')}/api/v1/audit_logs"
        params = {
            "after": since,  # Filter for logs created after this timestamp
            "page": page,
            "per_page": PAGE_SIZE,
        }
        url = f"{base_url}?{urllib.parse.urlencode(params)}"

        payload = _req(url)
        _write(payload, page)

        items = _extract_items(payload)
        total += len(items)
        newest_ts = _extract_newest_ts(items, newest_ts)
        pages += 1

        # Check if there's a next page using meta.next_page_number
        meta = payload.get("meta") or {}
        next_page = meta.get("next_page_number")
        if not next_page:
            break
        page = next_page

    if newest_ts and newest_ts != since:
        st["since"] = newest_ts
        _save_state(st)

    return {"ok": True, "pages": pages, "items": total, "since": st.get("since")}


if __name__ == "__main__":
    print(lambda_handler())
```
- Go to Configuration > Environment variables.
- Click Edit > Add new environment variable.
- Enter the environment variables provided in the following table, replacing the example values with your values.

Environment variables

| Key | Example value |
|---|---|
| S3_BUCKET | tines-audit-logs |
| S3_PREFIX | tines/audit/ |
| STATE_KEY | tines/audit/state.json |
| TINES_BASE_URL | https://your-tenant.tines.com |
| TINES_API_KEY | your-tines-api-key |
| LOOKBACK_SECONDS | 3600 |
| PAGE_SIZE | 500 |
| MAX_PAGES | 20 |
| HTTP_TIMEOUT | 60 |
| HTTP_RETRIES | 3 |
- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
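If you prefer to apply the environment variables and timeout programmatically rather than through the console, the following boto3 sketch does both in one call; the function name and values shown are the examples from this guide and should be replaced with your own.

```python
# Sketch: set the environment variables and a 300-second timeout on the function.
import boto3

lambda_client = boto3.client("lambda")

lambda_client.update_function_configuration(
    FunctionName="tines_audit_to_s3",  # name used in this guide
    Timeout=300,                       # 5 minutes
    Environment={"Variables": {
        "S3_BUCKET": "tines-audit-logs",
        "S3_PREFIX": "tines/audit/",
        "STATE_KEY": "tines/audit/state.json",
        "TINES_BASE_URL": "https://your-tenant.tines.com",
        "TINES_API_KEY": "your-tines-api-key",
        "LOOKBACK_SECONDS": "3600",
        "PAGE_SIZE": "500",
        "MAX_PAGES": "20",
        "HTTP_TIMEOUT": "60",
        "HTTP_RETRIES": "3",
    }},
)
```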
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: Your Lambda function tines_audit_to_s3.
  - Name: tines-audit-1h.
- Click Create schedule.
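The same schedule can also be created with boto3. This sketch assumes you already have an EventBridge Scheduler execution role that is allowed to invoke the function (the console normally creates one for you); the ARNs below are placeholders.

```python
# Sketch: create the hourly schedule with boto3 (placeholder ARNs; replace with your own).
import boto3

scheduler = boto3.client("scheduler")

scheduler.create_schedule(
    Name="tines-audit-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        # Lambda function to invoke and a scheduler execution role with
        # lambda:InvokeFunction permission on it (both placeholders).
        "Arn": "arn:aws:lambda:<region>:<account-id>:function:tines_audit_to_s3",
        "RoleArn": "arn:aws:iam::<account-id>:role/<scheduler-execution-role>",
    },
)
```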
Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users.
- Click Add users.
- Provide the following configuration details:
  - User: Enter secops-reader.
  - Access type: Select Access key - Programmatic access.
- Click Create user.
- Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- JSON:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::tines-audit-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::tines-audit-logs"
    }
  ]
}
```

- Name = secops-reader-policy.
- Click Create policy > search/select > Next > Add permissions.
- Create an access key for secops-reader: Security credentials > Access keys.
- Click Create access key.
- Download the .CSV file. (You'll paste these values into the feed.)
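Optionally, before configuring the feed, you can confirm that the new keys can read the bucket. This boto3 sketch lists the audit prefix using the secops-reader credentials from the downloaded CSV (placeholder values shown).

```python
# Sketch: verify the secops-reader keys can list objects under the audit prefix.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # Access key ID from the downloaded CSV
    aws_secret_access_key="your-secret",  # Secret access key from the downloaded CSV
)

resp = s3.list_objects_v2(Bucket="tines-audit-logs", Prefix="tines/audit/")
for obj in resp.get("Contents", []):
    print(obj["Key"])
```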
Configure a feed in Google SecOps to ingest Tines Audit Logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Tines Audit Logs).
- Select Amazon S3 V2 as the Source type.
- Select Tines as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://tines-audit-logs/tines/audit/
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified in the last number of days. Default is 180 days.
  - Access Key ID: User access key with access to the S3 bucket.
  - Secret Access Key: User secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.

