Collect Snipe-IT logs
This document explains how to ingest Snipe-IT logs to Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance.
- Privileged access to the Snipe-IT tenant.
- Privileged access to AWS (S3, Identity and Access Management (IAM), Lambda, EventBridge).
Collect Snipe-IT prerequisites (API token and base URL)
- Sign in to Snipe-IT.
- Open your user menu (top-right avatar) and click Manage API keys.
- Click Create New API Key:
    - Name/Label: Enter a descriptive label (for example, Google SecOps export).
    - Click Generate.
- Copy the API token (it is shown only once) and store it securely.
- Determine your API base URL, typically https://<your-domain>/api/v1 (for example, https://snipeit.example.com/api/v1).
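Before moving on, you can optionally verify the token and base URL from any machine with Python. The following is a minimal sketch that calls the same /hardware endpoint the Lambda function uses later; the URL and token values are the examples from this guide and must be replaced with your own:

```python
# Minimal sketch: verify the Snipe-IT token and base URL.
# BASE and TOKEN below are placeholders; replace them with your values.
import json
from urllib.request import Request, urlopen

BASE = "https://snipeit.example.com/api/v1"  # your base URL
TOKEN = "<your-api-token>"                   # the token you just generated

req = Request(
    f"{BASE}/hardware?limit=1",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
)
with urlopen(req, timeout=30) as r:
    data = json.loads(r.read().decode("utf-8"))

# A valid token should return a total count and a rows list.
print(data.get("total"), len(data.get("rows", [])))
```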
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, snipe-it-logs).
- Create a User following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: Add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for future reference.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for the AmazonS3FullAccess policy.
- Select the policy.
- Click Next.
- Click Add permissions.
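Optionally, confirm the new key pair can write to the bucket before wiring up the Lambda function. This is a minimal sketch; the bucket name is the example from this guide, and the key values come from the CSV file you downloaded:

```python
# Minimal sketch: write one test object with the key pair from the CSV.
# Bucket name and region are placeholders; replace them with your values.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<access-key-id>",          # from the downloaded CSV
    aws_secret_access_key="<secret-access-key>",  # from the downloaded CSV
    region_name="<your-region>",
)
s3.put_object(
    Bucket="snipe-it-logs",
    Key="snipeit/connectivity-test.json",
    Body=b"{}",
    ContentType="application/json",
)
print("write OK")
```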
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies.
- Click Create policy > JSON tab.
- Copy and paste the following policy (replace snipe-it-logs if you entered a different bucket name):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::snipe-it-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::snipe-it-logs/snipeit/state.json"
    }
  ]
}
```
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role SnipeITToS3Role and click Create role.
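If you prefer to script these IAM steps, the following boto3 sketch creates an equivalent policy and Lambda-assumable role. It assumes your local AWS credentials have IAM permissions; the policy name SnipeITToS3Policy is a hypothetical choice, since the guide leaves the policy name up to you:

```python
# Minimal sketch: create the policy and role from the steps above with boto3.
# Assumes the caller has iam:CreatePolicy, iam:CreateRole, iam:AttachRolePolicy.
import json
import boto3

iam = boto3.client("iam")

policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Sid": "AllowPutObjects", "Effect": "Allow",
         "Action": "s3:PutObject", "Resource": "arn:aws:s3:::snipe-it-logs/*"},
        {"Sid": "AllowGetStateObject", "Effect": "Allow",
         "Action": "s3:GetObject",
         "Resource": "arn:aws:s3:::snipe-it-logs/snipeit/state.json"},
    ],
}
policy = iam.create_policy(
    PolicyName="SnipeITToS3Policy",  # hypothetical name
    PolicyDocument=json.dumps(policy_doc),
)

# Trust policy that lets Lambda assume the role.
trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "lambda.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}
iam.create_role(RoleName="SnipeITToS3Role",
                AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(RoleName="SnipeITToS3Role",
                       PolicyArn=policy["Policy"]["Arn"])
```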
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

| Setting | Value |
| --- | --- |
| Name | snipeit_assets_to_s3 |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | SnipeITToS3Role |
- After the function is created, open the Code tab, delete the stub, and paste the following code (snipeit_assets_to_s3.py):

```python
#!/usr/bin/env python3
# Lambda: pull Snipe-IT hardware (assets) via the REST API and write raw
# JSON pages to S3 (no transform).

import json
import os
import time
import urllib.parse
from urllib.request import Request, urlopen

import boto3

BASE = os.environ["SNIPE_BASE_URL"].rstrip("/")  # e.g. https://snipeit.example.com/api/v1
TOKEN = os.environ["SNIPE_API_TOKEN"]
BUCKET = os.environ["S3_BUCKET"]
PREFIX = os.environ.get("S3_PREFIX", "snipeit/assets/").strip("/")
PAGE_SIZE = int(os.environ.get("PAGE_SIZE", "500"))  # Snipe-IT maximum is 500 per request
MAX_PAGES = int(os.environ.get("MAX_PAGES", "200"))

s3 = boto3.client("s3")


def _headers():
    return {"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"}


def fetch_page(offset: int) -> dict:
    """Fetch one page of hardware assets, sorted by id for stable paging."""
    params = {"limit": PAGE_SIZE, "offset": offset, "sort": "id", "order": "asc"}
    url = f"{BASE}/hardware?{urllib.parse.urlencode(params)}"
    req = Request(url, method="GET", headers=_headers())
    with urlopen(req, timeout=60) as r:
        return json.loads(r.read().decode("utf-8"))


def write_page(payload: dict, ts: float, page: int) -> str:
    """Write one raw API page to S3 under a date-partitioned key."""
    key = (
        f"{PREFIX}/{time.strftime('%Y/%m/%d', time.gmtime(ts))}"
        f"/snipeit-hardware-{page:05d}.json"
    )
    body = json.dumps(payload, separators=(",", ":")).encode("utf-8")
    s3.put_object(Bucket=BUCKET, Key=key, Body=body, ContentType="application/json")
    return key


def lambda_handler(event=None, context=None):
    ts = time.time()
    offset = 0
    pages = 0
    total = 0
    while pages < MAX_PAGES:
        data = fetch_page(offset)
        rows = data.get("rows") or data.get("data") or []
        write_page(data, ts, pages)
        pages += 1
        total += len(rows)
        if len(rows) < PAGE_SIZE:  # a short page means we reached the end
            break
        offset += PAGE_SIZE
    return {"ok": True, "pages": pages, "records": total}


if __name__ == "__main__":
    print(lambda_handler())
```
- Go to Configuration > Environment variables.
- Click Edit > Add new environment variable.
- Enter the environment variables provided in the following table, replacing the example values with your values.

| Key | Example value |
| --- | --- |
| S3_BUCKET | snipe-it-logs |
| S3_PREFIX | snipeit/assets/ |
| SNIPE_BASE_URL | https://snipeit.example.com/api/v1 |
| SNIPE_API_TOKEN | <your-api-token> |
| PAGE_SIZE | 500 |
| MAX_PAGES | 200 |
- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
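As an alternative to the console steps above, the environment variables and timeout can be set in one boto3 call, and the function can then be invoked once as a smoke test. A minimal sketch, assuming your local AWS credentials can manage and invoke the function; the values are the examples from this guide:

```python
# Minimal sketch: set the environment variables and timeout, then invoke once.
# Note: update_function_configuration replaces the whole Environment block,
# so include every variable each time.
import json
import boto3

lam = boto3.client("lambda")
lam.update_function_configuration(
    FunctionName="snipeit_assets_to_s3",
    Timeout=300,
    Environment={"Variables": {
        "S3_BUCKET": "snipe-it-logs",
        "S3_PREFIX": "snipeit/assets/",
        "SNIPE_BASE_URL": "https://snipeit.example.com/api/v1",
        "SNIPE_API_TOKEN": "<your-api-token>",
        "PAGE_SIZE": "500",
        "MAX_PAGES": "200",
    }},
)

# Wait for the configuration update to finish before invoking.
lam.get_waiter("function_updated_v2").wait(FunctionName="snipeit_assets_to_s3")

resp = lam.invoke(FunctionName="snipeit_assets_to_s3", Payload=b"{}")
print(json.loads(resp["Payload"].read()))  # expect {"ok": true, ...}
```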
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
    - Recurring schedule: Rate (1 hour).
    - Target: Your Lambda function snipeit_assets_to_s3.
    - Name: snipeit_assets_to_s3-1h.
- Click Create schedule.
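The same schedule can be created through the EventBridge Scheduler API. This is a sketch rather than the console flow: the API call requires an execution role that is allowed to invoke the Lambda function (the console can create this role for you), and the ARNs below are placeholders:

```python
# Minimal sketch: create the hourly schedule via the EventBridge Scheduler API.
# The Lambda ARN and execution role ARN are placeholders; the execution role
# must allow lambda:InvokeFunction on the target function.
import boto3

scheduler = boto3.client("scheduler")
scheduler.create_schedule(
    Name="snipeit_assets_to_s3-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": "arn:aws:lambda:<region>:<account-id>:function:snipeit_assets_to_s3",
        "RoleArn": "arn:aws:iam::<account-id>:role/<scheduler-execution-role>",
    },
)
```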
(Optional) Create read-only IAM user and keys for Google SecOps
- Go to AWS Console > IAM > Users.
- Click Add users.
- Provide the following configuration details:
    - User: Enter secops-reader.
    - Access type: Select Access key - Programmatic access.
- Click Create user.
- Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- JSON:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::snipe-it-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::snipe-it-logs"
    }
  ]
}
```
- Name = secops-reader-policy.
- Click Create policy > search/select > Next > Add permissions.
- Create an access key for secops-reader: Security credentials > Access keys.
- Click Create access key.
- Download the CSV file. (You'll paste these values into the feed.)
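Optionally, confirm that the read-only key works before configuring the feed. A minimal sketch using the example bucket and prefix from this guide; the key values come from the CSV you just downloaded:

```python
# Minimal sketch: list a few objects with the secops-reader key pair.
# Bucket and prefix are the examples from this guide.
import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id="<secops-reader-access-key-id>",
    aws_secret_access_key="<secops-reader-secret-access-key>",
)
resp = s3.list_objects_v2(
    Bucket="snipe-it-logs", Prefix="snipeit/assets/", MaxKeys=5
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```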
Configure a feed in Google SecOps to ingest Snipe-IT logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Snipe-IT logs).
- Select Amazon S3 V2 as the Source type.
- Select Snipe-IT as the Log type.
- Click Next.
- Specify values for the following input parameters:
    - S3 URI: s3://snipe-it-logs/snipeit/assets/
    - Source deletion options: Select the deletion option according to your preference.
    - Maximum File Age: Include files modified in the last number of days. The default is 180 days.
    - Access Key ID: The user access key with access to the S3 bucket.
    - Secret Access Key: The user secret key with access to the S3 bucket.
    - Asset namespace: The asset namespace.
    - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
Need more help? Get answers from Community members and Google SecOps professionals.

