Collect Bitwarden Enterprise event logs
This document explains how to ingest Bitwarden Enterprise event logs into Google Security Operations using Amazon S3. The parser transforms raw JSON-formatted event logs into a structured format conforming to the Chronicle UDM. It extracts relevant fields such as user details, IP addresses, and event types, mapping them to the corresponding UDM fields for consistent security analysis.
Before you begin
- Google SecOps instance
- Privileged access to Bitwarden tenant
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Get Bitwarden API key and URL
- Sign in to the Bitwarden Admin Console.
- Go to Settings > Organization info > View API key.
- Copy and save the following details to a secure location:
- Client ID
- Client Secret
- Determine your Bitwarden endpoints (based on region):
  - IDENTITY_URL: https://identity.bitwarden.com/connect/token (EU: https://identity.bitwarden.eu/connect/token)
  - API_BASE: https://api.bitwarden.com (EU: https://api.bitwarden.eu)
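The token exchange against IDENTITY_URL is a standard OAuth2 client_credentials POST. As a minimal sketch using Python's standard library (the Client ID and Secret shown are placeholders), the request can be built like this:

```python
import urllib.parse
from urllib.request import Request

def build_token_request(identity_url: str, client_id: str, client_secret: str) -> Request:
    """Build the client_credentials POST used to obtain a bearer token."""
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "scope": "api.organization",
        "client_id": client_id,
        "client_secret": client_secret,
    }).encode("utf-8")
    return Request(
        identity_url,
        data=body,
        method="POST",
        headers={"Content-Type": "application/x-www-form-urlencoded"},
    )

req = build_token_request(
    "https://identity.bitwarden.com/connect/token",
    "organization.XXXX",  # placeholder Client ID
    "XXXX",               # placeholder Client Secret
)
print(req.get_method())  # POST
```

Sending this request (for example with `urllib.request.urlopen`) returns a JSON body containing the `access_token` used as the Bearer token against API_BASE.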
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket
- Save the bucket Name and Region for future reference (for example, bitwarden-events).
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
Configure the IAM policy and role for S3 uploads
- Go to AWS console > IAM > Policies > Create policy > JSON tab.
- Enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutBitwardenObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::bitwarden-events/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bitwarden-events/bitwarden/events/state.json"
    }
  ]
}
```

- Replace bitwarden-events if you entered a different bucket name.
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role WriteBitwardenToS3Role and click Create role.
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

| Setting | Value |
|---|---|
| Name | bitwarden_events_to_s3 |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | WriteBitwardenToS3Role |
- After the function is created, open the Code tab, delete the stub, and enter the following code (bitwarden_events_to_s3.py):

```python
#!/usr/bin/env python3
import os
import json
import time
import urllib.parse
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError

import boto3

IDENTITY_URL = os.environ.get("IDENTITY_URL", "https://identity.bitwarden.com/connect/token")
API_BASE = os.environ.get("API_BASE", "https://api.bitwarden.com").rstrip("/")
CID = os.environ["BW_CLIENT_ID"]          # organization.ClientId
CSECRET = os.environ["BW_CLIENT_SECRET"]  # organization.ClientSecret
BUCKET = os.environ["S3_BUCKET"]
PREFIX = os.environ.get("S3_PREFIX", "bitwarden/events/").strip("/")
STATE_KEY = os.environ.get("STATE_KEY", "bitwarden/events/state.json")
MAX_PAGES = int(os.environ.get("MAX_PAGES", "10"))

HEADERS_FORM = {"Content-Type": "application/x-www-form-urlencoded"}
HEADERS_JSON = {"Accept": "application/json"}

s3 = boto3.client("s3")


def _read_state():
    try:
        obj = s3.get_object(Bucket=BUCKET, Key=STATE_KEY)
        j = json.loads(obj["Body"].read())
        return j.get("continuationToken")
    except Exception:
        return None


def _write_state(token):
    body = json.dumps({"continuationToken": token}).encode("utf-8")
    s3.put_object(Bucket=BUCKET, Key=STATE_KEY, Body=body, ContentType="application/json")


def _http(req: Request, timeout: int = 60, max_retries: int = 5):
    attempt, backoff = 0, 1.0
    while True:
        try:
            with urlopen(req, timeout=timeout) as r:
                return json.loads(r.read().decode("utf-8"))
        except HTTPError as e:
            # Retry on 429 and 5xx
            if (e.code == 429 or 500 <= e.code <= 599) and attempt < max_retries:
                time.sleep(backoff)
                attempt += 1
                backoff *= 2
                continue
            raise
        except URLError:
            if attempt < max_retries:
                time.sleep(backoff)
                attempt += 1
                backoff *= 2
                continue
            raise


def _get_token():
    body = urllib.parse.urlencode({
        "grant_type": "client_credentials",
        "scope": "api.organization",
        "client_id": CID,
        "client_secret": CSECRET,
    }).encode("utf-8")
    req = Request(IDENTITY_URL, data=body, method="POST", headers=HEADERS_FORM)
    data = _http(req, timeout=30)
    return data["access_token"], int(data.get("expires_in", 3600))


def _fetch_events(bearer: str, cont: str | None):
    params = {}
    if cont:
        params["continuationToken"] = cont
    qs = ("?" + urllib.parse.urlencode(params)) if params else ""
    url = f"{API_BASE}/public/events{qs}"
    req = Request(url, method="GET", headers={"Authorization": f"Bearer {bearer}", **HEADERS_JSON})
    return _http(req, timeout=60)


def _write_page(obj: dict, run_ts_s: int, page_index: int) -> str:
    # Make filename unique per page to avoid overwrites in the same second
    key = f"{PREFIX}/{time.strftime('%Y/%m/%d/%H%M%S', time.gmtime(run_ts_s))}-page{page_index:05d}-bitwarden-events.json"
    s3.put_object(
        Bucket=BUCKET,
        Key=key,
        Body=json.dumps(obj, separators=(",", ":")).encode("utf-8"),
        ContentType="application/json",
    )
    return key


def lambda_handler(event=None, context=None):
    bearer, _ttl = _get_token()
    cont = _read_state()
    run_ts_s = int(time.time())
    pages = 0
    written = 0

    while pages < MAX_PAGES:
        data = _fetch_events(bearer, cont)
        # write page
        _write_page(data, run_ts_s, pages)
        pages += 1

        # count entries (official shape: {"object":"list","data":[...], "continuationToken": "..."})
        entries = []
        if isinstance(data.get("data"), list):
            entries = data["data"]
        elif isinstance(data.get("entries"), list):  # fallback if shape differs
            entries = data["entries"]
        written += len(entries)

        # next page token (official: "continuationToken")
        next_cont = data.get("continuationToken")
        if next_cont:
            cont = next_cont
            continue
        break

    # Save state only if there are more pages to continue in next run
    _write_state(cont if pages >= MAX_PAGES and cont else None)

    return {
        "ok": True,
        "pages": pages,
        "events_estimate": written,
        "nextContinuationToken": cont,
    }


if __name__ == "__main__":
    print(lambda_handler())
```
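The `_write_page` helper derives one S3 key per page from the run timestamp and page index. A small offline sketch of the resulting key layout (the timestamp below is a hypothetical example):

```python
import time

PREFIX = "bitwarden/events"

def page_key(run_ts_s: int, page_index: int) -> str:
    # Same key format as _write_page in the Lambda function
    return (
        f"{PREFIX}/{time.strftime('%Y/%m/%d/%H%M%S', time.gmtime(run_ts_s))}"
        f"-page{page_index:05d}-bitwarden-events.json"
    )

# 2025-01-02 03:04:05 UTC, first page of the run
print(page_key(1735787045, 0))
# bitwarden/events/2025/01/02/030405-page00000-bitwarden-events.json
```

The date-based prefix keeps objects grouped per day, and the zero-padded page index keeps pages from the same run distinct and lexically ordered.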
- Go to Configuration > Environment variables > Edit > Add new environment variable.
- Enter the following environment variables, replacing the example values with your own:
| Key | Example |
|---|---|
| S3_BUCKET | bitwarden-events |
| S3_PREFIX | bitwarden/events/ |
| STATE_KEY | bitwarden/events/state.json |
| BW_CLIENT_ID | <organization client_id> |
| BW_CLIENT_SECRET | <organization client_secret> |
| IDENTITY_URL | https://identity.bitwarden.com/connect/token (EU: https://identity.bitwarden.eu/connect/token) |
| API_BASE | https://api.bitwarden.com (EU: https://api.bitwarden.eu) |
| MAX_PAGES | 10 |
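If you prefer the AWS CLI, the same variables can be set in a single call. This is a sketch assuming the function name bitwarden_events_to_s3 used above; replace the placeholder credentials before running:

```shell
aws lambda update-function-configuration \
  --function-name bitwarden_events_to_s3 \
  --environment 'Variables={S3_BUCKET=bitwarden-events,S3_PREFIX=bitwarden/events/,STATE_KEY=bitwarden/events/state.json,BW_CLIENT_ID=<client_id>,BW_CLIENT_SECRET=<client_secret>,IDENTITY_URL=https://identity.bitwarden.com/connect/token,API_BASE=https://api.bitwarden.com,MAX_PAGES=10}'
```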
- After the function is created, stay on its page (or open Lambda > Functions > your-function).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour).
  - Target: your Lambda function.
  - Name: bitwarden-events-1h.
- Click Create schedule.
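The same schedule can be created from the AWS CLI. This is a sketch; the Lambda function ARN and the scheduler execution role ARN (a role EventBridge Scheduler can assume, with lambda:InvokeFunction on your function) are placeholders:

```shell
aws scheduler create-schedule \
  --name bitwarden-events-1h \
  --schedule-expression 'rate(1 hour)' \
  --flexible-time-window Mode=OFF \
  --target '{"Arn":"<lambda-function-arn>","RoleArn":"<scheduler-invoke-role-arn>"}'
```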
Optional: Create read-only IAM user & keys for Google SecOps
- In the AWS Console, go to IAM > Users, then click Add users.
- Provide the following configuration details:
  - User: Enter a unique name (for example, secops-reader).
  - Access type: Select Access key - Programmatic access.
- Click Create user.
- Attach a minimal read policy (custom): go to Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- In the JSON editor, enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::<your-bucket>/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::<your-bucket>"
    }
  ]
}
```
- Set the name to secops-reader-policy.
- Go to Create policy > search/select > Next > Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV file (these values are entered into the feed).
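As a quick offline sanity check of the read-only policy, you can confirm it grants only the two read actions. The snippet below substitutes a hypothetical bucket name for <your-bucket>:

```python
import json

policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::bitwarden-events/*"},
    {"Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::bitwarden-events"}
  ]
}
""")

# Collect every action granted by the policy
actions = sorted(a for s in policy["Statement"] for a in s["Action"])
print(actions)  # ['s3:GetObject', 's3:ListBucket']
assert all(s["Effect"] == "Allow" for s in policy["Statement"])
```

Note that s3:GetObject applies to objects (the /* resource) while s3:ListBucket applies to the bucket itself, which is why the two statements target different ARNs.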
Configure a feed in Google SecOps to ingest Bitwarden Enterprise event logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Bitwarden Events).
- Select Amazon S3 V2 as the Source type.
- Select Bitwarden events as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://bitwarden-events/bitwarden/events/
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Default 180 Days.
  - Access Key ID: User access key with access to the S3 bucket.
  - Secret Access Key: User secret key with access to the S3 bucket.
  - Asset namespace: the asset namespace.
  - Ingestion labels: the label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM Mapping Table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| actingUserId | target.user.userid | If enriched.actingUser.userId is empty or null, this field is used to populate the target.user.userid field. |
| collectionID | security_result.detection_fields.key | Populates the key field within detection_fields in security_result. |
| collectionID | security_result.detection_fields.value | Populates the value field within detection_fields in security_result. |
| date | metadata.event_timestamp | Parsed and converted to a timestamp format and mapped to event_timestamp. |
| enriched.actingUser.accessAll | security_result.rule_labels.key | Sets the value to "Access_All" within rule_labels in security_result. |
| enriched.actingUser.accessAll | security_result.rule_labels.value | Populates the value field within rule_labels in security_result with the value from enriched.actingUser.accessAll converted to string. |
| enriched.actingUser.email | target.user.email_addresses | Populates the email_addresses field within target.user. |
| enriched.actingUser.id | metadata.product_log_id | Populates the product_log_id field within metadata. |
| enriched.actingUser.id | target.labels.key | Sets the value to "ID" within target.labels. |
| enriched.actingUser.id | target.labels.value | Populates the value field within target.labels with the value from enriched.actingUser.id. |
| enriched.actingUser.name | target.user.user_display_name | Populates the user_display_name field within target.user. |
| enriched.actingUser.object | target.labels.key | Sets the value to "Object" within target.labels. |
| enriched.actingUser.object | target.labels.value | Populates the value field within target.labels with the value from enriched.actingUser.object. |
| enriched.actingUser.resetPasswordEnrolled | target.labels.key | Sets the value to "ResetPasswordEnrolled" within target.labels. |
| enriched.actingUser.resetPasswordEnrolled | target.labels.value | Populates the value field within target.labels with the value from enriched.actingUser.resetPasswordEnrolled converted to string. |
| enriched.actingUser.twoFactorEnabled | security_result.rule_labels.key | Sets the value to "Two Factor Enabled" within rule_labels in security_result. |
| enriched.actingUser.twoFactorEnabled | security_result.rule_labels.value | Populates the value field within rule_labels in security_result with the value from enriched.actingUser.twoFactorEnabled converted to string. |
| enriched.actingUser.userId | target.user.userid | Populates the userid field within target.user. |
| enriched.collection.id | additional.fields.key | Sets the value to "Collection ID" within additional.fields. |
| enriched.collection.id | additional.fields.value.string_value | Populates the string_value field within additional.fields with the value from enriched.collection.id. |
| enriched.collection.object | additional.fields.key | Sets the value to "Collection Object" within additional.fields. |
| enriched.collection.object | additional.fields.value.string_value | Populates the string_value field within additional.fields with the value from enriched.collection.object. |
| enriched.type | metadata.product_event_type | Populates the product_event_type field within metadata. |
| groupId | target.user.group_identifiers | Adds the value to the group_identifiers array within target.user. |
| ipAddress | principal.ip | Extracted IP address from the field and mapped to principal.ip. |
| N/A | extensions.auth | An empty object is created by the parser. |
| N/A | metadata.event_type | Determined based on enriched.type and the presence of principal and target information. Possible values: USER_LOGIN, STATUS_UPDATE, GENERIC_EVENT. |
| N/A | security_result.action | Determined based on enriched.type. Possible values: ALLOW, BLOCK. |
| object | additional.fields.key | Sets the value to "Object" within additional.fields. |
| object | additional.fields.value | Populates the value field within additional.fields with the value from object. |