Collect Tanium audit logs
This document explains how to ingest Tanium audit logs to Google Security Operations using Amazon S3 and Tanium Connect's native S3 export capability. The parser extracts the logs, first clearing numerous default fields. It then parses the log message using grok and the JSON filter, extracting fields such as the timestamp, device IP, and audit details. Finally, it maps these extracted fields to the UDM, handling various data types and applying conditional logic to populate the appropriate UDM fields based on the presence and values of specific Tanium audit log attributes.
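For reference, a single exported audit record might look like the following. This shape is illustrative only: it is assembled from the field names listed in the UDM mapping table later in this document, not copied from a real Tanium Connect export.

```json
{
  "audit_row_id": "12345",
  "audit_name": "modify user",
  "audit_type": "authentication",
  "type_name": "Authentication",
  "creation_time": "2025-01-01 12:00:00",
  "modification_time": "2025-01-01 12:05:00",
  "object_id": "42",
  "object_name": "jdoe",
  "last_modified_by": "admin",
  "modifier_user_id": "7",
  "details": "authentication_type: local, session_id: 789, ip: 198.51.100.10"
}
```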
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance
- Privileged access to Tanium Connect and the Tanium Console
- Privileged access to AWS (S3 and IAM)
Create an Amazon S3 bucket
- Open the Amazon S3 console.
- If required, change the Region: from the navigation bar, select the Region where you want your Tanium audit logs to reside.
- Click Create Bucket.
- Bucket Name: Enter a meaningful name for the bucket (for example, tanium-audit-logs).
- Region: Select your preferred Region (for example, us-east-1).
- Click Create.
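If you prefer to script this step, the following is a minimal sketch using Python and boto3; the bucket name and Region match the examples above and should be adjusted to your environment.

```python
import boto3

region = "us-east-1"
bucket = "tanium-audit-logs"

s3 = boto3.client("s3", region_name=region)

# us-east-1 is the default location and must not be passed as a LocationConstraint
if region == "us-east-1":
    s3.create_bucket(Bucket=bucket)
else:
    s3.create_bucket(
        Bucket=bucket,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
```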
Create an IAM user with full access to Amazon S3
- Open the IAM console.
- Click Users > Add user.
- Enter a user name (for example, tanium-connect-s3-user).
- Select Programmatic access and/or AWS Management Console access as needed.
- Select either Autogenerated passwordor Custom password.
- Click Next: Permissions.
- Choose Attach existing policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next: Tags.
- Click Next: Review.
- Click Create user.
- Copy and save the Access Key ID and Secret Access Key for future reference.
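The same user can be created programmatically. A sketch using boto3, assuming your own credentials have IAM permissions:

```python
import boto3

iam = boto3.client("iam")
user = "tanium-connect-s3-user"

# Create the user that Tanium Connect will authenticate as
iam.create_user(UserName=user)

# Attach the AWS-managed AmazonS3FullAccess policy
iam.attach_user_policy(
    UserName=user,
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# Generate programmatic credentials; store these securely
key = iam.create_access_key(UserName=user)["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])
```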
Configure permissions on Amazon S3 bucket
- In the Amazon S3 console, choose the bucket that you previously created.
- Click Permissions > Bucket policy.
- In the Bucket Policy Editor, add the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Principal": {
            "AWS": "arn:aws:iam::YOUR_ACCOUNT_ID:user/tanium-connect-s3-user"
          },
          "Action": [
            "s3:PutObject",
            "s3:PutObjectAcl",
            "s3:GetObject",
            "s3:ListBucket"
          ],
          "Resource": [
            "arn:aws:s3:::tanium-audit-logs",
            "arn:aws:s3:::tanium-audit-logs/*"
          ]
        }
      ]
    }

- Replace the following variables:
  - Change YOUR_ACCOUNT_ID to your AWS account ID.
  - Change tanium-audit-logs to your actual bucket name if different.
  - Change tanium-connect-s3-user to your actual IAM username if different.
- Click Save.
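Equivalently, the policy can be applied with boto3; a sketch, substituting your account ID and bucket name:

```python
import json
import boto3

account_id = "123456789012"  # replace with your AWS account ID
bucket = "tanium-audit-logs"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": f"arn:aws:iam::{account_id}:user/tanium-connect-s3-user"
            },
            "Action": [
                "s3:PutObject",
                "s3:PutObjectAcl",
                "s3:GetObject",
                "s3:ListBucket",
            ],
            "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
        }
    ],
}

boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```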
Configure Tanium Connect for S3 export
Create an AWS S3 connection in Tanium Connect
- Sign in to the Tanium Console as an administrator.
- Go to Tanium Connect > Connections.
- Click Create Connection.
- In the General Information section, provide the following configuration details:
- Name: Enter a descriptive name (for example, Tanium Audit to S3).
- Description: Enter a meaningful description (for example, Export Tanium audit logs to S3 for Google SecOps ingestion).
- Enable: Select to enable the connection.
- Log Level: Select Information (default) or adjust as needed.
Configure the connection source
- In the Configuration section, for Source, select Tanium Audit.
- Configure the audit source settings:
- Days of History Retrieved: Enter the number of days of historical audit data to retrieve (for example, 7 for one week).
- Audit Types: Select the audit types you want to export. Choose from:
- Action History: Actions issued by console operators.
- Authentication: User authentication events.
- Content: Content changes and modifications.
- Groups: Computer group changes.
- Packages: Package-related activities.
- Sensors: Sensor modifications.
- System Settings: System configuration changes.
- Users: User management activities.
Configure the AWS S3 destination
- For Destination, select AWS S3.
- Provide the following configuration details:
- Destination Name: Enter a name (for example, Google SecOps S3 Bucket).
- AWS Access Key: Enter the Access Key ID from the IAM user created earlier.
- AWS Secret Key: Enter the Secret Access Key from the IAM user created earlier.
- Bucket Name: Enter your S3 bucket name (for example, tanium-audit-logs).
- Bucket Path: Optional. Enter a path prefix (for example, tanium/audit/).
- Region: Select the AWS region where your bucket resides (for example, us-east-1).
Configure the format and schedule
- In the Format section, configure the output format:
- Format Type: Select JSON.
- Include Column Headers: Select if you want column headers included.
- Generate Document: Deselect this option to send raw JSON data.
- In the Schedule section, configure when the connection runs:
- Schedule Type: Select Cron.
- Cron Expression: Enter a cron expression for regular exports (for example, 0 */1 * * * for hourly exports); a quick way to preview run times is shown after these steps.
- Start Date: Set the start date for the schedule.
- Click Save Changes.
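To sanity-check a cron expression before saving, you can preview its upcoming fire times, for example with the third-party croniter package (a sketch, not part of Tanium Connect):

```python
from datetime import datetime
from croniter import croniter  # pip install croniter

# Preview the next three runs of the hourly schedule used above
schedule = croniter("0 */1 * * *", datetime(2025, 1, 1, 0, 30))
for _ in range(3):
    print(schedule.get_next(datetime))
# 2025-01-01 01:00:00, then 02:00:00, then 03:00:00
```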
Test and run the connection
- From the Connect Overview page, go to Connections.
- Click the connection you created (Tanium Audit to S3).
- Click Run Now to test the connection.
- Confirm that you want to run the connection.
- Monitor the connection status and verify that audit logs are being exported to your S3 bucket.
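One way to verify the export is to list the objects under your prefix; a boto3 sketch, assuming the bucket and path used earlier:

```python
import boto3

s3 = boto3.client("s3")
resp = s3.list_objects_v2(Bucket="tanium-audit-logs", Prefix="tanium/audit/")

# Print each exported object with its size and modification time
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"], obj["LastModified"])
```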
Optional: Create read-only IAM user & keys for Google SecOps
- Go to AWS Console > IAM > Users > Add users.
- Provide the following configuration details:
- User: Enter secops-reader.
- Access type: Select Access key – Programmatic access.
- Click Create user.
- Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- In the JSON editor, enter the following policy:

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": ["s3:GetObject"],
          "Resource": "arn:aws:s3:::tanium-audit-logs/*"
        },
        {
          "Effect": "Allow",
          "Action": ["s3:ListBucket"],
          "Resource": "arn:aws:s3:::tanium-audit-logs"
        }
      ]
    }

- Set the name to secops-reader-policy.
- Go to Create policy > search/select > Next > Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV (these values are entered into the feed).
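The read-only user can also be provisioned with boto3; a sketch that mirrors the console steps above:

```python
import json
import boto3

iam = boto3.client("iam")
bucket = "tanium-audit-logs"

# Minimal read-only policy scoped to the export bucket
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": f"arn:aws:s3:::{bucket}/*"},
        {"Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": f"arn:aws:s3:::{bucket}"},
    ],
}

policy = iam.create_policy(
    PolicyName="secops-reader-policy",
    PolicyDocument=json.dumps(policy_doc),
)

iam.create_user(UserName="secops-reader")
iam.attach_user_policy(UserName="secops-reader", PolicyArn=policy["Policy"]["Arn"])

# These values go into the Google SecOps feed configuration
key = iam.create_access_key(UserName="secops-reader")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])
```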
Configure a feed in Google SecOps to ingest Tanium Audit logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Tanium Audit logs).
- Select Amazon S3 V2 as the Source type.
- Select Tanium Audit as the Log type.
- Click Next.
- Specify values for the following input parameters:
- S3 URI: s3://tanium-audit-logs/tanium/audit/ (adjust the path if you used a different bucket name or prefix).
- Source deletion options: Select the deletion option according to your preference.
- Maximum File Age: Include files modified within the last number of days. Default is 180 days.
- Access Key ID: The user access key with access to the S3 bucket (from the read-only user created earlier).
- Secret Access Key: The user secret key with access to the S3 bucket (from the read-only user created earlier).
- Asset namespace: The asset namespace.
- Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
UDM Mapping Table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| ActionId | metadata.product_log_id | Directly mapped from the ActionId field. |
| ActionName | security_result.action_details | Directly mapped from the ActionName field. |
| Approver | additional.fields[Approver].value.string_value | Directly mapped from the Approver field. |
| Approver | principal.user.userid | Mapped from the Approver field if Issuer is not present. |
| audit_name | metadata.description | Directly mapped from the audit_name field. |
| audit_row_id | additional.fields[audit_row_id].value.string_value | Directly mapped from the audit_row_id field. |
| audit_type | additional.fields[audit_type].value.string_value | Directly mapped from the audit_type field. |
| authentication_type | principal.user.attribute.labels[authentication_type].value | Directly mapped from the authentication_type field extracted from the details field. |
| Command | principal.process.command_line | Directly mapped from the Command field after URL decoding. |
| creation_time | target.resource.attribute.creation_time | Directly mapped from the creation_time field. |
| details | network.session_id | Extracted from the details field using key-value parsing. |
| details | principal.user.attribute.labels[authentication_type].value | Extracted from the details field using key-value parsing. |
| details | principal.asset.ip, principal.ip | The IP address is extracted from the details field using key-value parsing and mapped to both principal.asset.ip and principal.ip. |
| DistributeOver | additional.fields[DistributeOver].value.string_value | Directly mapped from the DistributeOver field. |
| dvc_ip | intermediary.hostname | Directly mapped from the dvc_ip field extracted from the syslog message. |
| dvc_ip | observer.ip | Directly mapped from the dvc_ip field if logstash.collect.host is not present. |
| Expiration | additional.fields[Expiration].value.string_value | Directly mapped from the Expiration field. |
| host.architecture | target.asset.hardware.cpu_platform | Directly mapped from the host.architecture field. |
| host.id | target.asset.asset_id | Directly mapped from the host.id field, prefixed with "Host ID:". |
| host.ip | target.ip | Directly mapped from the host.ip field. |
| host.mac | target.mac | Directly mapped from the host.mac field. |
| host.name | target.hostname | Directly mapped from the host.name field if host.hostname is not present. |
| host.os.kernel | target.platform_patch_level | Directly mapped from the host.os.kernel field. |
| host.os.name | additional.fields[os_name].value.string_value | Directly mapped from the host.os.name field. |
| host.os.version | target.platform_version | Directly mapped from the host.os.version field. |
| InsertTime | additional.fields[InsertTime].value.string_value | Directly mapped from the InsertTime field. |
| Issuer | additional.fields[Issuer].value.string_value | Directly mapped from the Issuer field. |
| Issuer | principal.user.userid | Directly mapped from the Issuer field if present. |
| last_modified_by | principal.resource.attribute.labels[last_modified_by].value | Directly mapped from the last_modified_by field. |
| log.source.address | principal.ip | The IP address is extracted from the log.source.address field and mapped to principal.ip. |
| log.source.address | principal.port | The port is extracted from the log.source.address field. |
| logstash.collect.host | observer.ip | Directly mapped from the logstash.collect.host field if present. |
| logstash.collect.timestamp | metadata.collected_timestamp | Directly mapped from the logstash.collect.timestamp field. |
| logstash.ingest.timestamp | metadata.ingested_timestamp | Directly mapped from the logstash.ingest.timestamp field. |
| logstash.irm_environment | additional.fields[irm_environment].value.string_value | Directly mapped from the logstash.irm_environment field. |
| logstash.irm_region | additional.fields[irm_region].value.string_value | Directly mapped from the logstash.irm_region field. |
| logstash.irm_site | additional.fields[irm_site].value.string_value | Directly mapped from the logstash.irm_site field. |
| logstash.process.host | intermediary.hostname | Directly mapped from the logstash.process.host field. |
| message | dvc_ip, json_data, timestamp | Parsed using grok to extract dvc_ip, json_data, and timestamp. |
| modification_time | target.resource.attribute.last_update_time | Directly mapped from the modification_time field. |
| modifier_user_id | principal.resource.attribute.labels[modifier_user_id].value | Directly mapped from the modifier_user_id field. |
| object_id | target.resource.product_object_id | Directly mapped from the object_id field. |
| object_name | target.resource.name | Directly mapped from the object_name field. |
| object_type_name | target.resource.attribute.labels[object_type_name].value | Directly mapped from the object_type_name field. |
| PackageName | additional.fields[PackageName].value.string_value | Directly mapped from the PackageName field. |
| SourceId | additional.fields[SourceId].value.string_value | Directly mapped from the SourceId field. |
| StartTime | additional.fields[StartTime].value.string_value | Directly mapped from the StartTime field. |
| Status | security_result.action | Mapped to "BLOCK" if Status is "Closed", "ALLOW" if Status is "Open". |
| Status | security_result.summary | Directly mapped from the Status field. |
| tanium_audit_type | metadata.product_event_type | Directly mapped from the tanium_audit_type field. |
| timestamp | metadata.event_timestamp | Directly mapped from the timestamp field extracted from the syslog message or message field. |
| type | additional.fields[type].value.string_value | Directly mapped from the type field. |
| type_name | metadata.product_event_type | Directly mapped from the type_name field. |
| User | principal.user.userid | Directly mapped from the User field. |
| | metadata.event_type | Determined by parser logic based on the presence of src_ip, has_target, and has_user. Can be "NETWORK_CONNECTION", "USER_RESOURCE_ACCESS", "STATUS_UPDATE", or "GENERIC_EVENT". |
| | metadata.log_type | Hardcoded to "TANIUM_AUDIT". |
| | metadata.product_name | Hardcoded to "cybersecurity". |
| | metadata.vendor_name | Hardcoded to "TANIUM_AUDIT". |
| @version | metadata.product_version | Directly mapped from the @version field. |
| agent.ephemeral_id | additional.fields[ephemeral_id].value.string_value | Directly mapped from the agent.ephemeral_id field. |
| agent.id | observer.asset_id | Directly mapped from the agent.id field, prefixed with "filebeat:". |
| agent.type | observer.application | Directly mapped from the agent.type field. |
| agent.version | observer.platform_version | Directly mapped from the agent.version field. |
| Comment | security_result.description | Directly mapped from the Comment field. |
| host.hostname | target.hostname | Directly mapped from the host.hostname field if present. |
| input.type | network.ip_protocol | Mapped to "TCP" if input.type is "tcp" or "TCP". |
| syslog_severity | security_result.severity | Mapped to "HIGH" if syslog_severity is "error" or "warning", "MEDIUM" if "notice", "LOW" if "information" or "info". |
| syslog_severity | security_result.severity_details | Directly mapped from the syslog_severity field. |
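To make the conditional rows above concrete, here is a small Python sketch of two of the mappings. It illustrates the documented logic only; it is not the actual Google SecOps parser code.

```python
def map_security_action(status: str | None) -> str | None:
    # Status -> security_result.action: "Closed" maps to BLOCK, "Open" to ALLOW
    return {"Closed": "BLOCK", "Open": "ALLOW"}.get(status)

def map_principal_userid(record: dict) -> str | None:
    # Issuer -> principal.user.userid when present; otherwise fall back
    # to Approver (one plausible reading of the precedence in the table)
    return record.get("Issuer") or record.get("Approver")
```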
Need more help? Get answers from Community members and Google SecOps professionals.