Collect Citrix Analytics logs
This document explains how to ingest Citrix Analytics logs to Google Security Operations using Amazon S3.
Before you begin
Make sure you have the following prerequisites:
- Google SecOps instance
- Privileged access to the Citrix Analytics for Performance tenant
- Privileged access to AWS (S3, IAM, Lambda, EventBridge)
Collect Citrix Analytics prerequisites
- Sign in to the Citrix Cloud Console.
- Go to Identity and Access Management > API Access.
- Click Create Client.
- Copy and save the following details in a secure location:
  - Client ID
  - Client Secret
  - Customer ID (located in the Citrix Cloud URL or on the IAM page)
  - API Base URL: https://api.cloud.com/casodata
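Optionally, you can confirm that these credentials work before building the AWS side. The following is a minimal verification sketch, not part of the official procedure; it requests a token from the same endpoint the Lambda function later in this guide uses, with placeholder values for the IDs you saved above.

```python
import json
import urllib.parse
import urllib.request

# Placeholders: replace with the values saved from the Citrix Cloud console.
CUSTOMER_ID = "your-customer-id"
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

# Same token endpoint the Lambda function below relies on.
url = f"https://api.cloud.com/cctrustoauth2/{CUSTOMER_ID}/tokens/clients"
data = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": CLIENT_ID,
    "client_secret": CLIENT_SECRET,
}).encode("utf-8")

req = urllib.request.Request(url, data=data, headers={
    "Accept": "application/json",
    "Content-Type": "application/x-www-form-urlencoded",
})
with urllib.request.urlopen(req, timeout=30) as resp:
    token = json.loads(resp.read().decode("utf-8"))["access_token"]

# A token printed here means the client credentials are valid.
print("Token received, first 20 chars:", token[:20])
```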
Configure AWS S3 bucket and IAM for Google SecOps
- Create an Amazon S3 bucket following this user guide: Creating a bucket (a scripted alternative is sketched after these steps).
- Save the bucket Name and Region for future reference (for example, citrix-analytics-logs).
- Create a user following this user guide: Creating an IAM user.
- Select the created User.
- Select the Security credentials tab.
- Click Create Access Key in the Access Keys section.
- Select Third-party service as the Use case.
- Click Next.
- Optional: add a description tag.
- Click Create access key.
- Click Download CSV file to save the Access Key and Secret Access Key for later use.
- Click Done.
- Select the Permissions tab.
- Click Add permissions in the Permissions policies section.
- Select Add permissions.
- Select Attach policies directly.
- Search for and select the AmazonS3FullAccess policy.
- Click Next.
- Click Add permissions.
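If you prefer to script the bucket setup above, the following is a minimal boto3 sketch under these assumptions: the bucket name is the example from this guide, the region is a placeholder you must replace, and your AWS credentials are already configured locally.

```python
import boto3

BUCKET = "citrix-analytics-logs"  # example bucket name from this guide
REGION = "us-east-1"              # assumption: replace with your region

s3 = boto3.client("s3", region_name=REGION)

# us-east-1 is the only region that rejects an explicit LocationConstraint.
if REGION == "us-east-1":
    s3.create_bucket(Bucket=BUCKET)
else:
    s3.create_bucket(
        Bucket=BUCKET,
        CreateBucketConfiguration={"LocationConstraint": REGION},
    )

# Verify write access with a throwaway object, then remove it.
s3.put_object(Bucket=BUCKET, Key="citrix_analytics/.write-test", Body=b"ok")
s3.delete_object(Bucket=BUCKET, Key="citrix_analytics/.write-test")
print("Bucket ready:", BUCKET)
```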
Configure the IAM policy and role for S3 uploads
- In the AWS console, go to IAM > Policies > Create policy > JSON tab.
- Enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPutObjects",
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::citrix-analytics-logs/*"
    },
    {
      "Sid": "AllowGetStateObject",
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::citrix-analytics-logs/citrix_analytics/state.json"
    }
  ]
}
```

- Replace citrix-analytics-logs if you entered a different bucket name.
- Replace citrix_analytics/state.json if you use a different prefix or state-file key.
- Click Next > Create policy.
- Go to IAM > Roles > Create role > AWS service > Lambda.
- Attach the newly created policy.
- Name the role CitrixAnalyticsLambdaRole and click Create role.
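As an alternative to the console steps above, the role can be created with boto3. This is a sketch under two assumptions: the policy JSON above has been saved locally as lambda-policy.json, and the policy name CitrixAnalyticsS3Upload is a hypothetical choice, not prescribed by this guide.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy allowing the Lambda service to assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "lambda.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

role = iam.create_role(
    RoleName="CitrixAnalyticsLambdaRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Create the upload policy from the JSON above (assumed saved locally).
with open("lambda-policy.json") as f:
    policy = iam.create_policy(
        PolicyName="CitrixAnalyticsS3Upload",  # hypothetical policy name
        PolicyDocument=f.read(),
    )

iam.attach_role_policy(
    RoleName="CitrixAnalyticsLambdaRole",
    PolicyArn=policy["Policy"]["Arn"],
)
print("Role ARN:", role["Role"]["Arn"])
```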
Create the Lambda function
- In the AWS Console, go to Lambda > Functions > Create function.
- Click Author from scratch.
- Provide the following configuration details:

| Setting | Value |
| --- | --- |
| Name | CitrixAnalyticsCollector |
| Runtime | Python 3.13 |
| Architecture | x86_64 |
| Execution role | CitrixAnalyticsLambdaRole |
- After the function is created, open the Code tab, delete the stub, and enter the following code (CitrixAnalyticsCollector.py):

```python
import os
import json
import uuid
import datetime
import urllib.parse
import urllib.request

import boto3
import botocore

CITRIX_TOKEN_URL_TMPL = "https://api.cloud.com/cctrustoauth2/{customerid}/tokens/clients"
DEFAULT_API_BASE = "https://api.cloud.com/casodata"

s3 = boto3.client("s3")


def _http_post_form(url, data_dict):
    """POST form data to get authentication token."""
    data = urllib.parse.urlencode(data_dict).encode("utf-8")
    req = urllib.request.Request(url, data=data, headers={
        "Accept": "application/json",
        "Content-Type": "application/x-www-form-urlencoded",
    })
    with urllib.request.urlopen(req, timeout=30) as response:
        return json.loads(response.read().decode("utf-8"))


def _http_get_json(url, headers):
    """GET JSON data from API endpoint."""
    req = urllib.request.Request(url, headers=headers)
    with urllib.request.urlopen(req, timeout=60) as response:
        return json.loads(response.read().decode("utf-8"))


def get_citrix_token(customer_id, client_id, client_secret):
    """Get Citrix Cloud authentication token."""
    url = CITRIX_TOKEN_URL_TMPL.format(customerid=customer_id)
    payload = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    token_response = _http_post_form(url, payload)
    return token_response["access_token"]


def fetch_odata_entity(entity, when_utc, top, headers, api_base):
    """Fetch data from Citrix Analytics OData API with pagination."""
    year = when_utc.year
    month = when_utc.month
    day = when_utc.day
    hour = when_utc.hour

    base_url = (
        f"{api_base.rstrip('/')}/{entity}"
        f"?year={year:04d}&month={month:02d}&day={day:02d}&hour={hour:02d}"
    )

    skip = 0
    while True:
        url = f"{base_url}&$top={top}&$skip={skip}"
        data = _http_get_json(url, headers)
        items = data.get("value", [])
        if not items:
            break
        for item in items:
            yield item
        if len(items) < top:
            break
        skip += top


def read_state_file(bucket, state_key):
    """Read the last processed timestamp from S3 state file."""
    try:
        obj = s3.get_object(Bucket=bucket, Key=state_key)
        content = obj["Body"].read().decode("utf-8")
        state = json.loads(content)
        timestamp_str = state.get("last_hour_utc")
        if timestamp_str:
            return datetime.datetime.fromisoformat(
                timestamp_str.replace("Z", "+00:00")
            ).replace(tzinfo=None)
    except botocore.exceptions.ClientError as e:
        if e.response["Error"]["Code"] == "NoSuchKey":
            return None
        raise
    return None


def write_state_file(bucket, state_key, dt_utc):
    """Write the current processed timestamp to S3 state file."""
    state_data = {"last_hour_utc": dt_utc.isoformat() + "Z"}
    s3.put_object(
        Bucket=bucket,
        Key=state_key,
        Body=json.dumps(state_data, separators=(",", ":")),
        ContentType="application/json",
    )


def write_ndjson_to_s3(bucket, key, records):
    """Write records as NDJSON to S3."""
    body_lines = []
    for record in records:
        json_line = json.dumps(record, separators=(",", ":"), ensure_ascii=False)
        body_lines.append(json_line)
    body = ("\n".join(body_lines) + "\n").encode("utf-8")
    s3.put_object(Bucket=bucket, Key=key, Body=body, ContentType="application/x-ndjson")


def lambda_handler(event, context):
    """Main Lambda handler function."""
    # Environment variables
    bucket = os.environ["S3_BUCKET"]
    prefix = os.environ.get("S3_PREFIX", "").strip("/")
    state_key = os.environ.get("STATE_KEY") or f"{prefix}/state.json"
    customer_id = os.environ["CITRIX_CUSTOMER_ID"]
    client_id = os.environ["CITRIX_CLIENT_ID"]
    client_secret = os.environ["CITRIX_CLIENT_SECRET"]
    api_base = os.environ.get("API_BASE", DEFAULT_API_BASE)
    entities = [
        e.strip()
        for e in os.environ.get("ENTITIES", "sessions,machines,users").split(",")
        if e.strip()
    ]
    top_n = int(os.environ.get("TOP_N", "1000"))
    lookback_minutes = int(os.environ.get("LOOKBACK_MINUTES", "75"))

    # Determine target hour to collect
    now = datetime.datetime.utcnow()
    fallback_target = (now - datetime.timedelta(minutes=lookback_minutes)).replace(
        minute=0, second=0, microsecond=0
    )
    last_processed = read_state_file(bucket, state_key)
    if last_processed:
        target_hour = last_processed + datetime.timedelta(hours=1)
    else:
        target_hour = fallback_target

    # Get authentication token
    token = get_citrix_token(customer_id, client_id, client_secret)
    headers = {
        "Authorization": f"CwsAuth bearer={token}",
        "Citrix-CustomerId": customer_id,
        "Accept": "application/json",
        "Content-Type": "application/json",
    }

    total_records = 0

    # Process each entity type
    for entity in entities:
        records = []
        for row in fetch_odata_entity(entity, target_hour, top_n, headers, api_base):
            enriched_record = {
                "citrix_entity": entity,
                "citrix_hour_utc": target_hour.isoformat() + "Z",
                "collection_timestamp": datetime.datetime.utcnow().isoformat() + "Z",
                "raw": row,
            }
            records.append(enriched_record)

            # Write in batches to avoid memory issues
            if len(records) >= 1000:
                s3_key = (
                    f"{prefix}/{entity}/year={target_hour.year:04d}"
                    f"/month={target_hour.month:02d}/day={target_hour.day:02d}"
                    f"/hour={target_hour.hour:02d}/part-{uuid.uuid4().hex}.ndjson"
                )
                write_ndjson_to_s3(bucket, s3_key, records)
                total_records += len(records)
                records = []

        # Write remaining records
        if records:
            s3_key = (
                f"{prefix}/{entity}/year={target_hour.year:04d}"
                f"/month={target_hour.month:02d}/day={target_hour.day:02d}"
                f"/hour={target_hour.hour:02d}/part-{uuid.uuid4().hex}.ndjson"
            )
            write_ndjson_to_s3(bucket, s3_key, records)
            total_records += len(records)

    # Update state file
    write_state_file(bucket, state_key, target_hour)

    return {
        "statusCode": 200,
        "body": json.dumps({
            "success": True,
            "hour_collected": target_hour.isoformat() + "Z",
            "records_written": total_records,
            "entities_processed": entities,
        }),
    }
```
- Go to Configuration > Environment variables > Edit > Add new environment variable.
- Enter the following environment variables, replacing the example values with your own:

| Key | Example value |
| --- | --- |
| S3_BUCKET | citrix-analytics-logs |
| S3_PREFIX | citrix_analytics |
| STATE_KEY | citrix_analytics/state.json |
| CITRIX_CLIENT_ID | your-client-id |
| CITRIX_CLIENT_SECRET | your-client-secret |
| API_BASE | https://api.cloud.com/casodata |
| CITRIX_CUSTOMER_ID | your-customer-id |
| ENTITIES | sessions,machines,users |
| TOP_N | 1000 |
| LOOKBACK_MINUTES | 75 |
- After the function is created, stay on its page (or open Lambda > Functions > CitrixAnalyticsCollector).
- Select the Configuration tab.
- In the General configuration panel, click Edit.
- Change Timeout to 5 minutes (300 seconds) and click Save.
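Optionally, before scheduling the function, you can invoke it once to confirm that objects appear in the bucket. A minimal sketch using boto3; the function name is the one created above, and your AWS credentials are assumed to be configured locally.

```python
import json
import boto3

lambda_client = boto3.client("lambda")

# Synchronously invoke the collector once with an empty test event.
response = lambda_client.invoke(
    FunctionName="CitrixAnalyticsCollector",
    InvocationType="RequestResponse",
    Payload=b"{}",
)

# The handler returns hour_collected and records_written on success.
result = json.loads(response["Payload"].read())
print(json.dumps(result, indent=2))
```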
Create an EventBridge schedule
- Go to Amazon EventBridge > Scheduler > Create schedule.
- Provide the following configuration details:
  - Recurring schedule: Rate (1 hour)
  - Target: your Lambda function CitrixAnalyticsCollector
  - Name: CitrixAnalyticsCollector-1h
- Click Create schedule.
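The same schedule can be created with boto3, shown below as a sketch rather than a definitive procedure. Two assumptions: LAMBDA_ARN is the ARN of your collector function, and SCHEDULER_ROLE_ARN is an existing role that scheduler.amazonaws.com can assume and that is allowed to call lambda:InvokeFunction on it (the console creates this role for you; the CLI path does not).

```python
import boto3

scheduler = boto3.client("scheduler")

# Placeholders: replace with your account's actual ARNs.
LAMBDA_ARN = "arn:aws:lambda:us-east-1:123456789012:function:CitrixAnalyticsCollector"
SCHEDULER_ROLE_ARN = "arn:aws:iam::123456789012:role/CitrixAnalyticsSchedulerRole"

# Hourly schedule matching the console configuration above.
scheduler.create_schedule(
    Name="CitrixAnalyticsCollector-1h",
    ScheduleExpression="rate(1 hour)",
    FlexibleTimeWindow={"Mode": "OFF"},
    Target={
        "Arn": LAMBDA_ARN,
        "RoleArn": SCHEDULER_ROLE_ARN,
    },
)
print("Schedule created")
```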
Optional: Create read-only IAM user & keys for Google SecOps
- In the AWS console, go to IAM > Users.
- Click Add users.
- Provide the following configuration details:
  - User: secops-reader
  - Access type: Access key — Programmatic access
- Click Create user.
- Attach a minimal read policy (custom): go to Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
- In the JSON editor, enter the following policy:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::citrix-analytics-logs/*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::citrix-analytics-logs"
    }
  ]
}
```
- Set the name to secops-reader-policy.
- Click Create policy, then return to the user, search for and select secops-reader-policy, click Next, and click Add permissions.
- Go to Security credentials > Access keys > Create access key.
- Download the CSV file (these values are entered into the feed).
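To sanity-check the downloaded keys before configuring the feed, you can list the bucket with them directly. A minimal sketch; the key values are placeholders for the ones in your CSV, and the bucket name is the example from this guide.

```python
import boto3

# Credentials from the downloaded CSV (placeholders).
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...your-access-key-id",
    aws_secret_access_key="your-secret-access-key",
)

# The reader policy allows ListBucket on the bucket and GetObject on its keys.
resp = s3.list_objects_v2(
    Bucket="citrix-analytics-logs",
    Prefix="citrix_analytics/",
    MaxKeys=5,
)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```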
Configure a feed in Google SecOps to ingest Citrix Analytics logs
- Go to SIEM Settings > Feeds.
- Click + Add New Feed.
- In the Feed name field, enter a name for the feed (for example, Citrix Analytics Performance logs).
- Select Amazon S3 V2 as the Source type.
- Select Citrix Analytics as the Log type.
- Click Next.
- Specify values for the following input parameters:
  - S3 URI: s3://citrix-analytics-logs/citrix_analytics/
  - Source deletion options: Select the deletion option according to your preference.
  - Maximum File Age: Include files modified within the last number of days. Default is 180 days.
  - Access Key ID: User access key with access to the S3 bucket.
  - Secret Access Key: User secret key with access to the S3 bucket.
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.

