Collect Citrix Analytics logs

This document explains how to ingest Citrix Analytics logs to Google Security Operations using Amazon S3.

Before you begin

Make sure you have the following prerequisites:

  • Google SecOps instance
  • Privileged access to a Citrix Analytics for Performance tenant
  • Privileged access to AWS (S3, IAM, Lambda, EventBridge)

Collect Citrix Analytics prerequisites

  1. Sign in to the Citrix Cloud Console.
  2. Go to Identity and Access Management > API Access.
  3. Click Create Client.
  4. Copy and save in a secure location the following details:
    • Client ID
    • Client Secret
    • Customer ID (located in the Citrix Cloud URL or the IAM page)
    • API Base URL: https://api.cloud.com/casodata
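Before moving on to AWS, it can help to confirm how these saved values combine into the token request the collector makes later. A minimal sketch (the customer ID acme123 and the client values below are placeholders, not real credentials):

```python
# Hypothetical sketch: build the Citrix Cloud client_credentials token request
# from the values saved above. Placeholder values only.
CITRIX_TOKEN_URL_TMPL = "https://api.cloud.com/cctrustoauth2/{customerid}/tokens/clients"

def build_token_request(customer_id, client_id, client_secret):
    """Return the token endpoint URL and form payload for a
    client_credentials grant against Citrix Cloud."""
    url = CITRIX_TOKEN_URL_TMPL.format(customerid=customer_id)
    payload = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    }
    return url, payload

url, payload = build_token_request("acme123", "example-client-id", "example-secret")
print(url)  # https://api.cloud.com/cctrustoauth2/acme123/tokens/clients
```

The customer ID is part of the token URL path, while the client ID and secret travel in the form body.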

Configure AWS S3 bucket and IAM for Google SecOps

  1. Create an Amazon S3 bucket following this user guide: Creating a bucket.
  2. Save the bucket Name and Region for future reference (for example, citrix-analytics-logs).
  3. Create a user following this user guide: Creating an IAM user.
  4. Select the created User.
  5. Select the Security credentials tab.
  6. Click Create Access Key in the Access Keys section.
  7. Select Third-party service as the Use case.
  8. Click Next.
  9. Optional: add a description tag.
  10. Click Create access key.
  11. Click Download CSV file to save the Access Key and Secret Access Key for later use.
  12. Click Done.
  13. Select the Permissions tab.
  14. Click Add permissions in the Permissions policies section.
  15. Select Add permissions.
  16. Select Attach policies directly.
  17. Search for and select the AmazonS3FullAccess policy.
  18. Click Next.
  19. Click Add permissions.

Configure the IAM policy and role for S3 uploads

  1. In the AWS console, go to IAM > Policies > Create policy > JSON tab.
  2. Enter the following policy:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "AllowPutObjects",
            "Effect": "Allow",
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::citrix-analytics-logs/*"
          },
          {
            "Sid": "AllowGetStateObject",
            "Effect": "Allow",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::citrix-analytics-logs/citrix_analytics/state.json"
          }
        ]
      }

    • Replace citrix-analytics-logs if you entered a different bucket name.
  3. Click Next > Create policy.

  4. Go to IAM > Roles > Create role > AWS service > Lambda.

  5. Attach the newly created policy.

  6. Name the role CitrixAnalyticsLambdaRole and click Create role.
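Copy/paste into the JSON tab can pick up smart quotes or stray commas that the IAM console rejects. A quick local sanity check of the policy document above (illustrative only; the policy string mirrors the one in step 2):

```python
import json

# Sanity-check the IAM policy JSON before pasting it into the console.
POLICY = """
{
  "Version": "2012-10-17",
  "Statement": [
    {"Sid": "AllowPutObjects", "Effect": "Allow",
     "Action": "s3:PutObject",
     "Resource": "arn:aws:s3:::citrix-analytics-logs/*"},
    {"Sid": "AllowGetStateObject", "Effect": "Allow",
     "Action": "s3:GetObject",
     "Resource": "arn:aws:s3:::citrix-analytics-logs/citrix_analytics/state.json"}
  ]
}
"""

doc = json.loads(POLICY)  # raises ValueError if the JSON is malformed
actions = {statement["Action"] for statement in doc["Statement"]}
print(sorted(actions))  # ['s3:GetObject', 's3:PutObject']
```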

Create the Lambda function

  1. In the AWS Console, go to Lambda > Functions > Create function.
  2. Click Author from scratch.
  3. Provide the following configuration details:

    Setting Value
    Name CitrixAnalyticsCollector
    Runtime Python 3.13
    Architecture x86_64
    Execution role CitrixAnalyticsLambdaRole
  4. After the function is created, open the Code tab, delete the stub, and enter the following code (CitrixAnalyticsCollector.py):

      import os
      import json
      import uuid
      import datetime
      import urllib.parse
      import urllib.request

      import boto3
      import botocore

      CITRIX_TOKEN_URL_TMPL = "https://api.cloud.com/cctrustoauth2/{customerid}/tokens/clients"
      DEFAULT_API_BASE = "https://api.cloud.com/casodata"

      s3 = boto3.client("s3")


      def _http_post_form(url, data_dict):
          """POST form data to get authentication token."""
          data = urllib.parse.urlencode(data_dict).encode("utf-8")
          req = urllib.request.Request(
              url,
              data=data,
              headers={
                  "Accept": "application/json",
                  "Content-Type": "application/x-www-form-urlencoded",
              },
          )
          with urllib.request.urlopen(req, timeout=30) as response:
              return json.loads(response.read().decode("utf-8"))


      def _http_get_json(url, headers):
          """GET JSON data from API endpoint."""
          req = urllib.request.Request(url, headers=headers)
          with urllib.request.urlopen(req, timeout=60) as response:
              return json.loads(response.read().decode("utf-8"))


      def get_citrix_token(customer_id, client_id, client_secret):
          """Get Citrix Cloud authentication token."""
          url = CITRIX_TOKEN_URL_TMPL.format(customerid=customer_id)
          payload = {
              "grant_type": "client_credentials",
              "client_id": client_id,
              "client_secret": client_secret,
          }
          token_response = _http_post_form(url, payload)
          return token_response["access_token"]


      def fetch_odata_entity(entity, when_utc, top, headers, api_base):
          """Fetch data from Citrix Analytics OData API with pagination."""
          year = when_utc.year
          month = when_utc.month
          day = when_utc.day
          hour = when_utc.hour
          base_url = (
              f"{api_base.rstrip('/')}/{entity}"
              f"?year={year:04d}&month={month:02d}&day={day:02d}&hour={hour:02d}"
          )
          skip = 0
          while True:
              url = f"{base_url}&$top={top}&$skip={skip}"
              data = _http_get_json(url, headers)
              items = data.get("value", [])
              if not items:
                  break
              for item in items:
                  yield item
              if len(items) < top:
                  break
              skip += top


      def read_state_file(bucket, state_key):
          """Read the last processed timestamp from S3 state file."""
          try:
              obj = s3.get_object(Bucket=bucket, Key=state_key)
              content = obj["Body"].read().decode("utf-8")
              state = json.loads(content)
              timestamp_str = state.get("last_hour_utc")
              if timestamp_str:
                  return datetime.datetime.fromisoformat(
                      timestamp_str.replace("Z", "+00:00")
                  ).replace(tzinfo=None)
          except botocore.exceptions.ClientError as e:
              if e.response["Error"]["Code"] == "NoSuchKey":
                  return None
              raise
          return None


      def write_state_file(bucket, state_key, dt_utc):
          """Write the current processed timestamp to S3 state file."""
          state_data = {"last_hour_utc": dt_utc.isoformat() + "Z"}
          s3.put_object(
              Bucket=bucket,
              Key=state_key,
              Body=json.dumps(state_data, separators=(",", ":")),
              ContentType="application/json",
          )


      def write_ndjson_to_s3(bucket, key, records):
          """Write records as NDJSON to S3."""
          body_lines = []
          for record in records:
              json_line = json.dumps(record, separators=(",", ":"), ensure_ascii=False)
              body_lines.append(json_line)
          body = ("\n".join(body_lines) + "\n").encode("utf-8")
          s3.put_object(
              Bucket=bucket,
              Key=key,
              Body=body,
              ContentType="application/x-ndjson",
          )


      def lambda_handler(event, context):
          """Main Lambda handler function."""
          # Environment variables
          bucket = os.environ["S3_BUCKET"]
          prefix = os.environ.get("S3_PREFIX", "").strip("/")
          state_key = os.environ.get("STATE_KEY") or f"{prefix}/state.json"
          customer_id = os.environ["CITRIX_CUSTOMER_ID"]
          client_id = os.environ["CITRIX_CLIENT_ID"]
          client_secret = os.environ["CITRIX_CLIENT_SECRET"]
          api_base = os.environ.get("API_BASE", DEFAULT_API_BASE)
          entities = [
              e.strip()
              for e in os.environ.get("ENTITIES", "sessions,machines,users").split(",")
              if e.strip()
          ]
          top_n = int(os.environ.get("TOP_N", "1000"))
          lookback_minutes = int(os.environ.get("LOOKBACK_MINUTES", "75"))

          # Determine target hour to collect
          now = datetime.datetime.utcnow()
          fallback_target = (now - datetime.timedelta(minutes=lookback_minutes)).replace(
              minute=0, second=0, microsecond=0
          )
          last_processed = read_state_file(bucket, state_key)
          if last_processed:
              target_hour = last_processed + datetime.timedelta(hours=1)
          else:
              target_hour = fallback_target

          # Get authentication token
          token = get_citrix_token(customer_id, client_id, client_secret)
          headers = {
              "Authorization": f"CwsAuth bearer={token}",
              "Citrix-CustomerId": customer_id,
              "Accept": "application/json",
              "Content-Type": "application/json",
          }

          total_records = 0

          # Process each entity type
          for entity in entities:
              records = []
              for row in fetch_odata_entity(entity, target_hour, top_n, headers, api_base):
                  enriched_record = {
                      "citrix_entity": entity,
                      "citrix_hour_utc": target_hour.isoformat() + "Z",
                      "collection_timestamp": datetime.datetime.utcnow().isoformat() + "Z",
                      "raw": row,
                  }
                  records.append(enriched_record)

                  # Write in batches to avoid memory issues
                  if len(records) >= 1000:
                      s3_key = (
                          f"{prefix}/{entity}/year={target_hour.year:04d}"
                          f"/month={target_hour.month:02d}/day={target_hour.day:02d}"
                          f"/hour={target_hour.hour:02d}/part-{uuid.uuid4().hex}.ndjson"
                      )
                      write_ndjson_to_s3(bucket, s3_key, records)
                      total_records += len(records)
                      records = []

              # Write remaining records
              if records:
                  s3_key = (
                      f"{prefix}/{entity}/year={target_hour.year:04d}"
                      f"/month={target_hour.month:02d}/day={target_hour.day:02d}"
                      f"/hour={target_hour.hour:02d}/part-{uuid.uuid4().hex}.ndjson"
                  )
                  write_ndjson_to_s3(bucket, s3_key, records)
                  total_records += len(records)

          # Update state file
          write_state_file(bucket, state_key, target_hour)

          return {
              "statusCode": 200,
              "body": json.dumps({
                  "success": True,
                  "hour_collected": target_hour.isoformat() + "Z",
                  "records_written": total_records,
                  "entities_processed": entities,
              }),
          }

  5. Go to Configuration > Environment variables > Edit > Add new environment variable.

  6. Enter the following environment variables, replacing with your values:

    Key Example value
    S3_BUCKET citrix-analytics-logs
    S3_PREFIX citrix_analytics
    STATE_KEY citrix_analytics/state.json
    CITRIX_CLIENT_ID your-client-id
    CITRIX_CLIENT_SECRET your-client-secret
    API_BASE https://api.cloud.com/casodata
    CITRIX_CUSTOMER_ID your-customer-id
    ENTITIES sessions,machines,users
    TOP_N 1000
    LOOKBACK_MINUTES 75
  7. After the function is created, stay on its page (or open Lambda > Functions > CitrixAnalyticsCollector).

  8. Select the Configuration tab.

  9. In the General configuration panel, click Edit.

  10. Change Timeout to 5 minutes (300 seconds) and click Save.
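The environment variables above drive where the collector writes in S3. The sketch below mirrors how the code derives its state key and hourly partitioned object keys from S3_PREFIX and STATE_KEY (illustrative only, not the deployed function; derive_keys is a hypothetical helper):

```python
import datetime
import uuid

def derive_keys(env, entity, target_hour):
    """Illustrative: how the collector builds its S3 state key and
    hourly partitioned data keys from the environment variables."""
    prefix = env.get("S3_PREFIX", "").strip("/")
    state_key = env.get("STATE_KEY") or f"{prefix}/state.json"
    data_key = (
        f"{prefix}/{entity}/year={target_hour.year:04d}"
        f"/month={target_hour.month:02d}/day={target_hour.day:02d}"
        f"/hour={target_hour.hour:02d}/part-{uuid.uuid4().hex}.ndjson"
    )
    return state_key, data_key

hour = datetime.datetime(2025, 7, 1, 13)
state_key, data_key = derive_keys({"S3_PREFIX": "citrix_analytics"}, "sessions", hour)
print(state_key)  # citrix_analytics/state.json
```

With these defaults, data for 13:00 UTC on 2025-07-01 lands under citrix_analytics/sessions/year=2025/month=07/day=01/hour=13/, which is the layout the feed's S3 URI prefix matches.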

Create an EventBridge schedule

  1. Go to Amazon EventBridge > Scheduler > Create schedule.
  2. Provide the following configuration details:
    • Recurring schedule: Rate (1 hour)
    • Target: your Lambda function CitrixAnalyticsCollector
    • Name: CitrixAnalyticsCollector-1h
  3. Click Create schedule.
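Each hourly run advances the state file by one hour and pages through the OData API with $top/$skip. That pagination loop can be exercised locally with the HTTP call stubbed out (a sketch of the loop in fetch_odata_entity above; paginate and fake_page are illustrative names):

```python
# Sketch of $top/$skip pagination with the HTTP fetch stubbed out.
def paginate(fetch_page, top):
    """Yield items page by page until a short or empty page is returned."""
    skip = 0
    while True:
        items = fetch_page(top, skip)
        if not items:
            break
        yield from items
        if len(items) < top:  # short page: no more data
            break
        skip += top

dataset = list(range(25))  # pretend the API holds 25 rows

def fake_page(top, skip):
    return dataset[skip:skip + top]

rows = list(paginate(fake_page, 10))
print(len(rows))  # 25
```

A short final page (here, 5 rows against a page size of 10) terminates the loop without an extra empty request.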

Optional: Create read-only IAM user & keys for Google SecOps

  1. In the AWS Console, go to IAM > Users > Add users.
  2. Click Add users.
  3. Provide the following configuration details:
    • User: secops-reader
    • Access type: Access key — Programmatic access
  4. Click Create user.
  5. Attach minimal read policy (custom): Users > secops-reader > Permissions > Add permissions > Attach policies directly > Create policy.
  6. In the JSON editor, enter the following policy:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::citrix-analytics-logs/*"
          },
          {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::citrix-analytics-logs"
          }
        ]
      }

  7. Set the name to secops-reader-policy.

  8. Click Create policy, then return to the Attach policies screen, search for and select secops-reader-policy, and click Next > Add permissions.

  9. Go to Security credentials > Access keys > Create access key.

  10. Download the CSV file (these values are entered into the feed configuration).

Configure a feed in Google SecOps to ingest Citrix Analytics logs

  1. Go to SIEM Settings > Feeds.
  2. Click + Add New Feed.
  3. In the Feed name field, enter a name for the feed (for example, Citrix Analytics Performance logs).
  4. Select Amazon S3 V2 as the Source type.
  5. Select Citrix Analytics as the Log type.
  6. Click Next.
  7. Specify values for the following input parameters:
    • S3 URI: s3://citrix-analytics-logs/citrix_analytics/
    • Source deletion options: Select deletion option according to your preference.
    • Maximum File Age: Include files modified within the last number of days. The default is 180 days.
    • Access Key ID: User access key with access to the S3 bucket.
    • Secret Access Key: User secret key with access to the S3 bucket.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label applied to the events from this feed.
  8. Click Next.
  9. Review your new feed configuration in the Finalize screen, and then click Submit.
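The objects the feed reads are newline-delimited JSON, one enriched record per line, as produced by write_ndjson_to_s3 in the Lambda above. A sketch of one such line (the values under "raw" are made-up example fields, not a documented Citrix schema):

```python
import json

# Example of one NDJSON line as written by the collector; "raw" holds the
# original OData row (sample fields only, for illustration).
record = {
    "citrix_entity": "sessions",
    "citrix_hour_utc": "2025-07-01T13:00:00Z",
    "collection_timestamp": "2025-07-01T14:05:00Z",
    "raw": {"sessionKey": "abc-123", "userName": "jdoe"},
}
line = json.dumps(record, separators=(",", ":"), ensure_ascii=False)
body = line + "\n"  # one JSON object per line, newline-terminated
print(line)
```

Compact separators and a trailing newline per record match the serialization the Lambda uses, so what the feed ingests is exactly one parseable JSON object per line.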

Need more help? Get answers from Community members and Google SecOps professionals.
