Collect Oracle Cloud Infrastructure Audit logs

This document explains how to ingest Oracle Cloud Infrastructure Audit logs into Google Security Operations using Amazon S3.

Before you begin

Ensure that you have the following prerequisites:

  • Google SecOps instance.
  • Oracle Cloud Infrastructure account with permissions to create and manage:
    • Service Connector Hub
    • Oracle Functions
    • Vaults and Secrets
    • Dynamic Groups and IAM Policies
    • Logging
  • AWS account with permissions to create and manage:
    • S3 buckets
    • IAM users and policies

Create an Amazon S3 bucket

  1. Sign in to the AWS Management Console.
  2. Go to S3 > Create bucket.
  3. Provide the following configuration details:
    • Bucket name: Enter a unique name (for example, oci-audit-logs-bucket).
    • AWS Region: Select a region (for example, us-east-1).
    • Keep the default settings for other options.
  4. Click Create bucket.
  5. Save the bucket Name and Region for later use.
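
If you prefer to script this step, the following is a minimal boto3 sketch that creates the same bucket (the bucket name and region are the examples from the steps above; adjust them to your environment):

     import boto3

     region = "us-east-1"              # example region from the steps above
     bucket = "oci-audit-logs-bucket"  # example bucket name from the steps above

     s3 = boto3.client("s3", region_name=region)
     if region == "us-east-1":
         # us-east-1 is the default location and must not be passed as a constraint
         s3.create_bucket(Bucket=bucket)
     else:
         s3.create_bucket(
             Bucket=bucket,
             CreateBucketConfiguration={"LocationConstraint": region},
         )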

Create an IAM user in AWS for OCI Functions

  1. Sign in to the AWS Management Console.
  2. Go to IAM > Users > Add users.
  3. Provide the following configuration details:
    • User name: Enter a username (for example, oci-functions-s3-user).
    • Access type: Select Access key - Programmatic access.
  4. Click Next: Permissions.
  5. Click Attach existing policies directly.
  6. Search for and select the AmazonS3FullAccess policy.
  7. Click Next: Tags.
  8. Click Next: Review.
  9. Click Create user.
  10. Important: On the success page, copy and save the following credentials:
    • Access key ID
    • Secret access key
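
To confirm the new key pair works before you store it in OCI Vault, a quick STS check can help. This is a minimal sketch, assuming the credentials you just saved are substituted for the placeholders:

     import boto3

     session = boto3.Session(
         aws_access_key_id="<access_key_id>",          # from the success page
         aws_secret_access_key="<secret_access_key>",  # from the success page
     )
     # get_caller_identity succeeds for any valid key pair and returns the user ARN
     print(session.client("sts").get_caller_identity()["Arn"])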

Store AWS credentials in OCI Vault

To securely store AWS credentials, you must use Oracle Cloud Infrastructure Vault instead of hardcoding them in the function code.

Create a Vault and Master Encryption Key

  1. Sign in to the Oracle Cloud Console.
  2. Go to Identity & Security > Vault.
  3. If you don't have a Vault, click Create Vault.
  4. Provide the following configuration details:
    • Create in Compartment: Select your compartment.
    • Name: Enter a name (for example, oci-functions-vault).
  5. Click Create Vault.
  6. After the Vault is created, click the Vault name to open it.
  7. Under Master Encryption Keys, click Create Key.
  8. Provide the following configuration details:
    • Protection Mode: Software
    • Name: Enter a name (for example, oci-functions-key).
    • Key Shape: Algorithm: AES
    • Key Shape: Length: 256 bits
  9. Click Create Key.

Create secrets for AWS credentials

  1. In the Vault, under Secrets, click Create Secret.
  2. Provide the following configuration details for the AWS access key:
    • Create in Compartment: Select your compartment.
    • Name: aws-access-key
    • Description: AWS access key for S3
    • Encryption Key: Select the Master Encryption Key you created.
    • Secret Type Contents: Plain-Text
    • Secret Contents: Paste your AWS access key ID.
  3. Click Create Secret.
  4. Copy and save the OCID of this secret (it looks like ocid1.vaultsecret.oc1...).
  5. Click Create Secret again to create the second secret.
  6. Provide the following configuration details for the AWS secret key:
    • Create in Compartment: Select your compartment.
    • Name: aws-secret-key
    • Description: AWS secret key for S3
    • Encryption Key: Select the same Master Encryption Key.
    • Secret Type Contents: Plain-Text
    • Secret Contents: Paste your AWS secret access key.
  7. Click Create Secret.
  8. Copy and save the OCID of this secret.
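
To verify the secret OCIDs before wiring them into the function, you can read the secrets back with the OCI Python SDK. This is a minimal sketch assuming API-key authentication through ~/.oci/config (the function itself uses Resource Principals instead):

     import base64
     import oci

     config = oci.config.from_file()  # reads the DEFAULT profile in ~/.oci/config
     client = oci.secrets.SecretsClient(config)

     bundle = client.get_secret_bundle("ocid1.vaultsecret.oc1..<your_access_key_ocid>")
     content = bundle.data.secret_bundle_content.content  # base64-encoded value
     print(base64.b64decode(content).decode("ascii"))     # your AWS access key ID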

Create a Dynamic Group for OCI Functions

  1. Sign in to the Oracle Cloud Console.
  2. Go to Identity & Security > Identity > Dynamic Groups.
  3. Click Create Dynamic Group.
  4. Provide the following configuration details:

    • Name: oci-functions-dynamic-group
    • Description: Dynamic group for OCI Functions to access Vault secrets
    • Matching Rules: Enter the following rule (replace <your_compartment_ocid> with your compartment OCID):

        ALL {resource.type = 'fnfunc', resource.compartment.id = '<your_compartment_ocid>'}

  5. Click Create.

Create an IAM Policy for Vault access

  1. Sign in to the Oracle Cloud Console.
  2. Go to Identity & Security > Identity > Policies.
  3. Select the compartment where you want to create the policy.
  4. Click Create Policy.
  5. Provide the following configuration details:

    • Name: oci-functions-vault-access-policy
    • Description: Allow OCI Functions to read secrets from Vault
    • Policy Builder: Toggle Show manual editor.
    • Policy statements: Enter the following (replace <compartment_name> with your compartment name):

       allow dynamic-group oci-functions-dynamic-group to manage secret-family in compartment <compartment_name> 
      
  6. Click Create.

Create an OCI Function Application

  1. Sign in to the Oracle Cloud Console.
  2. Go to Developer Services > Applications (under Functions).
  3. Click Create Application.
  4. Provide the following configuration details:
    • Name: Enter a name (for example, oci-logs-to-s3-app).
    • VCN: Select a VCN in your compartment.
    • Subnets: Select one or more subnets.
  5. Click Create.

Create and deploy the OCI Function

  1. In the Oracle Cloud Console, click the Cloud Shell icon in the top-right corner.
  2. Wait for Cloud Shell to initialize.

Create the function

  1. In Cloud Shell, create a new directory for your function:

     mkdir pushlogs
     cd pushlogs

  2. Initialize a new Python function:

     fn init --runtime python

  3. This creates three files: func.py, func.yaml, and requirements.txt.

Update func.py

  • Replace the contents of func.py with the following code:

      import io
      import json
      import logging
      import base64
      import os

      import boto3
      import oci
      from fdk import response


      def handler(ctx, data: io.BytesIO = None):
          """OCI Function to push audit logs from OCI Logging to AWS S3."""
          try:
              # Parse incoming log data from Service Connector
              funDataStr = data.read().decode('utf-8')
              funData = json.loads(funDataStr)
              logging.getLogger().info(f"Received {len(funData)} log entries")

              # Replace these with your actual OCI Vault secret OCIDs
              secret_key_id = "ocid1.vaultsecret.oc1..<your_secret_key_ocid>"
              access_key_id = "ocid1.vaultsecret.oc1..<your_access_key_ocid>"

              # Replace with your S3 bucket name
              s3_bucket_name = "oci-audit-logs-bucket"

              # Use Resource Principals for OCI authentication
              signer = oci.auth.signers.get_resource_principals_signer()
              secret_client = oci.secrets.SecretsClient({}, signer=signer)

              def read_secret_value(secret_client, secret_id):
                  """Retrieve and decode a secret value from OCI Vault."""
                  secret_bundle = secret_client.get_secret_bundle(secret_id)
                  base64_content = secret_bundle.data.secret_bundle_content.content
                  return base64.b64decode(base64_content.encode('ascii')).decode('ascii')

              # Retrieve AWS credentials from OCI Vault
              awsaccesskey = read_secret_value(secret_client, access_key_id)
              awssecretkey = read_secret_value(secret_client, secret_key_id)

              # Initialize a boto3 session with the AWS credentials
              session = boto3.Session(
                  aws_access_key_id=awsaccesskey,
                  aws_secret_access_key=awssecretkey,
              )
              s3 = session.resource('s3')

              # Process each log entry
              for i in range(len(funData)):
                  # Use the timestamp as the filename, falling back to the index
                  filename = funData[i].get('time', f'log_{i}')
                  # Remove special characters from the filename
                  filename = filename.replace(':', '-').replace('.', '-')
                  logging.getLogger().info(f"Processing log entry: {filename}")

                  # Write the log entry to a temporary file
                  temp_file = f'/tmp/{filename}.json'
                  with open(temp_file, 'w', encoding='utf-8') as f:
                      json.dump(funData[i], f, ensure_ascii=False, indent=4)

                  # Upload to S3
                  s3_key = f'{filename}.json'
                  s3.meta.client.upload_file(
                      Filename=temp_file, Bucket=s3_bucket_name, Key=s3_key
                  )
                  logging.getLogger().info(
                      f"Uploaded {s3_key} to S3 bucket {s3_bucket_name}"
                  )

                  # Clean up the temporary file
                  os.remove(temp_file)

              return response.Response(
                  ctx,
                  response_data=json.dumps(
                      {"status": "success", "processed_logs": len(funData)}
                  ),
                  headers={"Content-Type": "application/json"},
              )
          except Exception as e:
              logging.getLogger().error(f"Error processing logs: {str(e)}")
              return response.Response(
                  ctx,
                  response_data=json.dumps({"status": "error", "message": str(e)}),
                  headers={"Content-Type": "application/json"},
                  status_code=500,
              )
    
    • Replace secret_key_id with your Vault secret OCID for the AWS secret key.
    • Replace access_key_id with your Vault secret OCID for the AWS access key.
    • Replace s3_bucket_name with your S3 bucket name.
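
For reference, the Service Connector invokes the function with a JSON array of log entries. The following hypothetical payload (field values invented for illustration) shows the fields the code above relies on and the S3 object key it derives:

     import json

     # Hypothetical Service Connector payload: a JSON array of audit log entries
     sample_payload = json.dumps([
         {
             "time": "2024-01-01T12:00:00.000Z",  # used to build the S3 object key
             "type": "com.oraclecloud.ComputeApi.GetInstance",
             "data": {"compartmentName": "example-compartment"},
         }
     ])

     # After replacing ':' and '.', the entry above is uploaded as
     # 2024-01-01T12-00-00-000Z.json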

Update func.yaml

Replace the contents of func.yaml with:

   
    schema_version: 20180708
    name: pushlogs
    version: 0.0.1
    runtime: python
    build_image: fnproject/python:3.9-dev
    run_image: fnproject/python:3.9
    entrypoint: /python/bin/fdk /function/func.py handler
    memory: 256

Update requirements.txt

  • Replace the contents of requirements.txt with:

     fdk>=0.1.56
     boto3
     oci


Deploy the function

  1. Set the Fn context to use your application:

     fn use context <region-context>
     fn update context oracle.compartment-id <compartment-ocid>

  2. Deploy the function:

     fn -v deploy --app oci-logs-to-s3-app
    
  3. Wait for the deployment to complete. You should see output indicating the function was successfully deployed.

  4. Verify the function was created:

     fn list functions oci-logs-to-s3-app
    

Create a Service Connector to send OCI Audit logs to the Function

  1. Sign in to the Oracle Cloud Console.
  2. Go to Analytics & AI > Messaging > Service Connector Hub.
  3. Select the compartment where you want to create the service connector.
  4. Click Create Service Connector.

Configure Service Connector details

  1. Provide the following configuration details:

    • Connector Name: Enter a descriptive name (for example, audit-logs-to-s3-connector).
    • Description: Enter an optional description (for example, Forward OCI Audit logs to AWS S3).
    • Resource Compartment: Select the compartment.

Configure Source

  1. Under Configure Source:
    • Source: Select Logging.
    • Compartment: Select the compartment containing audit logs.
    • Log Group: Select _Audit (this is the default log group for audit logs).
    • Logs: Click + Another Log.
    • Select the audit log for your compartment (for example, _Audit_Include_Subcompartment).

Configure Target

  1. Under Configure Target:
    • Target: Select Functions.
    • Compartment: Select the compartment containing your function application.
    • Function Application: Select oci-logs-to-s3-app (the application you created earlier).
    • Function: Select pushlogs (the function you deployed).

Configure Policy

  1. Under Configure Policy:

    • Review the required IAM policy statements displayed.
    • Click Create to create the required policies automatically.
  2. Click Create to create the service connector.

  3. Wait for the service connector to be created and activated. The status should change to Active.

Verify logs are being pushed to AWS S3

  1. Sign in to the Oracle Cloud Console.
  2. Perform some actions that generate audit logs (for example, create or modify a resource).
  3. Wait 2-5 minutes for logs to be processed.
  4. Sign in to the AWS Management Console.
  5. Go to S3 > Buckets.
  6. Click your bucket (for example, oci-audit-logs-bucket).
  7. Verify that JSON log files are appearing in the bucket.
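
If you prefer to check from a script rather than the console, a minimal boto3 sketch (assuming locally configured AWS credentials with read access to the example bucket) lists the newest objects:

     import boto3

     s3 = boto3.client("s3")  # assumes AWS credentials are configured locally

     resp = s3.list_objects_v2(Bucket="oci-audit-logs-bucket", MaxKeys=10)
     for obj in resp.get("Contents", []):
         print(obj["Key"], obj["Size"], obj["LastModified"])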

Configure AWS S3 bucket and IAM for Google SecOps

Create an IAM user for Chronicle

  1. Sign in to the AWS Management Console.
  2. Go to IAM > Users > Add users.
  3. Provide the following configuration details:
    • User name: Enter chronicle-s3-reader.
    • Access type: Select Access key - Programmatic access.
  4. Click Next: Permissions.
  5. Click Attach existing policies directly.
  6. Search for and select the AmazonS3ReadOnlyAccess policy.
  7. Click Next: Tags.
  8. Click Next: Review.
  9. Click Create user.
  10. Click Download CSV file to save the Access Key ID and Secret Access Key.
  11. Click Close.

Optional: Create a custom IAM policy for least-privilege access

If you want to restrict access to only the specific bucket:

  1. Go to IAM > Policies > Create policy.
  2. Click the JSON tab.
  3. Enter the following policy:

      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": [
              "s3:GetObject",
              "s3:ListBucket"
            ],
            "Resource": [
              "arn:aws:s3:::oci-audit-logs-bucket",
              "arn:aws:s3:::oci-audit-logs-bucket/*"
            ]
          }
        ]
      }

    
    • Replace oci-audit-logs-bucket with your bucket name.
  4. Click Next: Tags.

  5. Click Next: Review.

  6. Provide the following configuration details:

    • Name: chronicle-s3-read-policy
    • Description: Read-only access to OCI audit logs bucket
  7. Click Create policy.

  8. Go back to IAM > Users and select the chronicle-s3-reader user.

  9. Click Add permissions > Attach policies directly.

  10. Search for and select chronicle-s3-read-policy.

  11. Remove the AmazonS3ReadOnlyAccess policy if you added it earlier.

  12. Click Add permissions.
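
To sanity-check the least-privilege setup, the following sketch (with the chronicle-s3-reader credentials substituted for the placeholders) should list objects successfully, while the write attempt should fail with AccessDenied:

     import boto3
     from botocore.exceptions import ClientError

     session = boto3.Session(
         aws_access_key_id="<chronicle-s3-reader access key ID>",
         aws_secret_access_key="<chronicle-s3-reader secret access key>",
     )
     s3 = session.client("s3")

     # Allowed by s3:ListBucket
     print(s3.list_objects_v2(Bucket="oci-audit-logs-bucket", MaxKeys=1)["KeyCount"])

     # Not allowed: the policy grants no s3:PutObject
     try:
         s3.put_object(Bucket="oci-audit-logs-bucket", Key="write-test.json", Body=b"{}")
     except ClientError as e:
         print("Write correctly denied:", e.response["Error"]["Code"])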

Configure a feed in Google SecOps to ingest Oracle Cloud Audit logs

  1. Go to SIEM Settings > Feeds.
  2. Click Add New Feed.
  3. On the next page, click Configure a single feed.
  4. In the Feed name field, enter a name for the feed (for example, Oracle Cloud Audit Logs).
  5. Select Amazon S3 V2 as the Source type.
  6. Select Oracle Cloud Infrastructure as the Log type.
  7. Click Next.
  8. Specify values for the following input parameters:
    • S3 URI: Enter the S3 bucket URI (for example, s3://oci-audit-logs-bucket/).
    • Source deletion option: Select the deletion option according to your preference:
      • Never: Recommended for testing and initial setup.
      • Delete transferred files: Deletes files after successful ingestion (use for production to manage storage costs).
    • Maximum File Age: Include files modified within the last number of days. The default is 180 days.
    • Access Key ID: Enter the access key ID from the Chronicle IAM user you created.
    • Secret Access Key: Enter the secret access key from the Chronicle IAM user you created.
    • Asset namespace: The asset namespace.
    • Ingestion labels: The label to be applied to the events from this feed.
  9. Click Next.
  10. Review your new feed configuration in the Finalize screen, and then click Submit.

UDM mapping table

| Log field | UDM mapping | Logic |
| --- | --- | --- |
| data.request.headers.authorization.0 | event.idm.read_only_udm.additional.fields | Value taken from data.request.headers.authorization.0 and added as a key-value pair where the key is "Request Headers Authorization". |
| data.compartmentId | event.idm.read_only_udm.additional.fields | Value taken from data.compartmentId and added as a key-value pair where the key is "compartmentId". |
| data.compartmentName | event.idm.read_only_udm.additional.fields | Value taken from data.compartmentName and added as a key-value pair where the key is "compartmentName". |
| data.response.headers.Content-Length.0 | event.idm.read_only_udm.additional.fields | Value taken from data.response.headers.Content-Length.0 and added as a key-value pair where the key is "Response Headers Content-Length". |
| data.response.headers.Content-Type.0 | event.idm.read_only_udm.additional.fields | Value taken from data.response.headers.Content-Type.0 and added as a key-value pair where the key is "Response Headers Content-Type". |
| data.eventGroupingId | event.idm.read_only_udm.additional.fields | Value taken from data.eventGroupingId and added as a key-value pair where the key is "eventGroupingId". |
| oracle.tenantid, data.identity.tenantId | event.idm.read_only_udm.additional.fields | Value is taken from oracle.tenantid if present, otherwise from data.identity.tenantId. It is added as a key-value pair where the key is "tenantId". |
| data.message | event.idm.read_only_udm.metadata.description | Value taken from data.message. |
| time | event.idm.read_only_udm.metadata.event_timestamp | Value taken from time and parsed as an ISO 8601 timestamp. |
| | event.idm.read_only_udm.metadata.event_type | Set to GENERIC_EVENT by default. Set to NETWORK_CONNECTION if a principal (IP or hostname) and a target IP are present. Set to STATUS_UPDATE if only a principal is present. |
| time | event.idm.read_only_udm.metadata.ingested_timestamp | If oracle.ingestedtime is not empty, the value is taken from the time field and parsed as an ISO 8601 timestamp. |
| oracle.tenantid | event.idm.read_only_udm.metadata.product_deployment_id | Value taken from oracle.tenantid. |
| type | event.idm.read_only_udm.metadata.product_event_type | Value taken from type. |
| oracle.logid | event.idm.read_only_udm.metadata.product_log_id | Value taken from oracle.logid. |
| specversion | event.idm.read_only_udm.metadata.product_version | Value taken from specversion. |
| data.request.action | event.idm.read_only_udm.network.http.method | Value taken from data.request.action. |
| data.identity.userAgent | event.idm.read_only_udm.network.http.parsed_user_agent | Value taken from data.identity.userAgent and parsed. |
| data.response.status | event.idm.read_only_udm.network.http.response_code | Value taken from data.response.status and converted to an integer. |
| data.protocol | event.idm.read_only_udm.network.ip_protocol | The numeric value from data.protocol is converted to its string representation (for example, 6 becomes "TCP", 17 becomes "UDP"). |
| data.bytesOut | event.idm.read_only_udm.network.sent_bytes | Value taken from data.bytesOut and converted to an unsigned integer. |
| data.packets | event.idm.read_only_udm.network.sent_packets | Value taken from data.packets and converted to an integer. |
| data.identity.consoleSessionId | event.idm.read_only_udm.network.session_id | Value taken from data.identity.consoleSessionId. |
| id | event.idm.read_only_udm.principal.asset.product_object_id | Value taken from id. |
| source | event.idm.read_only_udm.principal.hostname | Value taken from source. |
| data.sourceAddress, data.identity.ipAddress | event.idm.read_only_udm.principal.ip | Values from data.sourceAddress and data.identity.ipAddress are merged into this field. |
| data.sourcePort | event.idm.read_only_udm.principal.port | Value taken from data.sourcePort and converted to an integer. |
| data.request.headers.X-Forwarded-For.0 | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from data.request.headers.X-Forwarded-For.0 and added as a key-value pair where the key is "x forward". |
| oracle.compartmentid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.compartmentid and added as a key-value pair where the key is "compartmentid". |
| oracle.loggroupid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.loggroupid and added as a key-value pair where the key is "loggroupid". |
| oracle.vniccompartmentocid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.vniccompartmentocid and added as a key-value pair where the key is "vniccompartmentocid". |
| oracle.vnicocid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.vnicocid and added as a key-value pair where the key is "vnicocid". |
| oracle.vnicsubnetocid | event.idm.read_only_udm.principal.resource.attribute.labels | Value taken from oracle.vnicsubnetocid and added as a key-value pair where the key is "vnicsubnetocid". |
| data.flowid | event.idm.read_only_udm.principal.resource.product_object_id | Value taken from data.flowid. |
| data.identity.credentials | event.idm.read_only_udm.principal.user.attribute.labels | Value taken from data.identity.credentials and added as a key-value pair where the key is "credentials". |
| data.identity.principalName | event.idm.read_only_udm.principal.user.user_display_name | Value taken from data.identity.principalName. |
| data.identity.principalId | event.idm.read_only_udm.principal.user.userid | Value taken from data.identity.principalId. |
| data.action | event.idm.read_only_udm.security_result.action | Set to UNKNOWN_ACTION by default. If data.action is "REJECT", set to BLOCK. If data.action is "ACCEPT", set to ALLOW. |
| data.endTime | event.idm.read_only_udm.security_result.detection_fields | Value taken from data.endTime and added as a key-value pair where the key is "endTime". |
| data.startTime | event.idm.read_only_udm.security_result.detection_fields | Value taken from data.startTime and added as a key-value pair where the key is "startTime". |
| data.status | event.idm.read_only_udm.security_result.detection_fields | Value taken from data.status and added as a key-value pair where the key is "status". |
| data.version | event.idm.read_only_udm.security_result.detection_fields | Value taken from data.version and added as a key-value pair where the key is "version". |
| data.destinationAddress | event.idm.read_only_udm.target.ip | Value taken from data.destinationAddress. |
| data.destinationPort | event.idm.read_only_udm.target.port | Value taken from data.destinationPort and converted to an integer. |
| data.request.path | event.idm.read_only_udm.target.url | Value taken from data.request.path. |
| | event.idm.read_only_udm.metadata.product_name | Set to "ORACLE CLOUD AUDIT". |
| | event.idm.read_only_udm.metadata.vendor_name | Set to "ORACLE". |
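
As an illustration of how these mappings apply, the following hypothetical audit entry (field names follow the table above; values are invented) annotates where a few fields land in the UDM:

     # Hypothetical OCI Audit entry; values invented for illustration
     event = {
         "id": "example-event-id",                  # -> principal.asset.product_object_id
         "source": "ConsoleSignIn",                 # -> principal.hostname
         "specversion": "1.0",                      # -> metadata.product_version
         "time": "2024-01-01T12:00:00.000Z",        # -> metadata.event_timestamp
         "type": "com.oraclecloud.identitySignOn",  # -> metadata.product_event_type
         "data": {
             "compartmentName": "example",          # -> additional.fields "compartmentName"
             "identity": {
                 "principalName": "alice",          # -> principal.user.user_display_name
                 "principalId": "ocid1.user.oc1..example",  # -> principal.user.userid
                 "ipAddress": "203.0.113.10",       # -> principal.ip
             },
             "response": {"status": "200"},         # -> network.http.response_code (200)
         },
     }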

Need more help? Get answers from Community members and Google SecOps professionals.
