Collect WP Engine logs
This document explains how to ingest WP Engine logs into Google Security Operations (Google SecOps) using Google Cloud Storage V2.
WP Engine is a managed WordPress hosting platform that provides enterprise-grade hosting with built-in security, performance optimization, and CDN services. It generates access logs, error logs, and CDN event logs that can be collected via the WP Engine API.
Before you begin
Make sure you have the following prerequisites:
- A Google SecOps instance
- A GCP project with Cloud Storage API enabled
- Permissions to create and manage GCS buckets
- Permissions to manage IAM policies on GCS buckets
- Permissions to create Cloud Run services, Pub/Sub topics, and Cloud Scheduler jobs
- Privileged access to the WP Engine User Portal with API access permissions
- A WP Engine account with API access enabled
Create Google Cloud Storage bucket
- Go to the Google Cloud Console.
- Select your project or create a new one.
- In the navigation menu, go to Cloud Storage > Buckets.
- Click Create bucket.
- Provide the following configuration details:

  | Setting | Value |
  |---|---|
  | Name your bucket | Enter a globally unique name (for example, wpengine-logs) |
  | Location type | Choose based on your needs (Region, Dual-region, Multi-region) |
  | Location | Select the location (for example, us-central1) |
  | Storage class | Standard (recommended for frequently accessed logs) |
  | Access control | Uniform (recommended) |
  | Protection tools | Optional: Enable object versioning or a retention policy |

- Click Create.
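If you prefer to script this step, the following gcloud sketch creates an equivalent bucket; the project ID and location are placeholder assumptions:

```bash
# Create the log bucket with uniform bucket-level access (example values)
gcloud storage buckets create gs://wpengine-logs \
    --project=PROJECT_ID \
    --location=us-central1 \
    --uniform-bucket-level-access
```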
Collect WP Engine API credentials
Generate API credentials
- Sign in to the WP Engine User Portal.
- Click your profile name, then go to Profile > API Access.
- Click Generate Credentials.
- Copy and save the following details in a secure location:
  - API Username: The generated API username
  - API Password: The generated API password (shown only once)
Get install name
- Sign in to the WP Engine User Portal.
- Go to Sites in the navigation menu.
- Click on the site you want to collect logs from.
- Note the Install name displayed on the site overview page. Each environment (Production, Staging, Development) has a separate install name.
Test API access
- Test your credentials before proceeding with the integration:

```bash
# Replace with your actual credentials
WPE_USER="your-api-username"
WPE_PASSWORD="your-api-password"

# Test API access - list installs
curl -v -u "${WPE_USER}:${WPE_PASSWORD}" "https://api.wpengineapi.com/v1/installs"
```
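You can also exercise the per-install logs endpoint that the collector function in this guide calls, to confirm the credentials can read logs. The install name and the type/limit/offset query parameters below mirror the function code later in this document; verify them against the WP Engine API reference:

```bash
# Fetch a single access log record for one install (install name is an example)
curl -s -u "${WPE_USER}:${WPE_PASSWORD}" \
  "https://api.wpengineapi.com/v1/installs/myinstall/logs?type=access&limit=1&offset=0"
```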
Create service account for Cloud Run function
The Cloud Run function needs a service account with permissions to write to the GCS bucket and to be invoked by Pub/Sub.
Create service account
- In the GCP Console, go to IAM & Admin > Service Accounts.
- Click Create Service Account.
- Provide the following configuration details:
  - Service account name: Enter wpengine-logs-collector-sa
  - Service account description: Enter Service account for Cloud Run function to collect WP Engine logs
- Click Create and Continue.
- In the Grant this service account access to project section, add the following roles:
- Click Select a role.
- Search for and select Storage Object Admin.
- Click + Add another role.
- Search for and select Cloud Run Invoker.
- Click + Add another role.
- Search for and select Cloud Functions Invoker.
- Click Continue.
- Click Done.
These roles are required for:
- Storage Object Admin: Write logs to GCS bucket and manage state files
- Cloud Run Invoker: Allow Pub/Sub to invoke the function
- Cloud Functions Invoker: Allow function invocation
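As an alternative to the console steps above, this gcloud sketch creates the service account and grants the same three roles; the project ID is a placeholder assumption:

```bash
# Create the service account
gcloud iam service-accounts create wpengine-logs-collector-sa \
    --project=PROJECT_ID \
    --display-name="wpengine-logs-collector-sa" \
    --description="Service account for Cloud Run function to collect WP Engine logs"

# Grant the three project-level roles listed above
SA="wpengine-logs-collector-sa@PROJECT_ID.iam.gserviceaccount.com"
for ROLE in roles/storage.objectAdmin roles/run.invoker roles/cloudfunctions.invoker; do
  gcloud projects add-iam-policy-binding PROJECT_ID \
      --member="serviceAccount:${SA}" \
      --role="${ROLE}"
done
```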
Grant IAM permissions on GCS bucket
Grant the service account write permissions on the GCS bucket:
- Go to Cloud Storage > Buckets.
- Click on your bucket name (for example, wpengine-logs).
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
  - Add principals: Enter the service account email (for example, wpengine-logs-collector-sa@PROJECT_ID.iam.gserviceaccount.com)
  - Assign roles: Select Storage Object Admin
- Click Save.
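The same bucket-level grant can be applied from the CLI; a sketch assuming the example names used above:

```bash
# Grant the collector service account write access on the log bucket
gcloud storage buckets add-iam-policy-binding gs://wpengine-logs \
    --member="serviceAccount:wpengine-logs-collector-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.objectAdmin"
```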
Create Pub/Sub topic
Create a Pub/Sub topic that Cloud Scheduler will publish to and the Cloud Run function will subscribe to.
- In the GCP Console, go to Pub/Sub > Topics.
- Click Create topic.
- Provide the following configuration details:
  - Topic ID: Enter wpengine-logs-trigger
  - Leave other settings as default
- Click Create.
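CLI equivalent, assuming the example topic ID; the project ID is a placeholder:

```bash
# Create the trigger topic
gcloud pubsub topics create wpengine-logs-trigger --project=PROJECT_ID
```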
Create Cloud Run function to collect logs
The Cloud Run function will be triggered by Pub/Sub messages from Cloud Scheduler to fetch logs from the WP Engine API and write them to GCS.
- In the GCP Console, go to Cloud Run.
- Click Create service.
- Select Function (use an inline editor to create a function).
- In the Configure section, provide the following configuration details:

  | Setting | Value |
  |---|---|
  | Service name | wpengine-logs-collector |
  | Region | Select a region matching your GCS bucket (for example, us-central1) |
  | Runtime | Select Python 3.12 or later |
- In the Trigger (optional) section:
  - Click + Add trigger.
  - Select Cloud Pub/Sub.
  - In Select a Cloud Pub/Sub topic, choose the topic wpengine-logs-trigger.
  - Click Save.
- In the Authentication section:
  - Select Require authentication.
  - Check Identity and Access Management (IAM).
- Scroll down and expand Containers, Networking, Security.
- Go to the Security tab:
  - Service account: Select the service account wpengine-logs-collector-sa.
- Go to the Containers tab:
  - Click Variables & Secrets.
  - Click + Add variable for each environment variable:

    | Variable Name | Example Value | Description |
    |---|---|---|
    | GCS_BUCKET | wpengine-logs | GCS bucket name |
    | GCS_PREFIX | wpengine | Prefix for log files |
    | STATE_KEY | wpengine/state.json | State file path |
    | WPE_API_USER | your-api-username | WP Engine API username |
    | WPE_API_PASSWORD | your-api-password | WP Engine API password |
    | WPE_INSTALL_ID | myinstall | WP Engine install name |
    | MAX_RECORDS | 5000 | Max records per run |
    | PAGE_SIZE | 100 | Records per page |
    | LOOKBACK_HOURS | 24 | Initial lookback period |

- In the Variables & Secrets section, scroll down to Requests:
  - Request timeout: Enter 600 seconds (10 minutes)
- Go to the Settings tab:
  - In the Resources section:
    - Memory: Select 512 MiB or higher
    - CPU: Select 1
  - In the Revision scaling section:
    - Minimum number of instances: Enter 0
    - Maximum number of instances: Enter 100 (or adjust based on expected load)
- Click Create.
- Wait for the service to be created (1-2 minutes).
- After the service is created, the inline code editor will open automatically.
Add function code
- Enter main in the Entry point field.
- In the inline code editor, create two files:
- First file: main.py:
```python
import base64
import json
import os
import time
from datetime import datetime, timedelta, timezone

import functions_framework
import urllib3
from google.cloud import storage

# Initialize HTTP client with timeouts
http = urllib3.PoolManager(
    timeout=urllib3.Timeout(connect=5.0, read=30.0),
    retries=False,
)

# Initialize Storage client
storage_client = storage.Client()

# Environment variables
GCS_BUCKET = os.environ.get('GCS_BUCKET')
GCS_PREFIX = os.environ.get('GCS_PREFIX', 'wpengine')
STATE_KEY = os.environ.get('STATE_KEY', 'wpengine/state.json')
WPE_API_USER = os.environ.get('WPE_API_USER')
WPE_API_PASSWORD = os.environ.get('WPE_API_PASSWORD')
WPE_INSTALL_ID = os.environ.get('WPE_INSTALL_ID')
MAX_RECORDS = int(os.environ.get('MAX_RECORDS', '5000'))
PAGE_SIZE = int(os.environ.get('PAGE_SIZE', '100'))
LOOKBACK_HOURS = int(os.environ.get('LOOKBACK_HOURS', '24'))

# WP Engine API base URL
API_BASE = 'https://api.wpengineapi.com/v1'

# Log types to fetch
LOG_TYPES = ['access', 'error']


def get_auth_header():
    """Generate HTTP Basic auth header for WP Engine API."""
    credentials = f"{WPE_API_USER}:{WPE_API_PASSWORD}"
    encoded = base64.b64encode(credentials.encode('utf-8')).decode('utf-8')
    return f"Basic {encoded}"


@functions_framework.cloud_event
def main(cloud_event):
    """Cloud Run function triggered by Pub/Sub to fetch WP Engine logs and write to GCS.

    Args:
        cloud_event: CloudEvent object containing the Pub/Sub message.
    """
    if not all([GCS_BUCKET, WPE_API_USER, WPE_API_PASSWORD, WPE_INSTALL_ID]):
        print('Error: Missing required environment variables')
        return

    try:
        bucket = storage_client.bucket(GCS_BUCKET)

        # Load state
        state = load_state(bucket, STATE_KEY)

        # Determine time window
        now = datetime.now(timezone.utc)
        last_offsets = {}
        if isinstance(state, dict) and state.get('last_offsets'):
            last_offsets = state['last_offsets']

        print(f"Fetching logs for install: {WPE_INSTALL_ID}")
        auth_header = get_auth_header()
        all_records = []

        # Fetch both access and error log types
        for log_type in LOG_TYPES:
            last_offset = last_offsets.get(log_type, 0)
            records = fetch_logs(
                auth_header=auth_header,
                install_id=WPE_INSTALL_ID,
                log_type=log_type,
                start_offset=last_offset,
                page_size=PAGE_SIZE,
                max_records=MAX_RECORDS,
            )

            # Tag records with log type
            for record in records:
                record['_wpe_log_type'] = log_type
            all_records.extend(records)

            # Update offset for this log type
            if records:
                last_offsets[log_type] = last_offset + len(records)
            print(f"Fetched {len(records)} {log_type} log records")

        if not all_records:
            print("No new log records found.")
            save_state(bucket, STATE_KEY, last_offsets)
            return

        # Write to GCS as NDJSON
        timestamp = now.strftime('%Y%m%d_%H%M%S')
        object_key = f"{GCS_PREFIX}/logs_{timestamp}.ndjson"
        blob = bucket.blob(object_key)
        ndjson = '\n'.join(
            json.dumps(record, ensure_ascii=False) for record in all_records
        ) + '\n'
        blob.upload_from_string(ndjson, content_type='application/x-ndjson')
        print(f"Wrote {len(all_records)} records to gs://{GCS_BUCKET}/{object_key}")

        # Update state
        save_state(bucket, STATE_KEY, last_offsets)
        print(f"Successfully processed {len(all_records)} records")

    except Exception as e:
        print(f'Error processing logs: {str(e)}')
        raise


def load_state(bucket, key):
    """Load state from GCS."""
    try:
        blob = bucket.blob(key)
        if blob.exists():
            state_data = blob.download_as_text()
            return json.loads(state_data)
    except Exception as e:
        print(f"Warning: Could not load state: {e}")
    return {}


def save_state(bucket, key, last_offsets: dict):
    """Save the last offsets to the GCS state file."""
    try:
        state = {'last_offsets': last_offsets}
        blob = bucket.blob(key)
        blob.upload_from_string(
            json.dumps(state, indent=2), content_type='application/json'
        )
        print(f"Saved state: last_offsets={last_offsets}")
    except Exception as e:
        print(f"Warning: Could not save state: {e}")


def fetch_logs(auth_header: str, install_id: str, log_type: str,
               start_offset: int, page_size: int, max_records: int):
    """Fetch logs from the WP Engine API with offset-based pagination and rate limiting.

    Args:
        auth_header: HTTP Basic auth header.
        install_id: WP Engine install name.
        log_type: Log type to fetch (access or error).
        start_offset: Starting offset for pagination.
        page_size: Number of records per page.
        max_records: Maximum total records to fetch.

    Returns:
        List of log records.
    """
    headers = {
        'Authorization': auth_header,
        'Accept': 'application/json',
        'User-Agent': 'GoogleSecOps-WPEngineCollector/1.0',
    }
    records = []
    offset = start_offset
    page_num = 0
    backoff = 1.0

    while True:
        page_num += 1
        if len(records) >= max_records:
            print(f"Reached max_records limit ({max_records}) for {log_type}")
            break

        limit = min(page_size, max_records - len(records))
        url = f"{API_BASE}/installs/{install_id}/logs?type={log_type}&limit={limit}&offset={offset}"

        try:
            response = http.request('GET', url, headers=headers)

            # Handle rate limiting with exponential backoff
            if response.status == 429:
                retry_after = int(response.headers.get('Retry-After', str(int(backoff))))
                print(f"Rate limited (429). Retrying after {retry_after}s...")
                time.sleep(retry_after)
                backoff = min(backoff * 2, 30.0)
                continue
            backoff = 1.0

            if response.status != 200:
                print(f"HTTP Error: {response.status}")
                response_text = response.data.decode('utf-8')
                print(f"Response body: {response_text}")
                return []

            data = json.loads(response.data.decode('utf-8'))
            page_results = data.get('results', data.get('data', []))
            if not page_results:
                print(f"No more results (empty page) for {log_type}")
                break

            print(f"Page {page_num}: Retrieved {len(page_results)} {log_type} events")
            records.extend(page_results)
            offset += len(page_results)

            # If we got fewer results than requested, there are no more pages
            if len(page_results) < limit:
                print(f"Last page reached for {log_type}")
                break

        except Exception as e:
            print(f"Error fetching {log_type} logs: {e}")
            return []

    print(f"Retrieved {len(records)} total {log_type} records from {page_num} pages")
    return records
```

- Second file: requirements.txt:
```
functions-framework==3.*
google-cloud-storage==2.*
urllib3>=2.0.0
```

- Click Deploy to save and deploy the function.
- Wait for deployment to complete (2-3 minutes).
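Before creating the scheduler job, you can trigger the function once by publishing a message to the topic yourself; the empty JSON body matches what Cloud Scheduler will send:

```bash
# Manually publish a trigger message (the function ignores the payload)
gcloud pubsub topics publish wpengine-logs-trigger --message='{}'
```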
Create Cloud Scheduler job
Cloud Scheduler will publish messages to the Pub/Sub topic at regular intervals, triggering the Cloud Run function.
- In the GCP Console, go to Cloud Scheduler.
- Click Create Job.
- Provide the following configuration details:

  | Setting | Value |
  |---|---|
  | Name | wpengine-logs-collector-hourly |
  | Region | Select the same region as the Cloud Run function |
  | Frequency | 0 * * * * (every hour, on the hour) |
  | Timezone | Select a timezone (UTC recommended) |
  | Target type | Pub/Sub |
  | Topic | Select the topic wpengine-logs-trigger |
  | Message body | {} (empty JSON object) |

- Click Create.
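The equivalent job can also be created from the CLI; the project ID and region below are placeholder assumptions and should match your Cloud Run function:

```bash
# Create an hourly job that publishes an empty JSON message to the topic
gcloud scheduler jobs create pubsub wpengine-logs-collector-hourly \
    --project=PROJECT_ID \
    --location=us-central1 \
    --schedule="0 * * * *" \
    --time-zone="Etc/UTC" \
    --topic=wpengine-logs-trigger \
    --message-body='{}'
```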
Schedule frequency options
Choose frequency based on log volume and latency requirements:
| Frequency | Cron Expression | Use Case |
|---|---|---|
| Every 5 minutes | */5 * * * * | High-volume, low-latency |
| Every 15 minutes | */15 * * * * | Medium volume |
| Every hour | 0 * * * * | Standard (recommended) |
| Every 6 hours | 0 */6 * * * | Low volume, batch processing |
| Daily | 0 0 * * * | Historical data collection |
Test the integration
- In the Cloud Scheduler console, find your job.
- Click Force run to trigger the job manually.
- Wait a few seconds.
- Go to Cloud Run > Services.
- Click on wpengine-logs-collector.
- Click the Logs tab.
- Verify that the function executed successfully. Look for log entries such as:

```
Fetching logs for install: myinstall
Page 1: Retrieved X access events
Fetched X access log records
Page 1: Retrieved X error events
Fetched X error log records
Wrote X records to gs://wpengine-logs/wpengine/logs_YYYYMMDD_HHMMSS.ndjson
Successfully processed X records
```
- Go to Cloud Storage > Buckets.
- Click on your bucket name (wpengine-logs).
- Navigate to the wpengine/ folder.
- Verify that a new .ndjson file was created with the current timestamp.
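The same checks can be run from the CLI; this sketch assumes the example bucket, prefix, and job name used throughout this guide (state.json appears only after the first successful run):

```bash
# Run the scheduler job on demand (equivalent to Force run)
gcloud scheduler jobs run wpengine-logs-collector-hourly --location=us-central1

# List the newest log objects and inspect the saved pagination state
gcloud storage ls gs://wpengine-logs/wpengine/
gcloud storage cat gs://wpengine-logs/wpengine/state.json
```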
If you see errors in the logs:
- HTTP 401: Check API credentials in environment variables
- HTTP 403: Verify API access is enabled in WP Engine User Portal
- HTTP 429: Rate limiting; the function automatically retries with backoff
- Missing environment variables: Check all required variables are set
Configure a feed in Google SecOps to ingest WP Engine logs
- Go to SIEM Settings > Feeds.
- Click Add New Feed.
- Click Configure a single feed.
- In the Feed name field, enter a name for the feed (for example, WP Engine Logs).
- Select Google Cloud Storage V2 as the Source type.
- Select WPEngine as the Log type.
- Click Get Service Account. A unique service account email will be displayed, for example: chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com
- Copy this email address.
- Click Next.
- Specify values for the following input parameters:
  - Storage bucket URL: Enter the GCS bucket URI with the prefix path: gs://wpengine-logs/wpengine/
    - Replace:
      - wpengine-logs: Your GCS bucket name.
      - wpengine: Optional prefix/folder path where logs are stored (leave empty for root).
  - Source deletion option: Select the deletion option according to your preference:
    - Never: Never deletes any files after transfers (recommended for testing).
    - Delete transferred files: Deletes files after successful transfer.
    - Delete transferred files and empty directories: Deletes files and empty directories after successful transfer.
  - Maximum File Age: Include files modified in the last number of days (the default is 180 days).
  - Asset namespace: The asset namespace.
  - Ingestion labels: The label to be applied to the events from this feed.
- Click Next.
- Review your new feed configuration in the Finalize screen, and then click Submit.
Grant IAM permissions to the Google SecOps service account
The Google SecOps service account needs the Storage Object Viewer role on your GCS bucket.
- Go to Cloud Storage > Buckets.
- Click on your bucket name.
- Go to the Permissions tab.
- Click Grant access.
- Provide the following configuration details:
- Add principals: Paste the Google SecOps service account email
- Assign roles: Select Storage Object Viewer
- Click Save.
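A CLI sketch of the same grant, assuming the example bucket name; replace the member with the service account email you copied during feed setup:

```bash
# Grant the Google SecOps feed service account read access on the bucket
gcloud storage buckets add-iam-policy-binding gs://wpengine-logs \
    --member="serviceAccount:chronicle-12345678@chronicle-gcp-prod.iam.gserviceaccount.com" \
    --role="roles/storage.objectViewer"
```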
UDM mapping table
| Log Field | UDM Mapping | Logic |
|---|---|---|
| request, sig, blog_id, kind, name, slug, ver | additional.fields | Merged with labels from request (as request_label), sig (as sig_label), blog_id (as blog_id_label), kind (as kind_label), name (as name_label), slug (as slug_label), ver (as ver_label) if each is not empty |
| msg | metadata.description | Value copied directly |
| | metadata.event_type | Set to "STATUS_UPDATE" if has_principal is true, else "GENERIC_EVENT" |
| protocol | network.application_protocol | Value copied directly |
| version | network.application_protocol_version | Converted to string |
| method | network.http.method | Value copied directly |
| user_agent | network.http.parsed_user_agent | Converted to parseduseragent |
| secure_url | network.http.referral_url | Value copied directly |
| response_code | network.http.response_code | Converted to string then to integer |
| user_agent | network.http.user_agent | Value copied directly |
| received_bytes | network.received_bytes | Converted to string then to uinteger |
| Hostname | principal.asset.hostname | Value copied directly |
| client_ip | principal.asset.ip | Value copied directly |
| Hostname | principal.hostname | Value copied directly |
| client_ip | principal.ip | Value copied directly |
| port | principal.port | Converted to string then to integer |
| pid | principal.process.pid | Converted to string |
| scan_type, scan_value | security_result.description | Value from scan_value if not empty, else from scan_type if not empty |
Need more help? Get answers from Community members and Google SecOps professionals.

