Troubleshoot Log Analytics configuration

This document shows you how to resolve errors that might occur when using the Log Analytics page of the Google Cloud console.

Error messages

This section describes error messages you might see, and how to resolve the corresponding error conditions.

No completion signal within allotted timeframe error message

You enter a SQL query and select Run query. The query doesn't complete and you see the following error message:

The query failed to execute and return results due to error: No completion signal within allotted timeframe.

To resolve this error, do one of the following:

  • Shorten the interval over which logs are queried and then retry the query. For example, if a query interval is 14 days, shorten the interval to 7 days, and then run the query. For a sketch of this approach, see the example that follows this list.

  • Create a linked BigQuery dataset and then run the query from the BigQuery interface. The BigQuery interface supports queries that require a longer execution time than the Cloud Logging interface. For more information, see Query a linked BigQuery dataset.
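
For the first option, the following is a minimal sketch of a query restricted to the last 7 days; the table placeholder and the selected columns are illustrative:

     SELECT
       timestamp, severity, json_payload
     FROM
       `TABLE_NAME_OF_LOG_VIEW`
     WHERE
       timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
     LIMIT 100;

If you instead query a linked dataset from the BigQuery interface, the same timestamp filter applies; the linked dataset contains views that correspond to the log views.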

Queries against buckets with distinct CMEK keys error message

You enter a SQL query that queries multiple log buckets and select Run query. The query doesn't complete and you see the following error message:

Queries against buckets with distinct CMEK keys must have a key configured in the LogSettings.

To resolve this situation, do one of the following:

  • Configure your log buckets to use the same Cloud Key Management Service (Cloud KMS) key.
  • When the log buckets are in the same location, you can configure a folder or organization that is a parent resource for the log buckets with a default Cloud KMS key. The parent's default key must be in the same location as the log buckets. With this configuration, the parent's default key encrypts any temporary data generated by the Log Analytics query. For more information, see Log Analytics restrictions.

FROM clause must contain exactly one log view error message

You enter a SQL query in the query pane of the Log Analytics page in the Google Cloud console, but the SQL parser displays the following error:

FROM clause must contain exactly one log view

The previous error is reported when the table specified in the FROM clause can't be resolved to a specific log view.

To resolve this error, ensure that your table name has the proper syntax:

  • Ensure that the table name follows the syntax required by the Log Analytics naming scheme. BigQuery and Log Analytics have different requirements for the table name. You can find the required syntax for the table name by viewing the default query, or see the general form shown after this list.

  • If the Google Cloud project ID, region, bucket ID, or view ID of a log bucket contains a period (.), then ensure that each of these fields is wrapped in backquotes (`).

    For example, if a Google Cloud project ID is example.com:bluebird, then to query the _AllLogs view of the _Default log bucket, use the following syntax to specify the table:

     SELECT *
     FROM `example.com:bluebird`.`global`.`_Default`.`_AllLogs`

    The previous query assumes that the _Default bucket is in the global region.
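
In general, a Log Analytics table name has four dot-separated parts: the Google Cloud project ID, the location of the log bucket, the bucket ID, and the view ID. The following sketch shows that shape with hypothetical placeholder values:

     SELECT *
     FROM `PROJECT_ID`.`LOCATION`.`BUCKET_ID`.`VIEW_ID`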

Unable to save a query

If you enter and run a SQL query but the Save button is disabled, then the default resource settings of your organization or folder define a location that isn't allowed by your organization policy. To resolve this issue, ask the administrator of your organization to define a location in the default resource settings that matches a location that is allowed by your organization policy. For more information, see Configure default settings for organizations and folders.

If the Save button is enabled, but you can't complete the dialog and save the query, then do the following:

  1. Ensure that the query doesn't contain syntax errors. You can only save valid queries; for a minimal valid query, see the example after these steps.
  2. Optional: Copy the query into your clipboard.
  3. Reload the page.
  4. If you copied the query to your clipboard, then paste the query into the Query pane, run the query, and then perform the save operation.
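
For step 1, the following is a minimal example of a syntactically valid query that can be saved; the table placeholder is illustrative:

     SELECT timestamp, severity
     FROM `TABLE_NAME_OF_LOG_VIEW`
     LIMIT 10;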

Access denied to the Log Analytics page

You open the Log Analytics page in the Google Cloud console and a permission-denied error message is displayed.

To get the permissions that you need to load the Log Analytics page, run queries, and view logs, ask your administrator to grant you the appropriate IAM roles on your project. You might also be able to get the required permissions through custom roles or Logging predefined roles.

The permissions that you need to view log entries and run queries on the Log Analytics page are the same as those that you need to view logs on the Logs Explorer page. For information about additional roles that you need to query views on user-defined buckets or to query the _AllLogs view of the _Default log bucket, see Cloud Logging roles.

Upgrade of log bucket to use Log Analytics fails

You create a log bucket and select the option to use Log Analytics, or you upgrade an existing log bucket to use Log Analytics. The upgrade fails with an error message similar to the following:

Failed precondition (HTTP 400): Constraint "my-constraint" violated for PROJECT_ID 
with location global.

The previous error message indicates that your organization has configured an organization policy that restricts the regions that can be used. Log buckets must use the global region to be eligible for upgrade to use Log Analytics. If you can remove the organization policy that restricts usage of the global region, then you can upgrade your log bucket. Otherwise, you can't upgrade your log buckets.

Creating a linked BigQuery dataset fails

You edit a log bucket to create a linked BigQuery dataset, or you create a new log bucket and select the option to create a linked dataset; however, the linked dataset isn't created.

To resolve this error, ask the system administrator for the Google Cloud project to grant you an IAM role that includes the following permission:

  • logging.links.create

The previous permission is included in the Logging Admin (roles/logging.admin) and Logs Configuration Writer (roles/logging.configWriter) roles.

For information about roles and permissions, see Access control with IAM.

Deleting a linked BigQuery dataset fails

You no longer want the linked dataset, but the option to delete that dataset is disabled.

To resolve this error, ask the system administrator for the Google Cloud project to grant you an IAM role that includes the following permission:

  • logging.links.delete

The previous permission is included in the Logging Admin (roles/logging.admin) and Logs Configuration Writer (roles/logging.configWriter) roles.

This permission lets you delete the linked dataset from the Logs Storage page of the Google Cloud console. For more information about roles and permissions, see Access control with IAM.

Query engine settings button is missing

If the Settings button isn't displayed next to the Run query button, then your Google Cloud project doesn't have reserved BigQuery slots enabled. To enable the Settings button, configure reserved BigQuery slots for your project.

Run on BigQuery button is disabled

If the Run on BigQuery button is displayed but disabled, then a log view referenced by your query doesn't have a linked dataset. To run your query on your BigQuery slot reservations, create a linked BigQuery dataset for your log view.

Monitoring Service Account doesn't exist

You want to create an alerting policy to monitor the results of a SQL query. The setup steps require that you grant IAM roles to the Monitoring Service Account, but that account doesn't exist.

The Monitoring Service Account is called a service agent because it is created and managed by Google Cloud. The account is created automatically when you configure a resource or service that requires it. For example, if you create a Pub/Sub notification channel, then that action might cause the Monitoring Service Account to be created. Depending on the creation flow, the Monitoring Service Account might be granted the Monitoring Service Agent (monitoring.NotificationServiceAgent) role on your project. You can modify the roles granted to the account.

If the Monitoring Service Account doesn't exist, then to create an alerting policy that monitors the results of a SQL query, do the following:

  1. Manually create a service agent. For information about this step, see Create and grant roles to service agents.

  2. Grant the required roles to the service agent. For information about these roles, see Monitor your SQL query results: Before you begin.

There are duplicate log entries in my Log Analytics results

You run a query, and the results count or report duplicate log entries. Because the Logs Explorer removes duplicate entries based on log name, timestamp, and insert ID, you expect Log Analytics to deduplicate log entries before a query is run.

Log Analytics doesn't perform the same type of deduplication that is performed by the Logs Explorer.

To resolve duplicate log entries, try the following:

  1. Determine whether the duplicate log entries have different receive timestamp values. When the timestamps differ, the same data was written to Logging multiple times. For a query that can help with this check, see the example at the end of this section.

    To resolve duplicate writes, investigate your logging integration for error messages or misconfigurations.

  2. If your bucket is configured to use Cloud Key Management Service keys, then ensure that you are within quota and that your key is consistently accessible. Exceeding your quota or losing access to the key can result in duplicate log entries.

    To resolve these failures, ensure that you don't exceed your quota and that your key is accessible.

  3. Modify your query to remove duplicate log entries.

    For example, assume that the JSON payload contains fieldA and fieldB, where the first is a string and the second is numeric. Also assume that the JSON payload contains a field named server, which contains a string. Next, consider the following query:

      SELECT
        JSON_VALUE(json_payload.fieldA) AS fieldA,
        SUM(IFNULL(SAFE_CAST(JSON_VALUE(json_payload.fieldB) AS INT64), 0)) AS sum_fieldB
      FROM
        `TABLE_NAME_OF_LOG_VIEW`
      WHERE
        JSON_VALUE(json_payload.server) = "test"
      GROUP BY fieldA;

    You can modify the query to remove duplicate log entries by examining the log name, timestamp, and insert ID to determine whether a log entry is a duplicate:

      WITH deduplicated AS (
        SELECT
          JSON_VALUE(json_payload.fieldA) AS fieldA,
          IFNULL(SAFE_CAST(JSON_VALUE(json_payload.fieldB) AS INT64), 0) AS fieldB
        FROM
          `TABLE_NAME_OF_LOG_VIEW` a
        WHERE
          JSON_VALUE(json_payload.server) = "test"
        QUALIFY
          ROW_NUMBER() OVER (PARTITION BY a.log_name, a.timestamp, a.insert_id) = 1
      )
      SELECT
        fieldA,
        SUM(fieldB) AS sum_fieldB
      FROM deduplicated
      GROUP BY fieldA;
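
For step 1 of the previous list, the following is a minimal sketch of a query that surfaces possible duplicates by grouping entries that share a log name, timestamp, and insert ID, and counting how many distinct receive timestamps each group has; the table placeholder is illustrative:

     SELECT
       log_name, timestamp, insert_id,
       COUNT(*) AS copies,
       COUNT(DISTINCT receive_timestamp) AS distinct_receive_timestamps
     FROM
       `TABLE_NAME_OF_LOG_VIEW`
     GROUP BY log_name, timestamp, insert_id
     HAVING COUNT(*) > 1
     LIMIT 100;

When distinct_receive_timestamps is greater than 1 for a group, the same data was likely written to Logging more than once.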