This document provides details about the Ops Agent's default and custom configurations. Read this document if any of the following applies to you:
- You want to change the configuration of the Ops Agent to achieve the following goals:
  - Turn off the built-in logging or metrics ingestion:
    - To turn off logging ingestion, see Example logging service configurations.
    - To turn off host-metrics ingestion, see Example metrics service configurations.
  - Customize the file path of the log files that the agent collects logs from; see Logging receivers.
  - Customize the structured log format that the agent uses to process the log entries, by parsing JSON or by using regular expressions; see Logging processors.
  - Change the sampling rate for metrics; see Metrics receivers.
  - Customize which group or groups of metrics to enable. The agent collects all system metrics, like cpu and memory, by default; see Metrics processors.
  - Customize how the agent rotates logs; see Log-rotation configuration.
  - Collect metrics and logs from supported third-party applications; see Monitor third-party applications for the list of supported applications.
  - Use the Prometheus receiver to collect custom metrics.
  - Use the OpenTelemetry Protocol (OTLP) receiver to collect custom metrics and traces.
- You're interested in learning the technical details of the Ops Agent's configuration.
Configuration model
The Ops Agent uses a built-in default configuration; you can't directly modify this built-in configuration. Instead, you create a file of overrides that are merged with the built-in configuration when the agent restarts.
The building blocks of the configuration are as follows:
-  receivers: This element describes what is collected by the agent.
-  processors: This element describes how the agent can modify the collected information.
- service: This element links receivers and processors together to create data flows, called pipelines. The service element contains a pipelines element, which can contain multiple pipelines.
The built-in configuration is made up of these elements, and you use the same elements to override that built-in configuration.
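As a concrete sketch of this model, a user-specified configuration that defines one logging receiver, one processor, and one pipeline could look like the following; the IDs my_app_log, parse_my_app, and my_pipeline, and the file path, are illustrative:

```yaml
logging:
  receivers:
    my_app_log:          # receiver: what to collect
      type: files
      include_paths: [/var/log/my-app/*.log]
  processors:
    parse_my_app:        # processor: how to transform it
      type: parse_json
  service:
    pipelines:
      my_pipeline:       # pipeline: links receivers to processors
        receivers: [my_app_log]
        processors: [parse_my_app]
```

The sections that follow describe each of these elements in detail.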
Built-in configuration
The built-in configuration for the Ops Agent defines the default collection for logs and metrics. The following shows the built-in configuration for Linux and for Windows:
Linux
By default, the Ops Agent collects file-based syslog logs and host metrics.
For more information about the metrics collected, see Metrics ingested by the receivers.
Windows
By default, the Ops Agent collects Windows event logs from the System, Application, and Security channels, as well as host metrics, IIS metrics, and SQL Server metrics.
For more information about the metrics collected, see Metrics ingested by the receivers.
These configurations are discussed in more detail in Logging configuration and Metrics configuration .
User-specified configuration
To override the built-in configuration, you add new configuration elements to the user configuration file. Put your configuration for the Ops Agent in the following files:
- For Linux: /etc/google-cloud-ops-agent/config.yaml
- For Windows: C:\Program Files\Google\Cloud Operations\Ops Agent\config\config.yaml
Any user-specified configuration is merged with the built-in configuration when the agent restarts.
To override a built-in receiver, processor, or pipeline, redefine it in your config.yaml file by declaring it with the same identifier.
Starting with Ops Agent version 2.31.0, you can also configure the agent's log-rotation feature; for more information, see Configure log rotation in the Ops Agent.
For example, the built-in configuration for metrics includes a hostmetrics receiver that specifies a 60-second collection interval. To change the collection interval for host metrics to 30 seconds, include a metrics receiver called hostmetrics in your config.yaml file that sets the collection_interval value to 30 seconds, as shown in the following example:
```yaml
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 30s
```
For other examples of changing the built-in configurations, see Logging configuration and Metrics configuration.
You can also turn off the collection of logging or metrics data. These changes are described in the example logging service configurations and metrics service configurations.
You can use this file to prevent the agent from collecting self logs and sending those logs to Cloud Logging. For more information, see Collection of self logs .
You also configure the agent's log-rotation feature by using this file; for more information, see Configure log rotation in the Ops Agent .
You can't configure the Ops Agent to export logs or metrics to services other than Cloud Logging and Cloud Monitoring.
Logging configurations
The logging configuration uses the configuration model described previously:
-  receivers: This element describes the data to collect from log files; this data is mapped into a <timestamp, record> model.
-  processors: This optional element describes how the agent can modify the collected information.
- service: This element links receivers and processors together to create data flows, called pipelines. The service element contains a pipelines element, which can include multiple pipeline definitions.
Each receiver and each processor can be used in multiple pipelines.
The following sections describe each of these elements.
The Ops Agent sends logs to Cloud Logging. You can't configure it to export logs to other services. You can, however, configure Cloud Logging to export logs; for more information, see Route logs to supported destinations .
Logging receivers
The receivers element contains a set of receivers, each identified by a RECEIVER_ID. A receiver describes how to retrieve the logs; for example, by tailing files, by using a TCP port, or from the Windows Event Log.
Structure of logging receivers
Each receiver must have an identifier, RECEIVER_ID, and include a type element. The valid types are:
- files: Collect logs by tailing files on disk.
- fluent_forward (Ops Agent versions 2.12.0 and later): Collect logs sent via the Fluent Forward protocol over TCP.
- tcp (Ops Agent versions 2.3.0 and later): Collect logs in JSON format by listening to a TCP port.
- Linux only:
  - syslog: Collect Syslog messages over TCP or UDP.
  - systemd_journald (Ops Agent versions 2.4.0 and later): Collect systemd journal logs from the systemd-journald service.
- Windows only:
  - windows_event_log: Collect Windows Event Logs by using the Windows Event Log API.
- Third-party application log receivers.
The receivers structure looks like the following:

```yaml
receivers:
  RECEIVER_ID:
    type: files
    ...
  RECEIVER_ID_2:
    type: syslog
    ...
```
Depending on the value of the type element, there might be other configuration options, as follows:
- files receivers:
  - include_paths: Required. A list of filesystem paths to read by tailing each file. A wildcard (*) can be used in the paths; for example, /var/log/*.log (Linux) or C:\logs\*.log (Windows). For a list of common Linux application log files, see Common Linux log files.
  - exclude_paths: Optional. A list of filesystem path patterns to exclude from the set matched by include_paths.
  - record_log_file_path: Optional. If set to true, then the path to the specific file from which the log record was obtained appears in the output log entry as the value of the agent.googleapis.com/log_file_path label. When using a wildcard, only the path of the file from which the record was obtained is recorded.
  - wildcard_refresh_interval: Optional. The interval at which wildcard file paths in include_paths are refreshed. Given as a time duration, for example, 30s or 2m. This property might be useful under high logging throughputs where log files are rotated faster than the default interval. If not specified, the default interval is 60 seconds.
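For instance, a files receiver that tails rapidly rotating logs and refreshes its wildcard matches every 30 seconds could be sketched as follows; the receiver ID and paths are illustrative:

```yaml
logging:
  receivers:
    rotating_app_logs:
      type: files
      include_paths:
        - /var/log/my-app/*.log        # wildcard re-evaluated periodically
      wildcard_refresh_interval: 30s   # default is 60s
```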
- fluent_forward receivers:
  - listen_host: Optional. An IP address to listen on. The default value is 127.0.0.1.
  - listen_port: Optional. A port to listen on. The default value is 24224.
- syslog receivers (for Linux only):
  - transport_protocol: Supported values: tcp, udp.
  - listen_host: An IP address to listen on.
  - listen_port: A port to listen on.
- tcp receivers:
  - format: Required. Log format. Supported value: json.
  - listen_host: Optional. An IP address to listen on. The default value is 127.0.0.1.
  - listen_port: Optional. A port to listen on. The default value is 5170.
- windows_event_log receivers (for Windows only):
  - channels: Required. A list of Windows Event Log channels from which to read logs.
  - receiver_version: Optional. Controls which Windows Event Log API to use. Supported values are 1 and 2. The default value is 1.
  - render_as_xml: Optional. If set to true, then all Event Log fields, except for jsonPayload.Message and jsonPayload.StringInserts, are rendered as an XML document in a string field named jsonPayload.raw_xml. The default value is false. This can't be set to true when receiver_version is 1.
Examples of logging receivers
Sample files receiver:

```yaml
receivers:
  RECEIVER_ID:
    type: files
    include_paths: [/var/log/*.log]
    exclude_paths: [/var/log/not-this-one.log]
    record_log_file_path: true
```
Sample fluent_forward receiver:

```yaml
receivers:
  RECEIVER_ID:
    type: fluent_forward
    listen_host: 127.0.0.1
    listen_port: 24224
```
Sample syslog receiver (Linux only):

```yaml
receivers:
  RECEIVER_ID:
    type: syslog
    transport_protocol: tcp
    listen_host: 0.0.0.0
    listen_port: 5140
```
Sample tcp receiver:

```yaml
receivers:
  RECEIVER_ID:
    type: tcp
    format: json
    listen_host: 127.0.0.1
    listen_port: 5170
```
Sample windows_event_log receiver (Windows only):

```yaml
receivers:
  RECEIVER_ID:
    type: windows_event_log
    channels: [System, Application, Security]
```
Sample windows_event_log receiver that overrides the built-in receiver to use version 2:

```yaml
receivers:
  windows_event_log:
    type: windows_event_log
    channels: [System, Application, Security]
    receiver_version: 2
```
Sample systemd_journald receiver:

```yaml
receivers:
  RECEIVER_ID:
    type: systemd_journald
```
Special fields in structured payloads
For processors and receivers that can ingest structured data (the fluent_forward and tcp receivers and the parse_json processor), you can set special fields in the input that map to specific fields in the LogEntry object that the agent writes to the Logging API.
When the Ops Agent receives external structured log data, it places top-level fields into the LogEntry's jsonPayload field unless the field name is listed in the following table:
| Record field | LogEntry field |
|---|---|
| timestamp | timestamp |
| receiver_id (not a record field) | logName |
| logging.googleapis.com/httpRequest (HttpRequest) | httpRequest |
| logging.googleapis.com/severity (string) | severity |
| logging.googleapis.com/labels (struct of string:string) | labels |
| logging.googleapis.com/operation (struct) | operation |
| logging.googleapis.com/sourceLocation (struct) | sourceLocation |
| logging.googleapis.com/trace (string) | trace |
| logging.googleapis.com/spanId (string) | spanId |
Any remaining structured record fields remain part of the jsonPayload 
structure.
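As an illustration, a structured record received over the tcp receiver such as the following (the message field and all values here are illustrative) demonstrates the mapping:

```json
{
  "message": "disk usage at 90%",
  "logging.googleapis.com/severity": "WARNING",
  "logging.googleapis.com/labels": { "env": "prod" }
}
```

The agent would set the resulting LogEntry's severity to WARNING and its labels to {"env": "prod"}, while message, not being a special field, would remain as jsonPayload.message.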
Common Linux log files
The following table lists common log files for frequently used Linux applications:
| Application | Common log files | 
|---|---|
| apache | For information about Apache log files, see Monitoring third-party applications: Apache Web Server . | 
| cassandra | For information about Cassandra log files, see Monitoring third-party applications: Cassandra . | 
| chef | /var/log/chef-server/bookshelf/current | 
| gitlab | /home/git/gitlab/log/application.log | 
| jenkins | /var/log/jenkins/jenkins.log | 
| jetty | /var/log/jetty/out.log | 
| joomla | /var/www/joomla/logs/*.log | 
| magento | /var/www/magento/var/log/exception.log | 
| mediawiki | /var/log/mediawiki/*.log | 
| memcached | For information about Memcached log files, see Monitoring third-party applications: Memcached . | 
| mongodb | For information about MongoDB log files, see Monitoring third-party applications: MongoDB . | 
| mysql | For information about MySQL log files, see Monitoring third-party applications: MySQL . | 
| nginx | For information about nginx log files, see Monitoring third-party applications: nginx . | 
| postgres | For information about PostgreSQL log files, see Monitoring third-party applications: PostgreSQL . | 
| puppet | /var/log/puppet/http.log | 
| puppet-enterprise | /var/log/pe-activemq/activemq.log | 
| rabbitmq | For information about RabbitMQ log files, see Monitoring third-party applications: RabbitMQ . | 
| redis | For information about Redis log files, see Monitoring third-party applications: Redis . | 
| redmine | /var/log/redmine/*.log | 
| salt | /var/log/salt/key | 
| solr | For information about Apache Solr log files, see Monitoring third-party applications: Apache Solr . | 
| sugarcrm | /var/www/*/sugarcrm.log | 
| syslog | /var/log/syslog | 
| tomcat | For information about Apache Tomcat log files, see Monitoring third-party applications: Apache Tomcat . | 
| zookeeper | For information about Apache ZooKeeper log files, see Monitoring third-party applications: Apache ZooKeeper . | 
Default ingested labels
Logs can contain the following labels by default in the LogEntry:
| Field | Sample Value | Description | 
|---|---|---|
| labels."compute.googleapis.com/resource_name" | test_vm | The name of the virtual machine from which this log originates. Written for all logs. | 
| labels."logging.googleapis.com/instrumentation_source" | agent.googleapis.com/apache_access | The value of the receiver type from which this log originates, prefixed by agent.googleapis.com/. Written only by receivers from third-party integrations. | 
Logging processors
The optional processors element contains a set of processing directives, each identified by a PROCESSOR_ID. A processor describes how to manipulate the information collected by a receiver.
Each processor must have a unique identifier and include a type element. The valid types are:
- parse_json: Parse JSON-formatted structured logs.
- parse_multiline: Parse multiline logs (Linux only).
- parse_regex: Parse text-formatted logs via regex patterns to turn them into JSON-formatted structured logs.
- exclude_logs: Exclude logs that match specified rules (starting in 2.9.0).
- modify_fields: Set or transform fields in log entries (starting in 2.14.0).
The processors structure looks like the following:

```yaml
processors:
  PROCESSOR_ID:
    type: parse_json
    ...
  PROCESSOR_ID_2:
    type: parse_regex
    ...
```
Depending on the value of the type element, there are other configuration options, as follows.
parse_json processor

Configuration structure:

```yaml
processors:
  PROCESSOR_ID:
    type: parse_json
    time_key:    <field name within jsonPayload>
    time_format: <strptime format string>
```
The parse_json processor parses the input JSON into the jsonPayload field of the LogEntry. Other parts of the LogEntry can be parsed by setting certain special top-level fields.
- time_key: Optional. If the log entry provides a field with a timestamp, this option specifies the name of that field. The extracted value is used to set the timestamp field of the resulting LogEntry and is removed from the payload.
  If the time_key option is specified, you must also specify the following:
  - time_format: Required if time_key is used. This option specifies the format of the time_key field so it can be recognized and analyzed properly. For details of the format, see the strptime(3) guide.
Example configuration:

```yaml
processors:
  PROCESSOR_ID:
    type: parse_json
    time_key: time
    time_format: "%Y-%m-%dT%H:%M:%S.%L%Z"
```
parse_multiline processor

Configuration structure:

```yaml
processors:
  PROCESSOR_ID:
    type: parse_multiline
    match_any:
    - type: <type of the exceptions>
      language: <language name>
```
- match_any: Required. A list of one or more rules.
  - type: Required. Only a single value is supported:
    - language_exceptions: Allows the processor to concatenate exceptions into one LogEntry, based on the value of the language option.
  - language: Required. Only a single value is supported:
    - java: Concatenates common Java exceptions into one LogEntry.
    - python: Concatenates common Python exceptions into one LogEntry.
    - go: Concatenates common Go exceptions into one LogEntry.
Example configuration:

```yaml
logging:
  receivers:
    custom_file1:
      type: files
      include_paths:
      - /tmp/test-multiline28
  processors:
    parse_java_multiline:
      type: parse_multiline
      match_any:
      - type: language_exceptions
        language: java
    extract_structure:
      type: parse_regex
      field: message
      regex: "^(?<time>[\d-]*T[\d:.Z]*) (?<severity>[^ ]*) (?<file>[^ :]*):(?<line>[\d]*) - (?<message>(.|\\n)*)$"
      time_key: time
      time_format: "%Y-%m-%dT%H:%M:%S.%L"
    move_severity:
      type: modify_fields
      fields:
        severity:
          move_from: jsonPayload.severity
  service:
    pipelines:
      pipeline1:
        receivers: [custom_file1]
        processors: [parse_java_multiline, extract_structure, move_severity]
```
In the extract_structure processor, the field: message statement means that the regular expression is applied to the log entry's jsonPayload.message field. By default, the files receiver places each line of the log file into a log entry with a single payload field called jsonPayload.message.
The extract_structure processor places extracted fields into subfields of the LogEntry.jsonPayload field. Other statements in the YAML file cause two of the extracted fields, time and severity, to be moved.
The time_key: time statement pulls the LogEntry.jsonPayload.time field, parses the timestamp, and then adds the LogEntry.timestamp field.
The move_severity processor moves the severity field from the LogEntry.jsonPayload.severity field to the LogEntry.severity field.
Example log file:

```
2022-10-17T22:00:00.187512963Z ERROR HelloWorld:16 - javax.servlet.ServletException: Something bad happened
    at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:60)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
    at com.example.myproject.ExceptionHandlerFilter.doFilter(ExceptionHandlerFilter.java:28)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)
    at com.example.myproject.OutputBufferFilter.doFilter(OutputBufferFilter.java:33)
Caused by: com.example.myproject.MyProjectServletException
    at com.example.myproject.MyServlet.doPost(MyServlet.java:169)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)
    at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:30)
    ... 27 common frames omitted
```
The agent ingests the concatenated log record into Cloud Logging in the following format:

```json
{
  "insertId": "...",
  "jsonPayload": {
    "line": "16",
    "message": "javax.servlet.ServletException: Something bad happened\n    at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:60)\n    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)\n    at com.example.myproject.ExceptionHandlerFilter.doFilter(ExceptionHandlerFilter.java:28)\n    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1157)\n    at com.example.myproject.OutputBufferFilter.doFilter(OutputBufferFilter.java:33)\nCaused by: com.example.myproject.MyProjectServletException\n    at com.example.myproject.MyServlet.doPost(MyServlet.java:169)\n    at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)\n    at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)\n    at org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)\n    at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1166)\n    at com.example.myproject.OpenSessionInViewFilter.doFilter(OpenSessionInViewFilter.java:30)\n    ... 27 common frames omitted\n",
    "file": "HelloWorld"
  },
  "resource": {
    "type": "gce_instance",
    "labels": {
      "instance_id": "...",
      "project_id": "...",
      "zone": "..."
    }
  },
  "timestamp": "2022-10-17T22:00:00.187512963Z",
  "severity": "ERROR",
  "labels": {
    "compute.googleapis.com/resource_name": "..."
  },
  "logName": "projects/.../logs/custom_file",
  "receiveTimestamp": "2022-10-18T03:12:38.430364391Z"
}
```
parse_regex processor

Configuration structure:

```yaml
processors:
  PROCESSOR_ID:
    type: parse_regex
    regex:       <regular expression>
    time_key:    <field name within jsonPayload>
    time_format: <format string>
```
- time_key: Optional. If the log entry provides a field with a timestamp, this option specifies the name of that field. The extracted value is used to set the timestamp field of the resulting LogEntry and is removed from the payload.
  If the time_key option is specified, you must also specify the following:
  - time_format: Required if time_key is used. This option specifies the format of the time_key field so it can be recognized and analyzed properly. For details of the format, see the strptime(3) guide.
- regex: Required. The regular expression for parsing the field. The expression must include key names for the matched subexpressions; for example, "^(?<time>[^ ]*) (?<severity>[^ ]*) (?<msg>.*)$".
  The text matched by named capture groups is placed into fields in the LogEntry's jsonPayload field. To add additional structure to your logs, use the modify_fields processor.
  For a set of regular expressions for extracting information from common Linux application log files, see Common Linux log files.
Example configuration:

```yaml
processors:
  PROCESSOR_ID:
    type: parse_regex
    regex: "^(?<time>[^ ]*) (?<severity>[^ ]*) (?<msg>.*)$"
    time_key: time
    time_format: "%Y-%m-%dT%H:%M:%S.%L%Z"
```
exclude_logs processor

Configuration structure:

```yaml
type: exclude_logs
match_any:
- <filter>
- <filter>
```
The top-level configuration for this processor contains a single field, match_any, which contains a list of filter rules.
- match_any: Required. A list of one or more rules. If a log entry matches any rule, then the Ops Agent doesn't ingest that entry.
  The logs that are ingested by the Ops Agent follow the LogEntry structure. Field names are case-sensitive. You can only specify rules based on the following fields and their subfields:
  - httpRequest
  - jsonPayload
  - labels
  - operation
  - severity
  - sourceLocation
  - trace
  - spanId
  The following example rule uses a regular expression to exclude all DEBUG and INFO level logs: severity =~ "(DEBUG|INFO)".
  Rules follow the Cloud Logging query language syntax but support only a subset of the features that the Logging query language supports:
  - Comparison operators: =, !=, :, =~, !~. Only string comparisons are supported.
  - Navigation operator: .. For example, jsonPayload.message.
  - Boolean operators: AND, OR, NOT.
  - Grouping expressions with ( ).
Example configuration:

```yaml
processors:
  PROCESSOR_ID:
    type: exclude_logs
    match_any:
    - '(jsonPayload.message =~ "log spam 1" OR jsonPayload.message =~ "log spam 2") AND severity = "ERROR"'
    - 'jsonPayload.application = "foo" AND severity = "INFO"'
```
modify_fields processor

The modify_fields processor allows customization of the structure and contents of log entries.
Configuration structure:

```yaml
type: modify_fields
fields:
  <destination field>:
    # Source
    move_from:    <source field>
    copy_from:    <source field>
    static_value: <string>
    # Mutation
    default_value: <string>
    map_values:
      <old value>: <new value>
    type: {integer|float}
    omit_if: <filter>
```
The top-level configuration for this processor contains a single field, fields, which contains a map of output field names and corresponding translations. For each output field, an optional source and zero or more mutation operations are applied.
All field names use the dot-separated syntax from the Cloud Logging query language. Filters use the Cloud Logging query language.
All transformations are applied in parallel, which means that sources and filters operate on the original input log entry and therefore cannot reference the new value of any other fields being modified by the same processor.
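Because sources and filters read the original entry, a single modify_fields processor can even swap two fields. The following is a minimal sketch of that behavior; the field names a and b are illustrative:

```yaml
processors:
  swap_fields:
    type: modify_fields
    fields:
      jsonPayload.a:
        move_from: jsonPayload.b   # reads b from the original entry
      jsonPayload.b:
        move_from: jsonPayload.a   # reads a from the original entry
```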
Source options: At most one specified source is allowed.
- No source specified: If no source value is specified, the existing value in the destination field will be modified.
- move_from: <source field>: The value from <source field> will be used as the source for the destination field. Additionally, <source field> will be removed from the log entry. If a source field is referenced by both move_from and copy_from, the source field will still be removed.
- copy_from: <source field>: The value from <source field> will be used as the source for the destination field. <source field> will not be removed from the log entry unless it is also referenced by a move_from operation or otherwise modified.
- static_value: <string>: The static string <string> will be used as the source for the destination field.
Mutation options: Zero or more mutation operators may be applied to a single field. If multiple operators are supplied, they will always be applied in the following order.
- default_value: <string>: If the source field did not exist, the output value will be set to <string>. If the source field already exists (even if it contains an empty string), the original value is unmodified.
- map_values: <map>: If the input value matches one of the keys in <map>, the output value will be replaced with the corresponding value from the map.
- map_values_exclusive: {true|false}: If the <source field> value does not match any keys specified in the map_values pairs, the destination field will be forcefully unset if map_values_exclusive is true, or left untouched if map_values_exclusive is false.
- type: {integer|float}: The input value will be converted to an integer or a float. If the string cannot be converted to a number, the output value will be unset. If the string contains a float but the type is specified as integer, the number will be truncated to an integer. Note that the Cloud Logging API uses JSON and therefore does not support a full 64-bit integer; if a 64-bit (or larger) integer is needed, it must be stored as a string in the log entry.
- omit_if: <filter>: If the filter matches the input log record, the output field will be unset. This can be used to remove placeholder values, such as:

  ```yaml
  httpRequest.referer:
    move_from: jsonPayload.referer
    omit_if: httpRequest.referer = "-"
  ```
Sample configurations

The parse_json processor would transform a JSON file containing

```json
{
  "http_status": "400",
  "path": "/index.html",
  "referer": "-"
}
```
into a LogEntry structure that looks like this:

```json
{
  "jsonPayload": {
    "http_status": "400",
    "path": "/index.html",
    "referer": "-"
  }
}
```
This could then be transformed with modify_fields into this LogEntry:

```json
{
  "httpRequest": {
    "status": 400,
    "requestUrl": "/index.html"
  }
}
```
by using this Ops Agent configuration:

```yaml
logging:
  receivers:
    in:
      type: files
      include_paths:
      - /var/log/http.json
  processors:
    parse_json:
      type: parse_json
    set_http_request:
      type: modify_fields
      fields:
        httpRequest.status:
          move_from: jsonPayload.http_status
          type: integer
        httpRequest.requestUrl:
          move_from: jsonPayload.path
        httpRequest.referer:
          move_from: jsonPayload.referer
          omit_if: jsonPayload.referer = "-"
  service:
    pipelines:
      pipeline:
        receivers: [in]
        processors: [parse_json, set_http_request]
```
This configuration reads JSON-formatted logs from /var/log/http.json 
and
populates part of the httpRequest 
structure from fields in the logs.
Logging service
The logging service customizes verbosity for the Ops Agent's own logs, and
links logging receivers and processors together into pipelines. The service 
section has the following elements:
-  log_level
-  pipelines
Log verbosity level
The log_level field, available with Ops Agent versions 2.6.0 and later, customizes verbosity for the Ops Agent logging submodule's own logs. The default is info. Available options are: error, warn, info, debug, trace.
The following configuration customizes log verbosity for the logging submodule to be debug instead:
```yaml
logging:
  service:
    log_level: debug
```
Logging pipelines
The pipelines field can contain multiple pipeline IDs and definitions. Each pipeline value consists of the following elements:
- receivers: Required for new pipelines. A list of receiver IDs, as described in Logging receivers. The order of the receiver IDs in the list doesn't matter. The pipeline collects data from all of the listed receivers.
- processors: Optional. A list of processor IDs, as described in Logging processors. The order of the processor IDs in the list matters. Each record is run through the processors in the listed order.
Example logging service configurations

A service configuration has the following structure:

```yaml
service:
  log_level: CUSTOM_LOG_LEVEL
  pipelines:
    PIPELINE_ID:
      receivers: [...]
      processors: [...]
    PIPELINE_ID_2:
      receivers: [...]
      processors: [...]
```
To stop the agent from collecting and sending either /var/log/messages or /var/log/syslog entries, redefine the default pipeline with an empty receivers list and no processors. This configuration does not stop the agent's logging subcomponent, because the agent must be able to collect logs for the monitoring subcomponent. The entire empty logging configuration looks like the following:
```yaml
logging:
  service:
    pipelines:
      default_pipeline:
        receivers: []
```
The following service configuration defines a pipeline with the ID custom_pipeline:

```yaml
logging:
  service:
    pipelines:
      custom_pipeline:
        receivers:
        - RECEIVER_ID
        processors:
        - PROCESSOR_ID
```
Metrics configurations
The `metrics` configuration uses the configuration model described previously:
-  receivers: a list of receiver definitions. A `receiver` describes the source of the metrics; for example, system metrics like `cpu` or `memory`. The receivers in this list can be shared among multiple pipelines.
-  processors: a list of processor definitions. A `processor` describes how to modify the metrics collected by a receiver.
-  service: contains a `pipelines` section that is a list of `pipeline` definitions. A `pipeline` connects a list of `receivers` and a list of `processors` to form the data flow.
The following sections describe each of these elements.
The Ops Agent sends metrics to Cloud Monitoring. You can't configure it to export metrics to other services.
Metrics receivers
The `receivers` element contains a set of receiver definitions. A receiver
describes from where to retrieve the metrics, such as `cpu` and `memory`.
A receiver can be shared among multiple pipelines.
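As an illustrative sketch of that sharing, a single receiver definition can be referenced by more than one pipeline; the pipeline IDs `default_pipeline` and `extra_pipeline` below are hypothetical:

```yaml
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
  service:
    pipelines:
      # Both pipelines reference the same hostmetrics receiver definition.
      default_pipeline:
        receivers:
        - hostmetrics
      extra_pipeline:
        receivers:
        - hostmetrics
```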
Structure of metrics receivers
Each receiver must have an identifier, RECEIVER_ID, and include a `type` element. Valid built-in types are:
-  hostmetrics
-  iis (Windows only)
-  mssql (Windows only)
A receiver can also specify the optional `collection_interval` option. The
value is in the format of a duration, for example, `30s` or `2m`. The default
value is `60s`.
Each of these receiver types collects a set of metrics; for information about the specific metrics included, see Metrics ingested by the receivers .
You can create only one receiver for each type. For example, you can't
define two receivers of type `hostmetrics`.
Changing the collection interval in the metrics receivers
Some critical workloads might require fast alerting. By reducing the collection interval for the metrics, you can configure more sensitive alerts. For information on how alerts are evaluated, see Behavior of metric-based alerting policies .
For example, the following receiver changes the collection interval for host
metrics (the receiver ID is `hostmetrics`) from the default of 60 seconds to 10
seconds:

```yaml
metrics:
  receivers:
    hostmetrics:
      type: hostmetrics
      collection_interval: 10s
```
You can also override the collection interval for the Windows `iis` and `mssql` metrics receivers using the same technique.
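For example, a sketch that overrides the interval for the `mssql` receiver might look like the following; the 30-second value is only illustrative:

```yaml
metrics:
  receivers:
    mssql:
      type: mssql
      # Collect SQL Server metrics every 30 seconds instead of the 60s default.
      collection_interval: 30s
```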
Metrics ingested by the receivers
The metrics ingested by the Ops Agent have identifiers that begin with the
following pattern: agent.googleapis.com/GROUP.
The GROUP component identifies a set of related metrics; it
has values like `cpu`, `network`, and others.
The `hostmetrics` receiver
The `hostmetrics` receiver ingests the following metric groups. For
more information, see the linked section for each group on the Ops Agent metrics page.
| Group | Metric |
|---|---|
| cpu | CPU load at 1 minute intervals; CPU load at 5 minute intervals; CPU load at 15 minute intervals; CPU usage, with labels for CPU number and CPU state; CPU usage percent, with labels for CPU number and CPU state |
| disk | Disk bytes read, with label for device; Disk bytes written, with label for device; Disk I/O time, with label for device; Disk weighted I/O time, with label for device; Disk pending operations, with label for device; Disk merged operations, with labels for device and direction; Disk operations, with labels for device and direction; Disk operation time, with labels for device and direction; Disk usage, with labels for device and state; Disk utilization, with labels for device and state |
| gpu (Linux only; see About the `gpu` metrics for other important information) | Current number of GPU memory bytes used, by state; Maximum amount of GPU memory, in bytes, that has been allocated by the process; Percentage of time in the process lifetime that one or more kernels has been running on the GPU; Percentage of time, since last sample, the GPU has been active |
| interface (Linux only) | Total count of network errors; Total count of packets sent over the network; Total number of bytes sent over the network |
| memory | Memory usage, with label for state (buffered, cached, free, slab, used); Memory usage percent, with label for state (buffered, cached, free, slab, used) |
| network | TCP connection count, with labels for port and TCP state |
| swap | Swap I/O operations, with label for direction; Swap bytes used, with labels for device and state; Swap percent used, with labels for device and state |
| pagefile (Windows only) | Current percentage of pagefile used by state |
| processes | Processes count, with label for state; Processes forked count; Per-process disk read I/O, with labels for process name + others; Per-process disk write I/O, with labels for process name + others; Per-process RSS usage, with labels for process name + others; Per-process VM usage, with labels for process name + others |
The `iis` receiver (Windows only)
The `iis` receiver (Windows only) ingests metrics of the `iis` group.
For more information, see the Agent metrics page.
| Group | Metric |
|---|---|
| iis (Windows only) | Currently open connections to IIS; Network bytes transferred by IIS; Connections opened to IIS; Requests made to IIS |
The `mssql` receiver (Windows only)
The `mssql` receiver (Windows only) ingests metrics of the `mssql` group. For
more information, see the Ops Agent metrics page.
| Group | Metric |
|---|---|
| mssql (Windows only) | Currently open connections to SQL server; SQL server total transactions per second; SQL server write transactions per second |
Metrics processors
The `processors` element contains a set of processor definitions. A processor
describes which metrics from the receiver to exclude. The only supported type
is `exclude_metrics`, which takes a `metrics_pattern` option. The value is
a list of globs that match the Ops Agent metric types you want to exclude from the group collected by a receiver. For example:
- To exclude all agent CPU metrics, specify `agent.googleapis.com/cpu/*`.
- To exclude the agent CPU utilization metric, specify `agent.googleapis.com/cpu/utilization`.
- To exclude the client-side request-count metric from the metrics collected by the Apache Cassandra third-party integration, specify `workloads.googleapis.com/cassandra.client.request.count`.
- To exclude all client-side metrics from the metrics collected by the Apache Cassandra third-party integration, specify `workloads.googleapis.com/cassandra.client.*`.
Sample metrics processor
The following example shows the `exclude_metrics` processor supplied in
the built-in configurations. This processor supplies an empty `metrics_pattern` value, so it doesn't exclude any metrics.

```yaml
processors:
  metrics_filter:
    type: exclude_metrics
    metrics_pattern: []
```
To disable the collection of all process metrics by the Ops Agent,
add the following to your `config.yaml` file:
```yaml
metrics:
  processors:
    metrics_filter:
      type: exclude_metrics
      metrics_pattern:
      - agent.googleapis.com/processes/*
```
This excludes process metrics from collection in the `metrics_filter` processor that applies to the default pipeline in the `metrics` service.
Metrics service
The metrics service customizes verbosity for the Ops Agent metrics module's own
logs and links metrics receivers and processors together into pipelines. The `service` section has two elements: `log_level` and `pipelines`.
Metrics verbosity level
The `log_level` field, available with Ops Agent versions 2.6.0 and later, customizes
the verbosity of the Ops Agent metrics submodule's own logs. The default is `info`.
Available options are: `error`, `warn`, `info`, `debug`.
Metrics pipelines
The `pipelines` element in the `service` section can contain
multiple pipeline IDs and definitions. Each `pipeline` definition consists of the following elements:
-  receivers: Required for new pipelines. A list of receiver IDs, as described in Metrics receivers . The order of the receiver IDs in the list doesn't matter. The pipeline collects data from all of the listed receivers.
-  processors: Optional. A list of processor IDs, as described in Metrics processors . The order of the processor IDs in the list does matter. Each metric point is run through the processors in the listed order.
Example metrics `service` configurations
A `service` configuration has the following structure:

```yaml
service:
  log_level: CUSTOM_LOG_LEVEL
  pipelines:
    PIPELINE_ID:
      receivers: [...]
      processors: [...]
    PIPELINE_ID_2:
      receivers: [...]
      processors: [...]
```
To turn off the built-in ingestion of host metrics, redefine the default
pipeline with an empty `receivers` list and no processors. The entire metrics
configuration looks like the following:

```yaml
metrics:
  service:
    pipelines:
      default_pipeline:
        receivers: []
```
The following example shows the built-in `service` configuration for
Windows:

```yaml
metrics:
  service:
    pipelines:
      default_pipeline:
        receivers:
        - hostmetrics
        - iis
        - mssql
        processors:
        - metrics_filter
```
The following `service` configuration customizes log verbosity for the metrics
submodule to be `debug` instead:

```yaml
metrics:
  service:
    log_level: debug
```
 
Collection of self logs
By default, the Ops Agent's Fluent Bit self logs are sent to Cloud Logging. These logs can include a lot of information, and the additional volume might increase your costs to use Cloud Logging.
You can disable the collection of these self logs, starting with Ops Agent
version 2.44.0, by using the `default_self_log_file_collection` option.
To disable self-log collection, add a `global` section to your user-specified
configuration file and set the `default_self_log_file_collection` option
to the value `false`:

```yaml
logging:
  ...
metrics:
  ...
global:
  default_self_log_file_collection: false
```
Log-rotation configuration
Starting with Ops Agent version 2.31.0, you can also set up the agent's log-rotation feature by using the configuration files. For more information, see Configure log rotation in the Ops Agent .

