Ingestion metrics reference for Looker and BigQuery
The Ingestion metrics Explore interface provides a variety of measure fields
that you can use to create new dashboards. Dimensions and measures are the
fundamental components of a dashboard. A dimension is a field that can be used
to filter query results by grouping data. A measure is a field that calculates
a value using a SQL aggregate function, such as COUNT, SUM, AVG, MIN, or MAX.
Any field derived from other measure values is also considered a measure.
For information about the dimension fields and ingestion metrics schemas,
see Ingestion metrics schema.
Ingestion metrics fields
The following table describes the additional fields that you can use
as dimensions, filters, and measures:
| Field | Description |
|---|---|
| timestamp | The Unix epoch time that represents the start time of the aggregated time interval associated with the metric. |
| total_entry_number | The number of logs ingested through the Ingestion API component (that is, component == Ingestion API). |
| total_entry_number_in_million | The number of logs ingested through the Ingestion API component, in millions. |
| total_entry_number_in_million_for_drill | The number of logs ingested through the Ingestion API component, in millions, rounded to 0 decimal places. |
| total_size_bytes | The log volume ingested through the Ingestion API component, in bytes. |
| total_size_bytes_GB | The log volume ingested through the Ingestion API component, in GB (gigabytes), rounded to 2 decimal places. A GB is 10^9 bytes. |
| total_size_bytes_GB_for_drill | Same as total_size_bytes_GB. |
| total_size_bytes_GiB | The log volume ingested through the Ingestion API component, in GiB (gibibytes), rounded to 2 decimal places. A GiB is 2^30 bytes. |
| total_events | The count of validated events during normalization (successfully ingested events). |
| total_error_events | The count of events that failed validation or failed parsing during normalization. |
| total_error_count_in_million | The count of failed validation and failed parsing errors, in millions, rounded to 0 decimal places. |
| total_normalized_events | The count of events that passed validation during normalization (successfully parsed events). |
| total_validation_error_events | The count of events that failed validation during normalization. |
| total_parsing_error_events | The count of events that failed to parse during normalization. |
| period | The reporting period as selected by the Period Filter. Values include This Period and Previous Period. |
| period_filter | The reporting period before or after the specified date. |
| log_type_for_drill | Only populated for non-null log types. |
| valid_log_type | Same as log_type_for_drill. |
| offered_gcp_log_type | The count of Google Cloud log types offered by Google Security Operations (43). |
| gcp_log_types_used | The percentage of the available Google Cloud log types that the customer ingests. |
| gcp_log_type | Only populated for non-null Google Cloud log types. |
| total_log_volume_mb_per_hour | The total volume of logs (across all components), in MB per hour, rounded to 2 decimal places. |
| max_quota_limit_mb_per_second | The maximum quota limit, in MB per second. |
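The measure fields above are Looker measures, but several of them can be approximated directly in BigQuery. The following query is a minimal sketch that assumes the log_count, log_volume, and component columns used in the sample queries later on this page; the exact Looker measure definitions might differ.

SELECT
  -- Approximation of total_entry_number: logs ingested through the Ingestion API component.
  SUM(log_count) AS total_entry_number,
  -- Approximation of total_entry_number_in_million.
  SUM(log_count) / 1e6 AS total_entry_number_in_million,
  -- Approximation of total_size_bytes_GB (1 GB = 10^9 bytes), rounded to 2 decimal places.
  ROUND(SUM(log_volume) / 1e9, 2) AS total_size_bytes_GB
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  component = 'Ingestion API';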
Use case: Sample query
The following table shows which columns are populated for each measure in a sample query:

| Measure | Columns populated |
|---|---|
| Ingested log count | collector_id, log_type, log_count |
| Ingested volume | collector_id, log_type, log_volume |
| Normalized events | collector_id, log_type, event_count |
| Forwarder CPU usage | collector_id, log_type, cpu_used |
Note: All times shown in the table use UTC.

Note: The table stores different measures as separate rows. Depending on the measure, only the relevant columns are populated, while the non-relevant columns remain null. The Component, start_time, and end_time columns are populated for all measures.

The table has four components:

1. Forwarder
2. Ingestion API
3. Normalizer
4. Out Of Band (OOB)
Logs can be ingested into Google SecOps by OOB, the Forwarder,
direct customer calls to the Ingestion API, or internal service calls to the Ingestion
API (for example, ETD, HTTPS Push webhooks, or the Azure Event Hubs integration).
All logs ingested into Google Security Operations flow through the Ingestion API.
After that, the logs are normalized by the Normalizer component.
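If you aren't sure which component values are present in your table, a quick sketch (using the same sample table as the queries below) lists them:

SELECT DISTINCT
  component
FROM
  `chronicle-catfood.datalake.ingestion_metrics`;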
Log count
Number of ingested logs:
SELECT
  *
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_count IS NOT NULL
  AND component = 'Ingestion API'
LIMIT
  2;
Volume of ingested logs:
SELECT
  *
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_volume IS NOT NULL
  AND component = 'Ingestion API'
LIMIT
  2;
1. Apply the logtype or collectorID filter: in the WHERE clause, add log_type = <LOGTYPE> or collector_id = <COLLECTOR_ID>.
2. Select Add GROUP BY in the query with the appropriate field to perform a grouped query, as shown in the sketch after this list.
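For example, the following sketch combines a log type filter with a GROUP BY; the log type value shown is only a placeholder.

SELECT
  collector_id,
  log_type,
  SUM(log_count) AS ingested_log_count
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  log_count IS NOT NULL
  AND component = 'Ingestion API'
  AND log_type = 'WINDOWS_DNS'  -- placeholder log type
GROUP BY
  collector_id,
  log_type;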
The Normalizer component handles parsing errors, which occur when events are
generated from raw logs during normalization. These errors are recorded in the drop_reason_code and state columns.
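To see how those errors break down, the following sketch groups Normalizer rows by drop reason. The 'Normalizer' component value and the non-null drop_reason_code filter are assumptions based on the component list and the columns described above.

SELECT
  drop_reason_code,
  state,
  COUNT(*) AS affected_rows
FROM
  `chronicle-catfood.datalake.ingestion_metrics`
WHERE
  component = 'Normalizer'          -- assumed component value
  AND drop_reason_code IS NOT NULL  -- keep only rows that recorded an error
GROUP BY
  drop_reason_code,
  state
ORDER BY
  affected_rows DESC;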
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eThe Ingestion metrics Explore interface provides measure fields to create dashboards, using dimensions for filtering and measures for calculating values with SQL aggregate functions.\u003c/p\u003e\n"],["\u003cp\u003eDimensions and measures are used to filter and calculate data, with dimensions being fields used to group data, and measures being fields that use SQL functions like COUNT, SUM, AVG, MIN, or MAX.\u003c/p\u003e\n"],["\u003cp\u003eThe table lists available fields, including timestamp, various entry and size metrics (e.g., total_entry_number, total_size_bytes), event counts (e.g., total_events, total_error_events), and period-related filters.\u003c/p\u003e\n"],["\u003cp\u003eAdditional fields related to Google Cloud log types and volume are available, such as offered_gcp_log_type, gcp_log_types_used, total_log_volume_mb_per_hour, and max_quota_limit_mb_per_second.\u003c/p\u003e\n"],["\u003cp\u003eThe document offers a reference of the different fields that can be used to create dashboards in the Ingestion Metrics Explore interface, such as timestamp, various entry and size metrics, event counts, and period-related filters.\u003c/p\u003e\n"]]],[],null,["Ingestion metrics reference for Looker and BigQuery\n\nThe Ingestion metrics Explore interface provides a variety of measure fields\nthat you can use to create new dashboards. Dimensions and measures are the\nfundamental components of a dashboard. A dimension is a field that can be used\nto filter query results by grouping data. A measure is a field that calculates\na value using a SQL aggregate function, such as COUNT, SUM, AVG, MIN, or MAX.\nAny field derived from other measure values is also considered a measure.\n\nFor information about the dimension fields and ingestion metrics schemas,\nsee [Ingestion metrics schema](/chronicle/docs/reference/ingestion-metrics-schema).\n\nIngestion metrics fields\n\nThe following table describes the additional fields that you can use\nas *dimensions* , *filters* , and *measures*:\n\n| Field | Description |\n|-----------------------------------------|-----------------------------------------------------------------------------------------------------------------------------|\n| timestamp | The Unix epoch time that represents the start time of the aggregated time interval associated with the metric. |\n| total_entry_number | The number of logs ingested through the Ingestion API component (i.e component == Ingestion API). |\n| total_entry_number_in_million | The number of logs ingested through the Ingestion API component, in millions. |\n| total_entry_number_in_million_for_drill | The number of logs ingested through the Ingestion API component, in millions rounded to 0 decimal places. |\n| total_size_bytes | The log volume ingested through the Ingestion API component, in bytes. |\n| total_size_bytes_GB | The log volume ingested through the Ingestion API component, in GB (gigabyte) rounded to 2 decimals. A GB is 10^9^ bytes. |\n| total_size_bytes_GB_for_drill | Same as total_size_bytes_GB. 
|\n| total_size_bytes_GiB | The log volume ingested through the Ingestion API component, in GiB (gibibyte) rounded to 2 decimals. A GiB is 2^30^ bytes. |\n| total_events | The count of validated events during normalization (successfully ingested events). |\n| total_error_events | The count of events that failed validation or failed parsing during normalization. |\n| total_error_count_in_million | The count of failed validation and failed parsing errors, in millions rounded to 0 decimals. |\n| total_normalized_events | The count of events that passed validation during normalization (successfully parsed events). |\n| total_validation_error_events | The count of events that failed during normalization. |\n| total_parsing_error_events | The count of events that failed to parse during normalization. |\n| period | The reporting period as selected by the **Period Filter** . Values include `This Period` and `Previous Period`. |\n| period_filter | The reporting period before the specified date or after the specified date. |\n| log_type_for_drill | Only populated for non-null log types. |\n| valid_log_type | Same as log_type_for_drill. |\n| offered_gcp_log_type | 43 (The count of Google Cloud log types offered by Google Security Operations.) |\n| gcp_log_types_used | Percentage of the available Google Cloud log types that the customer ingests. |\n| gcp_log_type | Only populated for non-null Google Cloud log types. |\n| total_log_volume_mb_per_hour | The total volume of logs (in all components), in MB per hour rounded to 2 decimals. |\n| max_quota_limit_mb_per_second | The maximum quota limit, in MB per second. |\n\nUse case: Sample query\n\nThe following table contains values for a sample query:\n\n|---------------------|-------------------------------------|\n| **Measure** | **Rows populated** |\n| Ingested log count | collector_id, log_type, log_count |\n| Ingested volume | collector_id, log_type, log_volume |\n| Normalized events | collector_id, log_type, event_count |\n| Forwarder cpu usage | collector_id, log_type, cpu_used |\n\n| **Note:** All times shown in the table use UTC.\n| **Note:** The table stores different measures as separate rows. Depending on the measure, only relevant columns are populated, while the non-relevant columns remain as null. The `Component`, `start_time` and `end_time` columns are populated for all measures.\n\nThe table has 4 components:\n\n1. Forward\n2. Ingestion API\n3. Normalizer\n4. 
Out Of Band (OOB)\n\nLogs can be ingested to Google SecOps by OOB, the Forwarder,\ndirect customer calls to Ingestion API, or internal service calls to the Ingestion\nAPI (for example, ETD, HTTPS Push webhooks, or Azure event hub integration).\n\nAll logs ingested into Google Security Operations flow through the [Ingestion API](/chronicle/docs/reference/ingestion-api).\nAfter that, the logs are normalized by the [Normalizer component](/chronicle/docs/reference/ingestion-metrics-schema#normalizer_ingestion_schema).\n\nLog count\n\n- Number of ingested logs:\n\n SELECT\n *\n FROM\n `chronicle-catfood.datalake.ingestion_metrics`\n WHERE\n log_count IS NOT NULL\n AND component = 'Ingestion API'\n LIMIT\n 2 ;\n\n| **Note:** Count the logs only after including `use component filter` and `specify Ingestion API`.\n\n- Volume of ingested logs:\n\n SELECT\n *\n FROM\n `chronicle-catfood.datalake.ingestion_metrics`\n WHERE\n log_volume IS NOT NULL\n AND component = 'Ingestion API'\n LIMIT\n 2 ;\n\n| **Note:** Count the logs only after including `use component filter` and `specify Ingestion API`.\n\n1. Apply the `logtype` or `collectorID` filter, and in the `WHERE` clause, add `log_type = \u003cLOGTYPE\u003e` or `collector_id = \u003cCOLLECTOR_ID\u003e`.\n2. Select **Add `GROUP BY`** in the query with the appropriate field to perform a group query.\n\nThe **Normalizer** component handles parsing errors, which occur when events are\ngenerated. These errors are recorded in the `drop_reason_code` and `state` columns."]]