JOBS_BY_ORGANIZATION view
The INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
view contains near real-time
metadata about all jobs submitted in the organization that is associated with
the current project.
Required role
To get the permission that you need to query the INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION view, ask your administrator to grant you the BigQuery Resource Viewer (roles/bigquery.resourceViewer) IAM role on your organization. For more information about granting roles, see Manage access to projects, folders, and organizations.
This predefined role contains the bigquery.jobs.listAll
permission,
which is required to
query the INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
view.
You might also be able to get this permission with custom roles or other predefined roles.
This view is only available to users with a defined Google Cloud organization.
For more information about BigQuery permissions, see Access control with IAM.
Schema
The underlying data is partitioned by the creation_time
column and
clustered by project_id
and user_email
. The query_info
column contains
additional information about your query jobs.
The INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
view has the following schema:
| Column name | Data type | Description |
|---|---|---|
| bi_engine_statistics | RECORD | If the project uses BI Engine, this field contains BI Engine statistics. Otherwise, the value is NULL. |
| cache_hit | BOOLEAN | Whether the query results of this job were from a cache. If you have a multi-query statement job, cache_hit for your parent query is NULL. |
| creation_time | TIMESTAMP | (Partitioning column) Creation time of this job. Partitioning is based on the UTC time of this timestamp. |
| end_time | TIMESTAMP | The end time of this job. This field represents the time when the job enters the DONE state. |
| folder_numbers | REPEATED INTEGER | Number IDs of the folders that contain the project, starting with the folder that immediately contains the project. For example, if folder_numbers is [1, 2, 3], then folder 1 immediately contains the project, folder 2 contains 1, and folder 3 contains 2. This column is only populated in JOBS_BY_FOLDER. |
| job_creation_reason.code | STRING | Specifies the high-level reason why a job was created. Possible values are:<br>- REQUESTED: job creation was requested.<br>- LONG_RUNNING: the query request ran beyond a system-defined timeout specified by the timeoutMs field in the QueryRequest. As a result, it was considered a long-running operation for which a job was created.<br>- LARGE_RESULTS: the results from the query can't fit in the inline response.<br>- OTHER: the system has determined that the query needs to be executed as a job. |
| job_id | STRING | The ID of the job. For example, bquxjob_1234. |
| job_stages | RECORD | Query stages of the job.<br>Note: This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery. |
| job_type | STRING | The type of the job. Can be QUERY, LOAD, EXTRACT, COPY, or NULL. A NULL value indicates a background job. |
| labels | RECORD | Array of labels applied to the job as key-value pairs. |
| parent_job_id | STRING | ID of the parent job, if any. |
| priority | STRING | The priority of this job. Valid values include INTERACTIVE and BATCH. |
| project_id | STRING | (Clustering column) The ID of the project. |
| project_number | INTEGER | The number of the project. |
| referenced_tables | RECORD | Array of tables referenced by the job. |
| reservation_id | STRING | Name of the primary reservation assigned to this job, in the format RESERVATION_ADMIN_PROJECT:RESERVATION_LOCATION.RESERVATION_NAME. In this output:<br>- RESERVATION_ADMIN_PROJECT: the name of the Google Cloud project that administers the reservation<br>- RESERVATION_LOCATION: the location of the reservation<br>- RESERVATION_NAME: the name of the reservation |
| edition | STRING | The edition associated with the reservation assigned to this job. |
| start_time | TIMESTAMP | The start time of this job. This field represents the time when the job transitions from the PENDING state to either RUNNING or DONE. |
| state | STRING | Running state of the job. Valid states include PENDING, RUNNING, and DONE. |
| statement_type | STRING | The type of query statement. For example, DELETE, INSERT, SCRIPT, SELECT, or UPDATE. See QueryStatementType for a list of valid values. |
| total_bytes_billed | INTEGER | Total bytes billed for the job.<br>Note: This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery. |
| total_bytes_processed | INTEGER | Total bytes processed by the job.<br>Note: This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery. |
| total_modified_partitions | INTEGER | The total number of partitions the job modified. This field is populated for LOAD and QUERY jobs. |
| total_slot_ms | INTEGER | Slot milliseconds for the job over its entire duration in the RUNNING state, including retries. |
| total_services_sku_slot_ms | INTEGER | Slot milliseconds consumed by the job that are billed under the "SERVICES_SKU". |
| user_email | STRING | (Clustering column) Email address or service account of the user who ran the job. |
| query_info.resource_warning | STRING | A successful query job can have the resource_warning field populated. With resource_warning, you get additional data points to optimize your queries and to set up monitoring for performance trends of an equivalent set of queries by using query_hashes. |
| query_info.query_hashes.normalized_literals | STRING | normalized_literals is a hexadecimal STRING hash that ignores comments, parameter values, UDFs, and literals. The hash value differs when underlying views change, or if the query implicitly references columns, such as SELECT *, and the table schema changes. This field appears for successful GoogleSQL queries that are not cache hits. |
| query_info.optimization_details | STRUCT | The history-based optimizations for the job. |
| transferred_bytes | INTEGER | Total bytes transferred for cross-cloud queries. |
| materialized_view_statistics | RECORD | Statistics of materialized views considered in a query job. |
| metadata_cache_statistics | RECORD | Statistics of metadata column usage for tables referenced in a query job. |
| query_dialect | STRING | The query dialect used for the job. Possible values are:<br>- GOOGLE_SQL: Job was requested to use GoogleSQL.<br>- LEGACY_SQL: Job was requested to use LegacySQL.<br>- DEFAULT_LEGACY_SQL: No query dialect was specified in the job request. BigQuery used the default value of LegacySQL.<br>- DEFAULT_GOOGLE_SQL: No query dialect was specified in the job request. BigQuery used the default value of GoogleSQL.<br>This field is only populated for query jobs. The default selection of query dialect can be controlled by the configuration settings. |
| continuous_query_info.output_watermark | TIMESTAMP | Represents the point up to which the continuous query has successfully processed data. |
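As the resource_warning description mentions, query_hashes can group runs of logically equivalent queries. The following sketch (the 7-day window and the choice of metrics are illustrative, not fixed by this view) totals billed bytes per normalized query hash; replace REGION_NAME with your region:

```sql
SELECT
  query_info.query_hashes.normalized_literals AS query_hash,
  COUNT(*) AS run_count,
  SUM(total_bytes_billed) AS total_bytes_billed
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  -- Limit the scan to the last 7 days of job history.
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
GROUP BY
  query_hash
ORDER BY
  total_bytes_billed DESC;
```

Because the hash ignores literals and parameter values, repeated runs of the same report appear as one row, which makes cost trends easier to track.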
For stability, we recommend that you explicitly list columns in your INFORMATION_SCHEMA queries instead of
using a wildcard ( SELECT *
). Explicitly listing columns prevents queries from
breaking if the underlying schema changes.
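For example, a query that names only the columns it needs keeps working even if new columns are added to the view (the column choice here is illustrative):

```sql
-- Stable: names only the needed columns instead of using SELECT *.
SELECT
  job_id,
  state,
  total_bytes_processed
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY);
```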
Data retention
This view displays running jobs along with job history for the past 180 days.
If a project migrates to an organization (either from having no organization or
from a different one), job information predating the migration date isn't
accessible through the INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
view, as the
view only retains data starting from the migration date.
Scope and syntax
Queries against this view must include a region qualifier. The following table explains the region scope for this view:

| View name | Resource scope | Region scope |
|---|---|---|
| `[PROJECT_ID.]region-REGION.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION` | Organization that contains the specified project | REGION |

Replace the following:
- Optional: PROJECT_ID: the ID of your Google Cloud project. If not specified, the default project is used.
- REGION: any dataset region name. For example, `region-us`.
Examples
To run the query against a project other than your default project, add the project ID in the following format:

`PROJECT_ID`.`region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION

Replace the following:
- PROJECT_ID: the ID of the project
- REGION_NAME: the region for your project

For example, `myproject`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION.
Find top five jobs that scanned the most bytes today
The following example demonstrates how to find the five jobs that scanned the
most bytes in an organization for the current day. You can filter further on statement_type
to query for additional information such as loads, exports,
and queries.
```sql
SELECT
  job_id,
  user_email,
  total_bytes_billed
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  EXTRACT(DATE FROM creation_time) = CURRENT_DATE()
ORDER BY
  total_bytes_billed DESC
LIMIT 5;
```
The result is similar to the following:
```
+--------------+--------------+---------------------------+
| job_id       | user_email   | total_bytes_billed        |
+--------------+--------------+---------------------------+
| bquxjob_1    | abc@xyz.com  | 999999                    |
| bquxjob_2    | def@xyz.com  | 888888                    |
| bquxjob_3    | ghi@xyz.com  | 777777                    |
+--------------+--------------+---------------------------+
```
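As one possible variation (the filter value is illustrative), adding a statement_type condition narrows the report to a single kind of statement, such as SELECT queries:

```sql
SELECT
  job_id,
  user_email,
  total_bytes_billed
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  EXTRACT(DATE FROM creation_time) = CURRENT_DATE()
  -- Keep only SELECT statements; filter on job_type instead for loads and exports.
  AND statement_type = 'SELECT'
ORDER BY
  total_bytes_billed DESC
LIMIT 5;
```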
Aggregate Connected Sheets usage by user at the organization level
The following query provides a summary of the top Connected Sheets users in your organization over the last 30 days, ranked by their total billed data. The query aggregates the total number of queries, total bytes billed, and total slot milliseconds for each user. This information is useful for understanding adoption and for identifying top consumers of resources.
```sql
SELECT
  user_email,
  COUNT(*) AS total_queries,
  SUM(total_bytes_billed) AS total_bytes_billed,
  SUM(total_slot_ms) AS total_slot_ms
FROM
  `region-REGION_NAME.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION`
WHERE
  -- Filter for jobs created in the last 30 days
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  -- Filter for jobs originating from Connected Sheets
  AND job_id LIKE 'sheets_dataconnector%'
  -- Filter for completed jobs
  AND state = 'DONE'
  AND (statement_type IS NULL OR statement_type <> 'SCRIPT')
GROUP BY
  1
ORDER BY
  total_bytes_billed DESC;
```
Replace REGION_NAME
with the region for your project.
For example, region-us
.
The result looks similar to the following:
```
+---------------------+---------------+--------------------+---------------+
| user_email          | total_queries | total_bytes_billed | total_slot_ms |
+---------------------+---------------+--------------------+---------------+
| alice@example.com   | 152           | 12000000000        | 3500000       |
| bob@example.com     | 45            | 8500000000         | 2100000       |
| charles@example.com | 210           | 1100000000         | 1800000       |
+---------------------+---------------+--------------------+---------------+
```
Find job logs of Connected Sheets queries at the organization level
The following query provides a detailed log of every individual query run by Connected Sheets. This information is useful for auditing and identifying specific high-cost queries.
```sql
SELECT
  job_id,
  creation_time,
  user_email,
  project_id,
  total_bytes_billed,
  total_slot_ms,
  query
FROM
  `region-REGION_NAME.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION`
WHERE
  -- Filter for jobs created in the last 30 days
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  -- Filter for jobs originating from Connected Sheets
  AND job_id LIKE 'sheets_dataconnector%'
  -- Filter for completed jobs
  AND state = 'DONE'
  AND (statement_type IS NULL OR statement_type <> 'SCRIPT')
ORDER BY
  creation_time DESC;
```
Replace REGION_NAME
with the region for your project.
For example, region-us
.
The result looks similar to the following:
```
+--------------------------------+--------------------------------+-----------------+------------+--------------------+---------------+--------------------------------+
| job_id                         | creation_time                  | user_email      | project_id | total_bytes_billed | total_slot_ms | query                          |
+--------------------------------+--------------------------------+-----------------+------------+--------------------+---------------+--------------------------------+
| sheets_dataconnector_bquxjob_1 | 2025-11-06 00:26:53.077000 UTC | abc@example.com | my_project | 12000000000        | 3500000       | SELECT ... FROM dataset.table1 |
| sheets_dataconnector_bquxjob_2 | 2025-11-06 00:24:04.294000 UTC | xyz@example.com | my_project | 8500000000         | 2100000       | SELECT ... FROM dataset.table2 |
| sheets_dataconnector_bquxjob_3 | 2025-11-03 23:17:25.975000 UTC | bob@example.com | my_project | 1100000000         | 1800000       | SELECT ... FROM dataset.table3 |
+--------------------------------+--------------------------------+-----------------+------------+--------------------+---------------+--------------------------------+
```

