JOBS_BY_ORGANIZATION view

The INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION view contains near real-time metadata about all jobs submitted in the organization that is associated with the current project.

Required role

To get the permission that you need to query the INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION view, ask your administrator to grant you the BigQuery Resource Viewer (roles/bigquery.resourceViewer) IAM role on your organization. For more information about granting roles, see Manage access to projects, folders, and organizations.

This predefined role contains the bigquery.jobs.listAll permission, which is required to query the INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION view.

You might also be able to get this permission with custom roles or other predefined roles .

The INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION view is only available to users with defined Google Cloud organizations.

For more information about BigQuery permissions, see Access control with IAM .

Schema

The underlying data is partitioned by the creation_time column and clustered by project_id and user_email . The query_info column contains additional information about your query jobs.
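
Because the view is partitioned on creation_time and clustered on project_id and user_email, filtering on those columns reduces the amount of data that a query scans. The following is a minimal sketch of such a filter; the region `region-us` and the project ID `my-project` are placeholders that you would replace with your own values:

SELECT
  job_id,
  user_email,
  state
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  -- Partition pruning: restrict the scan to the last 7 days of job history.
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  -- Clustering column: limit results to a single project.
  AND project_id = 'my-project';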

The INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION view has the following schema:

Column name
Data type
Value
bi_engine_statistics
RECORD
If the project is configured to use the BI Engine , then this field contains BiEngineStatistics . Otherwise NULL .
cache_hit
BOOLEAN
Whether the query results of this job were from a cache. If you have a multi-query statement job , cache_hit for your parent query is NULL .
creation_time
TIMESTAMP
( Partitioning column ) Creation time of this job. Partitioning is based on the UTC time of this timestamp.
destination_table
RECORD
Destination table for results, if any.
end_time
TIMESTAMP
The end time of this job, in milliseconds since the epoch. This field represents the time when the job enters the DONE state.
error_result
RECORD
Details of any errors as ErrorProto objects.
folder_numbers
REPEATED INTEGER
Number IDs of folders that contain the project, starting with the folder that immediately contains the project, followed by the folder that contains the child folder, and so forth. For example, if folder_numbers is [1, 2, 3] , then folder 1 immediately contains the project, folder 2 contains 1 , and folder 3 contains 2 . This column is only populated in JOBS_BY_FOLDER .
job_creation_reason.code
STRING
Specifies the high level reason why a job was created.
Possible values are:
  • REQUESTED : job creation was requested.
  • LONG_RUNNING : the query request ran beyond a system defined timeout specified by the timeoutMs field in the QueryRequest . As a result it was considered a long running operation for which a job was created.
  • LARGE_RESULTS : the results from the query cannot fit in the in-line response.
  • OTHER : the system has determined that the query needs to be executed as a job.
job_id
STRING
The ID of the job if a job was created. Otherwise, the query ID of a query using optional job creation mode. For example, bquxjob_1234 .
job_stages
RECORD
Query stages of the job.

Note : This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery.

job_type
STRING
The type of the job. Can be QUERY , LOAD , EXTRACT , COPY , or NULL . A NULL value indicates a background job.
labels
RECORD
Array of labels applied to the job as key-value pairs.
parent_job_id
STRING
ID of the parent job, if any.
priority
STRING
The priority of this job. Valid values include INTERACTIVE and BATCH .
project_id
STRING
( Clustering column ) The ID of the project.
project_number
INTEGER
The number of the project.
referenced_tables
RECORD
Array of tables referenced by the job. Only populated for query jobs that are not cache hits.
reservation_id
STRING
Name of the primary reservation assigned to this job, in the format RESERVATION_ADMIN_PROJECT:RESERVATION_LOCATION.RESERVATION_NAME .
In this output:
  • RESERVATION_ADMIN_PROJECT : the name of the Google Cloud project that administers the reservation
  • RESERVATION_LOCATION : the location of the reservation
  • RESERVATION_NAME : the name of the reservation
edition
STRING
The edition associated with the reservation assigned to this job. For more information about editions, see Introduction to BigQuery editions .
session_info
RECORD
Details about the session in which this job ran, if any.
start_time
TIMESTAMP
The start time of this job, in milliseconds since the epoch. This field represents the time when the job transitions from the PENDING state to either RUNNING or DONE .
state
STRING
Running state of the job. Valid states include PENDING , RUNNING , and DONE .
statement_type
STRING
The type of query statement. For example, DELETE , INSERT , SCRIPT , SELECT , or UPDATE . See QueryStatementType for list of valid values.
timeline
RECORD
Query timeline of the job. Contains snapshots of query execution.
total_bytes_billed
INTEGER
If the project is configured to use on-demand pricing , then this field contains the total bytes billed for the job. If the project is configured to use flat-rate pricing , then you are not billed for bytes and this field is informational only.

Note : This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery.

total_bytes_processed
INTEGER

Total bytes processed by the job.

Note : This column's values are empty for queries that read from tables with row-level access policies. For more information, see best practices for row-level security in BigQuery.

total_modified_partitions
INTEGER
The total number of partitions the job modified. This field is populated for LOAD and QUERY jobs.
total_slot_ms
INTEGER
Slot milliseconds for the job over its entire duration in the RUNNING state, including retries.
total_services_sku_slot_ms
INTEGER
Total slot milliseconds for the job that runs on external services and is billed on the services SKU. This field is only populated for jobs that have external service costs, and is the total of the usage for costs whose billing method is "SERVICES_SKU" .
transaction_id
STRING
ID of the transaction in which this job ran, if any.
user_email
STRING
( Clustering column ) Email address or service account of the user who ran the job.
query_info.resource_warning
STRING
The warning message that appears if the resource usage during query processing is above the internal threshold of the system.
A successful query job can have the resource_warning field populated. With resource_warning , you get additional data points to optimize your queries and to set up monitoring for performance trends of an equivalent set of queries by using query_hashes . An example query appears after the schema table.
query_info.query_hashes.normalized_literals
STRING
Contains the hash value of the query. normalized_literals is a hexadecimal STRING hash that ignores comments, parameter values, UDFs, and literals. The hash value will differ when underlying views change, or if the query implicitly references columns, such as SELECT * , and the table schema changes.
This field appears for successful GoogleSQL queries that are not cache hits.
query_info.performance_insights
RECORD
Performance insights for the job.
query_info.optimization_details
STRUCT
The history-based optimizations for the job. Only the JOBS_BY_PROJECT view has this column.
transferred_bytes
INTEGER
Total bytes transferred for cross-cloud queries, such as BigQuery Omni cross-cloud transfer jobs.
materialized_view_statistics
RECORD
Statistics of materialized views considered in a query job. ( Preview )
metadata_cache_statistics
RECORD
search_statistics
RECORD
query_dialect
STRING
This field will be available sometime in May, 2025. The query dialect used for the job. Valid values include:
  • GOOGLE_SQL : Job was requested to use GoogleSQL.
  • LEGACY_SQL : Job was requested to use LegacySQL.
  • DEFAULT_LEGACY_SQL : No query dialect was specified in the job request. BigQuery used the default value of LegacySQL.
  • DEFAULT_GOOGLE_SQL : No query dialect was specified in the job request. BigQuery used the default value of GoogleSQL.

This field is only populated for query jobs. The default selection of query dialect can be controlled by the configuration settings .
continuous
BOOLEAN
Whether the job is a continuous query .
continuous_query_info.output_watermark
TIMESTAMP
Represents the point up to which the continuous query has successfully processed data.
vector_search_statistics
RECORD

For stability, we recommend that you explicitly list columns in your information schema queries instead of using a wildcard ( SELECT * ). Explicitly listing columns prevents queries from breaking if the underlying schema changes.
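
For example, the query_info.resource_warning and query_info.query_hashes.normalized_literals columns described in the schema can be combined to monitor performance trends for equivalent queries. The following is a sketch only, with `region-us` as a placeholder region:

SELECT
  query_info.query_hashes.normalized_literals AS query_hash,
  COUNT(*) AS job_count,
  SUM(total_slot_ms) AS total_slot_ms,
  -- Count how many of these jobs triggered a resource warning.
  COUNTIF(query_info.resource_warning IS NOT NULL) AS jobs_with_resource_warning
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
  AND job_type = 'QUERY'
  AND query_info.query_hashes.normalized_literals IS NOT NULL
GROUP BY
  query_hash
ORDER BY
  total_slot_ms DESC
LIMIT 10;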

Data retention

This view displays running jobs along with job history for the past 180 days. If a project migrates to an organization (either from having no organization or from a different one), job information predating the migration date isn't accessible through the INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION view, as the view only retains data starting from the migration date.
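
To check how far back job history is currently available to you, you can look at the oldest and newest creation_time values in the view. This is a sketch; replace `region-us` with your region:

SELECT
  MIN(creation_time) AS earliest_job,
  MAX(creation_time) AS latest_job,
  COUNT(*) AS job_count
FROM
  `region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION;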

Scope and syntax

Queries against this view must include a region qualifier . The following table explains the region scope for this view:

View name: [PROJECT_ID.]`region-REGION`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
Resource scope: Organization that contains the specified project
Region scope: REGION

Replace the following:
  • Optional: PROJECT_ID: the ID of your Google Cloud project. If not specified, the default project is used.
  • REGION: any dataset region name. For example, `region-us`.

Examples

To run the query against a project other than your default project, add the project ID in the following format:

`PROJECT_ID`.`region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
Replace the following:
  • PROJECT_ID : the ID of the project
  • REGION_NAME : the region for your project

For example, `myproject`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION .
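
For instance, the following sketch uses the qualified form in a complete query that counts the jobs created during the last day. The project ID `myproject` and the region `region-us` are placeholders:

SELECT
  COUNT(*) AS job_count
FROM
  `myproject`.`region-us`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 DAY);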

Find top five jobs that scanned the most bytes today

The following example demonstrates how to find the five jobs that scanned the most bytes in an organization for the current day. You can filter further on job_type to query for additional information, such as load, extract, and query jobs, as shown in the sketch after the example output.

SELECT
  job_id,
  user_email,
  total_bytes_billed
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  EXTRACT(DATE FROM creation_time) = CURRENT_DATE()
ORDER BY
  total_bytes_billed DESC
LIMIT 5;

The result is similar to the following:

+--------------+--------------+---------------------------+
| job_id       |  user_email  |  total_bytes_billed       |
+--------------+--------------+---------------------------+
| bquxjob_1    |  abc@xyz.com |    999999                 |
| bquxjob_2    |  def@xyz.com |    888888                 |
| bquxjob_3    |  ghi@xyz.com |    777777                 |
+--------------+--------------+---------------------------+
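
As a variation on the query above, the following sketch breaks down today's usage by job type, which separates load, extract, copy, and query jobs. Replace REGION_NAME with the region for your project:

SELECT
  job_type,
  COUNT(*) AS job_count,
  SUM(total_bytes_billed) AS total_bytes_billed,
  SUM(total_slot_ms) AS total_slot_ms
FROM
  `region-REGION_NAME`.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION
WHERE
  EXTRACT(DATE FROM creation_time) = CURRENT_DATE()
GROUP BY
  job_type
ORDER BY
  total_bytes_billed DESC;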

Aggregate Connected Sheets usage by user at the organization level

The following query provides a summary of the top Connected Sheets users in your organization over the last 30 days, ranked by their total billed data. The query aggregates the total number of queries, total bytes billed, and total slot milliseconds for each user. This information is useful for understanding adoption and for identifying top consumers of resources.

SELECT
  user_email,
  COUNT(*) AS total_queries,
  SUM(total_bytes_billed) AS total_bytes_billed,
  SUM(total_slot_ms) AS total_slot_ms
FROM
  `region-REGION_NAME.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION`
WHERE
  -- Filter for jobs created in the last 30 days
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  -- Filter for jobs originating from Connected Sheets
  AND job_id LIKE 'sheets_dataconnector%'
  -- Filter for completed jobs
  AND state = 'DONE'
  AND (statement_type IS NULL OR statement_type <> 'SCRIPT')
GROUP BY
  1
ORDER BY
  total_bytes_billed DESC;

Replace REGION_NAME with the region for your project. For example, region-us .

The result looks similar to the following:

+---------------------+---------------+--------------------+-----------------+
| user_email          | total_queries | total_bytes_billed | total_slot_ms   |
+---------------------+---------------+--------------------+-----------------+
| alice@example.com   | 152           | 12000000000        | 3500000         |
| bob@example.com     | 45            | 8500000000         | 2100000         |
| charles@example.com | 210           | 1100000000         | 1800000         |
+---------------------+---------------+--------------------+-----------------+

Find job logs of Connected Sheets queries at the organization level

The following query provides a detailed log of every individual query run by Connected Sheets. This information is useful for auditing and identifying specific high-cost queries.

SELECT
  job_id,
  creation_time,
  user_email,
  project_id,
  total_bytes_billed,
  total_slot_ms,
  query
FROM
  `region-REGION_NAME.INFORMATION_SCHEMA.JOBS_BY_ORGANIZATION`
WHERE
  -- Filter for jobs created in the last 30 days
  creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
  -- Filter for jobs originating from Connected Sheets
  AND job_id LIKE 'sheets_dataconnector%'
  -- Filter for completed jobs
  AND state = 'DONE'
  AND (statement_type IS NULL OR statement_type <> 'SCRIPT')
ORDER BY
  creation_time DESC;

Replace REGION_NAME with the region for your project. For example, region-us .

The result looks similar to the following:

+---------------------------------+---------------------------------+-----------------+------------+--------------------+---------------+--------------------------------+
| job_id                          | creation_time                   | user_email      | project_id | total_bytes_billed | total_slot_ms |  query                         |
+---------------------------------+---------------------------------+-----------------+------------+--------------------+---------------+--------------------------------+
| sheets_dataconnector_bquxjob_1  | 2025-11-06 00:26:53.077000 UTC  | abc@example.com | my_project | 12000000000        | 3500000       | SELECT ... FROM dataset.table1 |
| sheets_dataconnector_bquxjob_2  | 2025-11-06 00:24:04.294000 UTC  | xyz@example.com | my_project | 8500000000         | 2100000       | SELECT ... FROM dataset.table2 |
| sheets_dataconnector_bquxjob_3  | 2025-11-03 23:17:25.975000 UTC  | bob@example.com | my_project | 1100000000         | 1800000       | SELECT ... FROM dataset.table3 |
+---------------------------------+---------------------------------+-----------------+------------+--------------------+---------------+--------------------------------+