Tool: get_stream
Get details of the stream specified by the provided resource name parameter.
- The resource name parameter is in the form: projects/{project name}/locations/{location}/streams/{stream name}, for example: projects/my-project/locations/us-central1/streams/my-stream.
The following sample demonstrates how to use curl to invoke the get_stream MCP tool.
Curl Request:

```shell
curl --location 'https://datastream.googleapis.com/mcp' \
  --header 'content-type: application/json' \
  --header 'accept: application/json, text/event-stream' \
  --data '{
    "method": "tools/call",
    "params": {
      "name": "get_stream",
      "arguments": {
        // provide these details according to the tool'\''s MCP specification
      }
    },
    "jsonrpc": "2.0",
    "id": 1
  }'
```
Input Schema
Request message for getting a stream.
GetStreamRequest
| JSON representation |
|---|
| { "name": string } |

| Fields | |
|---|---|
| name | string. Required. The name of the stream resource to get. |
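As a concrete illustration of the input schema, the sketch below builds the JSON-RPC body that the curl sample above sends. The project, location, and stream names are placeholder values, not real resources.

```python
import json

def build_get_stream_request(stream_name: str, request_id: int = 1) -> str:
    """Builds the JSON-RPC body for a get_stream tools/call invocation."""
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "get_stream",
            # Arguments follow the GetStreamRequest input schema: a single
            # required "name" field holding the stream resource name.
            "arguments": {"name": stream_name},
        },
    }
    return json.dumps(payload)

body = build_get_stream_request(
    "projects/my-project/locations/us-central1/streams/my-stream"
)
```

The string returned by this helper is what goes in curl's --data argument.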
Output Schema
A resource representing streaming data from a source to a destination.
Stream
| JSON representation |
|---|
| { "name": string, "createTime": string, "updateTime": string, "labels": { string: string, ... }, "displayName": string, "sourceConfig": { object (SourceConfig) }, "destinationConfig": { object (DestinationConfig) }, "state": enum (State), "errors": [ { object (Error) } ], "lastRecoveryTime": string, "ruleSets": [ { object (RuleSet) } ], "backfillAll": { object (BackfillAllStrategy) }, "backfillNone": { object (BackfillNoneStrategy) }, "customerManagedEncryptionKey": string, "satisfiesPzs": boolean, "satisfiesPzi": boolean } |
| Fields | |
|---|---|
| name | string. Output only. Identifier. The stream's name. |
| createTime | string (Timestamp format). Output only. The creation time of the stream. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30". |
| updateTime | string (Timestamp format). Output only. The last update time of the stream. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30". |
| labels | map (key: string, value: string). Labels. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }. |
| displayName | string. Required. Display name. |
| sourceConfig | object (SourceConfig). Required. Source connection profile configuration. |
| destinationConfig | object (DestinationConfig). Required. Destination connection profile configuration. |
| state | enum (State). The state of the stream. |
| errors[] | object (Error). Output only. Errors on the Stream. |
| lastRecoveryTime | string (Timestamp format). Output only. If the stream was recovered, the time of the last recovery. Note: This field is currently experimental. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30". |
| ruleSets[] | object (RuleSet). Optional. Rule sets to apply to the stream. |
| Union field backfill_strategy. Stream backfill strategy. backfill_strategy can be only one of the following: | |
| backfillAll | object (BackfillAllStrategy). Automatically backfill objects included in the stream source configuration. Specific objects can be excluded. |
| backfillNone | object (BackfillNoneStrategy). Do not automatically backfill any objects. |
| Union field _customer_managed_encryption_key. _customer_managed_encryption_key can be only one of the following: | |
| customerManagedEncryptionKey | string. Immutable. A reference to a KMS encryption key. If provided, it will be used to encrypt the data. If left blank, data will be encrypted using an internal Stream-specific encryption key provisioned through KMS. |
| Union field _satisfies_pzs. _satisfies_pzs can be only one of the following: | |
| satisfiesPzs | boolean. Output only. Reserved for future use. |
| Union field _satisfies_pzi. _satisfies_pzi can be only one of the following: | |
| satisfiesPzi | boolean. Output only. Reserved for future use. |
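A few of these fields are easiest to see in action on a response. The sketch below walks a hypothetical Stream resource (field names from the schema above; every value is invented) and determines which member of the backfill_strategy union is set:

```python
# A hypothetical Stream resource as it might appear in a get_stream
# response; field names follow the output schema, values are invented.
stream = {
    "name": "projects/my-project/locations/us-central1/streams/my-stream",
    "displayName": "my stream",
    "state": "RUNNING",
    "backfillAll": {"oracleExcludedObjects": {"oracleSchemas": []}},
}

# backfill_strategy is a union: a stream carries at most one of these keys.
strategy = next(
    (key for key in ("backfillAll", "backfillNone") if key in stream), None
)
```

The same pattern applies to the other union fields on this page: inspect which single key of the union is present.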
Timestamp
| JSON representation |
|---|
| { "seconds": string, "nanos": integer } |

| Fields | |
|---|---|
| seconds | string. Represents seconds of UTC time since Unix epoch 1970-01-01T00:00:00Z. Must be between -62135596800 and 253402300799 inclusive (which corresponds to 0001-01-01T00:00:00Z to 9999-12-31T23:59:59Z). |
| nanos | integer. Non-negative fractions of a second at nanosecond resolution. This field is the nanosecond portion of the duration, not an alternative to seconds. Negative second values with fractions must still have non-negative nanos values that count forward in time. Must be between 0 and 999,999,999 inclusive. |
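The seconds/nanos pair maps onto the RFC 3339 strings shown throughout this page. A minimal sketch of that rendering (Z-normalized, with 0, 3, 6 or 9 fractional digits, as described above):

```python
from datetime import datetime, timezone

def timestamp_to_rfc3339(ts: dict) -> str:
    """Renders a {seconds, nanos} Timestamp as a Z-normalized RFC 3339 string."""
    seconds = int(ts.get("seconds", "0"))
    nanos = int(ts.get("nanos", 0))
    base = datetime.fromtimestamp(seconds, tz=timezone.utc)
    text = base.strftime("%Y-%m-%dT%H:%M:%S")
    if nanos:
        # Generated output uses the shortest of 3, 6 or 9 fractional digits
        # that represents the nanos value exactly.
        for digits in (3, 6, 9):
            if nanos % (10 ** (9 - digits)) == 0:
                text += f".{nanos // (10 ** (9 - digits)):0{digits}d}"
                break
    return text + "Z"
```

For example, seconds "1412262083" with nanos 45123456 renders as "2014-10-02T15:01:23.045123456Z", matching the examples above.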
LabelsEntry
| JSON representation |
|---|
| { "key": string, "value": string } |

| Fields | |
|---|---|
| key | string |
| value | string |
SourceConfig
| JSON representation |
|---|
| { "sourceConnectionProfile": string, // Union field source_stream_config can be only one of the following: "oracleSourceConfig": { object (OracleSourceConfig) }, "mysqlSourceConfig": { object (MysqlSourceConfig) }, "postgresqlSourceConfig": { object (PostgresqlSourceConfig) }, "sqlServerSourceConfig": { object (SqlServerSourceConfig) }, "salesforceSourceConfig": { object (SalesforceSourceConfig) }, "mongodbSourceConfig": { object (MongodbSourceConfig) }, "spannerSourceConfig": { object (SpannerSourceConfig) } // End of list of possible types for union field source_stream_config. } |

| Fields | |
|---|---|
| sourceConnectionProfile | string. Required. Source connection profile resource. Format: projects/{project}/locations/{location}/connectionProfiles/{name} |
| Union field source_stream_config. Stream configuration that is specific to the data source type. source_stream_config can be only one of the following: | |
| oracleSourceConfig | object (OracleSourceConfig). Oracle data source configuration. |
| mysqlSourceConfig | object (MysqlSourceConfig). MySQL data source configuration. |
| postgresqlSourceConfig | object (PostgresqlSourceConfig). PostgreSQL data source configuration. |
| sqlServerSourceConfig | object (SqlServerSourceConfig). SQLServer data source configuration. |
| salesforceSourceConfig | object (SalesforceSourceConfig). Salesforce data source configuration. |
| mongodbSourceConfig | object (MongodbSourceConfig). MongoDB data source configuration. |
| spannerSourceConfig | object (SpannerSourceConfig). Spanner data source configuration. |
OracleSourceConfig
| JSON representation |
|---|
| { "includeObjects": { object (OracleRdbms) }, "excludeObjects": { object (OracleRdbms) }, "maxConcurrentCdcTasks": integer, "maxConcurrentBackfillTasks": integer, // Union field large_objects_handling can be only one of the following: "dropLargeObjects": { object (DropLargeObjects) }, "streamLargeObjects": { object (StreamLargeObjects) }, // End of list of possible types for union field large_objects_handling. // Union field cdc_method can be only one of the following: "logMiner": { object (LogMiner) }, "binaryLogParser": { object (BinaryLogParser) } // End of list of possible types for union field cdc_method. } |

| Fields | |
|---|---|
| includeObjects | object (OracleRdbms). The Oracle objects to include in the stream. |
| excludeObjects | object (OracleRdbms). The Oracle objects to exclude from the stream. |
| maxConcurrentCdcTasks | integer. Maximum number of concurrent CDC tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used. |
| maxConcurrentBackfillTasks | integer. Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used. |
| Union field large_objects_handling. The configuration for handling Oracle large objects. large_objects_handling can be only one of the following: | |
| dropLargeObjects | object (DropLargeObjects). Drop large object values. |
| streamLargeObjects | object (StreamLargeObjects). Stream large object values. |
| Union field cdc_method. Configuration to select the CDC method. cdc_method can be only one of the following: | |
| logMiner | object (LogMiner). Use LogMiner. |
| binaryLogParser | object (BinaryLogParser). Use Binary Log Parser. |
OracleRdbms
| JSON representation |
|---|
| { "oracleSchemas": [ { object (OracleSchema) } ] } |

| Fields | |
|---|---|
| oracleSchemas[] | object (OracleSchema). Oracle schemas/databases in the database server. |

OracleSchema
| JSON representation |
|---|
| { "schema": string, "oracleTables": [ { object (OracleTable) } ] } |

| Fields | |
|---|---|
| schema | string. The schema name. |
| oracleTables[] | object (OracleTable). Tables in the schema. |

OracleTable
| JSON representation |
|---|
| { "table": string, "oracleColumns": [ { object (OracleColumn) } ] } |

| Fields | |
|---|---|
| table | string. The table name. |
| oracleColumns[] | object (OracleColumn). Oracle columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. |

OracleColumn
| JSON representation |
|---|
| { "column": string, "dataType": string, "length": integer, "precision": integer, "scale": integer, "encoding": string, "primaryKey": boolean, "nullable": boolean, "ordinalPosition": integer } |

| Fields | |
|---|---|
| column | string. The column name. |
| dataType | string. The Oracle data type. |
| length | integer. Column length. |
| precision | integer. Column precision. |
| scale | integer. Column scale. |
| encoding | string. Column encoding. |
| primaryKey | boolean. Whether or not the column represents a primary key. |
| nullable | boolean. Whether or not the column can accept a null value. |
| ordinalPosition | integer. The ordinal position of the column in the table. |
BinaryLogParser
| JSON representation |
|---|
| { // Union field log_file_access can be only one of the following: "oracleAsmLogFileAccess": { object (OracleAsmLogFileAccess) }, "logFileDirectories": { object (LogFileDirectories) } // End of list of possible types for union field log_file_access. } |

| Fields | |
|---|---|
| Union field log_file_access. Configuration to specify how the log file should be accessed. log_file_access can be only one of the following: | |
| oracleAsmLogFileAccess | object (OracleAsmLogFileAccess). Use Oracle ASM. |
| logFileDirectories | object (LogFileDirectories). Use Oracle directories. |

LogFileDirectories
| JSON representation |
|---|
| { "onlineLogDirectory": string, "archivedLogDirectory": string } |

| Fields | |
|---|---|
| onlineLogDirectory | string. Required. Oracle directory for online logs. |
| archivedLogDirectory | string. Required. Oracle directory for archived logs. |
MysqlSourceConfig
| JSON representation |
|---|
| { "includeObjects": { object (MysqlRdbms) }, "excludeObjects": { object (MysqlRdbms) }, "maxConcurrentCdcTasks": integer, "maxConcurrentBackfillTasks": integer, // Union field cdc_method can be only one of the following: "binaryLogPosition": { object (BinaryLogPosition) }, "gtid": { object (Gtid) } // End of list of possible types for union field cdc_method. } |

| Fields | |
|---|---|
| includeObjects | object (MysqlRdbms). The MySQL objects to retrieve from the source. |
| excludeObjects | object (MysqlRdbms). The MySQL objects to exclude from the stream. |
| maxConcurrentCdcTasks | integer. Maximum number of concurrent CDC tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used. |
| maxConcurrentBackfillTasks | integer. Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used. |
| Union field cdc_method. The CDC method to use for the stream. cdc_method can be only one of the following: | |
| binaryLogPosition | object (BinaryLogPosition). Use binary log position based replication. |
| gtid | object (Gtid). Use GTID based replication. |
MysqlRdbms
| JSON representation |
|---|
| { "mysqlDatabases": [ { object (MysqlDatabase) } ] } |

| Fields | |
|---|---|
| mysqlDatabases[] | object (MysqlDatabase). MySQL databases on the server. |

MysqlDatabase
| JSON representation |
|---|
| { "database": string, "mysqlTables": [ { object (MysqlTable) } ] } |

| Fields | |
|---|---|
| database | string. The database name. |
| mysqlTables[] | object (MysqlTable). Tables in the database. |

MysqlTable
| JSON representation |
|---|
| { "table": string, "mysqlColumns": [ { object (MysqlColumn) } ] } |

| Fields | |
|---|---|
| table | string. The table name. |
| mysqlColumns[] | object (MysqlColumn). MySQL columns in the database. When unspecified as part of include/exclude objects, includes/excludes everything. |

MysqlColumn
| JSON representation |
|---|
| { "column": string, "dataType": string, "length": integer, "collation": string, "primaryKey": boolean, "nullable": boolean, "ordinalPosition": integer, "precision": integer, "scale": integer } |

| Fields | |
|---|---|
| column | string. The column name. |
| dataType | string. The MySQL data type. Full data types list can be found here: https://dev.mysql.com/doc/refman/8.0/en/data-types.html |
| length | integer. Column length. |
| collation | string. Column collation. |
| primaryKey | boolean. Whether or not the column represents a primary key. |
| nullable | boolean. Whether or not the column can accept a null value. |
| ordinalPosition | integer. The ordinal position of the column in the table. |
| precision | integer. Column precision. |
| scale | integer. Column scale. |
PostgresqlSourceConfig
| JSON representation |
|---|
| { "includeObjects": { object (PostgresqlRdbms) }, "excludeObjects": { object (PostgresqlRdbms) }, "replicationSlot": string, "publication": string, "maxConcurrentBackfillTasks": integer } |

| Fields | |
|---|---|
| includeObjects | object (PostgresqlRdbms). The PostgreSQL objects to include in the stream. |
| excludeObjects | object (PostgresqlRdbms). The PostgreSQL objects to exclude from the stream. |
| replicationSlot | string. Required. Immutable. The name of the logical replication slot that's configured with the pgoutput plugin. |
| publication | string. Required. The name of the publication that includes the set of all tables that are defined in the stream's include_objects. |
| maxConcurrentBackfillTasks | integer. Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used. |
PostgresqlRdbms
| JSON representation |
|---|
| { "postgresqlSchemas": [ { object (PostgresqlSchema) } ] } |

| Fields | |
|---|---|
| postgresqlSchemas[] | object (PostgresqlSchema). PostgreSQL schemas in the database server. |

PostgresqlSchema
| JSON representation |
|---|
| { "schema": string, "postgresqlTables": [ { object (PostgresqlTable) } ] } |

| Fields | |
|---|---|
| schema | string. The schema name. |
| postgresqlTables[] | object (PostgresqlTable). Tables in the schema. |

PostgresqlTable
| JSON representation |
|---|
| { "table": string, "postgresqlColumns": [ { object (PostgresqlColumn) } ] } |

| Fields | |
|---|---|
| table | string. The table name. |
| postgresqlColumns[] | object (PostgresqlColumn). PostgreSQL columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. |

PostgresqlColumn
| JSON representation |
|---|
| { "column": string, "dataType": string, "length": integer, "precision": integer, "scale": integer, "primaryKey": boolean, "nullable": boolean, "ordinalPosition": integer } |

| Fields | |
|---|---|
| column | string. The column name. |
| dataType | string. The PostgreSQL data type. |
| length | integer. Column length. |
| precision | integer. Column precision. |
| scale | integer. Column scale. |
| primaryKey | boolean. Whether or not the column represents a primary key. |
| nullable | boolean. Whether or not the column can accept a null value. |
| ordinalPosition | integer. The ordinal position of the column in the table. |
SqlServerSourceConfig
| JSON representation |
|---|
| { "includeObjects": { object (SqlServerRdbms) }, "excludeObjects": { object (SqlServerRdbms) }, "maxConcurrentCdcTasks": integer, "maxConcurrentBackfillTasks": integer, // Union field cdc_method can be only one of the following: "transactionLogs": { object (SqlServerTransactionLogs) }, "changeTables": { object (SqlServerChangeTables) } // End of list of possible types for union field cdc_method. } |

| Fields | |
|---|---|
| includeObjects | object (SqlServerRdbms). The SQLServer objects to include in the stream. |
| excludeObjects | object (SqlServerRdbms). The SQLServer objects to exclude from the stream. |
| maxConcurrentCdcTasks | integer. Max concurrent CDC tasks. |
| maxConcurrentBackfillTasks | integer. Max concurrent backfill tasks. |
| Union field cdc_method. Configuration to select the CDC read method for the stream. cdc_method can be only one of the following: | |
| transactionLogs | object (SqlServerTransactionLogs). CDC reader reads from transaction logs. |
| changeTables | object (SqlServerChangeTables). CDC reader reads from change tables. |
SqlServerRdbms
| JSON representation |
|---|
| { "schemas": [ { object (SqlServerSchema) } ] } |

| Fields | |
|---|---|
| schemas[] | object (SqlServerSchema). SQLServer schemas in the database server. |

SqlServerSchema
| JSON representation |
|---|
| { "schema": string, "tables": [ { object (SqlServerTable) } ] } |

| Fields | |
|---|---|
| schema | string. The schema name. |
| tables[] | object (SqlServerTable). Tables in the schema. |

SqlServerTable
| JSON representation |
|---|
| { "table": string, "columns": [ { object (SqlServerColumn) } ] } |

| Fields | |
|---|---|
| table | string. The table name. |
| columns[] | object (SqlServerColumn). SQLServer columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything. |

SqlServerColumn
| JSON representation |
|---|
| { "column": string, "dataType": string, "length": integer, "precision": integer, "scale": integer, "primaryKey": boolean, "nullable": boolean, "ordinalPosition": integer } |

| Fields | |
|---|---|
| column | string. The column name. |
| dataType | string. The SQLServer data type. |
| length | integer. Column length. |
| precision | integer. Column precision. |
| scale | integer. Column scale. |
| primaryKey | boolean. Whether or not the column represents a primary key. |
| nullable | boolean. Whether or not the column can accept a null value. |
| ordinalPosition | integer. The ordinal position of the column in the table. |
SalesforceSourceConfig
| JSON representation |
|---|
| { "includeObjects": { object (SalesforceOrg) }, "excludeObjects": { object (SalesforceOrg) }, "pollingInterval": string } |

| Fields | |
|---|---|
| includeObjects | object (SalesforceOrg). The Salesforce objects to retrieve from the source. |
| excludeObjects | object (SalesforceOrg). The Salesforce objects to exclude from the stream. |
| pollingInterval | string (Duration format). Required. Salesforce objects polling interval. The interval at which new changes will be polled for each object. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |

SalesforceOrg
| JSON representation |
|---|
| { "objects": [ { object (SalesforceObject) } ] } |

| Fields | |
|---|---|
| objects[] | object (SalesforceObject). Salesforce objects in the database server. |

SalesforceObject
| JSON representation |
|---|
| { "objectName": string, "fields": [ { object (SalesforceField) } ] } |

| Fields | |
|---|---|
| objectName | string. The object name. |
| fields[] | object (SalesforceField). Salesforce fields. When unspecified as part of include objects, includes everything; when unspecified as part of exclude objects, excludes nothing. |

SalesforceField
| JSON representation |
|---|
| { "name": string, "dataType": string, "nillable": boolean } |

| Fields | |
|---|---|
| name | string. The field name. |
| dataType | string. The data type. |
| nillable | boolean. Indicates whether the field can accept nil values. |
Duration
| JSON representation |
|---|
| { "seconds": string, "nanos": integer } |

| Fields | |
|---|---|
| seconds | string. Signed seconds of the span of time. Must be from -315,576,000,000 to +315,576,000,000 inclusive. Note: these bounds are computed from: 60 sec/min * 60 min/hr * 24 hr/day * 365.25 days/year * 10000 years |
| nanos | integer. Signed fractions of a second at nanosecond resolution of the span of time. Durations less than one second are represented with a 0 seconds field and a positive or negative nanos field. For durations of one second or more, a non-zero value for the nanos field must be of the same sign as the seconds field. Must be from -999,999,999 to +999,999,999 inclusive. |
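Elsewhere on this page, Duration values appear in their JSON string form, for example "3.5s". A small sketch of how a seconds/nanos pair maps to that form:

```python
def duration_to_string(duration: dict) -> str:
    """Renders a {seconds, nanos} Duration in the "3.5s" JSON string form."""
    seconds = int(duration.get("seconds", "0"))
    nanos = int(duration.get("nanos", 0))
    # Combine, then trim trailing zeros from the nine fractional digits.
    text = f"{seconds + nanos / 1e9:.9f}".rstrip("0").rstrip(".")
    return text + "s"
```

So a fileRotationInterval of { "seconds": "60" } would be rendered as "60s".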
MongodbSourceConfig
| JSON representation |
|---|
| { "includeObjects": { object (MongodbCluster) }, "excludeObjects": { object (MongodbCluster) }, "maxConcurrentBackfillTasks": integer, "jsonMode": enum } |

| Fields | |
|---|---|
| includeObjects | object (MongodbCluster). The MongoDB collections to include in the stream. |
| excludeObjects | object (MongodbCluster). The MongoDB collections to exclude from the stream. |
| maxConcurrentBackfillTasks | integer. Optional. Maximum number of concurrent backfill tasks. The number should be non-negative and less than or equal to 50. If not set (or set to 0), the system's default value is used. |
| jsonMode | enum. Optional. MongoDB JSON mode to use for the stream. |
MongodbCluster
| JSON representation |
|---|
| { "databases": [ { object (MongodbDatabase) } ] } |

| Fields | |
|---|---|
| databases[] | object (MongodbDatabase). MongoDB databases in the cluster. |

MongodbDatabase
| JSON representation |
|---|
| { "database": string, "collections": [ { object (MongodbCollection) } ] } |

| Fields | |
|---|---|
| database | string. The database name. |
| collections[] | object (MongodbCollection). Collections in the database. |

MongodbCollection
| JSON representation |
|---|
| { "collection": string, "fields": [ { object (MongodbField) } ] } |

| Fields | |
|---|---|
| collection | string. The collection name. |
| fields[] | object (MongodbField). Fields in the collection. |

MongodbField
| JSON representation |
|---|
| { "field": string } |

| Fields | |
|---|---|
| field | string. The field name. |
SpannerSourceConfig
| JSON representation |
|---|
| { "changeStreamName": string, "spannerRpcPriority": enum, "fgacRole": string, "maxConcurrentCdcTasks": integer, "maxConcurrentBackfillTasks": integer, "includeObjects": { object (SpannerDatabase) }, "excludeObjects": { object (SpannerDatabase) }, "backfillDataBoostEnabled": boolean } |

| Fields | |
|---|---|
| changeStreamName | string. Required. Immutable. The change stream name to use for the stream. |
| spannerRpcPriority | enum. Optional. The RPC priority to use for the stream. |
| fgacRole | string. Optional. The FGAC role to use for the stream. |
| maxConcurrentCdcTasks | integer. Optional. Maximum number of concurrent CDC tasks. |
| maxConcurrentBackfillTasks | integer. Optional. Maximum number of concurrent backfill tasks. |
| includeObjects | object (SpannerDatabase). Optional. The Spanner objects to retrieve from the data source. If some objects are both included and excluded, an error will be thrown. |
| excludeObjects | object (SpannerDatabase). Optional. The Spanner objects to avoid retrieving. If some objects are both included and excluded, an error will be thrown. |
| backfillDataBoostEnabled | boolean. Optional. Whether to use Data Boost for Spanner backfills. Defaults to false if not set. |
SpannerDatabase
| JSON representation |
|---|
| { "schemas": [ { object (SpannerSchema) } ] } |

| Fields | |
|---|---|
| schemas[] | object (SpannerSchema). Optional. Spanner schemas in the database. |

SpannerSchema
| JSON representation |
|---|
| { "schema": string, "tables": [ { object (SpannerTable) } ] } |

| Fields | |
|---|---|
| schema | string. Required. The schema name. |
| tables[] | object (SpannerTable). Optional. Spanner tables in the schema. |

SpannerTable
| JSON representation |
|---|
| { "table": string, "columns": [ { object (SpannerColumn) } ] } |

| Fields | |
|---|---|
| table | string. Required. The table name. |
| columns[] | object (SpannerColumn). Optional. Spanner columns in the table. |

SpannerColumn
| JSON representation |
|---|
| { "column": string, "dataType": string, "isPrimaryKey": boolean, "ordinalPosition": string } |

| Fields | |
|---|---|
| column | string. Required. The column name. |
| dataType | string. Optional. Spanner data type. |
| isPrimaryKey | boolean. Optional. Whether or not the column is a primary key. |
| ordinalPosition | string. Optional. The ordinal position of the column in the table. |
DestinationConfig
| JSON representation |
|---|
| { "destinationConnectionProfile": string, // Union field destination_stream_config can be only one of the following: "gcsDestinationConfig": { object (GcsDestinationConfig) }, "bigqueryDestinationConfig": { object (BigQueryDestinationConfig) } // End of list of possible types for union field destination_stream_config. } |

| Fields | |
|---|---|
| destinationConnectionProfile | string. Required. Destination connection profile resource. Format: projects/{project}/locations/{location}/connectionProfiles/{name} |
| Union field destination_stream_config. Stream configuration that is specific to the data destination type. destination_stream_config can be only one of the following: | |
| gcsDestinationConfig | object (GcsDestinationConfig). A configuration for how data should be loaded to Cloud Storage. |
| bigqueryDestinationConfig | object (BigQueryDestinationConfig). BigQuery destination configuration. |
GcsDestinationConfig
| JSON representation |
|---|
| { "path": string, "fileRotationMb": integer, "fileRotationInterval": string, // Union field file_format can be only one of the following: "avroFileFormat": { object (AvroFileFormat) }, "jsonFileFormat": { object (JsonFileFormat) } // End of list of possible types for union field file_format. } |

| Fields | |
|---|---|
| path | string. Path inside the Cloud Storage bucket to write data to. |
| fileRotationMb | integer. The maximum file size to be saved in the bucket. |
| fileRotationInterval | string (Duration format). The maximum duration for which new events are added before a file is closed and a new file is created. Values within the range of 15-60 seconds are allowed. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
| Union field file_format. File format that the data should be written in. file_format can be only one of the following: | |
| avroFileFormat | object (AvroFileFormat). AVRO file format configuration. |
| jsonFileFormat | object (JsonFileFormat). JSON file format configuration. |

JsonFileFormat
| JSON representation |
|---|
| { "schemaFileFormat": enum, "compression": enum } |

| Fields | |
|---|---|
| schemaFileFormat | enum. The schema file format along with JSON data files. |
| compression | enum. Compression of the loaded JSON file. |
BigQueryDestinationConfig
| JSON representation |
|---|
| { "dataFreshness": string, "blmtConfig": { object (BlmtConfig) }, // Union field dataset_config can be only one of the following: "singleTargetDataset": { object (SingleTargetDataset) }, "sourceHierarchyDatasets": { object (SourceHierarchyDatasets) }, // End of list of possible types for union field dataset_config. // Union field write_mode can be only one of the following: "merge": { object (Merge) }, "appendOnly": { object (AppendOnly) } // End of list of possible types for union field write_mode. } |

| Fields | |
|---|---|
| dataFreshness | string (Duration format). The guaranteed data freshness (in seconds) when querying tables created by the stream. Editing this field will only affect new tables created in the future; existing tables will not be impacted. Lower values mean that queries will return fresher data, but may result in higher cost. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s". |
| blmtConfig | object (BlmtConfig). Optional. BigLake Managed Tables (BLMT) configuration. |
| Union field dataset_config. Target dataset(s) configuration. dataset_config can be only one of the following: | |
| singleTargetDataset | object (SingleTargetDataset). Single destination dataset. |
| sourceHierarchyDatasets | object (SourceHierarchyDatasets). Source hierarchy datasets. |
| Union field write_mode. write_mode can be only one of the following: | |
| merge | object (Merge). The standard mode. |
| appendOnly | object (AppendOnly). Append-only mode. |
SingleTargetDataset
| JSON representation |
|---|
| { "datasetId": string } |

| Fields | |
|---|---|
| datasetId | string. The dataset ID of the target dataset. Dataset ID allowed characters: https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#datasetreference |
SourceHierarchyDatasets
| JSON representation |
|---|
| { "datasetTemplate": { object (DatasetTemplate) }, "projectId": string } |

| Fields | |
|---|---|
| datasetTemplate | object (DatasetTemplate). The dataset template to use for dynamic dataset creation. |
| Union field _project_id. _project_id can be only one of the following: | |
| projectId | string. Optional. The project ID of the BigQuery dataset. If not specified, the project will be inferred from the stream resource. |
DatasetTemplate
| JSON representation |
|---|
| { "location": string, "datasetIdPrefix": string, "kmsKeyName": string } |

| Fields | |
|---|---|
| location | string. Required. The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations. |
| datasetIdPrefix | string. If supplied, every created dataset will have its name prefixed by the provided value. The prefix and name will be separated by an underscore. |
| kmsKeyName | string. Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key, i.e. projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}. See https://cloud.google.com/bigquery/docs/customer-managed-encryption for more information. |
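As a small illustration of the datasetIdPrefix behavior described above, the helper below is hypothetical (not part of the API); it simply mirrors the documented rule that the prefix and dataset name are joined by an underscore:

```python
def prefixed_dataset_id(dataset_id_prefix: str, dataset_name: str) -> str:
    """Hypothetical helper: applies the documented prefix_underscore_name rule."""
    if not dataset_id_prefix:
        # No prefix supplied: the dataset name is used as-is.
        return dataset_name
    return f"{dataset_id_prefix}_{dataset_name}"
```

For example, a prefix of "datastream" and a source database named "inventory" would yield a dataset named "datastream_inventory".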
BlmtConfig
| JSON representation |
|---|
| { "bucket": string, "rootPath": string, "connectionName": string, "fileFormat": enum, "tableFormat": enum } |

| Fields | |
|---|---|
| bucket | string. Required. The Cloud Storage bucket name. |
| rootPath | string. The root path inside the Cloud Storage bucket. |
| connectionName | string. Required. The BigQuery connection. Format: |
| fileFormat | enum. Required. The file format. |
| tableFormat | enum. Required. The table format. |
BackfillAllStrategy
| JSON representation |
|---|
| { // Union field excluded_objects can be only one of the following: "oracleExcludedObjects": { object (OracleRdbms) }, "mysqlExcludedObjects": { object (MysqlRdbms) }, "postgresqlExcludedObjects": { object (PostgresqlRdbms) }, "sqlServerExcludedObjects": { object (SqlServerRdbms) }, "salesforceExcludedObjects": { object (SalesforceOrg) }, "mongodbExcludedObjects": { object (MongodbCluster) }, "spannerExcludedObjects": { object (SpannerDatabase) } // End of list of possible types for union field excluded_objects. } |

| Fields | |
|---|---|
| Union field excluded_objects. List of objects to exclude. excluded_objects can be only one of the following: | |
| oracleExcludedObjects | object (OracleRdbms). Oracle data source objects to avoid backfilling. |
| mysqlExcludedObjects | object (MysqlRdbms). MySQL data source objects to avoid backfilling. |
| postgresqlExcludedObjects | object (PostgresqlRdbms). PostgreSQL data source objects to avoid backfilling. |
| sqlServerExcludedObjects | object (SqlServerRdbms). SQLServer data source objects to avoid backfilling. |
| salesforceExcludedObjects | object (SalesforceOrg). Salesforce data source objects to avoid backfilling. |
| mongodbExcludedObjects | object (MongodbCluster). MongoDB data source objects to avoid backfilling. |
| spannerExcludedObjects | object (SpannerDatabase). Spanner data source objects to avoid backfilling. |
Error
| JSON representation |
|---|
| { "reason": string, "errorUuid": string, "message": string, "errorTime": string, "details": { string: string, ... } } |

| Fields | |
|---|---|
| reason | string. A title that explains the reason for the error. |
| errorUuid | string. A unique identifier for this specific error, allowing it to be traced throughout the system in logs and API responses. |
| message | string. A message containing more information about the error that occurred. |
| errorTime | string (Timestamp format). The time when the error occurred. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30". |
| details | map (key: string, value: string). Additional information about the error. An object containing a list of "key": value pairs. |

DetailsEntry
| JSON representation |
|---|
| { "key": string, "value": string } |

| Fields | |
|---|---|
| key | string |
| value | string |
RuleSet
| JSON representation |
|---|
| { "customizationRules": [ { object (CustomizationRule) } ], "objectFilter": { object (ObjectFilter) } } |

| Fields | |
|---|---|
| customizationRules[] | object (CustomizationRule). Required. List of customization rules to apply. |
| objectFilter | object (ObjectFilter). Required. Object filter to apply the customization rules to. |

CustomizationRule
| JSON representation |
|---|
| { // Union field rule can be only one of the following: "bigqueryPartitioning": { object (BigQueryPartitioning) }, "bigqueryClustering": { object (BigQueryClustering) } // End of list of possible types for union field rule. } |

| Fields | |
|---|---|
| Union field rule. The rule to apply. rule can be only one of the following: | |
| bigqueryPartitioning | object (BigQueryPartitioning). BigQuery partitioning rule. |
| bigqueryClustering | object (BigQueryClustering). BigQuery clustering rule. |
BigQueryPartitioning
| JSON representation |
|---|
| { "requirePartitionFilter": boolean, // Union field partitioning can be only one of the following: "integerRangePartition": { object (IntegerRangePartition) }, "timeUnitPartition": { object (TimeUnitPartition) }, "ingestionTimePartition": { object (IngestionTimePartition) } // End of list of possible types for union field partitioning. } |

| Fields | |
|---|---|
| requirePartitionFilter | boolean. Optional. If true, queries over the table require a partition filter. |
| Union field partitioning. Partitioning to apply on the table. partitioning can be only one of the following: | |
| integerRangePartition | object (IntegerRangePartition). Integer range partitioning. |
| timeUnitPartition | object (TimeUnitPartition). Time unit column partitioning. |
| ingestionTimePartition | object (IngestionTimePartition). Ingestion time partitioning. |

IntegerRangePartition
| JSON representation |
|---|
| { "column": string, "start": string, "end": string, "interval": string } |

| Fields | |
|---|---|
| column | string. Required. The partitioning column. |
| start | string. Required. The starting value for range partitioning (inclusive). |
| end | string. Required. The ending value for range partitioning (exclusive). |
| interval | string. Required. The interval of each range within the partition. |

TimeUnitPartition
| JSON representation |
|---|
| { "column": string, "partitioningTimeGranularity": enum } |

| Fields | |
|---|---|
| column | string. Required. The partitioning column. |
| partitioningTimeGranularity | enum. Optional. Partition granularity. |

IngestionTimePartition
| JSON representation |
|---|
| { "partitioningTimeGranularity": enum } |

| Fields | |
|---|---|
| partitioningTimeGranularity | enum. Optional. Partition granularity. |

BigQueryClustering
| JSON representation |
|---|
| { "columns": [ string ] } |

| Fields | |
|---|---|
| columns[] | string. Required. Column names to set as clustering columns. |
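Putting the rule types together, the sketch below shows a hypothetical RuleSet (database, table, and column names are invented) and checks the documented union constraint that each CustomizationRule carries exactly one rule:

```python
# A hypothetical RuleSet; the structure follows the RuleSet,
# CustomizationRule, BigQueryPartitioning and BigQueryClustering
# schemas above, but every value is invented.
rule_set = {
    "customizationRules": [
        {
            "bigqueryPartitioning": {
                "requirePartitionFilter": True,
                "timeUnitPartition": {"column": "created_at"},
            }
        },
        {"bigqueryClustering": {"columns": ["customer_id", "region"]}},
    ],
    "objectFilter": {
        "sourceObjectIdentifier": {
            "mysqlIdentifier": {"database": "inventory", "table": "orders"}
        }
    },
}

# The rule union allows exactly one member per customization rule.
rule_kinds = [next(iter(rule)) for rule in rule_set["customizationRules"]]
```

The objectFilter scopes both rules to a single source table, identified here through the mysqlIdentifier member of the SourceObjectIdentifier union described below.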
ObjectFilter
| JSON representation |
|---|
| { // Union field filter_type can be only one of the following: "sourceObjectIdentifier": { object (SourceObjectIdentifier) } // End of list of possible types for union field filter_type. } |

| Fields | |
|---|---|
| Union field filter_type. The filter to apply. filter_type can be only one of the following: | |
| sourceObjectIdentifier | object (SourceObjectIdentifier). Specific source object identifier. |

SourceObjectIdentifier
| JSON representation |
|---|
| { // Union field source_identifier can be only one of the following: "oracleIdentifier": { object (OracleObjectIdentifier) }, "mysqlIdentifier": { object (MysqlObjectIdentifier) }, "postgresqlIdentifier": { object (PostgresqlObjectIdentifier) }, "sqlServerIdentifier": { object (SqlServerObjectIdentifier) }, "salesforceIdentifier": { object (SalesforceObjectIdentifier) }, "mongodbIdentifier": { object (MongodbObjectIdentifier) }, "spannerIdentifier": { object (SpannerObjectIdentifier) } // End of list of possible types for union field source_identifier. } |

| Fields | |
|---|---|
| Union field source_identifier. The identifier for an object in the data source. source_identifier can be only one of the following: | |
| oracleIdentifier | object (OracleObjectIdentifier). Oracle data source object identifier. |
| mysqlIdentifier | object (MysqlObjectIdentifier). MySQL data source object identifier. |
| postgresqlIdentifier | object (PostgresqlObjectIdentifier). PostgreSQL data source object identifier. |
| sqlServerIdentifier | object (SqlServerObjectIdentifier). SQLServer data source object identifier. |
| salesforceIdentifier | object (SalesforceObjectIdentifier). Salesforce data source object identifier. |
| mongodbIdentifier | object (MongodbObjectIdentifier). MongoDB data source object identifier. |
| spannerIdentifier | object (SpannerObjectIdentifier). Spanner data source object identifier. |
OracleObjectIdentifier
| JSON representation |
|---|
| { "schema": string, "table": string } |

| Fields | |
|---|---|
| schema | string. Required. The schema name. |
| table | string. Required. The table name. |

MysqlObjectIdentifier
| JSON representation |
|---|
| { "database": string, "table": string } |

| Fields | |
|---|---|
| database | string. Required. The database name. |
| table | string. Required. The table name. |

PostgresqlObjectIdentifier
| JSON representation |
|---|
| { "schema": string, "table": string } |

| Fields | |
|---|---|
| schema | string. Required. The schema name. |
| table | string. Required. The table name. |

SqlServerObjectIdentifier
| JSON representation |
|---|
| { "schema": string, "table": string } |

| Fields | |
|---|---|
| schema | string. Required. The schema name. |
| table | string. Required. The table name. |

SalesforceObjectIdentifier
| JSON representation |
|---|
| { "objectName": string } |

| Fields | |
|---|---|
| objectName | string. Required. The object name. |

MongodbObjectIdentifier
| JSON representation |
|---|
| { "database": string, "collection": string } |

| Fields | |
|---|---|
| database | string. Required. The database name. |
| collection | string. Required. The collection name. |

SpannerObjectIdentifier
| JSON representation |
|---|
| { "schema": string, "table": string } |

| Fields | |
|---|---|
| schema | string. Optional. The schema name. |
| table | string. Required. The table name. |
Tool Annotations
Destructive Hint: ❌ | Idempotent Hint: ✅ | Read Only Hint: ✅ | Open World Hint: ❌

