- Resource: Stream
- SourceConfig
- OracleSourceConfig
- OracleRdbms
- OracleSchema
- OracleTable
- OracleColumn
- DropLargeObjects
- StreamLargeObjects
- LogMiner
- BinaryLogParser
- OracleAsmLogFileAccess
- LogFileDirectories
- MysqlSourceConfig
- MysqlRdbms
- MysqlDatabase
- MysqlTable
- MysqlColumn
- BinaryLogPosition
- Gtid
- PostgresqlSourceConfig
- PostgresqlRdbms
- PostgresqlSchema
- PostgresqlTable
- PostgresqlColumn
- SqlServerSourceConfig
- SqlServerRdbms
- SqlServerSchema
- SqlServerTable
- SqlServerColumn
- SqlServerTransactionLogs
- SqlServerChangeTables
- SalesforceSourceConfig
- SalesforceOrg
- SalesforceObject
- SalesforceField
- MongodbSourceConfig
- MongodbCluster
- MongodbDatabase
- MongodbCollection
- MongodbField
- DestinationConfig
- GcsDestinationConfig
- AvroFileFormat
- JsonFileFormat
- SchemaFileFormat
- JsonCompression
- BigQueryDestinationConfig
- SingleTargetDataset
- SourceHierarchyDatasets
- DatasetTemplate
- BlmtConfig
- FileFormat
- TableFormat
- Merge
- AppendOnly
- State
- BackfillAllStrategy
- BackfillNoneStrategy
- Methods
Resource: Stream
A resource representing streaming data from a source to a destination.
JSON representation
{
  "name": string,
  "createTime": string,
  "updateTime": string,
  "labels": { string: string, ... },
  "displayName": string,
  "sourceConfig": { object (SourceConfig) },
  "destinationConfig": { object (DestinationConfig) },
  "state": enum (State),
  "errors": [ { object (Error) } ],
  "lastRecoveryTime": string,
  "customerManagedEncryptionKey": string,
  "satisfiesPzs": boolean,
  "satisfiesPzi": boolean,

  // Union field backfill_strategy can be only one of the following:
  "backfillAll": { object (BackfillAllStrategy) },
  "backfillNone": { object (BackfillNoneStrategy) }
  // End of list of possible types for union field backfill_strategy.
}
Fields
name
string
Output only. Identifier. The stream's name.
createTime
string (Timestamp format)
Output only. The creation time of the stream. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
updateTime
string (Timestamp format)
Output only. The last update time of the stream. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
labels
map (key: string, value: string)
Labels. An object containing a list of "key": value pairs. Example: { "name": "wrench", "mass": "1.3kg", "count": "3" }.
displayName
string
Required. Display name.
sourceConfig
object (SourceConfig)
Required. Source connection profile configuration.
destinationConfig
object (DestinationConfig)
Required. Destination connection profile configuration.
state
enum (State)
The state of the stream.
errors[]
object (Error)
Output only. Errors on the Stream.
lastRecoveryTime
string (Timestamp format)
Output only. If the stream was recovered, the time of the last recovery. Note: This field is currently experimental. Uses RFC 3339, where generated output will always be Z-normalized and use 0, 3, 6 or 9 fractional digits. Offsets other than "Z" are also accepted. Examples: "2014-10-02T15:01:23Z", "2014-10-02T15:01:23.045123456Z" or "2014-10-02T15:01:23+05:30".
Union field backfill_strategy. Stream backfill strategy. backfill_strategy can be only one of the following:
backfillAll
object (BackfillAllStrategy)
Automatically backfill objects included in the stream source configuration. Specific objects can be excluded.
backfillNone
object (BackfillNoneStrategy)
Do not automatically backfill any objects.
customerManagedEncryptionKey
string
Immutable. A reference to a KMS encryption key. If provided, it will be used to encrypt the data. If left blank, data will be encrypted using an internal Stream-specific encryption key provisioned through KMS.
satisfiesPzs
boolean
Output only. Reserved for future use.
satisfiesPzi
boolean
Output only. Reserved for future use.
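Putting these fields together, a minimal stream body might look like the following sketch. The project, connection profile, and dataset names are hypothetical placeholders, not values from this reference; note that exactly one of backfillAll or backfillNone may be set.

```json
{
  "displayName": "orders-to-bq",
  "sourceConfig": {
    "sourceConnectionProfile": "projects/my-project/locations/us-central1/connectionProfiles/mysql-cp",
    "mysqlSourceConfig": {}
  },
  "destinationConfig": {
    "destinationConnectionProfile": "projects/my-project/locations/us-central1/connectionProfiles/bq-cp",
    "bigqueryDestinationConfig": {
      "singleTargetDataset": { "datasetId": "orders_dataset" }
    }
  },
  "backfillAll": {}
}
```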
SourceConfig
The configuration of the stream source.
JSON representation
{
  "sourceConnectionProfile": string,

  // Union field source_stream_config can be only one of the following:
  "oracleSourceConfig": { object (OracleSourceConfig) },
  "mysqlSourceConfig": { object (MysqlSourceConfig) },
  "postgresqlSourceConfig": { object (PostgresqlSourceConfig) },
  "sqlServerSourceConfig": { object (SqlServerSourceConfig) },
  "salesforceSourceConfig": { object (SalesforceSourceConfig) },
  "mongodbSourceConfig": { object (MongodbSourceConfig) }
  // End of list of possible types for union field source_stream_config.
}
Fields
sourceConnectionProfile
string
Required. Source connection profile resource. Format: projects/{project}/locations/{location}/connectionProfiles/{name}
Union field source_stream_config. Stream configuration that is specific to the data source type. source_stream_config can be only one of the following:
oracleSourceConfig
object (OracleSourceConfig)
Oracle data source configuration.
mysqlSourceConfig
object (MysqlSourceConfig)
MySQL data source configuration.
postgresqlSourceConfig
object (PostgresqlSourceConfig)
PostgreSQL data source configuration.
sqlServerSourceConfig
object (SqlServerSourceConfig)
SQLServer data source configuration.
salesforceSourceConfig
object (SalesforceSourceConfig)
Salesforce data source configuration.
mongodbSourceConfig
object (MongodbSourceConfig)
MongoDB data source configuration.
OracleSourceConfig
Oracle data source configuration
JSON representation
{
  "includeObjects": { object (OracleRdbms) },
  "excludeObjects": { object (OracleRdbms) },
  "maxConcurrentCdcTasks": integer,
  "maxConcurrentBackfillTasks": integer,

  // Union field large_objects_handling can be only one of the following:
  "dropLargeObjects": { object (DropLargeObjects) },
  "streamLargeObjects": { object (StreamLargeObjects) },
  // End of list of possible types for union field large_objects_handling.

  // Union field cdc_method can be only one of the following:
  "logMiner": { object (LogMiner) },
  "binaryLogParser": { object (BinaryLogParser) }
  // End of list of possible types for union field cdc_method.
}
Fields
includeObjects
object (OracleRdbms)
Oracle objects to include in the stream.
excludeObjects
object (OracleRdbms)
Oracle objects to exclude from the stream.
maxConcurrentCdcTasks
integer
Maximum number of concurrent CDC tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used.
maxConcurrentBackfillTasks
integer
Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value is used.
Union field large_objects_handling. The configuration for handling Oracle large objects. large_objects_handling can be only one of the following:
dropLargeObjects
object (DropLargeObjects)
Drop large object values.
streamLargeObjects
object (StreamLargeObjects)
Stream large object values.
Union field cdc_method. Configuration to select the CDC method. cdc_method can be only one of the following:
logMiner
object (LogMiner)
Use LogMiner.
binaryLogParser
object (BinaryLogParser)
Use Binary Log Parser.
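For illustration, an oracleSourceConfig that includes a single schema, streams large object values, and uses the LogMiner CDC method might be sketched as follows (the schema and table names are hypothetical):

```json
{
  "includeObjects": {
    "oracleSchemas": [
      {
        "schema": "HR",
        "oracleTables": [ { "table": "EMPLOYEES" } ]
      }
    ]
  },
  "streamLargeObjects": {},
  "logMiner": {}
}
```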
OracleRdbms
Oracle database structure.
JSON representation
{
  "oracleSchemas": [
    { object (OracleSchema) }
  ]
}
Fields
oracleSchemas[]
object (OracleSchema)
Oracle schemas/databases in the database server.
OracleSchema
Oracle schema.
JSON representation
{
  "schema": string,
  "oracleTables": [
    { object (OracleTable) }
  ]
}
Fields
schema
string
Schema name.
oracleTables[]
object (OracleTable)
Tables in the schema.
OracleTable
Oracle table.
JSON representation
{
  "table": string,
  "oracleColumns": [
    { object (OracleColumn) }
  ]
}
Fields
table
string
Table name.
oracleColumns[]
object (OracleColumn)
Oracle columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything.
OracleColumn
Oracle Column.
JSON representation
{
  "column": string,
  "dataType": string,
  "length": integer,
  "precision": integer,
  "scale": integer,
  "encoding": string,
  "primaryKey": boolean,
  "nullable": boolean,
  "ordinalPosition": integer
}
Fields
column
string
Column name.
dataType
string
The Oracle data type.
length
integer
Column length.
precision
integer
Column precision.
scale
integer
Column scale.
encoding
string
Column encoding.
primaryKey
boolean
Whether or not the column represents a primary key.
nullable
boolean
Whether or not the column can accept a null value.
ordinalPosition
integer
The ordinal position of the column in the table.
DropLargeObjects
This type has no fields.
Configuration to drop large object values.
StreamLargeObjects
This type has no fields.
Configuration to stream large object values.
LogMiner
This type has no fields.
Configuration to use LogMiner CDC method.
BinaryLogParser
Configuration to use Binary Log Parser CDC technique.
JSON representation
{

  // Union field log_file_access can be only one of the following:
  "oracleAsmLogFileAccess": { object (OracleAsmLogFileAccess) },
  "logFileDirectories": { object (LogFileDirectories) }
  // End of list of possible types for union field log_file_access.
}
Union field log_file_access. Configuration to specify how the log file should be accessed. log_file_access can be only one of the following:
oracleAsmLogFileAccess
object (OracleAsmLogFileAccess)
Use Oracle ASM.
logFileDirectories
object (LogFileDirectories)
Use Oracle directories.
OracleAsmLogFileAccess
This type has no fields.
Configuration to use Oracle ASM to access the log files.
LogFileDirectories
Configuration to specify the Oracle directories to access the log files.
JSON representation
{
  "onlineLogDirectory": string,
  "archivedLogDirectory": string
}
Fields
onlineLogDirectory
string
Required. Oracle directory for online logs.
archivedLogDirectory
string
Required. Oracle directory for archived logs.
MysqlSourceConfig
MySQL source configuration
JSON representation
{
  "includeObjects": { object (MysqlRdbms) },
  "excludeObjects": { object (MysqlRdbms) },
  "maxConcurrentCdcTasks": integer,
  "maxConcurrentBackfillTasks": integer,

  // Union field cdc_method can be only one of the following:
  "binaryLogPosition": { object (BinaryLogPosition) },
  "gtid": { object (Gtid) }
  // End of list of possible types for union field cdc_method.
}
Fields
includeObjects
object (MysqlRdbms)
MySQL objects to retrieve from the source.
excludeObjects
object (MysqlRdbms)
MySQL objects to exclude from the stream.
maxConcurrentCdcTasks
integer
Maximum number of concurrent CDC tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used.
maxConcurrentBackfillTasks
integer
Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used.
Union field cdc_method. The CDC method to use for the stream. cdc_method can be only one of the following:
binaryLogPosition
object (BinaryLogPosition)
Use binary log position based replication.
gtid
object (Gtid)
Use GTID based replication.
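As a sketch, a mysqlSourceConfig that streams one table from one database using GTID-based replication could look like this (the database and table names are hypothetical):

```json
{
  "includeObjects": {
    "mysqlDatabases": [
      {
        "database": "sales",
        "mysqlTables": [ { "table": "orders" } ]
      }
    ]
  },
  "maxConcurrentBackfillTasks": 4,
  "gtid": {}
}
```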
MysqlRdbms
MySQL database structure
JSON representation
{
  "mysqlDatabases": [
    { object (MysqlDatabase) }
  ]
}
Fields
mysqlDatabases[]
object (MysqlDatabase)
MySQL databases on the server.
MysqlDatabase
MySQL database.
JSON representation
{
  "database": string,
  "mysqlTables": [
    { object (MysqlTable) }
  ]
}
Fields
database
string
Database name.
mysqlTables[]
object (MysqlTable)
Tables in the database.
MysqlTable
MySQL table.
JSON representation
{
  "table": string,
  "mysqlColumns": [
    { object (MysqlColumn) }
  ]
}
Fields
table
string
Table name.
mysqlColumns[]
object (MysqlColumn)
MySQL columns in the database. When unspecified as part of include/exclude objects, includes/excludes everything.
MysqlColumn
MySQL Column.
JSON representation
{
  "column": string,
  "dataType": string,
  "length": integer,
  "collation": string,
  "primaryKey": boolean,
  "nullable": boolean,
  "ordinalPosition": integer,
  "precision": integer,
  "scale": integer
}
Fields
column
string
Column name.
dataType
string
The MySQL data type. The full list of data types can be found at: https://dev.mysql.com/doc/refman/8.0/en/data-types.html
length
integer
Column length.
collation
string
Column collation.
primaryKey
boolean
Whether or not the column represents a primary key.
nullable
boolean
Whether or not the column can accept a null value.
ordinalPosition
integer
The ordinal position of the column in the table.
precision
integer
Column precision.
scale
integer
Column scale.
BinaryLogPosition
This type has no fields.
Use Binary log position based replication.
Gtid
This type has no fields.
Use GTID based replication.
PostgresqlSourceConfig
PostgreSQL data source configuration
JSON representation
{
  "includeObjects": { object (PostgresqlRdbms) },
  "excludeObjects": { object (PostgresqlRdbms) },
  "replicationSlot": string,
  "publication": string,
  "maxConcurrentBackfillTasks": integer
}
Fields
includeObjects
object (PostgresqlRdbms)
PostgreSQL objects to include in the stream.
excludeObjects
object (PostgresqlRdbms)
PostgreSQL objects to exclude from the stream.
replicationSlot
string
Required. Immutable. The name of the logical replication slot that's configured with the pgoutput plugin.
publication
string
Required. The name of the publication that includes the set of all tables that are defined in the stream's includeObjects.
maxConcurrentBackfillTasks
integer
Maximum number of concurrent backfill tasks. The number should be non-negative. If not set (or set to 0), the system's default value will be used.
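As a sketch, a postgresqlSourceConfig could look like the following. The slot, publication, schema, and table names are hypothetical placeholders; the slot and publication must already exist on the source database:

```json
{
  "includeObjects": {
    "postgresqlSchemas": [
      {
        "schema": "public",
        "postgresqlTables": [ { "table": "orders" } ]
      }
    ]
  },
  "replicationSlot": "datastream_slot",
  "publication": "datastream_publication"
}
```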
PostgresqlRdbms
PostgreSQL database structure.
JSON representation
{
  "postgresqlSchemas": [
    { object (PostgresqlSchema) }
  ]
}
Fields
postgresqlSchemas[]
object (PostgresqlSchema)
PostgreSQL schemas in the database server.
PostgresqlSchema
PostgreSQL schema.
JSON representation
{
  "schema": string,
  "postgresqlTables": [
    { object (PostgresqlTable) }
  ]
}
Fields
schema
string
Schema name.
postgresqlTables[]
object (PostgresqlTable)
Tables in the schema.
PostgresqlTable
PostgreSQL table.
JSON representation
{
  "table": string,
  "postgresqlColumns": [
    { object (PostgresqlColumn) }
  ]
}
Fields
table
string
Table name.
postgresqlColumns[]
object (PostgresqlColumn)
PostgreSQL columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything.
PostgresqlColumn
PostgreSQL Column.
JSON representation
{
  "column": string,
  "dataType": string,
  "length": integer,
  "precision": integer,
  "scale": integer,
  "primaryKey": boolean,
  "nullable": boolean,
  "ordinalPosition": integer
}
Fields
column
string
Column name.
dataType
string
The PostgreSQL data type.
length
integer
Column length.
precision
integer
Column precision.
scale
integer
Column scale.
primaryKey
boolean
Whether or not the column represents a primary key.
nullable
boolean
Whether or not the column can accept a null value.
ordinalPosition
integer
The ordinal position of the column in the table.
SqlServerSourceConfig
SQLServer data source configuration
JSON representation
{
  "includeObjects": { object (SqlServerRdbms) },
  "excludeObjects": { object (SqlServerRdbms) },
  "maxConcurrentCdcTasks": integer,
  "maxConcurrentBackfillTasks": integer,

  // Union field cdc_method can be only one of the following:
  "transactionLogs": { object (SqlServerTransactionLogs) },
  "changeTables": { object (SqlServerChangeTables) }
  // End of list of possible types for union field cdc_method.
}
Fields
includeObjects
object (SqlServerRdbms)
SQLServer objects to include in the stream.
excludeObjects
object (SqlServerRdbms)
SQLServer objects to exclude from the stream.
maxConcurrentCdcTasks
integer
Maximum number of concurrent CDC tasks.
maxConcurrentBackfillTasks
integer
Maximum number of concurrent backfill tasks.
Union field cdc_method. Configuration to select the CDC read method for the stream. cdc_method can be only one of the following:
transactionLogs
object (SqlServerTransactionLogs)
CDC reader reads from transaction logs.
changeTables
object (SqlServerChangeTables)
CDC reader reads from change tables.
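As a sketch, a sqlServerSourceConfig that reads changes from change tables could look like this (the schema and table names are hypothetical):

```json
{
  "includeObjects": {
    "schemas": [
      {
        "schema": "dbo",
        "tables": [ { "table": "orders" } ]
      }
    ]
  },
  "changeTables": {}
}
```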
SqlServerRdbms
SQLServer database structure.
JSON representation
{
  "schemas": [
    { object (SqlServerSchema) }
  ]
}
Fields
schemas[]
object (SqlServerSchema)
SQLServer schemas in the database server.
SqlServerSchema
SQLServer schema.
JSON representation
{
  "schema": string,
  "tables": [
    { object (SqlServerTable) }
  ]
}
Fields
schema
string
Schema name.
tables[]
object (SqlServerTable)
Tables in the schema.
SqlServerTable
SQLServer table.
JSON representation
{
  "table": string,
  "columns": [
    { object (SqlServerColumn) }
  ]
}
Fields
table
string
Table name.
columns[]
object (SqlServerColumn)
SQLServer columns in the schema. When unspecified as part of include/exclude objects, includes/excludes everything.
SqlServerColumn
SQLServer Column.
JSON representation
{
  "column": string,
  "dataType": string,
  "length": integer,
  "precision": integer,
  "scale": integer,
  "primaryKey": boolean,
  "nullable": boolean,
  "ordinalPosition": integer
}
Fields
column
string
Column name.
dataType
string
The SQLServer data type.
length
integer
Column length.
precision
integer
Column precision.
scale
integer
Column scale.
primaryKey
boolean
Whether or not the column represents a primary key.
nullable
boolean
Whether or not the column can accept a null value.
ordinalPosition
integer
The ordinal position of the column in the table.
SqlServerTransactionLogs
This type has no fields.
Configuration to use Transaction Logs CDC read method.
SqlServerChangeTables
This type has no fields.
Configuration to use Change Tables CDC read method.
SalesforceSourceConfig
Salesforce source configuration
JSON representation
{
  "includeObjects": { object (SalesforceOrg) },
  "excludeObjects": { object (SalesforceOrg) },
  "pollingInterval": string
}
Fields
includeObjects
object (SalesforceOrg)
Salesforce objects to retrieve from the source.
excludeObjects
object (SalesforceOrg)
Salesforce objects to exclude from the stream.
pollingInterval
string (Duration format)
Required. Salesforce objects polling interval. The interval at which new changes will be polled for each object. The duration must be between 5 minutes and 24 hours. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
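As a sketch, a salesforceSourceConfig that polls one object every 5 minutes (300s, the minimum allowed interval) could look like this; the object and field names are hypothetical:

```json
{
  "includeObjects": {
    "objects": [
      {
        "objectName": "Account",
        "fields": [ { "name": "Name" } ]
      }
    ]
  },
  "pollingInterval": "300s"
}
```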
SalesforceOrg
Salesforce organization structure.
JSON representation
{
  "objects": [
    { object (SalesforceObject) }
  ]
}
Fields
objects[]
object (SalesforceObject)
Salesforce objects in the database server.
SalesforceObject
Salesforce object.
JSON representation
{
  "objectName": string,
  "fields": [
    { object (SalesforceField) }
  ]
}
Fields
objectName
string
Object name.
fields[]
object (SalesforceField)
Salesforce fields. When unspecified as part of include objects, includes everything; when unspecified as part of exclude objects, excludes nothing.
SalesforceField
Salesforce field.
JSON representation
{
  "name": string,
  "dataType": string,
  "nillable": boolean
}
Fields
name
string
Field name.
dataType
string
The data type.
nillable
boolean
Indicates whether the field can accept nil values.
MongodbSourceConfig
MongoDB source configuration.
JSON representation
{
  "includeObjects": { object (MongodbCluster) },
  "excludeObjects": { object (MongodbCluster) },
  "maxConcurrentBackfillTasks": integer
}
Fields
includeObjects
object (MongodbCluster)
MongoDB collections to include in the stream.
excludeObjects
object (MongodbCluster)
MongoDB collections to exclude from the stream.
maxConcurrentBackfillTasks
integer
Optional. Maximum number of concurrent backfill tasks. The number should be non-negative and less than or equal to 50. If not set (or set to 0), the system's default value is used.
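As a sketch, a mongodbSourceConfig that includes one collection and caps backfill concurrency could look like this (the database, collection, and field names are hypothetical):

```json
{
  "includeObjects": {
    "databases": [
      {
        "database": "sales",
        "collections": [
          {
            "collection": "orders",
            "fields": [ { "field": "customer_id" } ]
          }
        ]
      }
    ]
  },
  "maxConcurrentBackfillTasks": 10
}
```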
MongodbCluster
MongoDB Cluster structure.
JSON representation
{
  "databases": [
    { object (MongodbDatabase) }
  ]
}
Fields
databases[]
object (MongodbDatabase)
MongoDB databases in the cluster.
MongodbDatabase
MongoDB Database.
JSON representation
{
  "database": string,
  "collections": [
    { object (MongodbCollection) }
  ]
}
Fields
database
string
Database name.
collections[]
object (MongodbCollection)
Collections in the database.
MongodbCollection
MongoDB Collection.
JSON representation
{
  "collection": string,
  "fields": [
    { object (MongodbField) }
  ]
}
Fields
collection
string
Collection name.
fields[]
object (MongodbField)
Fields in the collection.
MongodbField
MongoDB Field.
JSON representation
{
  "field": string
}
Fields
field
string
Field name.
DestinationConfig
The configuration of the stream destination.
JSON representation
{
  "destinationConnectionProfile": string,

  // Union field destination_stream_config can be only one of the following:
  "gcsDestinationConfig": { object (GcsDestinationConfig) },
  "bigqueryDestinationConfig": { object (BigQueryDestinationConfig) }
  // End of list of possible types for union field destination_stream_config.
}
Fields
destinationConnectionProfile
string
Required. Destination connection profile resource. Format: projects/{project}/locations/{location}/connectionProfiles/{name}
Union field destination_stream_config. Stream configuration that is specific to the data destination type. destination_stream_config can be only one of the following:
gcsDestinationConfig
object (GcsDestinationConfig)
A configuration for how data should be loaded to Cloud Storage.
bigqueryDestinationConfig
object (BigQueryDestinationConfig)
BigQuery destination configuration.
GcsDestinationConfig
Google Cloud Storage destination configuration
JSON representation
{
  "path": string,
  "fileRotationMb": integer,
  "fileRotationInterval": string,

  // Union field file_format can be only one of the following:
  "avroFileFormat": { object (AvroFileFormat) },
  "jsonFileFormat": { object (JsonFileFormat) }
  // End of list of possible types for union field file_format.
}
Fields
path
string
Path inside the Cloud Storage bucket to write data to.
fileRotationMb
integer
The maximum file size to be saved in the bucket.
fileRotationInterval
string (Duration format)
The maximum duration for which new events are added before a file is closed and a new file is created. Values within the range of 15-60 seconds are allowed. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
Union field file_format. File format that the data should be written in. file_format can be only one of the following:
avroFileFormat
object (AvroFileFormat)
AVRO file format configuration.
jsonFileFormat
object (JsonFileFormat)
JSON file format configuration.
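As a sketch, a gcsDestinationConfig that writes gzipped JSON files with Avro schema files, rotating at 100 MB or every 60 seconds (the maximum allowed interval), could look like this; the path is a hypothetical placeholder:

```json
{
  "path": "/orders/",
  "fileRotationMb": 100,
  "fileRotationInterval": "60s",
  "jsonFileFormat": {
    "schemaFileFormat": "AVRO_SCHEMA_FILE",
    "compression": "GZIP"
  }
}
```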
AvroFileFormat
This type has no fields.
AVRO file format configuration.
JsonFileFormat
JSON file format configuration.
JSON representation
{
  "schemaFileFormat": enum (SchemaFileFormat),
  "compression": enum (JsonCompression)
}
Fields
schemaFileFormat
enum (SchemaFileFormat)
The schema file format to write along with the JSON data files.
compression
enum (JsonCompression)
Compression of the loaded JSON file.
SchemaFileFormat
Schema file format.
Enums
SCHEMA_FILE_FORMAT_UNSPECIFIED
Unspecified schema file format.
NO_SCHEMA_FILE
Do not attach schema file.
AVRO_SCHEMA_FILE
Avro schema format.
JsonCompression
JSON file compression.
Enums
JSON_COMPRESSION_UNSPECIFIED
Unspecified JSON file compression.
NO_COMPRESSION
Do not compress the JSON file.
GZIP
Gzip compression.
BigQueryDestinationConfig
BigQuery destination configuration
JSON representation
{
  "dataFreshness": string,
  "blmtConfig": { object (BlmtConfig) },

  // Union field dataset_config can be only one of the following:
  "singleTargetDataset": { object (SingleTargetDataset) },
  "sourceHierarchyDatasets": { object (SourceHierarchyDatasets) },
  // End of list of possible types for union field dataset_config.

  // Union field write_mode can be only one of the following:
  "merge": { object (Merge) },
  "appendOnly": { object (AppendOnly) }
  // End of list of possible types for union field write_mode.
}
Fields
dataFreshness
string (Duration format)
The guaranteed data freshness (in seconds) when querying tables created by the stream. Editing this field will only affect new tables created in the future; existing tables will not be impacted. Lower values mean that queries will return fresher data, but may result in higher cost. A duration in seconds with up to nine fractional digits, ending with 's'. Example: "3.5s".
blmtConfig
object (BlmtConfig)
Optional. BigLake Managed Tables (BLMT) configuration.
Union field dataset_config. Target dataset(s) configuration. dataset_config can be only one of the following:
singleTargetDataset
object (SingleTargetDataset)
Single destination dataset.
sourceHierarchyDatasets
object (SourceHierarchyDatasets)
Source hierarchy datasets.
Union field write_mode. write_mode can be only one of the following:
merge
object (Merge)
The standard mode.
appendOnly
object (AppendOnly)
Append-only mode.
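As a sketch, a bigqueryDestinationConfig that creates one dataset per source schema and merges changes could look like this; the location and prefix are hypothetical placeholders:

```json
{
  "dataFreshness": "900s",
  "sourceHierarchyDatasets": {
    "datasetTemplate": {
      "location": "us-central1",
      "datasetIdPrefix": "datastream"
    }
  },
  "merge": {}
}
```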
SingleTargetDataset
A single target dataset to which all data will be streamed.
JSON representation
{
  "datasetId": string
}
Fields
datasetId
string
The dataset ID of the target dataset. For the characters allowed in dataset IDs, see: https://cloud.google.com/bigquery/docs/reference/rest/v2/datasets#datasetreference
SourceHierarchyDatasets
Destination datasets are created so that hierarchy of the destination data objects matches the source hierarchy.
JSON representation
{
  "datasetTemplate": { object (DatasetTemplate) },
  "projectId": string
}
Fields
datasetTemplate
object (DatasetTemplate)
The dataset template to use for dynamic dataset creation.
projectId
string
Optional. The project id of the BigQuery dataset. If not specified, the project will be inferred from the stream resource.
DatasetTemplate
Dataset template used for dynamic dataset creation.
JSON representation
{
  "location": string,
  "datasetIdPrefix": string,
  "kmsKeyName": string
}
Fields
location
string
Required. The geographic location where the dataset should reside. See https://cloud.google.com/bigquery/docs/locations for supported locations.
datasetIdPrefix
string
If supplied, every created dataset will have its name prefixed by the provided value. The prefix and name will be separated by an underscore.
kmsKeyName
string
Describes the Cloud KMS encryption key that will be used to protect the destination BigQuery table. The BigQuery Service Account associated with your project requires access to this encryption key. Format: projects/{project}/locations/{location}/keyRings/{key_ring}/cryptoKeys/{cryptoKey}. See https://cloud.google.com/bigquery/docs/customer-managed-encryption for more information.
BlmtConfig
The configuration for BLMT.
JSON representation
{
  "bucket": string,
  "rootPath": string,
  "connectionName": string,
  "fileFormat": enum (FileFormat),
  "tableFormat": enum (TableFormat)
}
Fields
bucket
string
Required. The Cloud Storage bucket name.
rootPath
string
The root path inside the Cloud Storage bucket.
connectionName
string
Required. The BigQuery connection.
fileFormat
enum (FileFormat)
Required. The file format.
tableFormat
enum (TableFormat)
Required. The table format.
FileFormat
Supported file formats for BigLake managed tables.
Enums
FILE_FORMAT_UNSPECIFIED
Default value.
PARQUET
Parquet file format.
TableFormat
Supported table formats for BigLake managed tables.
Enums
TABLE_FORMAT_UNSPECIFIED
Default value.
ICEBERG
Iceberg table format.
Merge
This type has no fields.
In Merge mode, all changes to a table are merged at the destination table.
AppendOnly
This type has no fields.
In AppendOnly mode, all changes to a table are written to the destination table.
State
Stream state.
Enums
STATE_UNSPECIFIED
Unspecified stream state.
NOT_STARTED
The stream has been created but has not yet started streaming data.
RUNNING
The stream is running.
PAUSED
The stream is paused.
MAINTENANCE
The stream is in maintenance mode. Updates are rejected on the resource in this state.
FAILED
The stream is experiencing an error that is preventing data from being streamed.
FAILED_PERMANENTLY
The stream has experienced a terminal failure.
STARTING
The stream is starting, but not yet running.
DRAINING
The stream is no longer reading new events, but is still writing the events remaining in its buffer.
BackfillAllStrategy
Backfill strategy to automatically backfill the Stream's objects. Specific objects can be excluded.
JSON representation
{

  // Union field excluded_objects can be only one of the following:
  "oracleExcludedObjects": { object (OracleRdbms) },
  "mysqlExcludedObjects": { object (MysqlRdbms) },
  "postgresqlExcludedObjects": { object (PostgresqlRdbms) },
  "sqlServerExcludedObjects": { object (SqlServerRdbms) },
  "salesforceExcludedObjects": { object (SalesforceOrg) },
  "mongodbExcludedObjects": { object (MongodbCluster) }
  // End of list of possible types for union field excluded_objects.
}
Union field excluded_objects. List of objects to exclude. excluded_objects can be only one of the following:
oracleExcludedObjects
object (OracleRdbms)
Oracle data source objects to avoid backfilling.
mysqlExcludedObjects
object (MysqlRdbms)
MySQL data source objects to avoid backfilling.
postgresqlExcludedObjects
object (PostgresqlRdbms)
PostgreSQL data source objects to avoid backfilling.
sqlServerExcludedObjects
object (SqlServerRdbms)
SQLServer data source objects to avoid backfilling.
salesforceExcludedObjects
object (SalesforceOrg)
Salesforce data source objects to avoid backfilling.
mongodbExcludedObjects
object (MongodbCluster)
MongoDB data source objects to avoid backfilling.
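As a sketch, a stream's backfillAll strategy that backfills everything except one MySQL table could look like this (the database and table names are hypothetical):

```json
{
  "backfillAll": {
    "mysqlExcludedObjects": {
      "mysqlDatabases": [
        {
          "database": "sales",
          "mysqlTables": [ { "table": "audit_log" } ]
        }
      ]
    }
  }
}
```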
BackfillNoneStrategy
This type has no fields.
Backfill strategy to disable automatic backfill for the Stream's objects.
Methods
create
Use this method to create a stream.
delete
Use this method to delete a stream.
get
Use this method to get details about a stream.
list
Use this method to list streams in a project and location.
patch
Use this method to update the configuration of a stream.
run
Use this method to start, resume or recover a stream with a non-default CDC strategy.