The AstraDB to BigQuery template is a batch pipeline that reads records from AstraDB and writes them to BigQuery.
If the destination table doesn't exist in BigQuery, the pipeline creates a table with the following values:
- The Dataset ID, which is inherited from the Cassandra keyspace.
- The Table ID, which is inherited from the Cassandra table.
The schema of the destination table is inferred from the source Cassandra table.
- List and Set are mapped to BigQuery REPEATED fields.
- Map is mapped to BigQuery RECORD fields.
- All other types are mapped to BigQuery fields with the corresponding types.
- Cassandra user-defined types (UDTs) and tuple data types are not supported.
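To make these mapping rules concrete, here is a minimal sketch in Python. The helper, its scalar-type table, and the REPEATED mode for Map are illustrative assumptions, not part of the template:

```python
# Hypothetical illustration of the type-mapping rules above; this helper
# and its type tables are not part of the template itself.

def map_cassandra_type(cql_type: str):
    """Return a (BigQuery type, mode) pair for a simple CQL type name."""
    scalar = {
        "text": ("STRING", "NULLABLE"),
        "int": ("INT64", "NULLABLE"),
        "bigint": ("INT64", "NULLABLE"),
        "boolean": ("BOOL", "NULLABLE"),
        "timestamp": ("TIMESTAMP", "NULLABLE"),
    }
    if cql_type.startswith(("list<", "set<")):
        # List and Set become REPEATED fields of the element type.
        inner = cql_type.split("<", 1)[1].rstrip(">")
        return (scalar[inner][0], "REPEATED")
    if cql_type.startswith("map<"):
        # Map becomes a RECORD field (assumed here to be a repeated
        # key/value record).
        return ("RECORD", "REPEATED")
    # All other simple types map to the corresponding BigQuery type.
    return scalar[cql_type]

print(map_cassandra_type("set<int>"))        # ('INT64', 'REPEATED')
print(map_cassandra_type("map<text, int>"))  # ('RECORD', 'REPEATED')
print(map_cassandra_type("text"))            # ('STRING', 'NULLABLE')
```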
Pipeline requirements
- AstraDB account with a token
Template parameters
Required parameters
- astraToken: The token value or secret resource ID. For example, AstraCS:abcdefghij.
- astraDatabaseId: The database unique identifier (UUID). For example, cf7af129-d33a-498f-ad06-d97a6ee6eb7.
- astraKeyspace: The name of the Cassandra keyspace inside of the Astra database.
- astraTable: The name of the table inside of the Cassandra database. For example, my_table.
Optional parameters
- astraQuery: The query to use to filter rows instead of reading the whole table.
- astraDatabaseRegion: The Astra database region. If not provided, a default is chosen; specifying the region explicitly is useful with multi-region databases.
- minTokenRangesCount: The minimum number of splits to use to distribute the query.
- outputTableSpec: The BigQuery table location to write the output to. Use the format <PROJECT_ID>:<DATASET_NAME>.<TABLE_NAME>. The table's schema must match the input objects.
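The outputTableSpec format can be checked before launching a job. This regex-based validator is an illustrative sketch, not part of the template:

```python
import re

# Illustrative check for the <PROJECT_ID>:<DATASET_NAME>.<TABLE_NAME>
# format; this helper is not part of the template.
TABLE_SPEC = re.compile(r"^[\w.-]+:\w+\.\w+$")

def is_valid_table_spec(spec: str) -> bool:
    """Return True if spec looks like PROJECT:DATASET.TABLE."""
    return TABLE_SPEC.match(spec) is not None

print(is_valid_table_spec("my-project:my_dataset.my_table"))  # True
print(is_valid_table_spec("my_dataset.my_table"))             # False (no project)
```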
Run the template
Console
- Go to the Dataflow Create job from template page.
- In the Job name field, enter a unique job name.
- Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1. For a list of regions where you can run a Dataflow job, see Dataflow locations.
- From the Dataflow template drop-down menu, select the AstraDB to BigQuery template.
- In the provided parameter fields, enter your parameter values.
- Click Run job.
gcloud
In your shell or terminal, run the template:
```shell
gcloud dataflow flex-template run JOB_NAME \
    --template-file-gcs-location=gs://dataflow-templates-REGION_NAME/VERSION/flex/AstraDB_To_BigQuery \
    --project=PROJECT_ID \
    --region=REGION_NAME \
    --parameters \
       astraToken=ASTRA_TOKEN,\
       astraDatabaseId=ASTRA_DATABASE_ID,\
       astraKeyspace=ASTRA_KEYSPACE,\
       astraTable=ASTRA_TABLE
```
Replace the following:
- JOB_NAME: a unique job name of your choice
- VERSION: the version of the template that you want to use

  You can use the following values:
  - latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- REGION_NAME: the region where you want to deploy your Dataflow job, for example us-central1
- ASTRA_TOKEN: the Astra token
- ASTRA_DATABASE_ID: the database identifier
- ASTRA_KEYSPACE: the Cassandra keyspace
- ASTRA_TABLE: the Cassandra table
API
To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.locations.flexTemplates.launch.
```
POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/flexTemplates:launch
{
  "launchParameter": {
    "jobName": "JOB_NAME",
    "parameters": {
      "astraToken": "ASTRA_TOKEN",
      "astraDatabaseId": "ASTRA_DATABASE_ID",
      "astraKeyspace": "ASTRA_KEYSPACE",
      "astraTable": "ASTRA_TABLE"
    },
    "containerSpecGcsPath": "gs://dataflow-templates-LOCATION/VERSION/flex/AstraDB_To_BigQuery",
    "environment": { "maxWorkers": "10" }
  }
}
```
Replace the following:
- PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
- JOB_NAME: a unique job name of your choice
- VERSION: the version of the template that you want to use

  You can use the following values:
  - latest to use the latest version of the template, which is available in the non-dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/latest/
  - the version name, like 2023-09-12-00_RC00, to use a specific version of the template, which is nested in the respective dated parent folder in the bucket: gs://dataflow-templates-REGION_NAME/
- LOCATION: the region where you want to deploy your Dataflow job, for example us-central1
- ASTRA_TOKEN: the Astra token
- ASTRA_DATABASE_ID: the database identifier
- ASTRA_KEYSPACE: the Cassandra keyspace
- ASTRA_TABLE: the Cassandra table
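The same launch request can be assembled from code. This sketch only builds the request body and target URL shown above; the job name and placeholder values are examples, and actually sending the POST requires an authorized HTTP client (for example, one built with the google-auth library), which is omitted here:

```python
import json

# Example placeholder values; substitute your own project and region.
project_id = "PROJECT_ID"
location = "us-central1"

url = (f"https://dataflow.googleapis.com/v1b3/projects/{project_id}"
       f"/locations/{location}/flexTemplates:launch")

# Request body mirroring the REST example above.
body = {
    "launchParameter": {
        "jobName": "astradb-to-bigquery-example",
        "parameters": {
            "astraToken": "ASTRA_TOKEN",
            "astraDatabaseId": "ASTRA_DATABASE_ID",
            "astraKeyspace": "ASTRA_KEYSPACE",
            "astraTable": "ASTRA_TABLE",
        },
        "containerSpecGcsPath": (
            f"gs://dataflow-templates-{location}/latest/flex/AstraDB_To_BigQuery"
        ),
        "environment": {"maxWorkers": "10"},
    }
}

print(url)
print(json.dumps(body, indent=2))
# An authorized POST of `body` to `url` launches the job (auth omitted here).
```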
What's next
- Learn about Dataflow templates.
- See the list of Google-provided templates.