Bulk Compress Cloud Storage Files template

The Bulk Compress Cloud Storage Files template is a batch pipeline that compresses files on Cloud Storage to a specified location. This template can be useful when you need to compress large batches of files as part of a periodic archival process. The supported compression modes are BZIP2, DEFLATE, and GZIP. Files output to the destination location follow a naming schema of the original filename appended with the compression mode extension. The appended extension is one of .bzip2, .deflate, or .gz. For example, with GZIP compression an input file named logs.txt is written as logs.txt.gz.

Any errors that occur during the compression process are output to the failure file in CSV format (filename, error message). If no failures occur while running the pipeline, the error file is still created but contains no error records.
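As an illustration only, each record in the failure file has the following shape; the path and message below are placeholders rather than actual template output:

gs://your-bucket/uncompressed/file1.txt,<error message describing why compression failed>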

Pipeline requirements

  • The compression must be in one of the following formats: BZIP2, DEFLATE, GZIP.
  • The output directory must exist prior to running the pipeline; see the check below this list.
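One way to confirm the second requirement is to list the output location before launching the job. The following is a minimal sketch using the gcloud storage CLI; the bucket name and compressed/ prefix are hypothetical stand-ins for your own output path:

# List the intended output location; an error here typically means
# the path does not exist yet and should be created first.
gcloud storage ls gs://my-bucket/compressed/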

Template parameters

Required parameters

  • inputFilePattern: The Cloud Storage location of the files you'd like to process. For example, gs://your-bucket/your-files/*.txt.
  • outputDirectory: The path and filename prefix for writing output files. Must end with a slash. DateTime formatting is applied to the directory path to resolve any date and time formatters it contains. For example, gs://your-bucket/your-path.
  • outputFailureFile: The error log output file to use for write failures that occur during compression. The contents are one line for each file that failed compression. Note that this parameter allows the pipeline to continue processing in the event of a failure. For example, gs://your-bucket/compressed/failed.csv.
  • compression: The compression algorithm used to compress the matched files. Valid algorithms: BZIP2, DEFLATE, GZIP.

Optional parameters

  • outputFilenameSuffix: Output filename suffix of the files to write. Defaults to .bzip2, .deflate or .gz depending on the compression algorithm.

Run the template

Console

  1. Go to the Dataflow Create job from template page.
  2. In the Job name field, enter a unique job name.
  3. Optional: For Regional endpoint, select a value from the drop-down menu. The default region is us-central1.

    For a list of regions where you can run a Dataflow job, see Dataflow locations.

  4. From the Dataflow template drop-down menu, select the Bulk Compress Files on Cloud Storage template.
  5. In the provided parameter fields, enter your parameter values.
  6. Click Run job.

gcloud

In your shell or terminal, run the template:

gcloud dataflow jobs run JOB_NAME \
    --gcs-location gs://dataflow-templates-REGION_NAME/VERSION/Bulk_Compress_GCS_Files \
    --region REGION_NAME \
    --parameters \
inputFilePattern=gs://BUCKET_NAME/uncompressed/*.txt,\
outputDirectory=gs://BUCKET_NAME/compressed,\
outputFailureFile=gs://BUCKET_NAME/failed/failure.csv,\
compression=COMPRESSION

Replace the following:

  • JOB_NAME: a unique job name of your choice
  • REGION_NAME: the region where you want to deploy your Dataflow job, for example us-central1
  • VERSION: the version of the template that you want to use

    You can use the following values:

  • BUCKET_NAME: the name of your Cloud Storage bucket
  • COMPRESSION: your choice of compression algorithm
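For reference, here is the same command with the placeholders filled in. This is a sketch only: the job name, bucket name, and the use of the latest template version are illustrative assumptions, so substitute your own values.

gcloud dataflow jobs run bulk-compress-example \
    --gcs-location gs://dataflow-templates-us-central1/latest/Bulk_Compress_GCS_Files \
    --region us-central1 \
    --parameters \
inputFilePattern=gs://my-bucket/uncompressed/*.txt,\
outputDirectory=gs://my-bucket/compressed,\
outputFailureFile=gs://my-bucket/failed/failure.csv,\
compression=GZIP

To override the default file extension, append the optional parameter to the same --parameters list as ,outputFilenameSuffix=<your suffix>.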

API

To run the template using the REST API, send an HTTP POST request. For more information on the API and its authorization scopes, see projects.templates.launch.

POST https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Bulk_Compress_GCS_Files
{
   "jobName": "JOB_NAME",
   "parameters": {
       "inputFilePattern": "gs://BUCKET_NAME/uncompressed/*.txt",
       "outputDirectory": "gs://BUCKET_NAME/compressed",
       "outputFailureFile": "gs://BUCKET_NAME/failed/failure.csv",
       "compression": "COMPRESSION"
   },
   "environment": {
       "zone": "us-central1-f"
   }
}

Replace the following:

  • PROJECT_ID: the Google Cloud project ID where you want to run the Dataflow job
  • JOB_NAME: a unique job name of your choice
  • LOCATION: the region where you want to deploy your Dataflow job, for example us-central1
  • VERSION: the version of the template that you want to use

    You can use the following values:

  • BUCKET_NAME: the name of your Cloud Storage bucket
  • COMPRESSION: your choice of compression algorithm
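To send this request from a shell, the following curl sketch can be used. It assumes the gcloud CLI is installed and authenticated so that an access token can be generated, and it reuses the placeholders described above:

# Launch the template via the REST API; replace the placeholders as described above.
curl -X POST \
    -H "Authorization: Bearer $(gcloud auth print-access-token)" \
    -H "Content-Type: application/json" \
    -d '{
          "jobName": "JOB_NAME",
          "parameters": {
            "inputFilePattern": "gs://BUCKET_NAME/uncompressed/*.txt",
            "outputDirectory": "gs://BUCKET_NAME/compressed",
            "outputFailureFile": "gs://BUCKET_NAME/failed/failure.csv",
            "compression": "COMPRESSION"
          },
          "environment": { "zone": "us-central1-f" }
        }' \
    "https://dataflow.googleapis.com/v1b3/projects/PROJECT_ID/locations/LOCATION/templates:launch?gcsPath=gs://dataflow-templates-LOCATION/VERSION/Bulk_Compress_GCS_Files"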
