Both Workflows and Cloud Composer can be used for service orchestration to combine services to implement application functionality or perform data processing. Although they are conceptually similar, each is designed for a different set of use cases. This page helps you choose the right product for your use case.
Key differences
The core difference between Workflows and Cloud Composer is what type of architecture each product is designed to support.
Workflows orchestrates multiple HTTP-based services into a durable and stateful workflow. It has low latency and can handle a high number of executions. It's also completely serverless.
Workflows is great for chaining microservices together, automating infrastructure tasks like starting or stopping a VM, and integrating with external systems. Workflows connectors also support simple sequences of operations in Google Cloud services such as Cloud Storage and BigQuery.
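For example, a workflow execution can be started programmatically once the workflow itself has been defined in the Workflows YAML or JSON syntax. The following is a minimal sketch, assuming the google-cloud-workflows Python client library; the project, region, and workflow names are placeholders.

```python
# Minimal sketch: trigger an existing workflow and poll for its result.
# Assumes the google-cloud-workflows client library is installed and that
# "my-project", "us-central1", and "vm-lifecycle" are placeholder names.
import time

from google.cloud.workflows.executions_v1 import ExecutionsClient
from google.cloud.workflows.executions_v1.types import Execution

client = ExecutionsClient()
parent = client.workflow_path("my-project", "us-central1", "vm-lifecycle")

# Start an execution, passing a JSON argument the workflow can read.
execution = client.create_execution(
    parent=parent,
    execution=Execution(argument='{"vm_name": "test-vm", "action": "stop"}'),
)

# Poll until the execution leaves the ACTIVE state.
while True:
    result = client.get_execution(name=execution.name)
    if result.state != Execution.State.ACTIVE:
        break
    time.sleep(1)

print(result.state, result.result)
```

Executions can also be started from the Google Cloud console, the gcloud CLI, or on a schedule with Cloud Scheduler.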
Cloud Composer is designed to orchestrate data-driven workflows (particularly ETL/ELT). It's built on the Apache Airflow project and offered as a fully managed service. Cloud Composer supports your pipelines wherever they are, including on-premises or across multiple cloud platforms. All logic in Cloud Composer, including tasks and scheduling, is expressed in Python as Directed Acyclic Graph (DAG) definition files.
Cloud Composer is best for batch workloads that can handle a few seconds of latency between task executions. You can use Cloud Composer to orchestrate services in your data pipelines, such as triggering a job in BigQuery or starting a Dataflow pipeline. You can use pre-existing operators to communicate with various services, and there are over 150 operators for Google Cloud alone.
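To make the DAG model concrete, the following is a minimal sketch of an Airflow DAG that loads files from Cloud Storage into BigQuery and then runs a summary query. It assumes Airflow 2 with the Google provider package installed; the bucket, dataset, and table names are placeholders.

```python
# Minimal sketch of a Cloud Composer (Airflow 2) DAG: load CSV files from
# Cloud Storage into BigQuery, then run a summary query. Assumes the
# apache-airflow-providers-google package; all resource names are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    dag_id="daily_sales_pipeline",
    schedule_interval="@daily",   # cron-like scheduling lives in the DAG file
    start_date=datetime(2024, 1, 1),
    catchup=False,
) as dag:
    load_orders = GCSToBigQueryOperator(
        task_id="load_orders",
        bucket="example-landing-bucket",
        source_objects=["orders/*.csv"],
        destination_project_dataset_table="example-project.sales.orders",
        source_format="CSV",
        write_disposition="WRITE_TRUNCATE",
    )

    summarize = BigQueryInsertJobOperator(
        task_id="summarize_orders",
        configuration={
            "query": {
                "query": (
                    "SELECT order_date, SUM(amount) AS total "
                    "FROM `example-project.sales.orders` GROUP BY order_date"
                ),
                "useLegacySql": False,
            }
        },
    )

    # Task dependencies form the directed acyclic graph.
    load_orders >> summarize
```

Cloud Composer picks up DAG files from the environment's Cloud Storage bucket and schedules and runs the tasks according to their dependencies.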
Detailed feature comparison

