You can install additional components like Hudi when you create a Dataproc cluster using the Optional components feature. This page describes how you can optionally install the Hudi component on a Dataproc cluster.
When installed on a Dataproc cluster, the Apache Hudi component installs Hudi libraries and configures Spark and Hive in the cluster to work with Hudi.
Compatible Dataproc image versions
You can install the Hudi component on Dataproc clusters created with compatible Dataproc image versions.
Hudi-related properties
When you create a Dataproc cluster with the Hudi component, the following Spark and Hive properties are configured to work with Hudi.
/etc/spark/conf/spark-defaults.conf
- spark.serializer: org.apache.spark.serializer.KryoSerializer
- spark.sql.catalog.spark_catalog: org.apache.spark.sql.hudi.catalog.HoodieCatalog
- spark.sql.extensions: org.apache.spark.sql.hudi.HoodieSparkSessionExtension
- spark.driver.extraClassPath: /usr/lib/hudi/lib/hudi-sparkSPARK_VERSION-bundle_SCALA_VERSION-HUDI_VERSION.jar
- spark.executor.extraClassPath: /usr/lib/hudi/lib/hudi-sparkSPARK_VERSION-bundle_SCALA_VERSION-HUDI_VERSION.jar

/etc/hive/conf/hive-site.xml
- hive.aux.jars.path: file:///usr/lib/hudi/lib/hudi-hadoop-mr-bundle-VERSION.jar
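Because of these defaults, Spark jobs on the cluster can use Hudi without extra session configuration. For illustration, the following minimal PySpark sketch shows the session-level equivalent of these settings; the app name is arbitrary, and on a Hudi-enabled Dataproc cluster the settings already come from spark-defaults.conf, so you don't need to set them yourself.

from pyspark.sql import SparkSession

# On a Dataproc cluster with the Hudi component, these settings already come
# from spark-defaults.conf; this sketch only shows their session-level
# equivalent. The Hudi bundle jar must also be on the driver and executor
# classpath, which the component configures through extraClassPath.
spark = (
    SparkSession.builder.appName("hudi-config-example")  # arbitrary name
    .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    .config(
        "spark.sql.catalog.spark_catalog",
        "org.apache.spark.sql.hudi.catalog.HoodieCatalog",
    )
    .config(
        "spark.sql.extensions",
        "org.apache.spark.sql.hudi.HoodieSparkSessionExtension",
    )
    .getOrCreate()
)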
Install the component
Install the Hudi component when you create a Dataproc cluster.
The Dataproc image release version pages list the Hudi component version included in each Dataproc image release.
Console
- Enable the component.
  - In the Google Cloud console, open the Dataproc Create a cluster page. The Set up cluster panel is selected.
  - In the Components section, under Optional components, select the Hudi component.
gcloud command
To create a Dataproc cluster that includes the Hudi component, use the gcloud dataproc clusters create command with the --optional-components flag.
gcloud dataproc clusters create CLUSTER_NAME \
    --region=REGION \
    --optional-components=HUDI \
    --image-version=DATAPROC_IMAGE \
    --properties=PROPERTIES
Replace the following:
- CLUSTER_NAME: Required. The new cluster name.
- REGION: Required. The cluster region.
- DATAPROC_IMAGE: Optional. You can use this optional flag to specify a non-default Dataproc image version (see Default Dataproc image version).
- PROPERTIES: Optional. You can use this optional flag to set Hudi component properties, which are specified with the hudi: file-prefix. Example: --properties=hudi:hoodie.datasource.write.table.type=COPY_ON_WRITE.
  - Hudi component version property: You can optionally specify the dataproc:hudi.version property. Note: The Hudi component version is set by Dataproc to be compatible with the Dataproc cluster image version. If you set this property, cluster creation can fail if the specified version is not compatible with the cluster image.
  - Spark and Hive properties: Dataproc sets Hudi-related Spark and Hive properties when the cluster is created. You do not need to set them when creating the cluster or submitting jobs.
REST API
The Hudi component can be installed through the Dataproc API using SoftwareConfig.Component as part of a clusters.create request.
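For illustration, the following is a minimal sketch of such a request using the google-cloud-dataproc Python client library. The project, region, and cluster name are placeholders, and the sketch assumes a client library version whose Component enum includes HUDI.

from google.cloud import dataproc_v1

project_id = "my-project"  # placeholder project
region = "us-central1"     # placeholder region

# The client must target the regional Dataproc endpoint.
client = dataproc_v1.ClusterControllerClient(
    client_options={"api_endpoint": f"{region}-dataproc.googleapis.com:443"}
)

cluster = {
    "cluster_name": "my-hudi-cluster",  # placeholder name
    "config": {
        "software_config": {
            # Equivalent to --optional-components=HUDI in gcloud.
            "optional_components": [dataproc_v1.Component.HUDI],
        },
    },
}

# clusters.create is a long-running operation; result() waits for it.
operation = client.create_cluster(
    request={"project_id": project_id, "region": region, "cluster": cluster}
)
response = operation.result()
print(f"Created cluster: {response.cluster_name}")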
Submit a job to read and write Hudi tables
After creating a cluster with the Hudi component, you can submit Spark and Hive jobs that read and write Hudi tables.
gcloud CLI example:
gcloud dataproc jobs submit pyspark \
    --cluster=CLUSTER_NAME \
    --region=REGION \
    JOB_FILE \
    -- JOB_ARGS
Sample PySpark job
The following PySpark file creates, reads, and writes a Hudi table.
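The sample file itself is not reproduced here; the sketch below shows the shape of such a job, with an illustrative schema, records, and Hudi write options. It takes the table name and a Cloud Storage base path as arguments, matching the submit command that follows.

# pyspark_hudi_example.py
# A minimal sketch of a PySpark job that writes and reads a Hudi table.
# The schema, records, and option values are illustrative.
import sys

from pyspark.sql import SparkSession


def main():
    table_name = sys.argv[1]  # for example, my_hudi_table
    base_path = sys.argv[2]   # for example, gs://my-bucket/my_hudi_table

    # The Hudi component preconfigures the serializer, SQL extensions, and
    # catalog, so no Hudi-specific session settings are needed here.
    spark = SparkSession.builder.appName("pyspark-hudi-example").getOrCreate()

    # Create a small DataFrame and write it out as a Hudi table.
    df = spark.createDataFrame(
        [(1, "a", 1.0), (2, "b", 2.0)], ["id", "name", "ts"]
    )
    (
        df.write.format("hudi")
        .option("hoodie.table.name", table_name)
        .option("hoodie.datasource.write.recordkey.field", "id")
        .option("hoodie.datasource.write.precombine.field", "ts")
        .mode("overwrite")
        .save(base_path)
    )

    # Read the table back and show its contents.
    spark.read.format("hudi").load(base_path).show()

    spark.stop()


if __name__ == "__main__":
    main()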
The following gcloud CLI command submits the sample PySpark file to Dataproc.
gcloud dataproc jobs submit pyspark \
    --cluster=CLUSTER_NAME \
    --region=REGION \
    gs://BUCKET_NAME/pyspark_hudi_example.py \
    -- TABLE_NAME gs://BUCKET_NAME/TABLE_NAME
Use the Hudi CLI
The Hudi CLI is located at /usr/lib/hudi/cli/hudi-cli.sh on the Dataproc cluster master node. You can use the Hudi CLI to view Hudi table schemas, commits, and statistics, and to manually perform administrative operations, such as scheduling compactions (see Using hudi-cli).
To start the Hudi CLI and connect to a Hudi table:
- SSH into the master node.
- Run /usr/lib/hudi/cli/hudi-cli.sh. The command prompt changes to hudi->.
- Run connect --path gs://my-bucket/my-hudi-table.
- Run commands, such as desc, which describes the table schema, or commits show, which shows the commit history.
- To stop the CLI session, run exit.
What's next
- See the Hudi Quick Start guide.

