The following examples create and use a Kerberos-enabled Dataproc cluster with the Ranger and Solr components to control user access to Hadoop, YARN, and Hive resources.
In a cluster with Ranger and Kerberos, Dataproc maps a Kerberos user to the system user by stripping the Kerberos user's realm and instance. For example, the Kerberos principal user1/cluster-m@MY.REALM is mapped to system user user1, and Ranger policies are defined to allow or deny permissions for user1.
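The mapping drops everything from the first "/" or "@" in the principal name, similar in spirit to Hadoop's hadoop.security.auth_to_local rules. A minimal shell sketch of the same transformation (illustrative only; Dataproc performs this mapping for you):

echo "user1/cluster-m@MY.REALM" | sed -E 's|[/@].*$||'   # prints: user1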
Before you create the cluster, set up the Ranger admin password and the Kerberos root principal password; both are stored as KMS-encrypted files in Cloud Storage. Then create the cluster. The following gcloud command can be run in a local terminal window or from a project's Cloud Shell:

gcloud dataproc clusters create cluster-name \
    --region=region \
    --optional-components=SOLR,RANGER \
    --enable-component-gateway \
    --properties="dataproc:ranger.kms.key.uri=projects/project-id/locations/global/keyRings/keyring/cryptoKeys/key,dataproc:ranger.admin.password.uri=gs://bucket/admin-password.encrypted" \
    --kerberos-root-principal-password-uri=gs://bucket/kerberos-root-principal-password.encrypted \
    --kerberos-kms-key=projects/project-id/locations/global/keyRings/keyring/cryptoKeys/key

After the cluster is running, navigate to the Dataproc Clusters page on the Google Cloud console, then select the cluster's name to open the Cluster details page. Click the Web Interfaces tab to display a list of Component Gateway links to the web interfaces of the default and optional components installed on the cluster. Click the Ranger link.
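You can also list the Component Gateway URLs from the command line. A sketch using gcloud, with cluster-name and region as placeholders (the httpPorts field is populated when the component gateway is enabled):

gcloud dataproc clusters describe cluster-name \
    --region=region \
    --format="value(config.endpointConfig.httpPorts)"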
Sign in to Ranger by entering the "admin" username and the Ranger admin
password.
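If you stored the Ranger admin password as a KMS-encrypted file in Cloud Storage, as in the cluster creation command above, you can recover it with a command along these lines (the bucket, file, and key names are the same placeholders):

gsutil cat gs://bucket/admin-password.encrypted | \
    gcloud kms decrypt \
        --ciphertext-file=- \
        --plaintext-file=- \
        --key=projects/project-id/locations/global/keyRings/keyring/cryptoKeys/key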
The Ranger admin UI opens in a local browser.
YARN access policy
This example creates a Ranger policy to allow and deny user access to the YARN root.default queue.
Select yarn-dataproc from the Ranger Admin UI.
On the yarn-dataproc Policies page, click Add New Policy.
On the Create Policy page, enter or select the following fields:
Policy Name: "yarn-policy-1"
Queue: "root.default"
Audit Logging: "Yes"
Allow Conditions:
Select User: "userone"
Permissions: "Select All" to grant all permissions
Deny Conditions:
Select User: "usertwo"
Permissions: "Select All" to deny all permissions
Click Add to save the policy. The policy is listed on the yarn-dataproc Policies page.
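Before submitting jobs on a Kerberos-enabled cluster, each OS user needs a Kerberos ticket. A minimal sketch, assuming userone and usertwo principals were added to the on-cluster KDC under the MY.REALM placeholder realm:

userone@example-cluster-m:~$ kinit userone
Password for userone@MY.REALM: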
Run a Hadoop MapReduce job in the master SSH session window as userone:
userone@example-cluster-m:~$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 5 10
The Ranger UI shows that userone was allowed to submit the job.
Run the Hadoop MapReduce job from the VM master SSH session window as usertwo:
usertwo@example-cluster-m:~$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 5 10
The Ranger UI shows that usertwo was denied access to submit the job.
HDFS access policy
This example creates a Ranger policy to allow and deny user access to the HDFS /tmp directory.
Select hadoop-dataproc from the Ranger Admin UI.
On the hadoop-dataproc Policies page, click Add New Policy.
On the Create Policy page, enter or select the following fields:
Policy Name: "hadoop-policy-1"
Resource Path: "/tmp"
Audit Logging: "Yes"
Allow Conditions:
Select User: "userone"
Permissions: "Select All" to grant all permissions
Deny Conditions:
Select User: "usertwo"
Permissions: "Select All" to deny all permissions
Click Add to save the policy. The policy is listed on the hadoop-dataproc Policies page.
Access the HDFS /tmp directory as userone:
userone@example-cluster-m:~$ hadoop fs -ls /tmp
The Ranger UI shows that userone was allowed access to the HDFS /tmp directory.
Access the HDFS /tmp directory as usertwo:
usertwo@example-cluster-m:~$ hadoop fs -ls /tmp
The Ranger UI shows that usertwo was denied access to the HDFS /tmp directory.
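The same policy governs write access. As a quick check (assuming the policy above is in place), userone can create a file under /tmp while the same command fails for usertwo with a permission error:

userone@example-cluster-m:~$ hadoop fs -touchz /tmp/ranger-test
usertwo@example-cluster-m:~$ hadoop fs -touchz /tmp/ranger-test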
Hive access policy
This example creates a Ranger policy to allow and deny user access to a Hive
table.
Create a small employee table using the hive CLI on the master instance.
hive> CREATE TABLE IF NOT EXISTS employee (eid int, name String);
hive> INSERT INTO employee VALUES (1, 'bob'), (2, 'alice'), (3, 'john');
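To confirm that the rows were inserted, you can query the table from the same hive session:

hive> SELECT * FROM employee;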
Select hive-dataproc from the Ranger Admin UI.
On the hive-dataproc Policies page, click Add New Policy.
On the Create Policy page, enter or select the following fields:
Policy Name: "hive-policy-1"
database: "default"
table: "employee"
Hive Column: "*"
Audit Logging: "Yes"
Allow Conditions:
Select User: "userone"
Permissions: "Select All" to grant all permissions
Deny Conditions:
Select User: "usertwo"
Permissions: "Select All" to deny all permissions
Click Add to save the policy. The policy is listed on the hive-dataproc Policies page.
Run a query from the VM master SSH session against the Hive employee table as userone:
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eThis guide demonstrates creating a Kerberos-enabled Dataproc cluster with Ranger and Solr components to manage access to Hadoop, YARN, and HIVE resources.\u003c/p\u003e\n"],["\u003cp\u003eRanger policies can be defined to grant or deny permissions to specific users, such as allowing \u003ccode\u003euserone\u003c/code\u003e and denying \u003ccode\u003eusertwo\u003c/code\u003e access to resources.\u003c/p\u003e\n"],["\u003cp\u003eThe examples show how to implement access control for YARN, HDFS, and Hive resources using Ranger policies, specifically controlling access to queues, directories, and tables.\u003c/p\u003e\n"],["\u003cp\u003eRanger allows for fine-grained control over Hive access, including masking specific columns (like employee names) and applying row-level filters based on user permissions.\u003c/p\u003e\n"],["\u003cp\u003eThe Ranger web UI can be accessed through the component gateway, and Dataproc maps Kerberos principals to system users by removing the Kerberos realm and instance.\u003c/p\u003e\n"]]],[],null,["The following examples create and use a\n[Kerberos enabled](/dataproc/docs/concepts/configuring-clusters/security)\nDataproc cluster with\n[Ranger](/dataproc/docs/concepts/components/ranger) and\n[Solr](/dataproc/docs/concepts/components/solr) components to control\naccess by users to Hadoop, YARN, and HIVE resources.\n\nNotes:\n\n- The Ranger Web UI can be accessed through the\n [Component Gateway](/dataproc/docs/concepts/accessing/dataproc-gateways).\n\n- In a Ranger with Kerberos cluster, Dataproc\n maps a Kerberos user to the system user by stripping the Kerberos user's\n realm and instance. For example, Kerberos principal\n `user1/cluster-m@MY.REALM` is mapped to system `user1`, and\n Ranger policies are defined to allow or deny permissions for `user1`.\n\n1. [Set up the Ranger admin password](/dataproc/docs/concepts/components/ranger#installation_steps).\n\n2. [Set up the Kerberos root principal password](/dataproc/docs/concepts/configuring-clusters/security#set_up_your_kerberos_root_principal_password).\n\n3. Create the cluster.\n\n 1. The following `gcloud` command can be run in a local terminal window or from a project's [Cloud Shell](https://console.cloud.google.com/?cloudshell=true). \n\n ```\n gcloud dataproc clusters create cluster-name \\\n --region=region \\\n --optional-components=SOLR,RANGER \\\n --enable-component-gateway \\\n --properties=\"dataproc:ranger.kms.key.uri=projects/project-id/locations/global/keyRings/keyring/cryptoKeys/key,dataproc:ranger.admin.password.uri=gs://bucket/admin-password.encrypted\" \\\n --kerberos-root-principal-password-uri=gs://bucket/kerberos-root-principal-password.encrypted \\\n --kerberos-kms-key=projects/project-id/locations/global/keyRings/keyring/cryptoKeys/key\n ```\n4. After the cluster is running, navigate to the Dataproc\n [Clusters](https://console.cloud.google.com/dataproc/clusters) page on Google Cloud console,\n then select the cluster's name to open the\n **Cluster details** page. 
Click the **Web Interfaces**\n tab to display a list of Component Gateway links to the web interfaces of\n [default and optional components](/dataproc/docs/concepts/components/overview)\n installed on the cluster. Click the Ranger link.\n\n5. Sign in to Ranger by entering the \"admin\" username and the Ranger admin\n password.\n\n \u003cbr /\u003e\n\n6. The Ranger admin UI opens in a local browser.\n\n \u003cbr /\u003e\n\n| The following examples create Ranger policies to allow or deny access to two OS users and Kerberos principals: `userone` and `usertwo`.\n\nYARN access policy\n\nThis example creates a Ranger policy to allow and deny user access to the\n[YARN root.default queue](https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/CapacityScheduler.html#Configuration).\n\n1. Select `yarn-dataproc` from the Ranger Admin UI.\n\n \u003cbr /\u003e\n\n2. On the **yarn-dataproc Policies** page, click **Add New Policy** .\n On the **Create Policy** page, the following fields\n are entered or selected:\n\n - `Policy Name`: \"yarn-policy-1\"\n - `Queue`: \"root.default\"\n - `Audit Logging`: \"Yes\"\n - `Allow Conditions`:\n - `Select User`: \"userone\"\n - `Permissions`: \"Select All\" to grant all permissions\n - `Deny Conditions`:\n\n - `Select User`: \"usertwo\"\n - `Permissions`: \"Select All\" to deny all permissions\n\n \u003cbr /\u003e\n\n Click **Add** to save the policy. The policy is listed\n on the **yarn-dataproc Policies** page:\n3. Run a Hadoop mapreduce job in the master SSH session window as userone:\n\n ```\n userone@example-cluster-m:~$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduced-examples.\n jar pi 5 10\n ```\n\n \u003cbr /\u003e\n\n 1. The Ranger UI shows that `userone` was allowed to submit the job.\n4. Run the Hadoop mapreduce job from the VM master SSH session\n window as `usertwo`:\n\n ```\n usertwo@example-cluster-m:~$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduced-examples.\n jar pi 5 10\n ```\n\n \u003cbr /\u003e\n\n 1. The Ranger UI shows that `usertwo` was denied access to submit the job.\n\nHDFS access policy\n\nThis example creates a Ranger policy to allow and deny user access to the\nHDFS `/tmp` directory.\n\n1. Select `hadoop-dataproc` from the Ranger Admin UI.\n\n \u003cbr /\u003e\n\n2. On the **hadoop-dataproc Policies** page, click **Add New Policy** .\n On the **Create Policy** page, the following fields\n are entered or selected:\n\n - `Policy Name`: \"hadoop-policy-1\"\n - `Resource Path`: \"/tmp\"\n - `Audit Logging`: \"Yes\"\n - `Allow Conditions`:\n - `Select User`: \"userone\"\n - `Permissions`: \"Select All\" to grant all permissions\n - `Deny Conditions`:\n\n - `Select User`: \"usertwo\"\n - `Permissions`: \"Select All\" to deny all permissions\n\n \u003cbr /\u003e\n\n Click **Add** to save the policy. The policy is listed\n on the **hadoop-dataproc Policies** page:\n3. Access the HDFS `/tmp` directory as userone:\n\n ```\n userone@example-cluster-m:~$ hadoop fs -ls /tmp\n ```\n\n \u003cbr /\u003e\n\n 1. The Ranger UI shows that `userone` was allowed access to the HDFS /tmp directory.\n4. Access the HDFS `/tmp` directory as `usertwo`:\n\n ```\n usertwo@example-cluster-m:~$ hadoop fs -ls /tmp\n ```\n\n \u003cbr /\u003e\n\n 1. The Ranger UI shows that `usertwo` was denied access to the HDFS /tmp directory.\n\nHive access policy\n\nThis example creates a Ranger policy to allow and deny user access to a Hive\ntable.\n\n1. 
Create a small `employee` table using the hive CLI on the master instance.\n\n ```\n hive\u003e CREATE TABLE IF NOT EXISTS employee (eid int, name String); INSERT INTO employee VALUES (1 , 'bob') , (2 , 'alice'), (3 , 'john');\n ```\n\n \u003cbr /\u003e\n\n2. Select `hive-dataproc` from the Ranger Admin UI.\n\n \u003cbr /\u003e\n\n3. On the **hive-dataproc Policies** page, click **Add New Policy** .\n On the **Create Policy** page, the following fields\n are entered or selected:\n\n - `Policy Name`: \"hive-policy-1\"\n - `database`: \"default\"\n - `table`: \"employee\"\n - `Hive Column`: \"\\*\"\n - `Audit Logging`: \"Yes\"\n - `Allow Conditions`:\n - `Select User`: \"userone\"\n - `Permissions`: \"Select All\" to grant all permissions\n - `Deny Conditions`:\n\n - `Select User`: \"usertwo\"\n - `Permissions`: \"Select All\" to deny all permissions\n\n \u003cbr /\u003e\n\n Click **Add** to save the policy. The policy is listed\n on the **hive-dataproc Policies** page:\n4. Run a query from the VM master SSH session against Hive employee table as userone:\n\n ```\n userone@example-cluster-m:~$ beeline -u \"jdbc:hive2://$(hostname -f):10000/default;principal=hive/$(hostname -f)@REALM\" -e \"select * from employee;\"\n ```\n\n \u003cbr /\u003e\n\n 1. The userone query succeeds: \n\n ```\n Connected to: Apache Hive (version 2.3.6)\n Driver: Hive JDBC (version 2.3.6)\n Transaction isolation: TRANSACTION_REPEATABLE_READ\n +---------------+----------------+\n | employee.eid | employee.name |\n +---------------+----------------+\n | 1 | bob |\n | 2 | alice |\n | 3 | john |\n +---------------+----------------+\n 3 rows selected (2.033 seconds)\n ```\n5. Run a query from the VM master SSH session against Hive employee table as usertwo:\n\n ```\n usertwo@example-cluster-m:~$ beeline -u \"jdbc:hive2://$(hostname -f):10000/default;principal=hive/$(hostname -f)@REALM\" -e \"select * from employee;\"\n ```\n\n \u003cbr /\u003e\n\n 1. usertwo is denied access to the table: \n\n ```\n Error: Could not open client transport with JDBC Uri:\n ...\n Permission denied: user=usertwo, access=EXECUTE, inode=\"/tmp/hive\"\n ```\n\nFine-Grained Hive Access\n\nRanger supports Masking and Row Level Filters on Hive. This example\nbuilds on the previous `hive-policy-1` by adding masking and filter\npolicies.\n\n1. Select `hive-dataproc` from the Ranger Admin UI, then select the\n **Masking** tab and click **Add New Policy**.\n\n \u003cbr /\u003e\n\n 1. On the **Create Policy** page, the following fields\n are entered or selected to create a policy to mask (nullify)\n the employee name column.:\n\n - `Policy Name`: \"hive-masking policy\"\n - `database`: \"default\"\n - `table`: \"employee\"\n - `Hive Column`: \"name\"\n - `Audit Logging`: \"Yes\"\n - `Mask Conditions`:\n - `Select User`: \"userone\"\n - `Access Types`: \"select\" add/edit permissions\n - `Select Masking Option`: \"nullify\"\n\n \u003cbr /\u003e\n\n Click **Add** to save the policy.\n2. Select `hive-dataproc` from the Ranger Admin UI, then select the\n **Row Level Filter** tab and click **Add New Policy**.\n\n \u003cbr /\u003e\n\n 1. 
On the **Create Policy** page, the following fields\n are entered or selected to create a policy to\n filter (return) rows where `eid` is not equal to `1`:\n\n - `Policy Name`: \"hive-filter policy\"\n - `Hive Database`: \"default\"\n - `Hive Table`: \"employee\"\n - `Audit Logging`: \"Yes\"\n - `Mask Conditions`:\n - `Select User`: \"userone\"\n - `Access Types`: \"select\" add/edit permissions\n - `Row Level Filter`: \"eid != 1\" filter expression\n\n \u003cbr /\u003e\n\n Click **Add** to save the policy.\n 2. Repeat the previous query from the VM master SSH session\n against Hive employee table as userone:\n\n ```\n userone@example-cluster-m:~$ beeline -u \"jdbc:hive2://$(hostname -f):10000/default;principal=hive/$(hostname -f)@REALM\" -e \"select * from employee;\"\n ```\n\n \u003cbr /\u003e\n\n 1. The query returns with the name column masked out and bob (eid=1) filtered from the results.: \n\n ```\n Transaction isolation: TRANSACTION_REPEATABLE_READ\n +---------------+----------------+\n | employee.eid | employee.name |\n +---------------+----------------+\n | 2 | NULL |\n | 3 | NULL |\n +---------------+----------------+\n 2 rows selected (0.47 seconds)\n ```"]]
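Ranger policies can also be managed programmatically instead of through the UI. A sketch that lists the policies for the hive-dataproc service by calling Ranger's public v2 REST API from the master node (this assumes Ranger admin listens on its default port 6080 and uses the admin password from earlier; endpoint details can vary by Ranger version):

userone@example-cluster-m:~$ curl -s -u admin:ranger-admin-password \
    "http://localhost:6080/service/public/v2/api/service/hive-dataproc/policy"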