Version 1.9. This version is no longer supported. For information about how to upgrade to version 1.10, see Upgrading Anthos on bare metal in the 1.10 documentation. For more information about supported and unsupported versions, see the Version history page in the latest documentation.
With Google Distributed Cloud, you can define four types of clusters:
admin - A cluster used to manage user clusters.
user - A cluster used to run workloads.
standalone - A single cluster that can administer itself and run workloads, but that can't create or manage other user clusters.
hybrid - A single cluster for both admin and workloads, that can also manage user clusters.
In this quickstart, you deploy a two-node hybrid cluster with Google Distributed Cloud. You learn how to create a cluster, and how to monitor the cluster creation process.
This quickstart assumes you have a basic understanding of Kubernetes.
Prepare for Google Distributed Cloud
Before creating a cluster in Google Distributed Cloud, you must do the following:
For this quickstart, create a new Google Cloud project that organizes all your Google Cloud resources.
To create a cluster in Google Distributed Cloud, you need a Google Cloud project where your account has either the Owner or Editor role.
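For example, you can create a project and make it the default for subsequent gcloud commands. PROJECT_ID is a placeholder; the account that creates a project is automatically granted the Owner role on it:
gcloud projects create PROJECT_ID
gcloud config set project PROJECT_ID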
This quickstart uses the kubectl and bmctl tools to create and set up clusters. To install these tools, you need gcloud. Google Cloud CLI includes the gcloud and kubectl command-line tools.
To install the required tools, complete the following steps:
On your admin workstation, install and initialize Google Cloud CLI using these instructions. This process installs gcloud.
Update Google Cloud CLI:
gcloud components update
Log in with your Google account to manage your services and service accounts:
gcloud auth login --update-adc
A new browser tab opens and you are prompted to choose an account.
Use gcloud to install kubectl:
gcloud components install kubectl
Configure a Linux admin workstation
After you install gcloud and kubectl, configure a Linux admin workstation.
Do not use Cloud Shell as your admin workstation.
Install Docker version 19.03 or later. To learn how to configure Docker, go to the page corresponding to your Linux distribution:
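Whichever distribution you use, you can confirm that the installed version meets this requirement by querying the Docker server version (a minimal check; prefix with sudo if your user isn't in the docker group):
docker version --format '{{.Server.Version}}'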
To use root access, set up SSH on both the admin workstation and the remote cluster node machines. Initially, you need root SSH password authentication enabled on the remote cluster node machines to share keys from the admin workstation. Once the keys are in place, you can disable SSH password authentication.
Generate a private/public key pair on the admin workstation. Don't set
a passphrase for the keys. You need the keys to use SSH for secure,
passwordless connections between the admin workstation and the
cluster node machines. Generate the keys with the following command:
ssh-keygen -t rsa
You can also use SUDO user access to the cluster node machines to set up SSH, but for passwordless, non-root user connections you need to update the cluster configuration file with the appropriate credentials. For more information, go to the Node access configuration section in the sample cluster config file.
Add the generated public key to the cluster node machines. By default, the public keys are stored in the id_rsa.pub identity file.
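For example, one common way to copy the key is with ssh-copy-id, while root password authentication is still enabled on the nodes (CLUSTER_NODE_IP is a placeholder for a node's IP address):
ssh-copy-id -i ~/.ssh/id_rsa.pub root@CLUSTER_NODE_IP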
Disable SSH password authentication on the cluster node machines, and use the following command on the admin workstation to verify that public key authentication works between the admin workstation and the cluster node machines.
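A minimal check, assuming the default id_rsa identity generated earlier (CLUSTER_NODE_IP is a placeholder; the login must succeed without prompting for a password):
ssh -o IdentitiesOnly=yes -i ~/.ssh/id_rsa root@CLUSTER_NODE_IP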
Install the bmctl tool
You use the bmctl command-line tool to create clusters in Google Distributed Cloud.
The bmctl command automatically sets up the Google service accounts and enables the APIs you need to use Google Distributed Cloud in your specified project.
If you want to create your own service accounts or do other manual project setup yourself instead, see Enabling Google services and service accounts before you create clusters with bmctl.
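If bmctl isn't already on the workstation, you can download it with gsutil and make it executable. This sketch assumes the Cloud Storage release bucket layout used by Anthos on bare metal and version 1.9.8, matching the sample config below:
mkdir -p baremetal && cd baremetal
gsutil cp gs://anthos-baremetal-release/bmctl/1.9.8/linux-amd64/bmctl .
chmod a+x bmctl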
Ensure that bmctl is installed correctly by viewing the help information:
./bmctl -h
Create your cluster nodes
Create two machines to serve as nodes for your cluster:
One machine functions as the control plane node.
One machine functions as the worker node.
Go to the hardware and operating system requirements (CentOS, RHEL, and Ubuntu) to learn more about the requirements for the cluster nodes.
Create a cluster
To create a cluster:
Use bmctl to create a config file.
Edit the config file to customize it for your cluster and network.
Use bmctl to create the cluster from the config file.
Create a config file
To create a config file, and enable service accounts and APIs automatically, make sure you are in the baremetal directory, and issue the bmctl command with the following flags:
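A representative invocation (PROJECT_ID is a placeholder; the last two flags tell bmctl to enable the required APIs and create the service accounts for you):
./bmctl create config -c cluster1 \
  --enable-apis \
  --create-service-accounts \
  --project-id=PROJECT_ID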
The command above creates a config file under the baremetal directory at the following path: bmctl-workspace/cluster1/cluster1.yaml
Edit the config file
To edit the config file:
Open the bmctl-workspace/cluster1/cluster1.yaml config file in an editor.
Edit the file with your specific node and network requirements. Use the sample config
file below for reference. This quickstart doesn't use or include information on OpenID Connect (OIDC).
# gcrKeyPath: <path to GCR service account key>
gcrKeyPath: baremetal/gcr.json
# sshPrivateKeyPath: <path to SSH private key, used for node access>
sshPrivateKeyPath: .ssh/id_rsa
# gkeConnectAgentServiceAccountKeyPath: <path to Connect agent service account key>
gkeConnectAgentServiceAccountKeyPath: baremetal/connect-agent.json
# gkeConnectRegisterServiceAccountKeyPath: <path to Hub registration service account key>
gkeConnectRegisterServiceAccountKeyPath: baremetal/connect-register.json
# cloudOperationsServiceAccountKeyPath: <path to Cloud Operations service account key>
cloudOperationsServiceAccountKeyPath: baremetal/cloud-ops.json
---
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-cluster1
---
apiVersion: baremetal.cluster.gke.io/v1
kind: Cluster
metadata:
  name: cluster1
  namespace: cluster-cluster1
spec:
  # Cluster type. This can be:
  #   1) admin:      to create an admin cluster. This can later be used to create user clusters.
  #   2) user:       to create a user cluster. Requires an existing admin cluster.
  #   3) hybrid:     to create a hybrid cluster that runs admin cluster components and user workloads.
  #   4) standalone: to create a cluster that manages itself, runs user workloads, but does not manage other clusters.
  type: hybrid
  # Anthos cluster version.
  anthosBareMetalVersion: 1.9.8
  # GKE connect configuration
  gkeConnect:
    projectID: PROJECT_ID
  # Control plane configuration
  controlPlane:
    nodePoolSpec:
      nodes:
      # Control plane node pools. Typically, this is either a single machine
      # or 3 machines if using a high availability deployment.
      - address: CONTROL_PLANE_NODE_IP
  # Cluster networking configuration
  clusterNetwork:
    # Pods specify the IP ranges from which pod networks are allocated.
    pods:
      cidrBlocks:
      - 192.168.0.0/16
    # Services specify the network ranges from which service virtual IPs are allocated.
    # This can be any RFC 1918 range that does not conflict with any other IP range
    # in the cluster and node pool resources.
    services:
      cidrBlocks:
      - 172.26.232.0/24
  # Load balancer configuration
  loadBalancer:
    # Load balancer mode can be either 'bundled' or 'manual'.
    # In 'bundled' mode a load balancer will be installed on load balancer nodes during cluster creation.
    # In 'manual' mode the cluster relies on a manually-configured external load balancer.
    mode: bundled
    # Load balancer port configuration
    ports:
      # Specifies the port the load balancer serves the Kubernetes control plane on.
      # In 'manual' mode the external load balancer must be listening on this port.
      controlPlaneLBPort: 443
    # There are two load balancer virtual IP (VIP) addresses: one for the control plane
    # and one for the L7 Ingress service. The VIPs must be in the same subnet as the load
    # balancer nodes. These IP addresses do not correspond to physical network interfaces.
    vips:
      # ControlPlaneVIP specifies the VIP to connect to the Kubernetes API server.
      # This address must not be in the address pools below.
      controlPlaneVIP: CONTROL_PLANE_VIP
      # IngressVIP specifies the VIP shared by all services for ingress traffic.
      # Allowed only in non-admin clusters.
      # This address must be in the address pools below.
      ingressVIP: INGRESS_VIP
    # AddressPools is a list of non-overlapping IP ranges for the data plane load balancer.
    # All addresses must be in the same subnet as the load balancer nodes.
    # Address pool configuration is only valid for 'bundled' LB mode in non-admin clusters.
    # addressPools:
    # - name: pool1
    #   addresses:
    #   # Each address must be either in the CIDR form (1.2.3.0/24)
    #   # or range form (1.2.3.1-1.2.3.5).
    #   - LOAD_BALANCER_ADDRESS_POOL
    # A load balancer node pool can be configured to specify nodes used for load balancing.
    # These nodes are part of the Kubernetes cluster and run regular workloads as well as load balancers.
    # If the node pool config is absent then the control plane nodes are used.
    # Node pool configuration is only valid for 'bundled' LB mode.
    # nodePoolSpec:
    #   nodes:
    #   - address: LOAD_BALANCER_NODE_IP
  # Proxy configuration
  # proxy:
  #   url: http://[username:password@]domain
  #   # A list of IPs, hostnames or domains that should not be proxied.
  #   noProxy:
  #   - 127.0.0.1
  #   - localhost
  # Logging and Monitoring
  clusterOperations:
    # Cloud project for logs and metrics.
    projectID: PROJECT_ID
    # Cloud location for logs and metrics.
    location: us-central1
    # Whether collection of application logs/metrics should be enabled (in addition to
    # collection of system logs/metrics which correspond to system components such as
    # Kubernetes control plane or cluster management agents).
    # enableApplication: false
  # Storage configuration
  storage:
    # lvpNodeMounts specifies the config for local PersistentVolumes backed by mounted disks.
    # These disks need to be formatted and mounted by the user, which can be done before or after
    # cluster creation.
    lvpNodeMounts:
      # path specifies the host machine path where mounted disks will be discovered and a local PV
      # will be created for each mount.
      path: /mnt/localpv-disk
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-disks
    # lvpShare specifies the config for local PersistentVolumes backed by subdirectories in a shared filesystem.
    # These subdirectories are automatically created during cluster creation.
    lvpShare:
      # path specifies the host machine path where subdirectories will be created on each host. A local PV
      # will be created for each subdirectory.
      path: /mnt/localpv-share
      # storageClassName specifies the StorageClass that PVs will be created with. The StorageClass
      # is created during cluster creation.
      storageClassName: local-shared
      # numPVUnderSharedPath specifies the number of subdirectories to create under path.
      numPVUnderSharedPath: 5
  # NodeConfig specifies the configuration that applies to all nodes in the cluster.
  nodeConfig:
    # podDensity specifies the pod density configuration.
    podDensity:
      # maxPodsPerNode specifies the maximum number of pods allowed on a single node.
      maxPodsPerNode: 250
    # containerRuntime specifies which container runtime to use for scheduling containers on nodes.
    # containerd and docker are supported.
    containerRuntime: containerd
---
# Node pools for worker nodes
apiVersion: baremetal.cluster.gke.io/v1
kind: NodePool
metadata:
  name: node-pool-1
  namespace: cluster-cluster1
spec:
  clusterName: cluster1
  nodes:
  - address: WORKER_NODE_1_IP
  - address: WORKER_NODE_2_IP
Run preflight checks and create the cluster
The bmctl command runs preflight checks on your cluster config file before it creates a cluster. If the checks are successful, bmctl creates the cluster.
To run preflight checks and create the cluster:
Ensure that you are in the baremetal directory.
Use the following command to create the cluster:
./bmctl create cluster -c CLUSTER_NAME
For example:
./bmctl create cluster -c cluster1
The bmctl command monitors the preflight checks and cluster creation, displays output to the screen, and writes verbose information to the bmctl logs.
You can find the bmctl, preflight check, and node installation logs in the following directory: baremetal/bmctl-workspace/CLUSTER_NAME/log
The bmctl preflight checks the proposed cluster installation for the following conditions:
The Linux distribution and version are supported.
SELinux is not in "enforcing" mode.
On Ubuntu, Uncomplicated Firewall (UFW) is not active.
Google Container Registry is reachable.
The VIPs are available.
The cluster machines have connectivity to each other.
Load balancer machines are on the same Layer 2 subnet.
Cluster creation can take several minutes to finish.
Get information about your cluster
After you successfully create a cluster, use the kubectl command to show information about the new cluster. During cluster creation, the bmctl command writes a kubeconfig file for the cluster that you can query with kubectl. The kubeconfig file is written to bmctl-workspace/CLUSTER_NAME/CLUSTER_NAME-kubeconfig.
For example:
kubectl --kubeconfig bmctl-workspace/cluster1/cluster1-kubeconfig get nodes
This command returns:
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   16h   v1.17.8-gke.16
node-02   Ready    <none>   16h   v1.17.8-gke.16
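To avoid passing the --kubeconfig flag on every invocation, you can export the standard KUBECONFIG environment variable instead:
export KUBECONFIG=bmctl-workspace/cluster1/cluster1-kubeconfig
kubectl get nodes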
If your cluster creation fails preflight checks, then check the preflight check logs for errors, and correct them in the cluster config file. The preflight check logs are located in the /log directory at
~/baremetal/bmctl-workspace/CLUSTER_NAME/log
The preflight check logs for each machine in the cluster are in the CLUSTER_NAME directory, and are organized by IP address.
For example, with hypothetical node IPs 10.200.0.3 and 10.200.0.4, the log directory might contain entries like the following:
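bmctl-workspace/cluster1/log
└── preflight-TIMESTAMP
    ├── 10.200.0.3
    └── 10.200.0.4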
The quickstart created a simple two-node hybrid cluster. If you want to create a high availability
control plane, create a cluster that has three control plane nodes.
For example, edit the config file to add two additional nodes to the control plane:
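A sketch of the controlPlane section with three nodes, following the structure of the sample config above (the IP placeholders are illustrative):
controlPlane:
  nodePoolSpec:
    nodes:
    - address: CONTROL_PLANE_NODE_1_IP
    - address: CONTROL_PLANE_NODE_2_IP
    - address: CONTROL_PLANE_NODE_3_IP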
[[["Easy to understand","easyToUnderstand","thumb-up"],["Solved my problem","solvedMyProblem","thumb-up"],["Other","otherUp","thumb-up"]],[["Hard to understand","hardToUnderstand","thumb-down"],["Incorrect information or sample code","incorrectInformationOrSampleCode","thumb-down"],["Missing the information/samples I need","missingTheInformationSamplesINeed","thumb-down"],["Other","otherDown","thumb-down"]],["Last updated 2025-09-04 UTC."],[[["\u003cp\u003eGoogle Distributed Cloud allows defining four cluster types: admin, user, standalone, and hybrid, each with distinct roles in managing and running workloads.\u003c/p\u003e\n"],["\u003cp\u003eTo prepare for Google Distributed Cloud cluster creation, users must create a Google Cloud project, install Google Cloud CLI, configure a Linux admin workstation, and install the \u003ccode\u003ebmctl\u003c/code\u003e tool.\u003c/p\u003e\n"],["\u003cp\u003eCluster creation involves using the \u003ccode\u003ebmctl\u003c/code\u003e command to generate and customize a configuration file, then using \u003ccode\u003ebmctl\u003c/code\u003e to create the cluster, which includes preflight checks to validate the setup.\u003c/p\u003e\n"],["\u003cp\u003eAfter creating a cluster, users can interact with it using \u003ccode\u003ekubectl\u003c/code\u003e, such as getting information about nodes, creating deployments, and defining services of type LoadBalancer to expose applications externally.\u003c/p\u003e\n"],["\u003cp\u003eFor high availability and resource isolation, configurations can be modified in the cluster configuration file to enable a three-node control plane and separate load balancer node pool, respectively.\u003c/p\u003e\n"]]],[],null,[]]