Working with VM-based workloads

This page shows you how to deploy VMs on Google Distributed Cloud using VM Runtime on Google Distributed Cloud. VM Runtime on Google Distributed Cloud uses KubeVirt to orchestrate VMs on clusters, allowing you to work with your VM-based apps and workloads in a uniform development environment. You can enable VM Runtime on Google Distributed Cloud when creating a new cluster and on existing clusters.

Before you begin

These instructions assume that you have a cluster up and running. If you don't, you can follow the instructions in the Google Distributed Cloud quickstart to quickly set up a cluster on your workstation.

Enable VM Runtime on Google Distributed Cloud

VM Runtime on Google Distributed Cloud is disabled by default. To enable VM Runtime on Google Distributed Cloud, edit the VMRuntime custom resource in the cluster. Starting with Google Distributed Cloud release 1.10.0, the VMRuntime custom resource is automatically installed on clusters.

To enable VM Runtime on Google Distributed Cloud:

  1. Update the VMRuntime custom resource to set enabled to true (one way to make this edit from the command line is shown after this list):

      apiVersion: vm.cluster.gke.io/v1
      kind: VMRuntime
      metadata:
        name: vmruntime
      spec:
        enabled: true
        # useEmulation defaults to false if not set.
        useEmulation: true
        # vmImageFormat defaults to "qcow2" if not set.
        vmImageFormat: qcow2

  2. If your node doesn't support hardware virtualization, or if you aren't sure, set useEmulation to true.

     If available, hardware virtualization provides better performance than software emulation. The useEmulation field defaults to false if it isn't specified.

      apiVersion: vm.cluster.gke.io/v1
      kind: VMRuntime
      metadata:
        name: vmruntime
      spec:
        enabled: true
        # useEmulation defaults to false if not set.
        useEmulation: true
        # vmImageFormat defaults to "qcow2" if not set.
        vmImageFormat: qcow2

  3. You can change the image format used for the VMs you create by setting the vmImageFormat field.

     The vmImageFormat field supports two disk image format values: raw and qcow2. If you don't set vmImageFormat, VM Runtime on Google Distributed Cloud uses the raw disk image format to create VMs. The raw format may provide better performance than qcow2, a copy-on-write format, but may use more disk space. For more information about the image formats for your VMs, see Disk image file formats in the QEMU documentation.

      apiVersion: vm.cluster.gke.io/v1
      kind: VMRuntime
      metadata:
        name: vmruntime
      spec:
        enabled: true
        # useEmulation defaults to false if not set.
        useEmulation: true
        # vmImageFormat defaults to "qcow2" if not set.
        vmImageFormat: qcow2

  4. Save the configuration and verify that the VMRuntime custom resource is enabled:

      kubectl describe vmruntime vmruntime

     The details of the VMRuntime custom resource include a Status section. VM Runtime on Google Distributed Cloud is enabled and working when VMRuntime.Status.Ready is set to true.
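
One way to make the edits in this procedure, assuming you have kubectl access to the cluster, is to open the custom resource directly in your default editor, as in the following sketch (you can also save the manifest to a file and apply it with kubectl apply -f):

    kubectl edit vmruntime vmruntime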

Upgrading clusters

The VMRuntime custom resource is automatically installed on clusters upgraded to version 1.10.0 or higher. When you upgrade a 1.9.x cluster to version 1.10.0 or higher, the upgrade process checks your cluster settings and configures the VMRuntime custom resource to match the VM Runtime on Google Distributed Cloud settings on your 1.9.x cluster. If spec.kubevirt is present in the 1.9.x cluster resource, the upgrade process enables VM Runtime on Google Distributed Cloud.

The VMRuntime custom resource settings take precedence over legacy VM Runtime on Google Distributed Cloud cluster settings, such as spec.kubevirt.useEmulation , in version 1.10.0 or higher clusters. Update the VMRuntime custom resource to change the VM Runtime on Google Distributed Cloud settings for your 1.10.0 or higher cluster.

Install virtctl

  1. Install the virtctl CLI tool as a kubectl plugin:

      export GOOGLE_APPLICATION_CREDENTIALS="bm-gcr.json"
      sudo -E ./bmctl install virtctl

  2. Verify that virtctl is installed:

      kubectl plugin list

    If virtctl is listed in the response, it's successfully installed.
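
     The kubectl plugin list command prints the paths of the plugin binaries found on your PATH. If the installation succeeded, the output should include an entry named kubectl-virt, for example (the path shown here is only an illustration; the actual location depends on where bmctl installed the plugin):

      /usr/bin/kubectl-virt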

Create a VM

Once you enable VM Runtime on Google Distributed Cloud on your cluster and install the virtctl plugin for kubectl, you can start creating VMs in your cluster using the kubectl virt create vm command. Before running this command, we recommend configuring a cloud-init file to ensure that you have console access to the VM after it's created.

Create a custom cloud-init file for console access

There are two ways that you can create a custom cloud-init file. The easiest way is to specify the --os=<OPERATING_SYSTEM> flag when creating the VM. This method automatically configures a simple cloud-init file and works for the following operating systems:

  • Ubuntu
  • CentOS
  • Debian
  • Fedora
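
For example, a minimal invocation that relies on this method might look like the following sketch, where my-vm is a placeholder name and every other flag falls back to its default value:

    kubectl virt create vm my-vm --os=ubuntu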

Once your VM is created, you can access it for the first time with the following credentials and then change the password:

    user: root
    password: changeme

If your image contains a different Linux-based OS, or if you need a more advanced configuration, you can manually create a custom cloud-init file and specify its path with the --cloud-init-file=<path/to/file> flag. In its most basic form, the cloud-init file is a YAML file that contains the following:

    #cloud-config
    user: root
    password: changeme
    lock_passwd: false
    chpasswd: { expire: false }
    disable_root: false
    ssh_authorized_keys:
      - <ssh-key>

For more advanced configurations, see Cloud config examples.
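
As one illustration of a more advanced configuration (a sketch, not taken from this guide), the following cloud-config creates a dedicated user with sudo and key-based SSH access instead of enabling the root account; the user name admin is hypothetical:

    #cloud-config
    users:
      - name: admin                     # hypothetical user name
        sudo: ALL=(ALL) NOPASSWD:ALL    # allow passwordless sudo
        shell: /bin/bash
        ssh_authorized_keys:
          - <ssh-key>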

Once you've determined which method to use, you are ready to create a VM.

Run the kubectl virt create vm command

You can create VMs from public or custom images.

Public image

If your cluster has an external connection, you can create a VM from a public image by running:

    kubectl virt create vm VM_NAME \
      --boot-disk-access-mode=MODE \
      --boot-disk-size=DISK_SIZE \
      --boot-disk-storage-class="DISK_CLASS" \
      --cloud-init-file=FILE_PATH \
      --cpu=CPU_NUMBER \
      --image=IMAGE_NAME \
      --memory=MEMORY_SIZE

Replace the following:

  • VM_NAME with the name of the VM that you want to create.
  • MODE with the access mode of the boot disk. Possible values are ReadWriteOnce (default) or ReadWriteMany.
  • DISK_SIZE with the size you want for the boot disk. The default value is 20Gi.
  • DISK_CLASS with the storage class of the boot disk. The default value is local-shared. For a list of available storage classes, run kubectl get storageclass.
  • FILE_PATH with the full path of the customized cloud-init file. Depending on the image, this may be required to gain console access to the VM after it is created. If you plan to automatically configure the cloud-init file with the --os flag, then don't specify the --cloud-init-file flag. If you specify the --cloud-init-file flag, the --os flag is ignored. Acceptable values for --os are ubuntu, centos, debian, and fedora.
  • CPU_NUMBER with the number of CPUs you want to configure for the VM. The default value is 1.
  • IMAGE_NAME with the VM image, which can be ubuntu20.04 (default), centos8, or a URL of the image.
  • MEMORY_SIZE with the memory size of the VM. The default value is 4Gi.
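
As an illustration, the following sketch creates a VM named vm-sample (a placeholder name) from the default public Ubuntu image, uses the --os flag to auto-configure a simple cloud-init file, and relies on the documented defaults for disk size, CPU, and memory:

    kubectl virt create vm vm-sample \
      --os=ubuntu \
      --image=ubuntu20.04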

Custom image

When creating a VM from a custom image, you can either specify an image from an HTTP image server or a locally stored image.

HTTP image server

You can set up an HTTP server using Apache or nginx and upload the custom image to its exposed folder. You can then create a VM from the custom image by running:

    kubectl virt create vm VM_NAME \
      --boot-disk-access-mode=DISK_ACCESS_MODE \
      --boot-disk-size=DISK_SIZE \
      --boot-disk-storage-class=DISK_CLASS \
      --cloud-init-file=FILE_PATH \
      --cpu=CPU_NUMBER \
      --image=http://SERVER_IP/IMAGE_NAME \
      --memory=MEMORY_SIZE

Replace the following:

  • VM_NAME with the name of the VM you want to create.
  • DISK_ACCESS_MODE with the access mode of the boot disk. Possible values are ReadWriteOnce (default) or ReadWriteMany.
  • DISK_SIZE with the size you want for the boot disk. The default value is 20Gi.
  • DISK_CLASS with the storage class of the boot disk. The default value is local-shared. For a list of available storage classes, run kubectl get storageclass.
  • FILE_PATH with the full path of the customized cloud-init file. Depending on the image, this may be required to gain console access to the VM after it is created. If you plan to automatically configure the cloud-init file with the --os flag, then don't specify the --cloud-init-file flag. If you specify the --cloud-init-file flag, the --os flag is ignored. Acceptable values for --os are ubuntu, centos, debian, and fedora.
  • CPU_NUMBER with the number of CPUs you want to configure for the VM. The default value is 1.
  • SERVER_IP with the IP address of the server hosting the image.
  • IMAGE_NAME with the file name of the custom image.
  • MEMORY_SIZE with the memory size of the VM. The default value is 4Gi.
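
For instance, if an HTTP server at 10.200.0.2 (a placeholder address) serves a disk image named custom.qcow2 (a placeholder file name), the command might look like this sketch, with the remaining flags left at their defaults:

    kubectl virt create vm custom-vm \
      --cloud-init-file=/path/to/cloud-init.yaml \
      --image=http://10.200.0.2/custom.qcow2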

Locally stored image

You can store the custom image locally and create a VM from it by running:

    kubectl virt create vm VM_NAME \
      --boot-disk-access-mode=DISK_ACCESS_MODE \
      --boot-disk-size=DISK_SIZE \
      --boot-disk-storage-class=DISK_CLASS \
      --cloud-init-file=FILE_PATH \
      --cpu=CPU_NUMBER \
      --image=IMAGE_PATH \
      --memory=MEMORY_SIZE

Replace the following:

  • VM_NAME with the name of the VM you want to create.
  • DISK_ACCESS_MODE with the access mode of the boot disk. Possible values are ReadWriteOnce (default) or ReadWriteMany.
  • DISK_SIZE with the size you want for the boot disk. The default value is 20Gi.
  • DISK_CLASS with the storage class of the boot disk. The default value is local-shared.
  • FILE_PATH with the full path of the customized cloud-init file. Depending on the image, this may be required to gain console access to the VM after it is created. If you plan to automatically configure the cloud-init file with the --os flag, then don't specify the --cloud-init-file flag. If you specify the --cloud-init-file flag, the --os flag is ignored. Acceptable values for --os are ubuntu, centos, debian, and fedora.
  • CPU_NUMBER with the number of CPUs you want to configure for the VM. The default value is 1.
  • IMAGE_PATH with the local file path to the custom image.
  • MEMORY_SIZE with the memory size of the VM. The default value is 4Gi.

Change the default values for flags

The kubectl virt create vm command uses default values to auto-fill unspecified flags when the command is executed. You can change these default values by running:

    kubectl virt config default FLAG

Replace FLAG with the flag of the parameter that you want to change the default value for.

Example: The following command changes the default CPU configuration from the initial default of 1 to 2 :

    kubectl virt config default --cpu=2

For a list of supported flags and their current default values, run:

    kubectl virt config default -h

The default configurations are stored client-side in a local file called ~/.virtctl.default. You can also change the default configurations by editing this file.

Access your VM

You can access VMs using the following methods:

Console access

To access a VM from the console, run:

    kubectl virt console VM_NAME

Replace VM_NAME with the name of the VM that you want to access.

VNC access

To access a VM using VNC, run:

    # This requires remote-viewer from the virt-viewer package and a graphical
    # desktop on the machine from which you run virtctl.
    kubectl virt vnc VM_NAME

Replace VM_NAME with the name of the VM that you want to access.

Internal access

All other pods in the cluster can directly reach the IP addresses of your cluster VMs. To find the IP address of a VM, run:

    kubectl get vmi VM_NAME

Replace VM_NAME with the name of the VM that you want to access.

The command returns something like:

    NAME   AGE   PHASE     IP              NODENAME
    vm1    13m   Running   192.168.1.194   upgi-bm002
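
For example, assuming the VM in the output above serves HTTP on port 80 (an assumption made only for this illustration), another pod in the cluster could reach it directly by IP address, as in this sketch:

    kubectl run curl-test --rm -it --restart=Never \
      --image=curlimages/curl -- curl http://192.168.1.194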

External access

VMs created in your cluster have pod network addresses that can be accessed only from within the cluster. To access cluster VMs externally:

  1. Expose the VM as a load balancer service:

      kubectl virt expose vm VM_NAME \
        --port=LB_PORT \
        --target-port=VM_PORT \
        --type=LoadBalancer \
        --name=SERVICE_NAME

    Replace the following:

    • VM_NAME with the name of the VM that you want to access.
    • LB_PORT with the port of the load balancer service that is exposed.
    • VM_PORT with the port on the VM that you want to access through the load balancer service.
    • SERVICE_NAME with the name you want to give to this load balancer service.
  2. Get the external IP address of the load balancer service:

      kubectl get svc SERVICE_NAME

    Replace SERVICE_NAME with the name of the load balancer service that exposes the VM.

    You can access the target-port of the VM through the IP address listed in the EXTERNAL-IP field of the response.

Example

If you have a VM named galaxy that you want to access from outside the cluster using SSH, you would run:

    kubectl virt expose vm galaxy \
      --port=25022 \
      --target-port=22 \
      --type=LoadBalancer \
      --name=galaxy-ssh

Then get the load balancer IP address:

    kubectl get svc galaxy-ssh

The command returns something like:

    NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)           AGE
    galaxy-ssh   LoadBalancer   10.96.250.76   21.1.38.202   25022:30499/TCP   4m40s

Now you can access the VM using SSH through 21.1.38.202:25022 (VIP:port) from outside the cluster:

    ssh root@21.1.38.202 -p 25022

Inspect VM telemetry and console logs

VM telemetry and console logs are integrated into the Google Cloud console. Telemetry information and log data are critical for monitoring the status of your VMs and troubleshooting any problems with your cluster VMs.

VM telemetry

The Anthos clusters VM status dashboard provides live telemetry data for your cluster VMs.

To view telemetry information for your cluster VMs:

  1. In the Google Cloud console, select Monitoring, or click the following button:

    Go to Monitoring

  2. Select Dashboards.

     (Screenshot: Anthos clusters VM status dashboard in the Monitoring dashboards list.)

  3. Click Anthos clusters VM status in the All Dashboards list.

     (Screenshot: Anthos clusters VM status details.)

VM console logs

VM serial console logs are streamed to Cloud Logging and can be viewed in Logs Explorer.

(Screenshot: Logs Explorer showing Anthos cluster VM data.)

Delete VMs and their resources

Deleting only the VM

    kubectl virt delete vm VM_NAME

Replace VM_NAME with the name of the VM that you want to delete.

Deleting only VM disks

    kubectl virt delete disk DISK_NAME

Replace DISK_NAME with the name of the disk that you want to delete. If you try to delete a VM disk before deleting the VM, the disk is marked for deletion pending the deletion of the VM.

Deleting the VM and its resources

    kubectl virt delete vm VM_NAME --all

Replace VM_NAME with the name of the VM that you want to delete.

If you want to check the resources used by the VM that would be deleted, you can specify the --dry-run flag together with --all .
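
For example, to preview the resources that would be removed for a VM named vm-sample (a placeholder name) without deleting anything, this sketch combines the two flags:

    kubectl virt delete vm vm-sample --all --dry-run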

Disable VM Runtime on Google Distributed Cloud

When you no longer need to use VM Runtime on Google Distributed Cloud, you can disable this feature.

bmctl

  • To disable the runtime, use the bmctl tool:

      bmctl disable vmruntime --kubeconfig KUBECONFIG_PATH \
        --timeout TIMEOUT_IN_MINUTES \
        --force true

    Provide the path to the kubeconfig file for your cluster and values for the following configuration options:

    • --timeout: TIMEOUT_IN_MINUTES to wait for existing VM resources to be deleted. The default value is 10 minutes.
    • --force: Set to true to confirm that you want to delete existing VM resources. The default value is false.

Custom resource

Before you can disable VM Runtime on Google Distributed Cloud in a cluster by editing the VMRuntime custom resource, you must ensure that all VMs in that cluster have been deleted.

To disable the runtime, update the VMRuntime custom resource:

  1. Check for any existing VMs in the cluster:

      kubectl get vm

    If the command shows that there are still VMs in your cluster, then you must delete them before proceeding.

  2. Edit the VMRuntime custom resource:

      kubectl edit vmruntime
  3. Set enabled: false in the spec:

      apiVersion: vm.cluster.gke.io/v1
      kind: VMRuntime
      metadata:
        name: vmruntime
      spec:
        enabled: false
        useEmulation: true
        vmImageFormat: qcow2

  4. Save the updated custom resource specification in your editor.

  5. To verify that the VMRuntime custom resource is disabled, view the pods that run in the vm-system namespace:

      kubectl get pods --namespace vm-system

    VM Runtime on Google Distributed Cloud is disabled when only the pods that belong to the vmruntime-controller-manager deployment are running in the namespace.
