Managing Virtual Machines in KubeVirt
Now that you have KubeVirt installed on your Kubernetes cluster, it is time to look at the VirtualMachine resource and use the virtctl utility to run VMs.
What does a virtual machine look like in KubeVirt?
Let's start with a simple example. As you can see below, a VirtualMachine starts out like most Kubernetes resources, defining an apiVersion, kind, and some metadata like the VirtualMachine's name.
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: testvm
Next comes the familiar spec section, and here we start getting into some virtualization-specific concepts.
spec:
  running: false
Running refers to the expected state of the VM: when it is false, as shown here, KubeVirt will not attempt to run the VM.
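Assuming the manifest above has already been applied to the cluster, the running field can be flipped without editing the YAML; virtctl's start and stop subcommands patch it for you (testvm is the VM name from this example):

```shell
# Start the VM: sets spec.running to true, so a VirtualMachineInstance is created
virtctl start testvm

# Observe the VM and its running instance
kubectl get vm testvm
kubectl get vmi testvm

# Stop the VM again: sets spec.running back to false and tears down the instance
virtctl stop testvm
```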
Next up is a template section that will look familiar if you have worked with Deployments before. Where a Deployment templates Pods it will control, a VirtualMachine templates VirtualMachineInstances.
  template:
    metadata:
      labels:
        kubevirt.io/size: small
        kubevirt.io/domain: testvm
    spec:
Within the spec of the template, we start to find specifics defining how the virtual machine should be provisioned.
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
            - name: cloudinitdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}
        resources:
          requests:
            memory: 64M
The domain section maps directly to a libvirt XML domain definition, and is used to generate that XML inside the virt-launcher Pod when the VirtualMachine is set to running. Here you can see that we are creating a small VM with 64M of RAM and a containerdisk as its OS disk. It will receive configuration through cloud-init (though in this simple CirrOS example, cloud-init is not actually installed).
      networks:
        - name: default
          pod: {}
As a peer to the domain section, we have a networks section that configures networking external to the VM's domain definition. With this basic example, the VM is connected to the Pod network via a single masqueraded network interface. This means the VM gets an internal private network address that is then connected to the Kubernetes cluster's Pod network using network address translation, or NAT.
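When the VM serves traffic, a masquerade interface can optionally declare which guest ports NAT should forward. A sketch of such an interface entry, assuming a web server listening on port 80 inside the guest (the ports list is optional and this fragment replaces the plain masquerade interface above):

          interfaces:
            - name: default
              masquerade: {}
              ports:
                - name: http      # illustrative name
                  port: 80        # guest port reachable from the Pod network
                  protocol: TCP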
      volumes:
        - name: containerdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo
        - name: cloudinitdisk
          cloudInitNoCloud:
            userDataBase64: SGkuXG4=
The last section defines the volumes the virt-launcher Pod will mount and how they will be presented to the VM. Like the volumes section of a Pod, each name must match a disk in the domain's disks section, and many of the same options available to a Pod work here (such as ConfigMaps, PersistentVolumeClaims, and Secrets).
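As an illustration, a ConfigMap could be presented to the guest as an extra disk. This sketch assumes a ConfigMap named app-config already exists in the namespace; a matching entry named configdisk would also need to be added under the domain's disks section:

      volumes:
        - name: configdisk
          configMap:
            name: app-config   # assumed ConfigMap, exposed to the guest as a disk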
How to interact with virtual machines
The virtctl command line interface works alongside kubectl to provide virtualization-specific actions for managing virtual machines in a Kubernetes cluster. Virtctl was downloaded in the previous guided exercise; the latest release may be found on the KubeVirt GitHub project's releases page.
The virtctl client enables a virtual machine admin to:
- Stop, start, restart, and pause virtual machines
- Manage disk images
- Access the virtual machine’s serial, graphical, and ssh consoles
- Migrate between nodes
- Create Kubernetes services exposing VM ports to the cluster
- Port forward VM ports to the local client’s workstation
This isn’t an exhaustive list of even the current functions, and new ones are added with new releases of KubeVirt from time to time. To see the full, up-to-date list, run
virtctl help
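For instance, a few of the actions listed above look like this for the testvm example (the service name and port numbers here are illustrative, not part of the original manifest):

```shell
# Open the VM's serial console (exit with Ctrl+])
virtctl console testvm

# Expose the guest's SSH port as a Kubernetes service
virtctl expose vm testvm --name testvm-ssh --port 22 --type NodePort

# Or forward a local port to the guest instead of creating a service
virtctl port-forward vm/testvm 2222:22
```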