Guided Exercise: Use CDI to Manage VM Disk Images
Check Default Storage Class
Before we can install the Containerized Data Importer (CDI), there is one prerequisite: CDI imports disk images into PVCs, so we need a supported (and default) storage class and provisioner. Setting one up is beyond the scope of this lesson, but sandbox environments such as Minikube and Kind ship with a working local storage class. Here is an example from a Kind node (as used in our Killercoda CDI lesson):
kubectl get storageclass
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  6m4s
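If your cluster already has a suitable storage class that simply isn't marked as default, you can promote it with the standard default-class annotation. A quick sketch, assuming the class is named local-path as above:

kubectl patch storageclass local-path \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'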
Install CDI
Much as we did in the Installing KubeVirt exercise above, we first capture the most recent CDI release version in an environment variable, then use it to deploy the CDI operator:
export VERSION=$(curl -Ls https://github.com/kubevirt/containerized-data-importer/releases/latest | grep -m 1 -o "v[0-9]\.[0-9]*\.[0-9]*")
echo $VERSION
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-operator.yaml
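Before creating the custom resource, it is worth confirming that the operator has rolled out. One way to check, assuming the manifest created a cdi-operator Deployment in the cdi namespace (it does in current releases):

kubectl -n cdi wait deployment cdi-operator --for=condition=Available --timeout=300s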
Create a CDI Custom Resource to trigger the operator's deployment of CDI:
kubectl create -f https://github.com/kubevirt/containerized-data-importer/releases/download/$VERSION/cdi-cr.yaml
Check the status of the CDI deployment. It may take some time before the cdi resource's PHASE reads Deployed:
kubectl get cdi -n cdi
NAME   AGE   PHASE
cdi    3m    Deployed
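Alternatively, you can block until CDI reports ready rather than polling. A sketch, assuming your CDI release exposes an Available condition on the cdi resource (recent releases do):

kubectl wait cdi cdi -n cdi --for=condition=Available --timeout=300s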
Create a DataVolume
Here is the Cirros DataVolume from before:
kubectl create -f - <<EOF
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: "cirros"
spec:
  source:
    http:
      url: "https://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img" # an S3 or GCS URL also works here
  pvc:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: "64Mi"
EOF
datavolume.cdi.kubevirt.io/cirros created
Next, we check the status of the newly created DV:
kubectl get dv
NAME     PHASE                  PROGRESS   RESTARTS   AGE
cirros   WaitForFirstConsumer   N/A                   8s
As you can see here, the DV is waiting for a consumer to come along and request the PVC. This is because we are using a node-local StorageClass with volumeBindingMode: WaitForFirstConsumer, so the storage provider needs to know which node the consuming pod will run on before it can provision the volume and the import can begin.
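If you want the import to start without waiting for a consumer (for example, to pre-populate images ahead of time), CDI supports an annotation that requests immediate binding. A minimal sketch of the DataVolume metadata, assuming your CDI version honors the cdi.kubevirt.io/storage.bind.immediate.requested annotation:

metadata:
  name: "cirros"
  annotations:
    cdi.kubevirt.io/storage.bind.immediate.requested: "true" # bind the PVC and import immediately

With that set, the import proceeds as soon as the PVC is provisioned rather than when the first pod consumes it.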
Create a VirtualMachine using the DataVolume
Let’s create a consumer for the DV from the previous step:
kubectl create -f - <<EOF
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  labels:
    kubevirt.io/os: linux
  name: vm1
spec:
  running: true
  template:
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/domain: vm1
    spec:
      domain:
        cpu:
          cores: 1
        devices:
          disks:
          - disk:
              bus: virtio
            name: disk0
          - cdrom:
              bus: sata
              readonly: true
            name: cloudinitdisk
        resources:
          requests:
            memory: 128M
      volumes:
      - name: disk0
        persistentVolumeClaim:
          claimName: cirros
      - cloudInitNoCloud:
          userData: |
            #cloud-config
            hostname: vm1
            ssh_pwauth: True
            disable_root: false
            ssh_authorized_keys:
            - ssh-rsa YOUR_SSH_PUB_KEY_HERE
        name: cloudinitdisk
EOF
virtualmachine.kubevirt.io/vm1 created
We can watch the DV being populated with kubectl get commands:
kubectl get dv,pvc,vm
NAME                                PHASE      PROGRESS   RESTARTS   AGE
datavolume.cdi.kubevirt.io/cirros   PVCBound   N/A                   32s

NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/cirros   Bound    pvc-428372e3-e7eb-47e5-8179-2a4e9bdaff92   64Mi       RWO            local-path     32s

NAME                             AGE   STATUS     READY
virtualmachine.kubevirt.io/vm1   10s   Starting   False
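To follow the import without re-running the command, you can stream updates with kubectl's watch flag:

kubectl get dv -w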
Then the DV goes through a few updates:
NAME                                PHASE             PROGRESS   RESTARTS   AGE
datavolume.cdi.kubevirt.io/cirros   ImportScheduled   N/A                   42s
Once the import starts, the PROGRESS field will show a percentage:
NAME                                PHASE              PROGRESS   RESTARTS   AGE
datavolume.cdi.kubevirt.io/cirros   ImportInProgress   11.90%                51s
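Behind the scenes, CDI runs an importer pod that streams the image into the PVC. If an import stalls, its logs are the first place to look; by CDI convention the pod is named importer- followed by the DataVolume name:

kubectl logs -f importer-cirros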
Finally, when the import finishes, the DataVolume is garbage-collected by CDI and only the PVC is left:
kubectl get dv,pvc,vm
NAME                           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/cirros   Bound    pvc-428372e3-e7eb-47e5-8179-2a4e9bdaff92   64Mi       RWO            local-path     2m44s

NAME                             AGE     STATUS    READY
virtualmachine.kubevirt.io/vm1   2m22s   Running   True
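With the VM Running, we can log in over the serial console to confirm that the imported image boots. This assumes virtctl is installed; CirrOS prints its default credentials on the login banner:

virtctl console vm1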