Rancher Harvester: Live Migration Network

I’ve been running Rancher Harvester since version 1.1, and for a while, I was stuck on an older release because of a big pain point: live migrations kept timing out when I attempted cluster upgrades. In my setup, the default management network only had 1Gb/s NICs, a deal-breaker for large VMs with 32 to 64GB of RAM (and large disks). Migrating them on a slow link was never going to end well.

The situation was even more frustrating because I already had a backend bond set up with MC-LAG and LACP, giving me a full 50Gb/s for all my essential traffic. But because it’s a bonded interface, I didn’t have an extra physical NIC to break out strictly for migrations. I had to figure out how to route KubeVirt’s live migration data onto a VLAN on that bond rather than leaving it stuck on the 1Gb/s management interface.

After a bit of digging, I realized Harvester is KubeVirt under the hood, and KubeVirt has supported a dedicated live migration network since v0.49, provided you create a custom NetworkAttachmentDefinition (NAD) and point the KubeVirt CR at it. Eventually, I got it working. My cluster now migrates large VMs quickly and smoothly during upgrades, with zero timeouts. Below is exactly how I pulled it off!

Step-by-Step: Setting Up a Dedicated Live Migration Network

1. Create a NetworkAttachmentDefinition (NAD)

First, I made a NAD in the harvester-system namespace referencing my existing “backend bond” interface. I needed VLAN 10 and the 10.1.10.0/23 subnet, but feel free to adjust these to whatever fits your environment.

`migration-network.yaml`:

```yaml
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: migration-network
  namespace: harvester-system
  labels:
    network.harvesterhci.io/clusternetwork: backend-bond
    network.harvesterhci.io/ready: "true"
    network.harvesterhci.io/type: L2VlanNetwork
    network.harvesterhci.io/vlan-id: "10"
spec:
  config: '{
      "cniVersion": "0.3.1",
      "name": "migration-network",
      "type": "bridge",
      "bridge": "backend-bond-br",
      "promiscMode": true,
      "vlan": 10,
      "ipam": {
        "type": "whereabouts",
        "range": "10.1.10.0/23"
      }
    }'
```

– VLAN ID: I used 10 here.

– Bridge name: `backend-bond-br` must match the bridge Harvester creates on each host for your cluster network (the cluster network name with a `-br` suffix); see the check below if you're unsure.

– IP Range: 10.1.10.0/23 is what I had free. Adapt to your environment.
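
If you're not sure what the bridge is called, you can SSH into a host and list its bridges. This is plain iproute2, nothing Harvester-specific:

```bash
# List all bridge interfaces on the host; look for one matching
# your cluster network name (e.g. backend-bond-br).
ip -br link show type bridge
```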

Once the file was created, I applied it with:

```bash
kubectl apply -f migration-network.yaml
```

kubectl should confirm with something like `networkattachmentdefinition.k8s.cni.cncf.io/migration-network created`.
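
Before moving on, it's worth confirming the NAD actually landed. A minimal check, using the `net-attach-def` short name Multus registers for this resource type:

```bash
# Confirm the NAD exists and its config round-tripped correctly.
kubectl get net-attach-def -n harvester-system migration-network -o yaml
```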

2. Edit the CR:

```bash
kubectl edit kubevirt -n harvester-system kubevirt
```

(Adjust the namespace if yours differs, though on Harvester it should always be `harvester-system`.)

3. Add or update this block under `spec.configuration`:

```yaml
spec:
  certificateRotateStrategy: {}
  configuration:
    developerConfiguration:
      featureGates:
      - LiveMigration
      - HotplugVolumes
      - HostDevices
    emulatedMachines:
    - q35
    - pc-q35*
    - pc
    - pc-i440fx*
    migrations:                   # <---
      network: migration-network  # <---
```

– `migrations.network` is set to `migration-network`, matching the NAD's `metadata.name` from the YAML we applied.
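
If you'd rather skip the interactive editor, a one-line merge patch should set the same field; this is a sketch assuming the stock Harvester namespace and CR name:

```bash
# Point KubeVirt's live migration traffic at the NAD non-interactively.
kubectl patch kubevirt kubevirt -n harvester-system --type merge \
  -p '{"spec":{"configuration":{"migrations":{"network":"migration-network"}}}}'
```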

4. Save your edits.

– Expect your virt-handler pods to restart as they reload the new config.
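
To know when the rollout is done, you can watch the pods cycle; `kubevirt.io=virt-handler` is the standard label KubeVirt puts on them:

```bash
# Watch virt-handler pods restart; Ctrl-C once they are all Running again.
kubectl get pods -n harvester-system -l kubevirt.io=virt-handler -w
```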

5. Verify & Migrate

Once the virt-handler pods have restarted, initiate a migration on one of your VMs that resides on shared storage. You can do it through the Harvester UI (click the VM, choose “Migrate”).
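
If you prefer the CLI, `virtctl` can trigger the same migration; the VM name and namespace below are placeholders for your own:

```bash
# Kick off a live migration of a running VM.
virtctl migrate my-big-vm -n default

# Watch the resulting VirtualMachineInstanceMigration object progress.
kubectl get vmim -n default -w
```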

Use your favorite monitoring tool (`iftop`, `nload`, or your switch's interface counters) to confirm traffic is now flowing over the backend bond on VLAN 10. You should finally see those big 50Gb/s pipes in action, instead of being stuck at 1Gb/s.
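
Another way to confirm the dedicated network is in play: during a migration, the target virt-launcher pod should show an interface attached to the NAD in its Multus `k8s.v1.cni.cncf.io/network-status` annotation (the pod name below is a placeholder):

```bash
# Inspect the migration target pod's network attachments.
kubectl get pod virt-launcher-my-big-vm-xxxxx -n default \
  -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'
```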

The Results

By moving live migration traffic onto this faster VLAN, I can finally migrate large VMs — some with 64GB RAM and big disks — without timing out. Upgrades go smoother, and I’m not banging my head against the wall waiting for migrations to finish. For anyone else running Harvester or KubeVirt on a similarly bonded environment, I hope this walkthrough saves you from the headaches I faced!

If you have any questions or issues, be sure to check your virt-handler and virt-launcher logs, confirm your VLAN is properly trunked on the switch, and verify your IPAM settings in the NAD. Once you’re set, life is so much better on that high-speed link.
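
For that log spelunking, these are the quickest starting points; the label selector is KubeVirt's standard one, and the virt-launcher pod name is an example:

```bash
# Node-side migration agent logs.
kubectl logs -n harvester-system -l kubevirt.io=virt-handler --tail=100

# Per-VM logs from its virt-launcher pod.
kubectl logs -n default virt-launcher-my-big-vm-xxxxx
```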

Happy migrating!

Disclaimer:
These steps are based on my personal experience and experimentation. I cannot guarantee that they will work flawlessly in all environments or that they will not cause damage to your system. Please use caution and, if possible, test everything thoroughly in a development environment first — though some of us might prefer to live dangerously and test in production! Additionally, these instructions are not officially provided or endorsed by Rancher, and I am not affiliated with Rancher or any of its subsidiaries. Use the information at your own risk.