.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2016 Intel Corporation.
Live Migration of VM with Virtio on host running vhost_user
===========================================================
This section demonstrates Live Migration of a VM with the DPDK Virtio PMD on a host which is
running the Vhost sample application (vhost-switch) and using the DPDK PMD (ixgbe or i40e).

The Vhost sample application uses VMDQ so SRIOV must be disabled on the NICs.

The following sections show an example of how to do this migration.

Test Setup
----------
To test the Live Migration two servers with identical operating systems installed are used.
KVM and QEMU are also required on the servers.

QEMU 2.5 is required for Live Migration of a VM with vhost_user running on the hosts.

In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch
which is also connected to the traffic generator.

The switch is configured to broadcast traffic on all the NIC ports.

The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.
.. _figure_lm_vhost_user:

.. figure:: img/lm_vhost_user.*

Live Migration steps
--------------------
The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_virtio_vhost_user_host_scripts>` and
:ref:`Sample VM scripts <lm_virtio_vhost_user_vm_scripts>` sections.
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_1.

For the Fortville NIC.

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:02:00.0

For the Niantic NIC.

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:09:00.0
On host_server_1: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For Fortville and Niantic NICs reset SRIOV and run the
vhost_user sample application (vhost-switch) on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_46.sh
   ./run_vhost_switch_on_host.sh
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_1.

.. code-block:: console

   ./vm_virtio_vhost_user.sh
On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

Set up DPDK in the VM and run testpmd in the VM.

.. code-block:: console

   cd /root/dpdk/vm_scripts
   ./setup_dpdk_virtio_in_vm.sh
   ./run_testpmd_in_vm.sh

   testpmd> show port info all
   testpmd> set fwd mac retry
   testpmd> start tx_first
   testpmd> show port stats all

Virtio traffic is seen at P1 and P2.
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh
On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_2.

For the Fortville NIC.

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:03:00.0

For the Niantic NIC.

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:06:00.0
On host_server_2: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For Fortville and Niantic NICs reset SRIOV, and run
the vhost_user sample application on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_131.sh
   ./run_vhost_switch_on_host.sh
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_2.

.. code-block:: console

   ./vm_virtio_vhost_user_migrate.sh
On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh

   (qemu) info status
   VM status: paused (inmigrate)
On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that the switch is up before migrating the VM.

.. code-block:: console

   (qemu) migrate tcp:10.237.212.131:5555

   (qemu) info status
   VM status: paused (postmigrate)

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11619 milliseconds
   downtime: 5 milliseconds
   setup: 7 milliseconds
   transferred ram: 379699 kbytes
   throughput: 267.82 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 303985 pages
   normal bytes: 376292 kbytes
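As a quick cross-check of the statistics above (not part of the QEMU output), the reported
throughput can be recomputed from the transferred RAM and the transfer time; the small
difference from the reported 267.82 mbps comes from QEMU timing the transfer internally.

.. code-block:: console

   # transferred ram (KiB -> bits) over (total time - setup) in seconds,
   # using the values from the migration statistics above
   awk 'BEGIN {
       bits = 379699 * 1024 * 8
       secs = (11619 - 7) / 1000
       printf "%.2f mbps\n", bits / secs / 1e6
   }'

This prints 267.87 mbps, within rounding of the value QEMU reports.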
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

Hit the Enter key. This brings the user to the testpmd prompt.

.. code-block:: console

   testpmd>
On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In QEMU monitor on host_server_2**

.. code-block:: console

   (qemu) info status
   VM status: running
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all

Virtio traffic is seen at P0 and P1.
.. _lm_virtio_vhost_user_host_scripts:

Sample host scripts
-------------------

reset_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~
.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to reset SRIOV.

   # BDF for Fortville NIC is 0000:02:00.0
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs

   # BDF for Niantic NIC is 0000:09:00.0
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
vm_virtio_vhost_user.sh
~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with the vhost_user sample application.
   # The host system has 8 cpus (0-7).

   KVM_PATH="/usr/bin/qemu-system-x86_64"

   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -monitor telnet::3333,server,nowait
connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on both hosts when the VM is up,
   # to connect to the Qemu Monitor.

   telnet 0 3333
reset_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to reset SRIOV.

   # BDF for Niantic NIC is 0000:06:00.0
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs

   # BDF for Fortville NIC is 0000:03:00.0
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
vm_virtio_vhost_user_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with the vhost_user sample application.
   # The host system has 8 cpus (0-7).

   KVM_PATH="/usr/bin/qemu-system-x86_64"

   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -incoming tcp:0:5555 \
    -monitor telnet::3333,server,nowait
.. _lm_virtio_vhost_user_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_virtio_in_vm.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: sh

   #!/bin/sh
   # This script matches the vm_virtio_vhost_user script.

   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   /root/dpdk/usertools/dpdk-devbind.py --status

   insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko

   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0

   /root/dpdk/usertools/dpdk-devbind.py --status
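The hugepage step in the script above asks the guest kernel for 1024 hugepages of 2048 kB
each; a quick check of how much guest memory that request corresponds to (illustrative
arithmetic only, the kernel reserves only as many pages as it can actually allocate).

.. code-block:: console

   # 1024 hugepages * 2048 kB per page
   pages=1024
   kb_per_page=2048
   total_kb=$((pages * kb_per_page))
   echo "${total_kb} kB = $((total_kb / 1024 / 1024)) GiB"

This prints "2097152 kB = 2 GiB".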
run_testpmd_in_vm.sh
~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Run testpmd for use with the vhost_user sample app.
   # Test system has 8 cpus (0-7), use cpus 2-7 for the VM.

   /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
    -l 0-5 -n 4 --socket-mem 350 -- --burst=64 --i