..  BSD LICENSE

    Copyright(c) 2016 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Live Migration of VM with Virtio on host running vhost_user
===========================================================

Overview
--------

This section demonstrates Live Migration of a VM with a DPDK Virtio PMD on a
host which is running the Vhost sample application (vhost-switch) and using
the DPDK PMD (ixgbe or i40e).

The Vhost sample application uses VMDQ, so SRIOV must be disabled on the NICs.

The following sections show an example of how to do this migration.
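Disabling SRIOV amounts to writing zero to the NIC's ``max_vfs`` sysfs file,
which is what the reset scripts in the sample host scripts section do. A
minimal sketch (the BDF ``0000:02:00.0`` is the example Fortville address used
later in this document; substitute your own):

```shell
#!/bin/sh
# Sketch: disable SRIOV on one NIC by setting its VF count to zero.
# BDF is an example PCI address; replace it with your NIC's address.
BDF="0000:02:00.0"
VF_FILE="/sys/bus/pci/devices/${BDF}/max_vfs"

if [ -w "$VF_FILE" ]; then
    echo "VFs before reset: $(cat "$VF_FILE")"
    echo 0 > "$VF_FILE"
    echo "VFs after reset:  $(cat "$VF_FILE")"
else
    # Either the BDF is wrong or the driver does not expose max_vfs.
    echo "no writable $VF_FILE on this host" >&2
fi
```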
Test Setup
----------

To test the Live Migration two servers with identical operating systems
installed are used. KVM and QEMU are also required on the servers.

QEMU 2.5 is required for Live Migration of a VM with vhost_user running on
the hosts.
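Since vhost-user Live Migration needs QEMU 2.5 or newer, it is worth checking
the installed version on both hosts first. A sketch using a hypothetical
helper, ``check_min_version``, which is not part of the DPDK scripts and
relies on GNU ``sort -V``:

```shell
#!/bin/sh
# check_min_version MIN ACTUAL: succeeds when ACTUAL >= MIN.
# Uses GNU sort's -V (version sort): MIN must sort first.
check_min_version() {
    [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n 1)" = "$1" ]
}

# On a real host the version would be parsed from the output of:
#   qemu-system-x86_64 --version
# Here a sample version string stands in for that output.
installed="2.5.0"
if check_min_version 2.5 "$installed"; then
    echo "QEMU $installed is recent enough for vhost-user live migration"
else
    echo "QEMU $installed is too old; 2.5 or newer is required" >&2
fi
```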
In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch
which is also connected to the traffic generator.

The switch is configured to broadcast traffic on all the NIC ports.

The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.
Live Migration steps
--------------------

The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_virtio_vhost_user_host_scripts>` and
:ref:`Sample VM scripts <lm_virtio_vhost_user_vm_scripts>` sections.
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Setup DPDK on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_1.

For the Fortville NIC:

.. code-block:: console

   ./dpdk_nic_bind.py -b igb_uio 0000:02:00.0

For the Niantic NIC:

.. code-block:: console

   ./dpdk_nic_bind.py -b igb_uio 0000:09:00.0
On host_server_1: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For Fortville and Niantic NICs reset SRIOV and run the
vhost_user sample application (vhost-switch) on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_46.sh
   ./run_vhost_switch_on_host.sh
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_1.

.. code-block:: console

   ./vm_virtio_vhost_user.sh
On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

Setup DPDK in the VM and run testpmd in the VM.

.. code-block:: console

   cd /root/dpdk/vm_scripts
   ./setup_dpdk_virtio_in_vm.sh
   ./run_testpmd_in_vm.sh

   testpmd> show port info all
   testpmd> set fwd mac retry
   testpmd> start tx_first
   testpmd> show port stats all
Virtio traffic is seen at P1 and P2.
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Setup DPDK on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh
On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_2.

For the Fortville NIC:

.. code-block:: console

   ./dpdk_nic_bind.py -b igb_uio 0000:03:00.0

For the Niantic NIC:

.. code-block:: console

   ./dpdk_nic_bind.py -b igb_uio 0000:06:00.0
On host_server_2: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For Fortville and Niantic NICs reset SRIOV, and run
the vhost_user sample application on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_131.sh
   ./run_vhost_switch_on_host.sh
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_2.

.. code-block:: console

   ./vm_virtio_vhost_user_migrate.sh
On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
   (qemu) info status
   VM status: paused (inmigrate)

The VM is paused in the inmigrate state because it was started with the
``-incoming tcp:0:5555`` option and is waiting for the incoming migration.
On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that the switch is up before migrating the VM.

.. code-block:: console

   (qemu) migrate tcp:10.237.212.131:5555
   (qemu) info status
   VM status: paused (postmigrate)
   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11619 milliseconds
   downtime: 5 milliseconds
   setup: 7 milliseconds
   transferred ram: 379699 kbytes
   throughput: 267.82 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 303985 pages
   normal bytes: 376292 kbytes
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

Press the Enter key. This brings the user to the testpmd prompt.

.. code-block:: console

   testpmd>
On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In QEMU monitor on host_server_2**

.. code-block:: console

   (qemu) info status
   VM status: running
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all

Virtio traffic is seen at P0 and P1.
.. _lm_virtio_vhost_user_host_scripts:

Sample host scripts
-------------------

reset_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~
.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to reset SRIOV

   # BDF for Fortville NIC is 0000:02:00.0
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs

   # BDF for Niantic NIC is 0000:09:00.0
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
vm_virtio_vhost_user.sh
~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with vhost_user sample application
   # The host system has 8 cpu's (0-7)

   # Path to KVM executable
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket path shared with the vhost-switch application
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -monitor telnet::3333,server,nowait
connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on both hosts when the VM is up,
   # to connect to the Qemu Monitor.

   telnet 0 3333
reset_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to reset SRIOV

   # BDF for Niantic NIC is 0000:06:00.0
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs

   # BDF for Fortville NIC is 0000:03:00.0
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
vm_virtio_vhost_user_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with vhost_user sample application
   # The host system has 8 cpu's (0-7)

   # Path to KVM executable
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket path shared with the vhost-switch application
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -incoming tcp:0:5555 \
    -monitor telnet::3333,server,nowait
.. _lm_virtio_vhost_user_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_virtio_in_vm.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: sh

   #!/bin/sh
   # This script matches the vm_virtio_vhost_user script

   # Reserve 2 MB hugepages for DPDK in the VM
   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   /root/dpdk/tools/dpdk_nic_bind.py --status

   # Load the igb_uio kernel module and bind the virtio devices to it
   insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko

   /root/dpdk/tools/dpdk_nic_bind.py -b igb_uio 0000:00:03.0
   /root/dpdk/tools/dpdk_nic_bind.py -b igb_uio 0000:00:04.0

   /root/dpdk/tools/dpdk_nic_bind.py --status
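The script above reserves 1024 hugepages of 2048 kB each for DPDK in the VM,
i.e. 2 GiB in total; a quick arithmetic check:

```shell
#!/bin/sh
# 1024 pages x 2048 kB per page = 2097152 kB = 2048 MiB reserved.
pages=1024
page_kb=2048
total_mib=$(( pages * page_kb / 1024 ))
echo "hugepage pool: ${total_mib} MiB"
```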
run_testpmd_in_vm.sh
~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Run testpmd for use with vhost_user sample app.
   # test system has 8 cpus (0-7), use cpus 2-7 for VM

   /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
   -c 3f -n 4 --socket-mem 350 -- --burst=64 --i --disable-hw-vlan-filter
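The ``-c 3f`` coremask is hexadecimal 0x3f (binary 111111), so testpmd may
run on six cores (bits 0-5 of the mask). A small sketch that counts the
enabled cores; the bit-counting loop is illustrative, not part of the DPDK
scripts:

```shell
#!/bin/sh
# Count the set bits in the testpmd coremask 0x3f.
mask=$(( 0x3f ))
count=0
i=0
while [ "$i" -lt 32 ]; do
    [ $(( (mask >> i) & 1 )) -eq 1 ] && count=$(( count + 1 ))
    i=$(( i + 1 ))
done
echo "coremask 0x3f enables $count cores"
```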