..  Copyright(c) 2016 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Live Migration of VM with Virtio on host running vhost_user
===========================================================
This section shows how to do live migration of a VM with the DPDK virtio PMD on a host which is
running the vhost sample application (vhost-switch) and using the DPDK PMD (ixgbe or i40e).

The vhost sample application uses VMDQ, so SR-IOV must be disabled on the NICs.

The following sections show an example of how to do this migration.
To test live migration, two servers with identical operating systems installed are used.
KVM and QEMU are also required on the servers.

QEMU 2.5 is required for live migration of a VM with vhost_user running on the hosts.

In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch
which is also connected to the traffic generator.

The switch is configured to broadcast traffic on all the NIC ports.

The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.
.. _figure_lm_vhost_user:

.. figure:: img/lm_vhost_user.*

The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_virtio_vhost_user_host_scripts>` and
:ref:`Sample VM scripts <lm_virtio_vhost_user_vm_scripts>` sections.
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_1.

For the Fortville NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:02:00.0

For the Niantic NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:09:00.0
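To confirm that the bind took effect, the same tool can list device status
(the ``--status`` flag is the one also used in the VM scripts later in this section):

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py --status

The NIC should now appear under the list of devices using a DPDK-compatible driver.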
On host_server_1: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For Fortville and Niantic NICs, reset SR-IOV and run the
vhost_user sample application (vhost-switch) on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_46.sh
   ./run_vhost_switch_on_host.sh
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_1.

.. code-block:: console

   ./vm_virtio_vhost_user.sh
On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

Set up DPDK in the VM and run testpmd in the VM.

.. code-block:: console

   cd /root/dpdk/vm_scripts
   ./setup_dpdk_in_vm.sh
   ./run_testpmd_in_vm.sh

   testpmd> show port info all
   testpmd> set fwd mac retry
   testpmd> start tx_first
   testpmd> show port stats all
Virtio traffic is seen at P1 and P2.
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh
On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_2.

For the Fortville NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:03:00.0

For the Niantic NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:06:00.0
On host_server_2: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For Fortville and Niantic NICs, reset SR-IOV and run
the vhost_user sample application on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_131.sh
   ./run_vhost_switch_on_host.sh
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_2.

.. code-block:: console

   ./vm_virtio_vhost_user_migrate.sh
On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
   (qemu) info status
   VM status: paused (inmigrate)
On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that the switch is up before migrating the VM.

.. code-block:: console

   (qemu) migrate tcp:10.237.212.131:5555
   (qemu) info status
   VM status: paused (postmigrate)

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11619 milliseconds
   downtime: 5 milliseconds
   setup: 7 milliseconds
   transferred ram: 379699 kbytes
   throughput: 267.82 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 303985 pages
   normal bytes: 376292 kbytes
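As a quick consistency check on the statistics above, the reported throughput can be
recomputed from the transferred RAM and total time, treating ``kbytes`` as KiB and
``mbps`` as 10^6 bits per second. A sketch with ``awk``:

.. code-block:: sh

   # 379699 KiB transferred in 11.619 s, expressed in Mbit/s
   awk 'BEGIN { printf "%.1f Mbit/s\n", 379699 * 1024 * 8 / 11.619 / 1e6 }'

This prints 267.7 Mbit/s, which agrees with the reported 267.82 mbps to within
rounding of the millisecond timestamps.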
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

Press the Enter key. This brings the user to the testpmd prompt.

.. code-block:: console

   testpmd>
On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In QEMU monitor on host_server_2**

.. code-block:: console

   (qemu) info status
   VM status: running
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all

Virtio traffic is seen at P0 and P1.
.. _lm_virtio_vhost_user_host_scripts:

Sample host scripts
-------------------
reset_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to reset SRIOV

   # BDF for Fortville NIC is 0000:02:00.0
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs

   # BDF for Niantic NIC is 0000:09:00.0
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
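A note on the sysfs path used above: ``max_vfs`` is an attribute exposed by the
ixgbe/i40e kernel drivers. On kernels where it is absent, the generic PCI
``sriov_numvfs`` attribute (available since kernel 3.8) serves the same purpose;
a sketch, assuming the same Fortville BDF:

.. code-block:: sh

   # Generic alternative to the driver-specific max_vfs attribute
   echo 0 > /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs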
vm_virtio_vhost_user.sh
~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh

   # Script for use with vhost_user sample application
   # The host system has 8 cpu's (0-7)

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket Path
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -monitor telnet::3333,server,nowait
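The ``-object memory-backend-file`` option above assumes hugetlbfs is mounted at
``/mnt/huge`` with at least 1024 MB of hugepages reserved on the host. If
``setup_dpdk_on_host.sh`` does not already do this, a minimal sketch (run as root):

.. code-block:: sh

   # Reserve 512 x 2MB hugepages and mount hugetlbfs at the path QEMU expects
   echo 512 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   mkdir -p /mnt/huge
   mount -t hugetlbfs nodev /mnt/huge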
connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on both hosts when the VM is up,
   # to connect to the Qemu Monitor.

   # The VM scripts start the monitor with "-monitor telnet::3333,server,nowait"
   telnet localhost 3333
reset_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to reset SRIOV

   # BDF for Niantic NIC is 0000:06:00.0
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs

   # BDF for Fortville NIC is 0000:03:00.0
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
vm_virtio_vhost_user_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh

   # Script for use with vhost_user sample application
   # The host system has 8 cpu's (0-7)

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket Path
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -incoming tcp:0:5555 \
    -monitor telnet::3333,server,nowait
.. _lm_virtio_vhost_user_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_virtio_in_vm.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: sh

   #!/bin/sh
   # this script matches the vm_virtio_vhost_user script

   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   /root/dpdk/usertools/dpdk-devbind.py --status

   # igb_uio depends on the uio module
   modprobe uio
   insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko

   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0

   /root/dpdk/usertools/dpdk-devbind.py --status
run_testpmd_in_vm.sh
~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Run testpmd for use with vhost_user sample app.
   # test system has 8 cpus (0-7), use cpus 2-7 for VM

   /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
   -l 0-5 -n 4 --socket-mem 350 -- --burst=64 --i