.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2016 Intel Corporation.
Live Migration of VM with SR-IOV VF
===================================
Overview
--------

It is not possible to migrate a Virtual Machine which has an SR-IOV Virtual Function (VF).
To get around this problem the bonding PMD is used.
The following sections show an example of how to do this.

Test Setup
----------
A bonded device is created in the VM.
The virtio and VF PMDs are added as slaves to the bonded device.
The VF is set as the primary slave of the bonded device.
A condensed preview of the bonding commands is sketched below.
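All of the commands in this preview appear in the detailed steps later in
this document; it assumes, as in this example, that virtio is port 0 (P0),
the VF is port 1 (P1), and the bonded device becomes port 2 (P2):

.. code-block:: console

   testpmd> create bonded device 1 0     # mode 1 (active backup) on socket 0
   testpmd> add bonding slave 0 2        # virtio (P0) joins the bond (P2)
   testpmd> add bonding slave 1 2        # VF (P1) joins the bond (P2)
   testpmd> set bonding primary 1 2      # prefer the VF while it is present
   testpmd> remove bonding slave 1 2     # drop the VF just before migration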
A bridge must be set up on the Host connecting the tap device, which is the
backend of the Virtio device, and the Physical Function (PF) device; a
minimal sketch follows.
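The sketch assumes the PF is ``ens3f0`` and the tap device is ``tap1``; the
complete scripts are shown in the :ref:`Sample host scripts
<lm_bond_virtio_sriov_host_scripts>` section:

.. code-block:: console

   brctl addif virbr0 ens3f0      # attach the PF to the bridge
   brctl addif virbr0 tap1        # attach the tap backing the virtio device
   ip link set dev virbr0 up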
To test the Live Migration, two servers with identical operating systems installed are used.
KVM and Qemu 2.3 are also required on the servers.
In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch,
which is also connected to the traffic generator.
The switch is configured to broadcast traffic on all the NIC ports.
A :ref:`Sample switch configuration <lm_bond_virtio_sriov_switch_conf>`
is provided at the end of this document.
The host is running the Kernel PF driver (ixgbe or i40e).
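A quick way to confirm that the PF driver is loaded on each host (a sketch;
the interface name is an assumption):

.. code-block:: console

   lsmod | grep -E '^(ixgbe|i40e) '
   ethtool -i ens3f0              # reports the driver bound to the PF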
The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.
.. _figure_lm_bond_virtio_sriov:

.. figure:: img/lm_bond_virtio_sriov.*

Live Migration steps
--------------------
The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_bond_virtio_sriov_host_scripts>` and
:ref:`Sample VM scripts <lm_bond_virtio_sriov_vm_scripts>` sections.
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_vf_on_212_46.sh
For the Fortville NIC.

.. code-block:: console

   ./vm_virtio_vf_i40e_212_46.sh
For the Niantic NIC.

.. code-block:: console

   ./vm_virtio_vf_one_212_46.sh
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_bridge_on_212_46.sh
   ./connect_to_qemu_mon_on_host.sh
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**
.. code-block:: console

   cd /root/dpdk/vm_scripts
   ./setup_dpdk_in_vm.sh
   ./run_testpmd_bonding_in_vm.sh

   testpmd> show port info all
The ``mac_addr`` command only works with the kernel PF for Niantic.
.. code-block:: console

   testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
The syntax of the ``testpmd`` command is:

Create bonded device (mode) (socket).

Mode 1 is active backup.

Virtio is port 0 (P0).

VF is port 1 (P1).

Bonding is port 2 (P2).
.. code-block:: console

   testpmd> create bonded device 1 0
   Created new bonded device net_bond_testpmd_0 on (port 2).
   testpmd> add bonding slave 0 2
   testpmd> add bonding slave 1 2
   testpmd> show bonding config 2
The syntax of the ``testpmd`` command is:

set bonding primary (slave id) (port id)

Set the primary to P1 before starting the bonded port.
.. code-block:: console

   testpmd> set bonding primary 1 2
   testpmd> show bonding config 2
   testpmd> port start 2
   Port 2: 02:09:C0:68:99:A5
   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex
   Port 1 Link Up - speed 10000 Mbps - full-duplex
   Port 2 Link Up - speed 10000 Mbps - full-duplex

   testpmd> show bonding config 2
Primary is now P1. There are 2 active slaves.

Use P2 only for forwarding.
.. code-block:: console

   testpmd> set portlist 2
   testpmd> show config fwd
   testpmd> start

   testpmd> show bonding config 2
Primary is now P1. There are 2 active slaves.
.. code-block:: console

   testpmd> show port stats all
VF traffic is seen at P1 and P2.
.. code-block:: console

   testpmd> clear port stats all
   testpmd> set bonding primary 0 2
   testpmd> remove bonding slave 1 2
   testpmd> show bonding config 2
Primary is now P0. There is 1 active slave.
.. code-block:: console

   testpmd> clear port stats all
   testpmd> show port stats all
No VF traffic is seen at P0 and P2; the VF MAC address is still present.
.. code-block:: console

   testpmd> port stop 1
   testpmd> port close 1
Port close should remove the VF MAC address; note that it does not remove the perm_addr.
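One way to check the VF MAC address from the host side is via the PF (a
sketch; the PF interface name is an assumption):

.. code-block:: console

   ip link show ens3f0      # lists "vf 0 MAC AA:BB:CC:DD:EE:FF ..." per VF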
The ``mac_addr`` command only works with the kernel PF for Niantic.
.. code-block:: console

   testpmd> mac_addr remove 1 AA:BB:CC:DD:EE:FF
   testpmd> port detach 1
   Port '0000:00:04.0' is detached. Now total ports is 2
   testpmd> show port stats all
No VF traffic is seen at P0 and P2.
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console

   (qemu) device_del vf1
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**
.. code-block:: console

   testpmd> show bonding config 2
Primary is now P0. There is 1 active slave.
.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_vf_on_212_131.sh
   ./vm_virtio_one_migrate.sh
On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console

   ./setup_bridge_on_212_131.sh
   ./connect_to_qemu_mon_on_host.sh
   (qemu) info status
   VM status: paused (inmigrate)
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Check that the switch is up before migrating.
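One way to confirm link from the host side before issuing the migrate
command (a sketch; the PF interface name is an assumption):

.. code-block:: console

   ethtool ens3f0 | grep "Link detected"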
.. code-block:: console

   (qemu) migrate tcp:10.237.212.131:5555
   (qemu) info status
   VM status: paused (postmigrate)
For the Niantic NIC.

.. code-block:: console

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11834 milliseconds
   downtime: 18 milliseconds
   setup: 3 milliseconds
   transferred ram: 389137 kbytes
   throughput: 269.49 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 301620 pages
   normal bytes: 385732 kbytes
For the Fortville NIC.

.. code-block:: console

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11619 milliseconds
   downtime: 5 milliseconds
   setup: 7 milliseconds
   transferred ram: 379699 kbytes
   throughput: 267.82 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 303985 pages
   normal bytes: 376292 kbytes
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**
Hit the Enter key to bring up the testpmd prompt.

.. code-block:: console

   testpmd>
On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   (qemu) info status
   VM status: running

For the Niantic NIC.

.. code-block:: console

   (qemu) device_add pci-assign,host=06:10.0,id=vf1
For the Fortville NIC.

.. code-block:: console

   (qemu) device_add pci-assign,host=03:02.0,id=vf1
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**
.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all
   testpmd> show bonding config 2
   testpmd> port attach 0000:00:04.0
   Port 1 is attached. Now total ports is 3

   testpmd> port start 1
The ``mac_addr`` command only works with the kernel PF for Niantic.
.. code-block:: console

   testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
   testpmd> show port stats all
   testpmd> show config fwd
   testpmd> show bonding config 2
   testpmd> add bonding slave 1 2
   testpmd> set bonding primary 1 2
   testpmd> show bonding config 2
   testpmd> show port stats all
VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. code-block:: console

   testpmd> remove bonding slave 0 2
   testpmd> show bonding config 2
   testpmd> port stop 0
   testpmd> port close 0
   testpmd> port detach 0
   Port '0000:00:03.0' is detached. Now total ports is 2

   testpmd> show port info all
   testpmd> show config fwd
   testpmd> show port stats all
VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. _lm_bond_virtio_sriov_host_scripts:

Sample host scripts
-------------------

setup_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~
Set up Virtual Functions on host_server_1
.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to setup the VF

   # set up Niantic VF
   cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
   echo 1 > /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
   cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs

   # set up Fortville VF
   cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
   echo 1 > /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
   cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
vm_virtio_vf_one_212_46.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~

Setup Virtual Machine on host_server_1

.. code-block:: sh

   #!/bin/sh

   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/username/disk_image/virt1_sml.disk"

   # Number of guest cpus
   VCPUS_NR="4"

   taskset -c 1-5 $KVM_PATH \
    -enable-kvm \
    -m 1536 \
    -smp $VCPUS_NR \
    -name vm1 \
    -hda $DISK_IMG \
    -vnc none -nographic \
    -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB \
    -device pci-assign,host=09:10.0,id=vf1 \
    -monitor telnet::3333,server,nowait
setup_bridge_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~~~~~

Setup bridge on host_server_1

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to setup the bridge
   # for the Tap device and the PF device.
   # This enables traffic to go from the PF to the Tap to the Virtio PMD in the VM.
   #
   # ens3f0 is the Niantic NIC
   # ens6f0 is the Fortville NIC

   brctl addif virbr0 ens3f0
   brctl addif virbr0 ens6f0
   brctl addif virbr0 tap1
connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on both hosts when the VM is up,
   # to connect to the Qemu Monitor.

   telnet 0 3333
setup_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

Set up Virtual Functions on host_server_2

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to setup the VF

   # set up Niantic VF
   cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
   echo 1 > /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
   cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs

   # set up Fortville VF
   cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
   echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
   cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
vm_virtio_one_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~

Setup Virtual Machine on host_server_2

.. code-block:: sh

   #!/bin/sh
   # Start the VM on host_server_2 with the same parameters as the VM on
   # host_server_1, except without the VF parameters, in migration-listen
   # mode (-incoming tcp:0:5555)

   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/username/disk_image/virt1_sml.disk"

   # Number of guest cpus
   VCPUS_NR="4"

   taskset -c 1-5 $KVM_PATH \
    -enable-kvm \
    -m 1536 \
    -smp $VCPUS_NR \
    -name vm1 \
    -hda $DISK_IMG \
    -vnc none -nographic \
    -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB \
    -incoming tcp:0:5555 \
    -monitor telnet::3333,server,nowait
setup_bridge_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~

Setup bridge on host_server_2

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to setup the bridge
   # for the Tap device and the PF device.
   # This enables traffic to go from the PF to the Tap to the Virtio PMD in the VM.
   #
   # ens4f0 is the Niantic NIC
   # ens5f0 is the Fortville NIC

   brctl addif virbr0 ens4f0
   brctl addif virbr0 ens5f0
   brctl addif virbr0 tap1
.. _lm_bond_virtio_sriov_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_in_vm.sh
~~~~~~~~~~~~~~~~~~~
Set up DPDK in the Virtual Machine

.. code-block:: sh

   #!/bin/sh
   # this script matches the vm_virtio_vf_one script

   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   /root/dpdk/usertools/dpdk-devbind.py --status
   rmmod virtio-pci ixgbevf

   modprobe uio
   insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko

   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0

   /root/dpdk/usertools/dpdk-devbind.py --status
run_testpmd_bonding_in_vm.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Run testpmd in the Virtual Machine.

.. code-block:: sh

   #!/bin/sh
   # Run testpmd in the VM

   # The test system has 8 cpus (0-7), use cpus 2-7 for VM
   # Use taskset -pc <core number> <thread_id>

   # use for bonding of virtio and vf tests in VM
   /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
   -l 0-3 -n 4 --socket-mem 350 -- -i --port-topology=chained
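The ``taskset -pc`` hint in the comments can be applied once testpmd is
running; a sketch (the thread id shown is a placeholder):

.. code-block:: console

   ps -eLf | grep testpmd    # find the thread ids of the testpmd threads
   taskset -pc 4 12345       # pin thread 12345 to cpu 4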
.. _lm_bond_virtio_sriov_switch_conf:

Sample switch configuration
---------------------------
The Intel switch is used to connect the traffic generator to the
NICs on host_server_1 and host_server_2.

In order to run the switch configuration two console windows are required.

Log in as root in both windows.

``TestPointShared``, ``run_switch.sh`` and ``load /root/switch_config`` must be
executed in the sequence below.
On Switch: Terminal 1
~~~~~~~~~~~~~~~~~~~~~

Run ``TestPointShared``.

.. code-block:: console

   /usr/bin/TestPointShared
On Switch: Terminal 2
~~~~~~~~~~~~~~~~~~~~~

Execute ``run_switch.sh``.

.. code-block:: console

   /root/run_switch.sh
On Switch: Terminal 1
~~~~~~~~~~~~~~~~~~~~~

Load the switch configuration.

.. code-block:: console

   load /root/switch_config
Sample switch configuration script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``/root/switch_config`` script:

.. code-block:: sh

   show port 1,5,9,13,17,21,25
   set port 1,5,9,13,17,21,25 up
   show port 1,5,9,13,17,21,25
   add port port-set 1 0
   add port port-set 5,9,13,17,21,25 1
   add acl-rule condition 1 1 port-set 1
   add acl-rule action 1 1 redirect 1

   add vlan port 1000 1,5,9,13,17,21,25
   set vlan tagging 1000 1,5,9,13,17,21,25 tag
   set switch config flood_ucast fwd
   show port stats all 1,5,9,13,17,21,25