..  BSD LICENSE

    Copyright(c) 2016 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Live Migration of VM with SR-IOV VF
===================================
Overview
--------

It is not possible to migrate a Virtual Machine which has an SR-IOV Virtual Function (VF).

To get around this problem the bonding PMD is used.

The following sections show an example of how to do this.
Test Setup
----------

A bonded device is created in the VM.
The virtio and VF PMDs are added as slaves to the bonded device.
The VF is set as the primary slave of the bonded device.
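The failover behaviour this setup relies on can be pictured with a toy sketch. This is an illustration only, not DPDK code: the ``pick_tx_slave`` helper is hypothetical, but it mirrors why the VF is made the primary (the fast path) while virtio stays attached as the slave that survives the VF being detached for migration.

```shell
#!/bin/sh
# Toy model of bonding mode 1 (active backup) slave selection.
# pick_tx_slave is a hypothetical helper, not part of DPDK: traffic uses the
# primary slave while its link is up, and falls back to another active slave
# otherwise.
pick_tx_slave() {
    primary="$1"; shift          # remaining args are "port:up" / "port:down"
    fallback=""
    for s in "$@"; do
        port=${s%%:*}; state=${s#*:}
        [ "$state" = up ] || continue
        if [ "$port" = "$primary" ]; then echo "$port"; return; fi
        [ -z "$fallback" ] && fallback=$port
    done
    echo "$fallback"
}

pick_tx_slave 1 0:up 1:up      # VF (P1) is primary and up -> P1 carries traffic
pick_tx_slave 1 0:up 1:down    # VF detached for migration -> virtio (P0) takes over
```

The same rule explains the steps later in this guide where the primary is switched to P0 before the VF is removed, and back to P1 once a new VF is attached on the destination host.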
A bridge must be set up on the Host connecting the tap device, which is the
backend of the Virtio device, and the Physical Function (PF) device.

To test the Live Migration two servers with identical operating systems installed are used.
KVM and Qemu 2.3 are also required on the servers.

In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch
which is also connected to the traffic generator.

The switch is configured to broadcast traffic on all the NIC ports.
A :ref:`Sample switch configuration <lm_bond_virtio_sriov_switch_conf>`
can be found in this section.

The host is running the Kernel PF driver (ixgbe or i40e).

The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.
Live Migration steps
--------------------

The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_bond_virtio_sriov_host_scripts>` and
:ref:`Sample VM scripts <lm_bond_virtio_sriov_vm_scripts>` sections.

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code-block:: console

    cd /root/dpdk/host_scripts
    ./setup_vf_on_212_46.sh

For the Fortville NIC.

.. code-block:: console

    ./vm_virtio_vf_i40e_212_46.sh

For the Niantic NIC.

.. code-block:: console

    ./vm_virtio_vf_one_212_46.sh
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

    cd /root/dpdk/host_scripts
    ./setup_bridge_on_212_46.sh
    ./connect_to_qemu_mon_on_host.sh
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

.. code-block:: console

    cd /root/dpdk/vm_scripts
    ./setup_dpdk_in_vm.sh
    ./run_testpmd_bonding_in_vm.sh

    testpmd> show port info all
The ``mac_addr`` command only works with the kernel PF for Niantic.

.. code-block:: console

    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
The syntax of the ``testpmd`` command is:

Create bonded device (mode) (socket).

Mode 1 is active backup.

Virtio is port 0 (P0).

VF is port 1 (P1).

Bonding is port 2 (P2).
.. code-block:: console

    testpmd> create bonded device 1 0
    Created new bonded device eth_bond_testpmd_0 on (port 2).
    testpmd> add bonding slave 0 2
    testpmd> add bonding slave 1 2
    testpmd> show bonding config 2
The syntax of the ``testpmd`` command is:

set bonding primary (slave id) (port id)

Set primary to P1 before starting bonding port.
.. code-block:: console

    testpmd> set bonding primary 1 2
    testpmd> show bonding config 2
    testpmd> port start 2
    Port 2: 02:09:C0:68:99:A5
    Checking link statuses...
    Port 0 Link Up - speed 10000 Mbps - full-duplex
    Port 1 Link Up - speed 10000 Mbps - full-duplex
    Port 2 Link Up - speed 10000 Mbps - full-duplex

    testpmd> show bonding config 2
Primary is now P1. There are 2 active slaves.

Use P2 only for forwarding.
.. code-block:: console

    testpmd> set portlist 2
    testpmd> show config fwd

    testpmd> show bonding config 2
Primary is now P1. There are 2 active slaves.
.. code-block:: console

    testpmd> show port stats all
VF traffic is seen at P1 and P2.
.. code-block:: console

    testpmd> clear port stats all
    testpmd> set bonding primary 0 2
    testpmd> remove bonding slave 1 2
    testpmd> show bonding config 2
Primary is now P0. There is 1 active slave.
.. code-block:: console

    testpmd> clear port stats all
    testpmd> show port stats all
No VF traffic is seen at P0 and P2; the VF MAC address is still present.
.. code-block:: console

    testpmd> port close 1
Port close should remove the VF MAC address; it does not remove the perm_addr.

The ``mac_addr`` command only works with the kernel PF for Niantic.

.. code-block:: console

    testpmd> mac_addr remove 1 AA:BB:CC:DD:EE:FF
    testpmd> port detach 1
    Port '0000:00:04.0' is detached. Now total ports is 2
    testpmd> show port stats all
No VF traffic is seen at P0 and P2.
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

    (qemu) device_del vf1
On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

.. code-block:: console

    testpmd> show bonding config 2

Primary is now P0. There is 1 active slave.

.. code-block:: console

    testpmd> show port info all
    testpmd> show port stats all
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

    cd /root/dpdk/host_scripts
    ./setup_vf_on_212_131.sh
    ./vm_virtio_one_migrate.sh
On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

    ./setup_bridge_on_212_131.sh
    ./connect_to_qemu_mon_on_host.sh

    VM status: paused (inmigrate)
On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that the switch is up before migrating.

.. code-block:: console

    (qemu) migrate tcp:10.237.212.131:5555

    VM status: paused (postmigrate)
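The destination endpoint in the migrate command matches the ``-incoming tcp:0:5555`` option the VM on host_server_2 was started with. A minimal sketch of how the monitor command line is formed from the destination IP and port; the ``migrate_cmd`` helper is hypothetical, for illustration only.

```shell
#!/bin/sh
# Hypothetical helper: build the qemu monitor "migrate" command for a
# destination VM started in migration-listen mode (-incoming tcp:0:<port>).
migrate_cmd() { printf 'migrate tcp:%s:%s\n' "$1" "$2"; }

migrate_cmd 10.237.212.131 5555   # host_server_2 listening on port 5555
```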
For the Niantic NIC.

.. code-block:: console

    (qemu) info migrate
    capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
    Migration status: completed
    total time: 11834 milliseconds
    downtime: 18 milliseconds
    setup: 3 milliseconds
    transferred ram: 389137 kbytes
    throughput: 269.49 mbps
    remaining ram: 0 kbytes
    total ram: 1590088 kbytes
    duplicate: 301620 pages
    normal bytes: 385732 kbytes
For the Fortville NIC.

.. code-block:: console

    (qemu) info migrate
    capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
    Migration status: completed
    total time: 11619 milliseconds
    downtime: 5 milliseconds
    setup: 7 milliseconds
    transferred ram: 379699 kbytes
    throughput: 267.82 mbps
    remaining ram: 0 kbytes
    total ram: 1590088 kbytes
    duplicate: 303985 pages
    normal bytes: 376292 kbytes
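The reported throughput lines are consistent with the other counters: transferred ram divided by total time. A quick cross-check, added here as an illustration (the helper name is made up); the small differences from the reported values come from the millisecond rounding of the times.

```shell
#!/bin/sh
# Cross-check "throughput" against "transferred ram" / "total time":
# kbytes * 1024 bytes * 8 bits / seconds, expressed in mbps.
mig_throughput_mbps() {
    awk -v kb="$1" -v ms="$2" 'BEGIN { printf "%.1f\n", kb * 1024 * 8 / (ms / 1000) / 1e6 }'
}

mig_throughput_mbps 389137 11834   # Niantic run, reported 269.49 mbps
mig_throughput_mbps 379699 11619   # Fortville run, reported 267.82 mbps
```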
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

Hit the Enter key. This brings the user to the testpmd prompt.

.. code-block:: console

    testpmd>
On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

    (qemu) info status
    VM status: running

For the Niantic NIC.

.. code-block:: console

    (qemu) device_add pci-assign,host=06:10.0,id=vf1

For the Fortville NIC.

.. code-block:: console

    (qemu) device_add pci-assign,host=03:02.0,id=vf1
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

.. code-block:: console

    testpmd> show port info all
    testpmd> show port stats all
    testpmd> show bonding config 2
    testpmd> port attach 0000:00:04.0

    testpmd> port start 1
The ``mac_addr`` command only works with the kernel PF for Niantic.

.. code-block:: console

    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
    testpmd> show port stats all
    testpmd> show config fwd
    testpmd> show bonding config 2
    testpmd> add bonding slave 1 2
    testpmd> set bonding primary 1 2
    testpmd> show bonding config 2
    testpmd> show port stats all
VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. code-block:: console

    testpmd> remove bonding slave 0 2
    testpmd> show bonding config 2

    testpmd> port close 0
    testpmd> port detach 0
    Port '0000:00:03.0' is detached. Now total ports is 2

    testpmd> show port info all
    testpmd> show config fwd
    testpmd> show port stats all
VF traffic is seen at P1 (VF) and P2 (Bonded device).
.. _lm_bond_virtio_sriov_host_scripts:

Sample host scripts
-------------------

setup_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~

Set up Virtual Functions on host_server_1
.. code-block:: sh

    #!/bin/sh
    # This script is run on the host 10.237.212.46 to setup the VF

    # set up Niantic VF
    cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
    echo 1 > /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
    cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs

    # set up Fortville VF
    cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
    echo 1 > /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
    cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
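The VF created above appears at its own PCI address, which the VM scripts later pass to ``pci-assign`` (``09:10.0`` on host_server_1, ``06:10.0`` on host_server_2). A sketch of how the Niantic VF address relates to its PF address, under the assumption that the first VF sits at a devfn offset of 0x80 as on these 82599 NICs; on a live host the authoritative source is the ``virtfn0`` symlink under the PF's sysfs directory.

```shell
#!/bin/sh
# Derive the first VF's bus:slot.fn from the PF's, assuming a VF devfn offset
# of 0x80 (the Niantic/82599 layout). On a real system prefer:
#   readlink /sys/bus/pci/devices/0000:09:00.0/virtfn0
first_vf_addr() {
    pf="$1"                              # e.g. 09:00.0
    bus=${pf%%:*}
    slot=${pf#*:}; slot=${slot%%.*}
    fn=${pf##*.}
    devfn=$(( 0x$slot * 8 + fn + 0x80 )) # PF devfn plus first-VF offset
    printf '%s:%02x.%x\n' "$bus" $(( devfn / 8 )) $(( devfn % 8 ))
}

first_vf_addr 09:00.0   # Niantic PF on host_server_1 -> 09:10.0
first_vf_addr 06:00.0   # Niantic PF on host_server_2 -> 06:10.0
```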
vm_virtio_vf_one_212_46.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up the Virtual Machine on host_server_1
.. code-block:: sh

    #!/bin/sh

    KVM_PATH="/usr/bin/qemu-system-x86_64"

    DISK_IMG="/home/username/disk_image/virt1_sml.disk"

    # Number of guest cpus

    taskset -c 1-5 $KVM_PATH \
        -vnc none -nographic \
        -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
        -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB \
        -device pci-assign,host=09:10.0,id=vf1 \
        -monitor telnet::3333,server,nowait
setup_bridge_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~~~~~

Set up the bridge on host_server_1

.. code-block:: sh

    #!/bin/sh
    # This script is run on the host 10.237.212.46 to setup the bridge
    # for the Tap device and the PF device.
    # This enables traffic to go from the PF to the Tap to the Virtio PMD in the VM.

    # ens3f0 is the Niantic NIC
    # ens6f0 is the Fortville NIC

    brctl addif virbr0 ens3f0
    brctl addif virbr0 ens6f0
    brctl addif virbr0 tap1
connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

    #!/bin/sh
    # This script is run on both hosts when the VM is up,
    # to connect to the Qemu Monitor.
setup_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

Set up Virtual Functions on host_server_2

.. code-block:: sh

    #!/bin/sh
    # This script is run on the host 10.237.212.131 to setup the VF

    # set up Niantic VF
    cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
    echo 1 > /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
    cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs

    # set up Fortville VF
    cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
    echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
    cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
vm_virtio_one_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~

Set up the Virtual Machine on host_server_2

.. code-block:: sh

    #!/bin/sh
    # Start the VM on host_server_2 with the same parameters as the VM on
    # host_server_1, but without the VF parameters and in migration-listen
    # mode (-incoming tcp:0:5555).

    KVM_PATH="/usr/bin/qemu-system-x86_64"

    DISK_IMG="/home/username/disk_image/virt1_sml.disk"

    # Number of guest cpus

    taskset -c 1-5 $KVM_PATH \
        -vnc none -nographic \
        -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
        -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB \
        -incoming tcp:0:5555 \
        -monitor telnet::3333,server,nowait
setup_bridge_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up the bridge on host_server_2

.. code-block:: sh

    #!/bin/sh
    # This script is run on the host to setup the bridge
    # for the Tap device and the PF device.
    # This enables traffic to go from the PF to the Tap to the Virtio PMD in the VM.

    # ens4f0 is the Niantic NIC
    # ens5f0 is the Fortville NIC

    brctl addif virbr0 ens4f0
    brctl addif virbr0 ens5f0
    brctl addif virbr0 tap1
.. _lm_bond_virtio_sriov_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_in_vm.sh
~~~~~~~~~~~~~~~~~~~

Set up DPDK in the Virtual Machine

.. code-block:: sh

    #!/bin/sh
    # this script matches the vm_virtio_vf_one script

    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

    /root/dpdk/tools/dpdk_nic_bind.py --status

    rmmod virtio-pci ixgbevf

    insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko

    /root/dpdk/tools/dpdk_nic_bind.py -b igb_uio 0000:00:03.0
    /root/dpdk/tools/dpdk_nic_bind.py -b igb_uio 0000:00:04.0

    /root/dpdk/tools/dpdk_nic_bind.py --status
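The echo above reserves 1024 two-megabyte hugepages, i.e. 2 GB, inside the VM; the testpmd invocation in the next script takes 350 MB of that via ``--socket-mem 350``. A quick arithmetic check, with a helper name made up for illustration:

```shell
#!/bin/sh
# 2 MB hugepages: pages * page_size_in_kB / 1024 = total MB reserved.
hugepage_total_mb() { echo $(( $1 * $2 / 1024 )); }

hugepage_total_mb 1024 2048   # 1024 pages of 2048 kB -> 2048 MB (2 GB)
```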
run_testpmd_bonding_in_vm.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Run testpmd in the Virtual Machine.

.. code-block:: sh

    #!/bin/sh
    # Run testpmd in the VM

    # The test system has 8 cpus (0-7), use cpus 2-7 for VM
    # Use taskset -pc <core number> <thread_id>

    # use for bonding of virtio and vf tests in VM
    /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
        -c f -n 4 --socket-mem 350 -- -i --port-topology=chained
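The ``-c f`` option is a hexadecimal coremask: 0xf selects lcores 0-3 for testpmd. A small sketch decoding a coremask into the lcore list; the helper is illustrative, not a DPDK tool.

```shell
#!/bin/sh
# Decode a hex coremask into the list of lcore ids it selects.
coremask_to_cores() {
    mask=$(( 0x$1 )); i=0; out=""
    while [ "$mask" -ne 0 ]; do
        [ $(( mask & 1 )) -eq 1 ] && out="$out $i"
        mask=$(( mask >> 1 )); i=$(( i + 1 ))
    done
    echo "${out# }"
}

coremask_to_cores f    # testpmd -c f -> lcores 0 1 2 3
```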
.. _lm_bond_virtio_sriov_switch_conf:

Sample switch configuration
---------------------------

The Intel switch is used to connect the traffic generator to the
NICs on host_server_1 and host_server_2.
In order to run the switch configuration two console windows are required.

Log in as root in both windows.

TestPointShared, run_switch.sh and load /root/switch_config must be executed
in the sequence below.
On Switch: Terminal 1
~~~~~~~~~~~~~~~~~~~~~

Run TestPointShared.

.. code-block:: console

    /usr/bin/TestPointShared
On Switch: Terminal 2
~~~~~~~~~~~~~~~~~~~~~

Execute run_switch.sh.

.. code-block:: console

    run_switch.sh
On Switch: Terminal 1
~~~~~~~~~~~~~~~~~~~~~

Load the switch configuration.

.. code-block:: console

    load /root/switch_config
Sample switch configuration script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``/root/switch_config`` script:

.. code-block:: console

    show port 1,5,9,13,17,21,25
    set port 1,5,9,13,17,21,25 up
    show port 1,5,9,13,17,21,25

    add port port-set 1 0
    add port port-set 5,9,13,17,21,25 1

    add acl-rule condition 1 1 port-set 1
    add acl-rule action 1 1 redirect 1

    add vlan port 1000 1,5,9,13,17,21,25
    set vlan tagging 1000 1,5,9,13,17,21,25 tag
    set switch config flood_ucast fwd
    show port stats all 1,5,9,13,17,21,25