diff --git a/doc/guides/nics/intel_vf.rst b/doc/guides/nics/intel_vf.rst
index e8d5118a0d..fcea8151bf 100644
--- a/doc/guides/nics/intel_vf.rst
+++ b/doc/guides/nics/intel_vf.rst
@@ -88,6 +88,16 @@ For more detail on SR-IOV, please refer to the following documents:
     assignment in hypervisor. Take qemu for example, the device assignment should carry the IAVF device id (0x1889) like
     ``-device vfio-pci,x-pci-device-id=0x1889,host=03:0a.0``.
 
+    Starting from DPDK 21.05, the default VF driver for the Intel® 700 Series Ethernet Controller will be IAVF. No new features
+    will be added to i40evf, only bug fixes, until it is removed in DPDK 21.11. Between DPDK 21.05 and 21.11, the i40evf PMD
+    can still be used on the Intel® 700 Series Ethernet Controller via the ``devargs`` option ``driver=i40evf``, for example::
+
+        -a 81:02.0,driver=i40evf
+
+    When IAVF is backed by an Intel® E810 device, the "Protocol Extraction" feature supported by the ice PMD is also
+    available to the IAVF PMD. The same devargs with the same parameters can be applied to the IAVF PMD; for details,
+    please refer to the section ``Protocol extraction for per queue`` of ice.rst.
+
 The PCIE host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
@@ -124,7 +134,6 @@ Intel® X710/XL710 Gigabit Ethernet Controller VF Infrastructure
 
 In a virtualized environment, the programmer can enable a maximum of *128 Virtual Functions (VF)* globally
 per Intel® X710/XL710 Gigabit Ethernet Controller NIC device.
-The number of queue pairs of each VF can be configured by ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VF`` in ``config`` file.
 The Physical Function in host could be either configured by the Linux* i40e driver
 (in the case of the Linux Kernel-based Virtual Machine [KVM]) or by DPDK PMD PF driver.
 When using both DPDK PMD PF/VF drivers, the whole NIC will be taken over by DPDK based application.
@@ -521,20 +530,12 @@ The setup procedure is as follows:
 
    .. code-block:: console
 
-       make install T=x86_64-native-linuxapp-gcc
-       ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -- -i
+       ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -- -i
 
 #. Finally, access the Guest OS using vncviewer with the localhost:5900 port and check the lspci command output in the Guest OS.
    The virtual functions will be listed as available for use.
 
-#. Configure and install the DPDK with an x86_64-native-linuxapp-gcc configuration on the Guest OS as normal,
-   that is, there is no change to the normal installation procedure.
-
-   .. code-block:: console
-
-       make config T=x86_64-native-linuxapp-gcc O=x86_64-native-linuxapp-gcc
-       cd x86_64-native-linuxapp-gcc
-       make
+#. Configure and install the DPDK on the Guest OS as normal; that is, there is no change to the normal installation procedure.
 
 .. note::
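
As an illustration of the protocol extraction support mentioned in the first hunk, an IAVF port backed by an E810 device
would take the same ``proto_xtr`` devargs that ice.rst documents. A minimal sketch, assuming a VF at PCI address 18:01.0
and an illustrative queue-to-type mapping (neither value comes from this patch)::

    dpdk-testpmd -l 0-3 -n 4 -a 18:01.0,proto_xtr='[(1,2-3):tcp,4-5:vlan]' -- -i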