X-Git-Url: http://git.droids-corp.org/?a=blobdiff_plain;f=doc%2Fguides%2Fnics%2Fbnxt.rst;h=feb0c6a7657aa8387533a5cd88f29b88bce4812e;hb=2e3dbc80cc012f11799c7eda866e1168dadb5032;hp=d9a7d8793092c612d5d993575a74512a07b9b651;hpb=1509e07f58369cb629b018d6272d714d6b6d236e;p=dpdk.git

diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index d9a7d87930..feb0c6a765 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -6,7 +6,7 @@ BNXT Poll Mode Driver
 
 The Broadcom BNXT PMD (**librte_net_bnxt**) implements support for adapters
 based on Ethernet controllers and SoCs belonging to the Broadcom
-BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers,
+BCM5741X/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers,
 the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom
 StrataGX® BCM5873X Series of Communications Processors.
 
@@ -25,30 +25,6 @@ device memory to userspace, registering interrupts, etc.
 VFIO is more secure than UIO, relying on IOMMU protection.
 UIO requires the IOMMU disabled or configured to pass-through mode.
 
-Operating Systems supported:
-
-* Red Hat Enterprise Linux release 8.1 (Ootpa)
-* Red Hat Enterprise Linux release 8.0 (Ootpa)
-* Red Hat Enterprise Linux Server release 7.7 (Maipo)
-* Red Hat Enterprise Linux Server release 7.6 (Maipo)
-* Red Hat Enterprise Linux Server release 7.5 (Maipo)
-* Red Hat Enterprise Linux Server release 7.4 (Maipo)
-* Red Hat Enterprise Linux Server release 7.3 (Maipo)
-* Red Hat Enterprise Linux Server release 7.2 (Maipo)
-* CentOS Linux release 8.0
-* CentOS Linux release 7.7
-* CentOS Linux release 7.6.1810
-* CentOS Linux release 7.5.1804
-* CentOS Linux release 7.4.1708
-* Fedora 31
-* FreeBSD 12.1
-* Suse 15SP1
-* Ubuntu 19.04
-* Ubuntu 18.04
-* Ubuntu 16.10
-* Ubuntu 16.04
-* Ubuntu 14.04
-
 The BNXT PMD supports operating with:
 
 * Linux vfio-pci
@@ -385,7 +361,7 @@ The application enables multiple TX and RX queues when it is started.
 
 .. code-block:: console
 
-   testpmd -l 1,3,5 --main-lcore 1 --txq=2 –rxq=2 --nb-cores=2
+   dpdk-testpmd -l 1,3,5 --main-lcore 1 --txq=2 --rxq=2 --nb-cores=2
 
 **TSS**
 
@@ -685,6 +661,8 @@ optimizes flow insertions and deletions.
 
 This is a tech preview feature, and is disabled by default. It can be enabled
 using bnxt devargs. For example: "-a 0000:0d:00.0,host-based-truflow=1".
 
+This feature is currently supported on Whitney+ and Stingray devices.
+
 Notes
 -----
 
@@ -775,7 +753,7 @@ The sample command line with the new ``devargs`` looks like this::
 
 .. code-block:: console
 
-   testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
+   dpdk-testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
    representor=[0],rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
    rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
 
@@ -869,29 +847,42 @@ DPDK implements a light-weight library to allow PMDs to be bonded together and provide a single logical PMD to the application.
 
 .. code-block:: console
 
-   testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX’ – --socket_num=1 – -i --port-topology=chained
-   (ex) testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' – --socket-num=1 – -i --port-topology=chained
+   dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
+   (ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
 
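+For illustration, an equivalent bond can also be created at run time through
+the bonding library API rather than the EAL ``--vdev`` argument. This is only
+a minimal sketch; the member port ids 0 and 1 are assumptions standing in for
+the two probed bnxt ports:
+
+.. code-block:: c
+
+   #include <rte_eth_bond.h>
+
+   /* Create a mode 0 (round-robin) bonded device on NUMA socket 0. */
+   int bond_port = rte_eth_bond_create("net_bonding0",
+                                       BONDING_MODE_ROUND_ROBIN, 0);
+
+   if (bond_port >= 0) {
+           /* Add the two bnxt ports (e.g. 0000:82:00.0 and 0000:82:00.1),
+            * assumed to have been probed as ethdev ports 0 and 1.
+            */
+           rte_eth_bond_slave_add(bond_port, 0);
+           rte_eth_bond_slave_add(bond_port, 1);
+   }
+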
 Vector Processing
 -----------------
 
+The BNXT PMD provides vectorized burst transmit/receive function implementations
+on x86-based platforms using SSE (Streaming SIMD Extensions) and AVX2 (Advanced
+Vector Extensions 2) instructions, and on Arm-based platforms using Arm Neon
+Advanced SIMD instructions. Vector processing support is currently implemented
+only for Intel/AMD and Arm CPU architectures.
+
 Vector processing provides significantly improved performance over scalar
-processing (see Vector Processor, here).
+processing. This improved performance is derived from a number of optimizations:
 
-The BNXT PMD supports the vector processing using SSE (Streaming SIMD
-Extensions) instructions on x86 platforms. It also supports NEON intrinsics for
-vector processing on ARM CPUs. The BNXT vPMD (vector mode PMD) is available for
-Intel/AMD and ARM CPU architectures.
+* Using SIMD instructions to operate on multiple packets in parallel.
+* Using SIMD instructions to do more work per instruction than is possible
+  with scalar instructions, for example by leveraging 128-bit and 256-bit
+  load/store instructions or by using SIMD shuffle and permute operations.
+* Batching
 
-This improved performance comes from several optimizations:
+  * TX: transmit completions are processed in bulk.
+  * RX: bulk allocation of mbufs is used when allocating rxq buffers.
 
-* Batching
-  * TX: processing completions in bulk
-  * RX: allocating mbufs in bulk
-* Chained mbufs are *not* supported, i.e. a packet should fit a single mbuf
-* Some stateless offloads are *not* supported with vector processing
-  * TX: no offloads will be supported
-  * RX: reduced RX offloads (listed below) will be supported::
+* Simplifications enabled by not supporting chained mbufs in vector mode.
+* Simplifications enabled by not supporting some stateless offloads in vector
+  mode:
+
+  * TX: only the following reduced set of transmit offloads is supported in
+    vector mode::
+
+      DEV_TX_OFFLOAD_MBUF_FAST_FREE
+
+  * RX: only the following reduced set of receive offloads is supported in
+    vector mode (note that jumbo MTU is allowed only when the MTU setting
+    does not require ``DEV_RX_OFFLOAD_SCATTER`` to be enabled)::
 
   DEV_RX_OFFLOAD_VLAN_STRIP
   DEV_RX_OFFLOAD_KEEP_CRC
@@ -900,16 +891,17 @@ This improved performance comes from several optimizations:
   DEV_RX_OFFLOAD_UDP_CKSUM
   DEV_RX_OFFLOAD_TCP_CKSUM
   DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
+  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM
  DEV_RX_OFFLOAD_RSS_HASH
  DEV_RX_OFFLOAD_VLAN_FILTER
 
-The BNXT Vector PMD is enabled in DPDK builds by default.
-
-However, a decision to enable vector mode will be made when the port transitions
-from stopped to started. Any TX offloads or some RX offloads (other than listed
-above) will disable the vector mode.
-Offload configuration changes that impact vector mode must be made when the port
-is stopped.
+The BNXT Vector PMD is enabled in DPDK builds by default. The decision to enable
+vector processing is made at run time when the port is started: vector-mode
+transmit is enabled only if no transmit offloads outside the supported set are
+configured, and vector-mode receive is enabled only if no receive offloads
+outside the supported set are configured. Offload configuration changes that
+impact the decision to enable vector mode are allowed only while the port is
+stopped.
 
 Note that TX (or RX) vector mode can be enabled independently from RX (or TX)
 vector mode.
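+
+As an illustration only (a minimal sketch, not an official example; the port
+id and queue counts are assumptions), an application that wants vector-mode RX
+and TX could restrict its offload configuration to the supported sets before
+starting the port:
+
+.. code-block:: c
+
+   #include <rte_ethdev.h>
+
+   uint16_t port_id = 0; /* assumed: the probed bnxt port */
+   struct rte_eth_conf port_conf = { 0 };
+
+   /* Request only offloads from the vector-mode compatible sets above. */
+   port_conf.rxmode.offloads = DEV_RX_OFFLOAD_VLAN_STRIP |
+                               DEV_RX_OFFLOAD_IPV4_CKSUM |
+                               DEV_RX_OFFLOAD_RSS_HASH;
+   port_conf.txmode.offloads = DEV_TX_OFFLOAD_MBUF_FAST_FREE;
+
+   /* Offloads must be chosen while the port is stopped; the vector/scalar
+    * decision is made when the port is started.
+    */
+   rte_eth_dev_configure(port_id, 1, 1, &port_conf);
+   /* ... set up one RX and one TX queue here ... */
+   rte_eth_dev_start(port_id);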