X-Git-Url: http://git.droids-corp.org/?a=blobdiff_plain;f=doc%2Fguides%2Fnics%2Fbnxt.rst;h=ab093c3f4df61f58f5083bcbb50ebe032ff2b9a4;hb=35ce677cfada4e267221ddc53f969cd524c933b6;hp=ed650187e0d7856d051c314f6e1f3a26bdcb89c1;hpb=1adaf0e0f2eeb56bc7d4b22b855706b4aba51567;p=dpdk.git

diff --git a/doc/guides/nics/bnxt.rst b/doc/guides/nics/bnxt.rst
index ed650187e0..ab093c3f4d 100644
--- a/doc/guides/nics/bnxt.rst
+++ b/doc/guides/nics/bnxt.rst
@@ -4,7 +4,7 @@
 BNXT Poll Mode Driver
 =====================
 
-The Broadcom BNXT PMD (**librte_pmd_bnxt**) implements support for adapters
+The Broadcom BNXT PMD (**librte_net_bnxt**) implements support for adapters
 based on Ethernet controllers and SoCs belonging to the Broadcom
 BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers, the
 Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom
@@ -56,16 +56,8 @@ The BNXT PMD supports operating with:
 * Linux igb_uio
 * BSD nic_uio
 
-Compiling BNXT PMD
-------------------
-
-To compile the BNXT PMD:
-
-.. code-block:: console
-
-   make config T=x86_64-native-linux-gcc && make // for x86-64
-   make config T=x86_32-native-linux-gcc && make // for x86-32
-   make config T=armv8a-linux-gcc && make // for ARMv8
+Running BNXT PMD
+----------------
 
 Bind the device to one of the kernel modules listed above
 
@@ -73,16 +65,6 @@ Bind the device to one of the kernel modules listed above
 
    ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic bus_id:device_id.function_id
 
-Load an application (e.g. testpmd) with a default configuration (e.g. a single
-TX /RX queue):
-
-.. code-block:: console
-
-   ./testpmd -c 0xF -n 4 -- -i --portmask=0x1 --nb-cores=2
-
-Running BNXT PMD
-----------------
-
 The BNXT PMD can run on PF or VF.
 
 PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct assignment
@@ -403,7 +385,7 @@ The application enables multiple TX and RX queues when it is started.
 
 .. code-block:: console
 
-   testpmd -l 1,3,5 --master-lcore 1 --txq=2 –rxq=2 --nb-cores=2
+   testpmd -l 1,3,5 --main-lcore 1 --txq=2 --rxq=2 --nb-cores=2
 
 **TSS**
 
@@ -583,9 +565,6 @@ The BNXT PMD supports a PTP client application to communicate with a PTP master
 clock using DPDK IEEE1588 APIs. Note that the PTP client application needs to
 run on PF and vector mode needs to be disabled.
 
-For the PTP time synchronization support, the BNXT PMD must be compiled with
-``CONFIG_RTE_LIBRTE_IEEE1588=y`` (this compilation flag is currently pending).
-
 .. code-block:: console
 
    testpmd> set fwd ieee1588 // enable IEEE 1588 mode
@@ -630,7 +609,7 @@ Basic stats include:
 * oerrors
 
 By default, per-queue stats for 16 queues are supported. For more than 16
-queues, BNXT PMD should be compiled with ``CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS``
+queues, BNXT PMD should be compiled with ``RTE_ETHDEV_QUEUE_STAT_CNTRS``
 set to the desired number of queues.
 
 Extended Stats
@@ -706,6 +685,114 @@ optimizes flow insertions and deletions.
 
 This is a tech preview feature, and is disabled by default. It can be enabled
 using bnxt devargs. For ex: "-w 0000:0d:00.0,host-based-truflow=1".
 
+Notes
+-----
+
+- On stopping a device port, all the flows created on the port by the
+  application will be flushed from the hardware and from any tables maintained
+  by the PMD. After stopping the device port, all flows on the port become
+  invalid and are no longer represented in the system.
+  Instead of destroying or flushing such flows, an application should discard
+  all references to these flows and re-create the flows as required after the
+  port is restarted (a sketch of this sequence follows these notes).
+
+- While an application is free to use the group id attribute to group flows
+  together using specific criteria, the BNXT PMD currently associates this
+  group id with a VNIC id. One such case is grouping of flows which are
+  filtered on the same source or destination MAC address. This allows packets
+  of such flows to be directed to one or more queues associated with the VNIC
+  id. This implementation is supported only when TRUFLOW functionality is
+  disabled.
+
+- An application can issue a VXLAN decap offload request using the rte_flow
+  API, either as a single rte_flow request or as a combination of two stages.
+  The PMD currently supports the two-stage offload design.
+  In this approach the offload request may come as two flow offload requests,
+  Flow1 and Flow2. The match criteria for Flow1 are O_DMAC, O_SMAC, O_DST_IP
+  and O_UDP_DPORT, and the actions are COUNT, MARK and JUMP. The match
+  criteria for Flow2 are O_SRC_IP, O_DST_IP, VNI and inner header fields.
+  Flow1 and Flow2 flow offload requests can come in any order. If the Flow2
+  offload request comes first, Flow2 cannot be offloaded because it carries no
+  O_DMAC information. In this case, Flow2 is deferred until the Flow1 offload
+  request arrives. The Flow1 request carries the O_DMAC information; using
+  Flow1's O_DMAC, the driver creates an L2 context entry in the hardware as
+  part of offloading Flow1. Flow2 then uses Flow1's O_DMAC to look up the L2
+  context id associated with it, together with the other flow fields that were
+  cached when Flow2 was deferred. A Flow2 request that arrives after Flow1 has
+  been offloaded is programmed directly and is not cached.
+
+- The PMD supports thread-safe rte_flow operations.
+
+Note: A VNIC represents a virtual interface in the hardware. It is a resource
+in the RX path of the chip and is used to set up various target actions such
+as RSS, MAC filtering etc. for the physical function in use.
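+
+As an illustration of the first note above, a minimal testpmd sequence (the
+flow pattern, port number and queue index are arbitrary placeholders, not
+taken from the BNXT documentation) could look like this:
+
+.. code-block:: console
+
+   testpmd> flow create 0 ingress pattern eth / ipv4 / end actions queue index 1 / end
+   testpmd> port stop 0      // all flows on port 0 are flushed by the PMD
+   testpmd> port start 0
+   testpmd> flow create 0 ingress pattern eth / ipv4 / end actions queue index 1 / end
+
+No ``flow destroy`` or ``flow flush`` is issued for the old flow; its
+reference is simply dropped and the flow is re-created once the port is
+running again.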
+
+Virtual Function Port Representors
+----------------------------------
+The BNXT PMD supports the creation of VF port representors for the control
+and monitoring of BNXT virtual function devices. Each port representor
+corresponds to a single virtual function (VF) of that device. When there is no
+hardware flow offload, each packet transmitted by the VF will be received by
+the corresponding representor. Similarly, each packet that is sent to a
+representor will be received by the VF. Applications can take advantage of
+this feature when SR-IOV is enabled. The representor allows the first packet
+transmitted by the VF to be received by the DPDK application, which can then
+decide whether the flow should be offloaded to the hardware. Once the flow is
+offloaded in the hardware, any packet matching the flow will be received by
+the VF while the DPDK application will no longer receive it. The BNXT PMD
+supports creation and handling of the port representors when the PMD is
+initialized on a PF or a trusted VF. The user can specify the list of VF IDs
+for which representors are needed by using the ``devargs`` option
+``representor``::
+
+   -w DBDF,representor=[0,1,4]
+
+Note that currently hot-plugging of representor ports is not supported, so all
+the required representors must be specified when the PF or the trusted VF is
+created.
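+
+For example, a testpmd invocation creating representors for VFs 0 and 1 of a
+PF (the PCI address and core list below are only placeholders) could look like
+this:
+
+.. code-block:: console
+
+   testpmd -l 1-4 -n 4 -w 0000:82:00.0,representor=[0,1] -- -i
+
+Each representor then appears to the application as an additional ethdev port
+alongside the PF port.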
+
+Representors on Stingray SoC
+----------------------------
+A representor created on an x86 host typically represents a VF running in the
+same x86 domain. On the Stingray SoC, however, the application can run on the
+CPU complex inside the SoC, and a representor can be created on the SoC to
+represent a PF or a VF running in the x86 domain. Since representor creation
+requires passing the bus:device.function of the PCI device endpoint, which is
+not necessarily in the same host domain, additional ``devargs`` have been
+added to the PMD.
+
+* rep-is-pf - 0 to create a VF representor, 1 to create a PF representor
+* rep-based-pf - Physical index of the PF
+* rep-q-r2f - Logical COS queue index for the representor to endpoint direction
+* rep-q-f2r - Logical COS queue index for the endpoint to representor direction
+* rep-fc-r2f - Flow control for the representor to endpoint direction
+* rep-fc-f2r - Flow control for the endpoint to representor direction
+
+A sample command line with the new ``devargs`` looks like this::
+
+    -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+    rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
+
+.. code-block:: console
+
+    testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
+    representor=[0],rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
+    rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
+
+Number of flows supported
+-------------------------
+The number of flows that can be supported can be changed using the ``devargs``
+parameter ``max_num_kflows``. The default number of flows supported is 16K
+each in the ingress and egress paths.
+
+Selecting EM vs EEM
+-------------------
+Broadcom devices can support filter creation either in on-chip memory or in
+external memory. These are referred to as EM and EEM mode, respectively.
+The decision for internal/external EM support is based on the ``devargs``
+parameter ``max_num_kflows``. If this is set by the user, external EM is used.
+Otherwise EM support is enabled with flows created in internal memory.
+
 Application Support
 -------------------
 
@@ -792,9 +879,9 @@ Vector processing provides significantly improved performance over scalar
 processing (see Vector Processor, here).
 
 The BNXT PMD supports the vector processing using SSE (Streaming SIMD
-Extensions) instructions on x86 platforms. The BNXT vPMD (vector mode PMD) is
-currently limited to Intel/AMD CPU architecture. Support for ARM is *not*
-currently implemented.
+Extensions) instructions on x86 platforms. It also supports NEON intrinsics for
+vector processing on ARM CPUs. The BNXT vPMD (vector mode PMD) is available for
+Intel/AMD and ARM CPU architectures.
 
 This improved performance comes from several optimizations: