+using bnxt devargs. For example: ``-a 0000:0d:00.0,host-based-truflow=1``.
+
+This feature is currently supported on Whitney+ and Stingray devices.
+
+Notes
+-----
+
+- When a device port is stopped, all flows created on that port by the
+ application are flushed from the hardware and from any tables maintained
+ by the PMD. Once the port is stopped, these flows become invalid and are
+ no longer represented in the system. Rather than destroying or flushing
+ such flows, the application should discard all references to them and
+ re-create them as required after the port is restarted, as sketched in
+ the first example below.
+
+- While an application is free to use the group id attribute to group flows
+ together using a specific criterion, the BNXT PMD currently associates this
+ group id with a VNIC id. One such case is the grouping of flows that are
+ filtered on the same source or destination MAC address, which allows packets
+ of such flows to be directed to one or more queues associated with the
+ VNIC id (see the second example below). This implementation is supported
+ only when TRUFLOW functionality is disabled.
+
+- An application can issue a VXLAN decap offload request using the rte_flow
+ API either as a single rte_flow request or as a combination of two stages.
+ The PMD currently supports the two-stage offload design. In this approach
+ the offload request comes as two flow offload requests, Flow1 and Flow2.
+ The match criteria for Flow1 are O_DMAC, O_SMAC, O_DST_IP and O_UDP_DPORT,
+ and its actions are COUNT, MARK and JUMP. The match criteria for Flow2 are
+ O_SRC_IP, O_DST_IP, VNI and the inner header fields.
+ The Flow1 and Flow2 offload requests can arrive in any order. If the Flow2
+ request comes first, Flow2 cannot be offloaded because it carries no O_DMAC
+ information. In that case Flow2 is deferred until the Flow1 request, which
+ carries the O_DMAC information, arrives. Using Flow1's O_DMAC, the driver
+ creates an L2 context entry in the hardware as part of offloading Flow1.
+ Flow2 then uses Flow1's O_DMAC to look up the L2 context id associated with
+ this O_DMAC, together with the other flow fields that were cached when
+ Flow2 was deferred. A Flow2 request that arrives after Flow1 has been
+ offloaded is programmed directly and is not cached. The third example
+ below sketches this two-stage sequence.
+
+- The BNXT PMD supports thread-safe rte_flow operations.
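+
+As a sketch of the flow re-creation sequence described in the first note
+above, using illustrative testpmd commands (the match and action values are
+arbitrary):
+
+.. code-block:: console
+
+ testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.1.1.1 / end actions queue index 2 / end
+ testpmd> port stop 0
+ testpmd> port start 0
+ testpmd> flow create 0 ingress pattern eth / ipv4 src is 10.1.1.1 / end actions queue index 2 / end
+
+The handle returned by the first ``flow create`` becomes invalid when port 0
+is stopped; the application simply issues the rule again after the port is
+restarted.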
+
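+For illustration of the group id note, the two testpmd rules below share
+group 1 and match the same destination MAC address, so the PMD can associate
+them with a single VNIC and steer their packets to the listed queues (all
+values are arbitrary):
+
+.. code-block:: console
+
+ testpmd> flow create 0 group 1 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 / udp / end actions queue index 2 / end
+ testpmd> flow create 0 group 1 ingress pattern eth dst is 00:11:22:33:44:55 / ipv4 / tcp / end actions queue index 3 / end
+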
+Note: A VNIC represents a virtual interface in the hardware. It is a resource
+in the RX path of the chip and is used to set up various target actions such
+as RSS, MAC filtering etc. for the physical function in use.
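+
+The third example promised above is a sketch of the two-stage VXLAN decap
+request in testpmd terms; all addresses, the VNI and the ids are arbitrary.
+Flow1 counts and marks traffic matched on the outer headers and jumps to
+group 1, where Flow2 matches the outer IPs, the VNI and the inner headers
+and performs the decap:
+
+.. code-block:: console
+
+ testpmd> flow create 0 ingress pattern eth dst is 00:11:22:33:44:55 src is 00:aa:bb:cc:dd:ee / ipv4 dst is 10.1.1.1 / udp dst is 4789 / end actions count / mark id 8 / jump group 1 / end
+ testpmd> flow create 0 group 1 ingress pattern eth / ipv4 src is 20.1.1.1 dst is 10.1.1.1 / udp / vxlan vni is 100 / eth / ipv4 / end actions vxlan_decap / queue index 1 / end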
+
+Virtual Function Port Representors
+----------------------------------
+The BNXT PMD supports the creation of VF port representors for the control
+and monitoring of BNXT virtual function devices. Each port representor
+corresponds to a single virtual function of that device. When there is no
+hardware flow offload, each packet transmitted by the VF will be received
+by the corresponding representor. Similarly, each packet that is sent to a
+representor will be received by the VF. Applications can take advantage of
+this feature when SR-IOV is enabled. The representor allows the first packet
+transmitted by the VF to be received by the DPDK application, which can then
+decide if the flow should be offloaded to the hardware. Once the flow is
+offloaded in the hardware, any packet matching the flow will be received by
+the VF while the DPDK application will no longer receive it. The BNXT PMD
+supports creation and handling of the port representors when the PMD is
+initialized on a PF or trusted VF. The user can specify the list of VF IDs
+of the VFs for which the representors are needed by using the ``devargs``
+option ``representor``::
+
+ -a DBDF,representor=[0,1,4]
+
+Note that currently hot-plugging of representor ports is not supported, so
+all the required representors must be specified when the PF or the trusted
+VF is created.
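+
+For example, a testpmd invocation that creates representors for VF 0 and
+VF 1 might look like this (the PCI address is illustrative):
+
+.. code-block:: console
+
+ dpdk-testpmd -l1-4 -n2 -a 0000:06:00.0,representor=[0,1] -- -i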
+
+Representors on Stingray SoC
+----------------------------
+A representor created on an x86 host typically represents a VF running in the
+same x86 domain. In the case of the SoC, however, the application can run on
+the CPU complex inside the SoC, and the representor can be created on the SoC
+to represent a PF or a VF running in the x86 domain. Since representor
+creation requires passing the bus:device.function of a PCI device endpoint
+that is not necessarily in the same host domain, additional ``devargs`` have
+been added to the PMD.
+
+* rep-is-pf - false (0) to indicate a VF representor, true (1) to indicate a PF representor
+* rep-based-pf - Physical index of the backing PF
+* rep-q-r2f - Logical COS Queue index for the representor to endpoint direction
+* rep-q-f2r - Logical COS Queue index for the endpoint to representor direction
+* rep-fc-r2f - Flow control for the representor to endpoint direction
+* rep-fc-f2r - Flow control for the endpoint to representor direction
+
+The sample command line with the new ``devargs`` looks like this::
+
+ -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
+ rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
+
+A full testpmd invocation using these ``devargs`` looks like this:
+
+.. code-block:: console
+
+ dpdk-testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
+ representor=[0],rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
+ rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*,8" -- -i --rxq=3 --txq=3
+
+Number of flows supported
+-------------------------
+The number of flows that can be supported can be changed using the ``devargs``
+parameter ``max_num_kflows``. The default number of flows supported is 16K
+each in the ingress and egress paths.
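+
+For example, assuming the value is expressed in units of 1K flows, as the
+parameter name suggests, the following raises the limit to 32K flows in each
+direction (the PCI address is illustrative)::
+
+ -a 0000:0d:00.0,max_num_kflows=32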
+
+Selecting EM vs EEM
+-------------------
+Broadcom devices can support filter creation in on-chip memory or in external
+memory, referred to as EM and EEM mode respectively. The choice between
+internal and external EM support is based on the ``devargs`` parameter
+``max_num_kflows``. If it is set by the user, external EM is used. Otherwise
+EM support is enabled, with flows created in internal memory.
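+
+As an illustrative sketch (the PCI address is arbitrary), the first set of
+``devargs`` below leaves the PMD in on-chip EM mode, while the second selects
+external EEM simply by specifying ``max_num_kflows``::
+
+ # on-chip EM: max_num_kflows not specified
+ -a 0000:0d:00.0,host-based-truflow=1
+
+ # external EEM: max_num_kflows specified
+ -a 0000:0d:00.0,host-based-truflow=1,max_num_kflows=128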