1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright 2020 Broadcom Inc.
7 The Broadcom BNXT PMD (**librte_net_bnxt**) implements support for adapters
8 based on Ethernet controllers and SoCs belonging to the Broadcom
9 BCM574XX/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers,
10 the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom
11 StrataGX® BCM5873X Series of Communications Processors.
13 A complete list with links to reference material is in the Appendix section.
18 BNXT PMD supports multiple CPU architectures, including x86-32, x86-64, and ARMv8.
23 BNXT PMD requires a kernel module (VFIO or UIO) for setting up a device, mapping
24 device memory to userspace, registering interrupts, etc.
25 VFIO is more secure than UIO, relying on IOMMU protection.
UIO requires the IOMMU to be disabled or configured in pass-through mode.
28 Operating Systems supported:
30 * Red Hat Enterprise Linux release 8.1 (Ootpa)
31 * Red Hat Enterprise Linux release 8.0 (Ootpa)
32 * Red Hat Enterprise Linux Server release 7.7 (Maipo)
33 * Red Hat Enterprise Linux Server release 7.6 (Maipo)
34 * Red Hat Enterprise Linux Server release 7.5 (Maipo)
35 * Red Hat Enterprise Linux Server release 7.4 (Maipo)
36 * Red Hat Enterprise Linux Server release 7.3 (Maipo)
37 * Red Hat Enterprise Linux Server release 7.2 (Maipo)
38 * CentOS Linux release 8.0
39 * CentOS Linux release 7.7
40 * CentOS Linux release 7.6.1810
41 * CentOS Linux release 7.5.1804
42 * CentOS Linux release 7.4.1708
52 The BNXT PMD supports operating with:
55 * Linux uio_pci_generic
62 Bind the device to one of the kernel modules listed above
64 .. code-block:: console
66 ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic bus_id:device_id.function_id
68 The BNXT PMD can run on PF or VF.
70 PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct assignment
71 of part of the network port resources to guest operating systems using the
73 NIC is logically distributed among multiple virtual machines (VMs), while still
74 having global data in common to share with the PF and other VFs.
A sysadmin can create and configure VFs:
78 .. code-block:: console
    echo num_vfs > /sys/bus/pci/devices/domain_id:bus_id:device_id.function_id/sriov_numvfs
    (ex) echo 4 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs
A sysadmin can also change VF properties such as the MAC address, transparent VLAN,
TX rate limit, and trusted VF setting:
86 .. code-block:: console
    ip link set (pf_name) vf (vf_id) mac (mac_address) vlan (vlan_id) txrate (rate_value) trust (on|off)
    (ex) ip link set ens2f0 vf 0 mac 00:11:22:33:44:55 vlan 0x100 txrate 100 trust off
97 The Flow Bifurcation splits the incoming data traffic to user space applications
98 (such as DPDK applications) and/or kernel space programs (such as the Linux
100 It can direct some traffic, for example data plane traffic, to DPDK.
The rest of the traffic, for example control plane traffic, would be redirected to
102 the traditional Linux networking stack.
104 Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
106 Benefits of the flow bifurcation include:
108 * Better performance with less CPU overhead, as user application can directly
109 access the NIC for data path
110 * NIC is still being controlled by the kernel, as control traffic is forwarded
111 only to the kernel driver
112 * Control commands, e.g. ethtool, will work as usual
Running on a VF, the BNXT PMD supports flow bifurcation with a combination
115 of SR-IOV and packet classification and/or forwarding capability.
116 In the simplest case of flow bifurcation, a PF driver configures a NIC to
117 forward all user traffic directly to VFs with matching destination MAC address,
118 while the rest of the traffic is forwarded to a PF.
119 Note that the broadcast packets will be forwarded to both PF and VF.
121 .. code-block:: console
123 (ex) ethtool --config-ntuple ens2f0 flow-type ether dst 00:01:02:03:00:01 vlan 10 vlan-mask 0xf000 action 0x100000000
128 By default, VFs are *not* allowed to perform privileged operations, such as
129 modifying the VF’s MAC address in the guest. These security measures are
130 designed to prevent possible attacks.
131 However, when a DPDK application can be trusted (e.g., OVS-DPDK, here), these
132 operations performed by a VF would be legitimate and can be allowed.
To allow a VF to request "trusted mode", the trusted VF concept was introduced
in Linux kernel 4.4, allowing VFs to become "trusted" and perform some
privileged operations.
138 The BNXT PMD supports the trusted VF mode of operation. Only a PF can enable the
139 trusted attribute on the VF. It is preferable to enable the Trusted setting on a
140 VF before starting applications.
141 However, the BNXT PMD handles dynamic changes in trusted settings as well.
143 Note that control commands, e.g., ethtool, will work via the kernel PF driver,
144 *not* via the trusted VF driver.
146 Operations supported by trusted VF:
148 * MAC address configuration
151 Operations *not* supported by trusted VF:
154 * Promiscuous mode setting
Unlike a VF, when the BNXT PMD runs on a PF there are no restrictions placed on
the features which the PF can enable or request. In a multiport NIC, each port
will have a corresponding PF. Also, depending on the configuration of the NIC,
there can be more than one PF associated per port.
163 A sysadmin can load the kernel driver on one PF, and run BNXT PMD on the other
164 PF or run the PMD on both the PFs. In such cases, the firmware picks one of the
Much like in the trusted VF case, the DPDK application must be *trusted* and is
expected to be *well-behaved*.
173 The BNXT PMD supports the following features:
178 * Flow Control and Autoneg
181 * Multicast MAC Filter
187 * Checksum Offload (IPv4, TCP, and UDP)
188 * Multi-Queue (TSS and RSS)
189 * Segmentation and Reassembly (TSO and LRO)
192 * Generic Flow Offload
197 **Port MTU**: BNXT PMD supports the MTU (Maximum Transmission Unit) up to 9,574
200 .. code-block:: console
202 testpmd> port config mtu (port_id) mtu_value
203 testpmd> show port info (port_id)
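
The same configuration can be done programmatically through the generic ethdev
API; a minimal sketch (the MTU value and error handling shown here are only
illustrative):

.. code-block:: c

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Set the port MTU and read it back to confirm. */
    static int
    set_port_mtu(uint16_t port_id, uint16_t mtu)
    {
        uint16_t cur = 0;
        int ret = rte_eth_dev_set_mtu(port_id, mtu);   /* e.g. mtu = 9000 */

        if (ret != 0)
            return ret;                                /* -ENOTSUP, -EINVAL, ... */
        rte_eth_dev_get_mtu(port_id, &cur);            /* confirm the new value */
        return cur == mtu ? 0 : -EIO;
    }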
**LED**: Application turns on (or off) a port LED, typically for a port
208 .. code-block:: console
210 int rte_eth_led_on (uint16_t port_id)
211 int rte_eth_led_off (uint16_t port_id)
**Flow Control and Autoneg**: Application turns on (or off) flow control and/or
auto-negotiation on a port:
216 .. code-block:: console
218 testpmd> set flow_ctrl rx (on|off) (port_id)
219 testpmd> set flow_ctrl tx (on|off) (port_id)
220 testpmd> set flow_ctrl autoneg (on|off) (port_id)
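
Applications can request the same settings through the ethdev flow-control API;
a hedged sketch (the chosen mode is only an example):

.. code-block:: c

    #include <rte_ethdev.h>

    /* Read the current flow-control settings, then request full (RX+TX) pause. */
    static int
    enable_flow_ctrl(uint16_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;
        int ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);

        if (ret != 0)
            return ret;
        fc_conf.mode = RTE_FC_FULL;   /* or RTE_FC_NONE / RTE_FC_RX_PAUSE / RTE_FC_TX_PAUSE */
        fc_conf.autoneg = 1;          /* let auto-negotiation pick the final settings */
        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }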
222 Note that the BNXT PMD does *not* support some options and ignores them when
234 Applications control the packet-forwarding behaviors with packet filters.
236 The BNXT PMD supports hardware-based packet filtering:
238 * UC (Unicast) MAC Filters
239 * No unicast packets are forwarded to an application except the one with
240 DMAC address added to the port
241 * At initialization, the station MAC address is added to the port
242 * MC (Multicast) MAC Filters
243 * No multicast packets are forwarded to an application except the one with
244 MC address added to the port
245 * When the application listens to a multicast group, it adds the MC address
247 * VLAN Filtering Mode
248 * When enabled, no packets are forwarded to an application except the ones
249 with the VLAN tag assigned to the port
251 * When enabled, every multicast packet received on the port is forwarded to
253 * Typical usage is routing applications
255 * When enabled, every packet received on the port is forwarded to the
261 The application adds (or removes) MAC addresses to enable (or disable)
262 whitelist filtering to accept packets.
264 .. code-block:: console
266 testpmd> show port (port_id) macs
267 testpmd> mac_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
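
Applications can manage the same filters directly with the ethdev API; a
minimal sketch (the MAC address below is illustrative):

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Add a unicast DMAC filter so that packets to this address are accepted. */
    static int
    add_uc_mac(uint16_t port_id)
    {
        struct rte_ether_addr mac = {
            .addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 } /* example address */
        };

        /* pool 0: default pool when VMDq is not used */
        return rte_eth_dev_mac_addr_add(port_id, &mac, 0);
        /* rte_eth_dev_mac_addr_remove(port_id, &mac) removes the filter again */
    }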
272 Application adds (or removes) Multicast addresses to enable (or disable)
273 whitelist filtering to accept packets.
275 .. code-block:: console
277 testpmd> show port (port_id) mcast_macs
278 testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more
than 16 MC MACs, the BNXT PMD puts the port into allmulticast mode.
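
Programmatically, the multicast filter list is replaced as a whole using the
ethdev API; a hedged sketch (the group address is illustrative):

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Install a multicast filter list with a single group address. */
    static int
    set_mc_list(uint16_t port_id)
    {
        struct rte_ether_addr mc[1] = {
            { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } } /* 224.0.0.1 */
        };

        /* The whole list is replaced on every call; pass 0 entries to clear it.
         * Remember the 16-filter limit noted above: beyond that the port goes
         * into allmulticast mode. */
        return rte_eth_dev_set_mc_addr_list(port_id, mc, 1);
    }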
289 The application enables (or disables) VLAN filtering mode. When the mode is
290 enabled, no packets are forwarded to an application except ones with VLAN tag
291 assigned for the application.
293 .. code-block:: console
295 testpmd> vlan set filter (on|off) (port_id)
296 testpmd> rx_vlan (add|rm) (vlan_id) (port_id)
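
The corresponding ethdev calls are sketched below (VLAN ID 100 is only an
example):

.. code-block:: c

    #include <rte_ethdev.h>

    /* Turn on VLAN filtering and accept only VLAN ID 100. */
    static int
    enable_vlan_filter(uint16_t port_id)
    {
        int mask = rte_eth_dev_get_vlan_offload(port_id);
        int ret;

        ret = rte_eth_dev_set_vlan_offload(port_id, mask | ETH_VLAN_FILTER_OFFLOAD);
        if (ret != 0)
            return ret;
        return rte_eth_dev_vlan_filter(port_id, 100, 1 /* on */);
    }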
301 The application enables (or disables) the allmulticast mode. When the mode is
302 enabled, every multicast packet received is forwarded to the application.
304 .. code-block:: console
306 testpmd> show port info (port_id)
307 testpmd> set allmulti (port_id) (on|off)
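
From an application, the same can be achieved with the ethdev allmulticast
helpers; a minimal sketch:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Enable allmulticast and confirm the new state (1 = enabled). */
    static int
    enable_allmulti(uint16_t port_id)
    {
        int ret = rte_eth_allmulticast_enable(port_id);

        if (ret != 0)
            return ret;
        return rte_eth_allmulticast_get(port_id) == 1 ? 0 : -1;
    }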
312 The application enables (or disables) the promiscuous mode. When the mode is
313 enabled on a port, every packet received on the port is forwarded to the
316 .. code-block:: console
318 testpmd> show port info (port_id)
    testpmd> set promisc (port_id) (on|off)
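
A minimal sketch of the equivalent ethdev calls:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Enable promiscuous mode so every packet on the port reaches the application. */
    static int
    enable_promisc(uint16_t port_id)
    {
        int ret = rte_eth_promiscuous_enable(port_id);

        if (ret != 0)
            return ret;
        return rte_eth_promiscuous_get(port_id) == 1 ? 0 : -1;
    }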
Like Linux, DPDK allows hardware offload of some stateless processing (such as
checksum calculation) from the stack, relieving the CPU from having to burn
cycles on every packet.
328 Listed below are the stateless offloads supported by the BNXT PMD:
330 * CRC offload (for both TX and RX packets)
331 * Checksum Offload (for both TX and RX packets)
332 * IPv4 Checksum Offload
333 * TCP Checksum Offload
334 * UDP Checksum Offload
335 * Segmentation/Reassembly Offloads
336 * TCP Segmentation Offload (TSO)
337 * Large Receive Offload (LRO)
339 * Transmit Side Scaling (TSS)
340 * Receive Side Scaling (RSS)
342 Also, the BNXT PMD supports stateless offloads on inner frames for tunneled
343 packets. Listed below are the tunneling protocols supported by the BNXT PMD:
349 Note that enabling (or disabling) stateless offloads requires applications to
350 stop DPDK before changing configuration.
355 The FCS (Frame Check Sequence) in the Ethernet frame is a four-octet CRC (Cyclic
356 Redundancy Check) that allows detection of corrupted data within the entire
357 frame as received on the receiver side.
359 The BNXT PMD supports hardware-based CRC offload:
361 * TX: calculate and insert CRC
362 * RX: check and remove CRC, notify the application on CRC error
364 Note that the CRC offload is always turned on.
369 The application enables hardware checksum calculation for IPv4, TCP, and UDP.
371 .. code-block:: console
373 testpmd> port stop (port_id)
374 testpmd> csum set (ip|tcp|udp|outer-ip|outer-udp) (sw|hw) (port_id)
375 testpmd> set fwd csum
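
In an application, the checksum offloads are requested at configure time via
the port offload flags; a hedged sketch (queue counts are illustrative):

.. code-block:: c

    #include <string.h>
    #include <rte_ethdev.h>

    /* Request RX and TX checksum offloads before starting the port. */
    static int
    configure_csum_offloads(uint16_t port_id)
    {
        struct rte_eth_conf port_conf;

        memset(&port_conf, 0, sizeof(port_conf));
        port_conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM;   /* IPv4 + TCP + UDP */
        port_conf.txmode.offloads = DEV_TX_OFFLOAD_IPV4_CKSUM |
                                    DEV_TX_OFFLOAD_TCP_CKSUM |
                                    DEV_TX_OFFLOAD_UDP_CKSUM;

        /* one RX queue and one TX queue, purely as an example */
        return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
    }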
380 Multi-Queue, also known as TSS (Transmit Side Scaling) or RSS (Receive Side
381 Scaling), is a common networking technique that allows for more efficient load
382 balancing across multiple CPU cores.
384 The application enables multiple TX and RX queues when it is started.
386 .. code-block:: console
    testpmd -l 1,3,5 --main-lcore 1 -- --txq=2 --rxq=2 --nb-cores=2
392 TSS distributes network transmit processing across several hardware-based
393 transmit queues, allowing outbound network traffic to be processed by multiple
398 RSS distributes network receive processing across several hardware-based receive
399 queues, allowing inbound network traffic to be processed by multiple CPU cores.
401 The application can select the RSS mode, i.e. select the header fields that are
402 included for hash calculation. The BNXT PMD supports the RSS mode of
403 ``default|ip|tcp|udp|none``, where default mode is L3 and L4.
405 For tunneled packets, RSS hash is calculated over inner frame header fields.
Applications may want to select the tunnel header fields for hash calculation;
this will be supported in 20.08 using the RSS level.
409 .. code-block:: console
411 testpmd> port config (port_id) rss (all|default|ip|tcp|udp|none)
413 // note that the testpmd defaults the RSS mode to ip
414 // ensure to issue the command below to enable L4 header (TCP or UDP) along with IPv4 header
415 testpmd> port config (port_id) rss default
417 // to check the current RSS configuration, such as RSS function and RSS key
418 testpmd> show port (port_id) rss-hash key
420 // RSS is enabled by default. However, application can disable RSS as follows
421 testpmd> port config (port_id) rss none
423 Application can change the flow distribution, i.e. remap the received traffic to
424 CPU cores, using RSS RETA (Redirection Table).
426 .. code-block:: console
428 // application queries the current RSS RETA configuration
429 testpmd> show port (port_id) rss reta size (mask0, mask1)
431 // application changes the RSS RETA configuration
432 testpmd> port config (port_id) rss reta (hash, queue) [, (hash, queue)]
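
The corresponding ethdev calls for the hash configuration and the redirection
table are sketched below (the hash fields and the queue mapping are
illustrative):

.. code-block:: c

    #include <errno.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Select L3+L4 hashing and alternate the RETA entries over two queues. */
    static int
    configure_rss(uint16_t port_id)
    {
        struct rte_eth_rss_conf rss_conf = {
            .rss_key = NULL,                       /* keep the current key */
            .rss_hf  = ETH_RSS_IP | ETH_RSS_TCP | ETH_RSS_UDP,
        };
        struct rte_eth_rss_reta_entry64 reta_conf[4]; /* up to 256 RETA entries */
        struct rte_eth_dev_info dev_info;
        uint16_t i;
        int ret;

        ret = rte_eth_dev_rss_hash_update(port_id, &rss_conf);
        if (ret != 0)
            return ret;

        rte_eth_dev_info_get(port_id, &dev_info);
        if (dev_info.reta_size > 4 * RTE_RETA_GROUP_SIZE)
            return -ENOTSUP;                       /* keep the sketch simple */

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < dev_info.reta_size; i++) {
            reta_conf[i / RTE_RETA_GROUP_SIZE].mask |=
                1ULL << (i % RTE_RETA_GROUP_SIZE);
            reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] =
                i % 2;                             /* alternate queue 0 / queue 1 */
        }
        return rte_eth_dev_rss_reta_update(port_id, reta_conf, dev_info.reta_size);
    }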
437 TSO (TCP Segmentation Offload), also known as LSO (Large Send Offload), enables
the TCP/IP stack to pass to the NIC a datagram larger than the MTU (Maximum
Transmission Unit). The NIC breaks it into multiple segments before sending it to the
442 The BNXT PMD supports hardware-based TSO.
444 .. code-block:: console
446 // display the status of TSO
447 testpmd> tso show (port_id)
449 // enable/disable TSO
450 testpmd> port config (port_id) tx_offload tcp_tso (on|off)
452 // set TSO segment size
    testpmd> tso set (segment_size) (port_id)
455 The BNXT PMD also supports hardware-based tunneled TSO.
457 .. code-block:: console
459 // display the status of tunneled TSO
460 testpmd> tunnel_tso show (port_id)
462 // enable/disable tunneled TSO
463 testpmd> port config (port_id) tx_offload vxlan_tnl_tso|gre_tnl_tso (on|off)
465 // set tunneled TSO segment size
    testpmd> tunnel_tso set (segment_size) (port_id)
468 Note that the checksum offload is always assumed to be enabled for TSO.
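
Per packet, TSO is requested through the mbuf offload flags and lengths; a
hedged sketch of preparing an IPv4/TCP mbuf for TSO (header lengths are
illustrative, and the port must have been configured with
``DEV_TX_OFFLOAD_TCP_TSO``):

.. code-block:: c

    #include <rte_mbuf.h>

    /* Mark an IPv4/TCP mbuf for TSO; the NIC segments it on transmit. */
    static void
    request_tso(struct rte_mbuf *m, uint16_t mss)
    {
        m->l2_len = 14;                    /* Ethernet header */
        m->l3_len = 20;                    /* IPv4 header without options */
        m->l4_len = 20;                    /* TCP header without options */
        m->tso_segsz = mss;                /* e.g. 1460 bytes of payload per segment */
        /* The TCP header is expected to carry the pseudo-header checksum,
         * e.g. computed with rte_ipv4_phdr_cksum(). */
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG;
    }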
473 LRO (Large Receive Offload) enables NIC to aggregate multiple incoming TCP/IP
474 packets from a single stream into a larger buffer, before passing to the
477 The BNXT PMD supports hardware-based LRO.
479 .. code-block:: console
481 // display the status of LRO
482 testpmd> show port (port_id) rx_offload capabilities
483 testpmd> show port (port_id) rx_offload configuration
485 // enable/disable LRO
486 testpmd> port config (port_id) rx_offload tcp_lro (on|off)
488 // set max LRO packet (datagram) size
489 testpmd> port config (port_id) max-lro-pkt-size (max_size)
491 The BNXT PMD also supports tunneled LRO.
493 Some applications, such as routing, should *not* change the packet headers as
494 they pass through (i.e. received from and sent back to the network). In such a
495 case, GRO (Generic Receive Offload) should be used instead of LRO.
500 DPDK application offloads VLAN insert/strip to improve performance. The BNXT PMD
501 supports hardware-based VLAN insert/strip offload for both single and double
508 Application configures the VLAN TPID (Tag Protocol ID). By default, the TPID is
511 .. code-block:: console
513 // configure outer TPID value for a port
514 testpmd> vlan set outer tpid (tpid_value) (port_id)
Setting the inner TPID will be rejected, as the BNXT PMD supports inserting only
an outer VLAN. Note that when a packet has a single VLAN, the tag is considered
the outer tag, i.e. the inner VLAN is relevant only when a packet is double-tagged.
520 The BNXT PMD supports various TPID values shown below. Any other values will be
The BNXT PMD supports the VLAN insert offload on a per-packet basis. The
application provides the TCI (Tag Control Info) for a packet via the mbuf. In
turn, the BNXT PMD inserts the VLAN tag (via hardware) using the provided TCI
along with the
534 .. code-block:: console
536 // enable VLAN insert offload
    testpmd> port config (port_id) tx_offload vlan_insert|qinq_insert (on|off)
    if (mbuf->ol_flags & PKT_TX_QINQ)       // case-1: insert VLAN to single-tagged packet
        tci_value = mbuf->vlan_tci_outer
    else if (mbuf->ol_flags & PKT_TX_VLAN)  // case-2: insert VLAN to untagged packet
        tci_value = mbuf->vlan_tci
547 The application configures the per-port VLAN strip offload.
549 .. code-block:: console
551 // enable VLAN strip on a port
    testpmd> port config (port_id) rx_offload vlan_strip (on|off)
554 // notify application VLAN strip via mbuf
    mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED // outer VLAN is found and stripped
556 mbuf->vlan_tci = tci_value // TCI of the stripped VLAN
561 System operators may run a PTP (Precision Time Protocol) client application to
562 synchronize the time on the NIC (and optionally, on the system) to a PTP master.
564 The BNXT PMD supports a PTP client application to communicate with a PTP master
565 clock using DPDK IEEE1588 APIs. Note that the PTP client application needs to
566 run on PF and vector mode needs to be disabled.
568 .. code-block:: console
570 testpmd> set fwd ieee1588 // enable IEEE 1588 mode
When enabled, the BNXT PMD configures the hardware to insert IEEE 1588 timestamps
into the outgoing PTP packets and reports IEEE 1588 timestamps from the incoming
PTP packets to the application via the mbuf.
576 .. code-block:: console
578 // RX packet completion will indicate whether the packet is PTP
579 mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
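
The DPDK IEEE1588 calls a PTP client would use are sketched below (the sequence
is illustrative and error handling is omitted):

.. code-block:: c

    #include <time.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Enable timesync on the port and read back the RX timestamp latched for
     * a received PTP packet. */
    static int
    read_ptp_rx_timestamp(uint16_t port_id, struct rte_mbuf *m, struct timespec *ts)
    {
        rte_eth_timesync_enable(port_id);     /* usually done once at startup */

        if ((m->ol_flags & PKT_RX_IEEE1588_TMST) == 0)
            return -1;                        /* no hardware timestamp for this packet */

        /* m->timesync carries the index of the latched RX timestamp register */
        return rte_eth_timesync_read_rx_timestamp(port_id, ts, m->timesync);
    }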
581 Statistics Collection
582 ~~~~~~~~~~~~~~~~~~~~~
In Linux, *ethtool -S* enables us to query the NIC stats. DPDK provides similar
functionality via rte_eth_stats and rte_eth_xstats.
587 The BNXT PMD supports both basic and extended stats collection:
595 The application collects per-port and per-queue stats using rte_eth_stats APIs.
597 .. code-block:: console
599 testpmd> show port stats (port_id)
611 By default, per-queue stats for 16 queues are supported. For more than 16
612 queues, BNXT PMD should be compiled with ``RTE_ETHDEV_QUEUE_STAT_CNTRS``
613 set to the desired number of queues.
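
The equivalent ethdev call is shown below as a minimal sketch:

.. code-block:: c

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print basic per-port counters. */
    static void
    print_port_stats(uint16_t port_id)
    {
        struct rte_eth_stats stats;

        if (rte_eth_stats_get(port_id, &stats) != 0)
            return;
        printf("rx=%" PRIu64 " tx=%" PRIu64 " rx_missed=%" PRIu64 " rx_errors=%" PRIu64 "\n",
               stats.ipackets, stats.opackets, stats.imissed, stats.ierrors);
        /* per-queue counters: stats.q_ipackets[q], stats.q_opackets[q], ... */
    }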
618 Unlike basic stats, the extended stats are vendor-specific, i.e. each vendor
619 provides its own set of counters.
621 The BNXT PMD provides a rich set of counters, including per-flow counters,
622 per-cos counters, per-priority counters, etc.
624 .. code-block:: console
626 testpmd> show port xstats (port_id)
628 Shown below is the elaborated sequence to retrieve extended stats:
630 .. code-block:: console
632 // application queries the number of xstats
633 len = rte_eth_xstats_get(port_id, NULL, 0);
634 // BNXT PMD returns the size of xstats array (i.e. the number of entries)
635 // BNXT PMD returns 0, if the feature is compiled out or disabled
637 // application allocates memory for xstats
    struct rte_eth_xstat_name *names; // each name is 64 characters or less
    struct rte_eth_xstat *xstats;
640 names = calloc(len, sizeof(*names));
641 xstats = calloc(len, sizeof(*xstats));
643 // application retrieves xstats // names and values
    ret = rte_eth_xstats_get_names(port_id, names, len);
    ret = rte_eth_xstats_get(port_id, xstats, len);
647 // application checks the xstats
648 // application may repeat the below:
    rte_eth_xstats_reset(port_id); // reset the xstats
651 // reset can be skipped, if application wants to see accumulated stats
653 // probably stop the traffic
654 // retrieve xstats // no need to retrieve xstats names again
Applications can benefit by offloading all or part of flow processing to
hardware. For example, applications can offload packet classification only
(partial offload) or the whole match-action (full offload).
664 DPDK offers the Generic Flow API (rte_flow API) to configure hardware to
665 perform flow processing.
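
A minimal, hedged sketch of creating a rule through the rte_flow API (matching
an IPv4 destination and steering it to RX queue 1; the address and queue index
are only illustrative, not a statement of what the BNXT hardware accepts):

.. code-block:: c

    #include <rte_flow.h>
    #include <rte_ip.h>

    /* Match IPv4 packets destined to 192.168.1.1 and direct them to RX queue 1. */
    static struct rte_flow *
    create_queue_rule(uint16_t port_id, struct rte_flow_error *err)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_ipv4 ip_spec = {
            .hdr.dst_addr = RTE_BE32(RTE_IPV4(192, 168, 1, 1))
        };
        struct rte_flow_item_ipv4 ip_mask = {
            .hdr.dst_addr = RTE_BE32(0xffffffff)
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH },
            { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec, .mask = &ip_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* validate first, then create */
        if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
            return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, err);
    }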
667 Listed below are the rte_flow APIs BNXT PMD supports:
674 Host Based Flow Table Management
675 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Starting with the 20.05 release, the BNXT PMD supports host-based flow table
management. This is a new mechanism that should allow higher flow scalability
than what is currently supported. This new approach also defines a new rte_flow
parser and mapper which currently supports basic packet classification in the
receive path.
682 The feature uses a newly implemented control-plane firmware interface which
683 optimizes flow insertions and deletions.
This is a tech preview feature, and is disabled by default. It can be enabled
using the bnxt devargs, for example: ``-w 0000:0d:00.0,host-based-truflow=1``.
691 - On stopping a device port, all the flows created on a port by the
692 application will be flushed from the hardware and any tables maintained
693 by the PMD. After stopping the device port, all flows on the port become
694 invalid and are not represented in the system anymore.
695 Instead of destroying or flushing such flows an application should discard
696 all references to these flows and re-create the flows as required after the
699 - While an application is free to use the group id attribute to group flows
700 together using a specific criteria, the BNXT PMD currently associates this
701 group id to a VNIC id. One such case is grouping of flows which are filtered
702 on the same source or destination MAC address. This allows packets of such
703 flows to be directed to one or more queues associated with the VNIC id.
704 This implementation is supported only when TRUFLOW functionality is disabled.
706 - An application can issue a VXLAN decap offload request using rte_flow API
707 either as a single rte_flow request or a combination of two stages.
708 The PMD currently supports the two stage offload design.
709 In this approach the offload request may come as two flow offload requests
710 Flow1 & Flow2. The match criteria for Flow1 is O_DMAC, O_SMAC, O_DST_IP,
711 O_UDP_DPORT and actions are COUNT, MARK, JUMP. The match criteria for Flow2
712 is O_SRC_IP, O_DST_IP, VNI and inner header fields.
713 Flow1 and Flow2 flow offload requests can come in any order. If Flow2 flow
714 offload request comes first then Flow2 can’t be offloaded as there is
715 no O_DMAC information in Flow2. In this case, Flow2 will be deferred until
716 Flow1 flow offload request arrives. When Flow1 flow offload request is
717 received it will have O_DMAC information. Using Flow1’s O_DMAC, driver
718 creates an L2 context entry in the hardware as part of offloading Flow1.
719 Flow2 will now use Flow1’s O_DMAC to get the L2 context id associated with
720 this O_DMAC and other flow fields that are cached already at the time
of deferring Flow2 for offloading. A Flow2 request that arrives after Flow1 is
offloaded will be programmed directly and not cached.
724 - PMD supports thread-safe rte_flow operations.
726 Note: A VNIC represents a virtual interface in the hardware. It is a resource
727 in the RX path of the chip and is used to setup various target actions such as
728 RSS, MAC filtering etc. for the physical function in use.
730 Virtual Function Port Representors
731 ----------------------------------
732 The BNXT PMD supports the creation of VF port representors for the control
733 and monitoring of BNXT virtual function devices. Each port representor
corresponds to a single virtual function of that device. When there is no
hardware flow offload, each packet transmitted by the VF
736 will be received by the corresponding representor. Similarly each packet that is
737 sent to a representor will be received by the VF. Applications can take
738 advantage of this feature when SRIOV is enabled. The representor will allow the
739 first packet that is transmitted by the VF to be received by the DPDK
740 application which can then decide if the flow should be offloaded to the
741 hardware. Once the flow is offloaded in the hardware, any packet matching the
742 flow will be received by the VF while the DPDK application will not receive it
743 any more. The BNXT PMD supports creation and handling of the port representors
744 when the PMD is initialized on a PF or trusted-VF. The user can specify the list
745 of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``::
748 -w DBDF,representor=[0,1,4]
750 Note that currently hot-plugging of representor ports is not supported so all
751 the required representors must be specified on the creation of the PF or the
754 Representors on Stingray SoC
755 ----------------------------
A representor created on an x86 host typically represents a VF running in the
same x86 domain. But in the case of the SoC, the application can run on the CPU
complex inside the SoC. The representor can be created on the SoC to represent
a PF or a VF running in the x86 domain. Since representor creation requires
passing the bus:device.function of the PCI device endpoint, which is not
necessarily in the same host domain, additional devargs have been added to the PMD.
* rep_is_pf - false to indicate VF representor, true to indicate PF representor
765 * rep_based_pf - Physical index of the PF
766 * rep_q_r2f - Logical COS Queue index for the rep to endpoint direction
767 * rep_q_f2r - Logical COS Queue index for the endpoint to rep direction
768 * rep_fc_r2f - Flow control for the representor to endpoint direction
769 * rep_fc_f2r - Flow control for the endpoint to representor direction
771 The sample command line with the new ``devargs`` looks like this::
773 -w 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
774 rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
776 .. code-block:: console
778 testpmd -l1-4 -n2 -w 0008:01:00.0,host-based-truflow=1,\
        representor=[0],rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
780 rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
782 Number of flows supported
783 -------------------------
The number of flows that can be supported can be changed using the ``devargs``
parameter ``max_num_kflows``. The default number of flows supported is 16K each
in the ingress and egress paths.
Broadcom devices can support filter creation in on-chip memory or in external
memory, referred to as EM or EEM mode respectively.
792 The decision for internal/external EM support is based on the ``devargs``
793 parameter ``max_num_kflows``. If this is set by the user, external EM is used.
794 Otherwise EM support is enabled with flows created in internal memory.
The BNXT PMD allows the application to retrieve the firmware version.
804 .. code-block:: console
806 testpmd> show port info (port_id)
Note that applications cannot update the firmware using the BNXT PMD.
813 When two or more DPDK applications (e.g., testpmd and dpdk-pdump) share a single
814 instance of DPDK, the BNXT PMD supports a single primary application and one or
815 more secondary applications. Note that the DPDK-layer (not the PMD) ensures
816 there is only one primary application.
822 * Application notifies whether it is primary or secondary using *proc-type* flag
823 * 1st process should be spawned with ``--proc-type=primary``
824 * All subsequent processes should be spawned with ``--proc-type=secondary``
828 * Application is using ``proc-type=auto`` flag
829 * A process is spawned as a secondary if a primary is already running
The BNXT PMD uses this information to skip device initialization, i.e. it
performs device initialization only when brought up by a primary application.
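
A minimal sketch of how an application can branch on the process type using the
standard EAL helper (queue counts and the setup sequence are illustrative):

.. code-block:: c

    #include <rte_eal.h>
    #include <rte_ethdev.h>

    /* Only the primary process configures and starts the port;
     * secondary processes attach to the already-initialized device. */
    static int
    init_port(uint16_t port_id, const struct rte_eth_conf *conf)
    {
        int ret;

        if (rte_eal_process_type() != RTE_PROC_PRIMARY)
            return 0;       /* device is already set up by the primary */

        ret = rte_eth_dev_configure(port_id, 1, 1, conf);
        if (ret != 0)
            return ret;
        /* ... rte_eth_rx_queue_setup() / rte_eth_tx_queue_setup() ... */
        return rte_eth_dev_start(port_id);
    }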
837 Typically, a DPDK application allocates TX and RX queues statically: i.e. queues
838 are allocated at start. However, an application may want to increase (or
839 decrease) the number of queues dynamically for various reasons, e.g. power
The BNXT PMD allows applications to increase or decrease the number of queues at runtime.
844 .. code-block:: console
846 testpmd> port config all (rxq|txq) (num_queues)
848 Note that a DPDK application must allocate default queues (one for TX and one
849 for RX at minimum) at initialization.
854 Applications may use the descriptor status for various reasons, e.g. for power
855 savings. For example, an application may stop polling and change to interrupt
856 mode when the descriptor status shows no packets to service for a while.
The BNXT PMD allows the application to retrieve both the TX and RX descriptor
861 .. code-block:: console
863 testpmd> show port (port_id) (rxq|txq) (queue_id) desc (desc_id) status
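
The ethdev descriptor-status helpers can be used directly; a minimal sketch:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Return non-zero if the RX descriptor 'offset' entries ahead of the next
     * one to be processed already holds a received packet. */
    static int
    rx_work_pending(uint16_t port_id, uint16_t queue_id, uint16_t offset)
    {
        int status = rte_eth_rx_descriptor_status(port_id, queue_id, offset);

        return status == RTE_ETH_RX_DESC_DONE;
        /* RTE_ETH_RX_DESC_AVAIL: free and awaiting a packet,
         * RTE_ETH_RX_DESC_UNAVAIL: outside the ring or not usable.
         * rte_eth_tx_descriptor_status() works the same way for TX. */
    }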
868 DPDK implements a light-weight library to allow PMDs to be bonded together and provide a single logical PMD to the application.
870 .. code-block:: console
    testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
    (ex) testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
878 Vector processing provides significantly improved performance over scalar
879 processing (see Vector Processor, here).
881 The BNXT PMD supports the vector processing using SSE (Streaming SIMD
882 Extensions) instructions on x86 platforms. It also supports NEON intrinsics for
883 vector processing on ARM CPUs. The BNXT vPMD (vector mode PMD) is available for
884 Intel/AMD and ARM CPU architectures.
886 This improved performance comes from several optimizations:
889 * TX: processing completions in bulk
890 * RX: allocating mbufs in bulk
* Chained mbufs are *not* supported, i.e. a packet should fit into a single mbuf
892 * Some stateless offloads are *not* supported with vector processing
893 * TX: no offloads will be supported
894 * RX: reduced RX offloads (listed below) will be supported::
896 DEV_RX_OFFLOAD_VLAN_STRIP
897 DEV_RX_OFFLOAD_KEEP_CRC
898 DEV_RX_OFFLOAD_JUMBO_FRAME
899 DEV_RX_OFFLOAD_IPV4_CKSUM
900 DEV_RX_OFFLOAD_UDP_CKSUM
901 DEV_RX_OFFLOAD_TCP_CKSUM
902 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
903 DEV_RX_OFFLOAD_RSS_HASH
904 DEV_RX_OFFLOAD_VLAN_FILTER
906 The BNXT Vector PMD is enabled in DPDK builds by default.
908 However, a decision to enable vector mode will be made when the port transitions
909 from stopped to started. Any TX offloads or some RX offloads (other than listed
910 above) will disable the vector mode.
911 Offload configuration changes that impact vector mode must be made when the port
914 Note that TX (or RX) vector mode can be enabled independently from RX (or TX)
920 Supported Chipsets and Adapters
921 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
923 BCM5730x NetXtreme-C® Family of Ethernet Network Controllers
924 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
926 Information about Ethernet adapters in the NetXtreme family of adapters can be
927 found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
929 * ``M150c ... Single-port 40/50 Gigabit Ethernet Adapter``
930 * ``P150c ... Single-port 40/50 Gigabit Ethernet Adapter``
931 * ``P225c ... Dual-port 10/25 Gigabit Ethernet Adapter``
933 BCM574xx/575xx NetXtreme-E® Family of Ethernet Network Controllers
934 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
936 Information about Ethernet adapters in the NetXtreme family of adapters can be
937 found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
939 * ``M125P .... Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
940 * ``M150P .... Single-port OCP 2.0 50 Gigabit Ethernet Adapter``
941 * ``M150PM ... Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter``
942 * ``M210P .... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
943 * ``M210TP ... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
944 * ``M1100G ... Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter``
945 * ``N150G .... Single-port OCP 3.0 50 Gigabit Ethernet Adapter``
946 * ``M225P .... Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
947 * ``N210P .... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
948 * ``N210TP ... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
949 * ``N225P .... Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
950 * ``N250G .... Dual-port OCP 3.0 50 Gigabit Ethernet Adapter``
951 * ``N410SG ... Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
952 * ``N410SGBT . Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
953 * ``N425G .... Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
954 * ``N1100G ... Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
955 * ``N2100G ... Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
956 * ``N2200G ... Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter``
957 * ``P150P .... Single-port 50 Gigabit Ethernet Adapter``
958 * ``P210P .... Dual-port 10 Gigabit Ethernet Adapter``
959 * ``P210TP ... Dual-port 10 Gigabit Ethernet Adapter``
960 * ``P225P .... Dual-port 10/25 Gigabit Ethernet Adapter``
961 * ``P410SG ... Quad-port 10 Gigabit Ethernet Adapter``
962 * ``P410SGBT . Quad-port 10 Gigabit Ethernet Adapter``
963 * ``P425G .... Quad-port 10/25 Gigabit Ethernet Adapter``
964 * ``P1100G ... Single-port 10/25/50/100 Gigabit Ethernet Adapter``
965 * ``P2100G ... Dual-port 10/25/50/100 Gigabit Ethernet Adapter``
966 * ``P2200G ... Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter``
968 BCM588xx NetXtreme-S® Family of SmartNIC Network Controllers
969 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
971 Information about the Stingray family of SmartNIC adapters can be found in the
972 `Stingray® Brand section <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
974 * ``PS225 ... Dual-port 25 Gigabit Ethernet SmartNIC``
976 BCM5873x StrataGX® Family of Communications Processors
977 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
979 These ARM-based processors target a broad range of networking applications,
980 including virtual CPE (vCPE) and NFV appliances, 10G service routers and
981 gateways, control plane processing for Ethernet switches, and network-attached
984 * ``StrataGX BCM58732 ... Octal-Core 3.0GHz 64-bit ARM®v8 Cortex®-A72 based SoC``