1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright 2020 Broadcom Inc.
7 The Broadcom BNXT PMD (**librte_net_bnxt**) implements support for adapters
8 based on Ethernet controllers and SoCs belonging to the Broadcom
9 BCM5741X/BCM575XX NetXtreme-E® Family of Ethernet Network Controllers,
10 the Broadcom BCM588XX Stingray Family of Smart NIC Adapters, and the Broadcom
11 StrataGX® BCM5873X Series of Communications Processors.
13 A complete list with links to reference material is in the Appendix section.
18 BNXT PMD supports multiple CPU architectures, including x86-32, x86-64, and ARMv8.
23 BNXT PMD requires a kernel module (VFIO or UIO) for setting up a device, mapping
24 device memory to userspace, registering interrupts, etc.
25 VFIO is more secure than UIO, relying on IOMMU protection.
UIO requires the IOMMU to be disabled or configured in pass-through mode.
28 The BNXT PMD supports operating with:
31 * Linux uio_pci_generic
38 Bind the device to one of the kernel modules listed above
40 .. code-block:: console
42 ./dpdk-devbind.py -b vfio-pci|igb_uio|uio_pci_generic bus_id:device_id.function_id
44 The BNXT PMD can run on PF or VF.
46 PCI-SIG Single Root I/O Virtualization (SR-IOV) involves the direct assignment
47 of part of the network port resources to guest operating systems using the
49 NIC is logically distributed among multiple virtual machines (VMs), while still
50 having global data in common to share with the PF and other VFs.
52 Sysadmin can create and configure VFs:
54 .. code-block:: console
echo num_vfs > /sys/bus/pci/devices/domain_id:bus_id:device_id.function_id/sriov_numvfs
(ex) echo 4 > /sys/bus/pci/devices/0000:82:00.0/sriov_numvfs
A sysadmin can also change VF properties such as the MAC address, transparent VLAN,
TX rate limit, and trusted VF setting:
62 .. code-block:: console
ip link set (pf_netdev) vf (vf_id) mac (mac_address) vlan (vlan_id) txrate (rate_value) trust (on|off)
(ex) ip link set ens2f0 vf 0 mac 00:11:22:33:44:55 vlan 0x100 txrate 100 trust off
Flow bifurcation splits the incoming data traffic between user space applications
(such as DPDK applications) and kernel space programs (such as the Linux kernel stack).
It can direct some traffic, for example data plane traffic, to DPDK, while the rest of
the traffic, for example control plane traffic, is redirected to the traditional Linux
networking stack.
80 Refer to https://doc.dpdk.org/guides/howto/flow_bifurcation.html
82 Benefits of the flow bifurcation include:
* Better performance with less CPU overhead, as the user application can directly
access the NIC for the data path
86 * NIC is still being controlled by the kernel, as control traffic is forwarded
87 only to the kernel driver
88 * Control commands, e.g. ethtool, will work as usual
Running on a VF, the BNXT PMD supports flow bifurcation with a combination
of SR-IOV and packet classification and/or forwarding capability.
92 In the simplest case of flow bifurcation, a PF driver configures a NIC to
93 forward all user traffic directly to VFs with matching destination MAC address,
94 while the rest of the traffic is forwarded to a PF.
95 Note that the broadcast packets will be forwarded to both PF and VF.
97 .. code-block:: console
99 (ex) ethtool --config-ntuple ens2f0 flow-type ether dst 00:01:02:03:00:01 vlan 10 vlan-mask 0xf000 action 0x100000000
By default, VFs are *not* allowed to perform privileged operations, such as
modifying the VF's MAC address in the guest. These security measures are
designed to prevent possible attacks.
However, when a DPDK application can be trusted (e.g., OVS-DPDK), these
operations performed by a VF would be legitimate and can be allowed.
To support this, the trusted VF concept was introduced in Linux kernel 4.4,
allowing a VF to be marked as "trusted" so that it can perform some privileged
operations.
114 The BNXT PMD supports the trusted VF mode of operation. Only a PF can enable the
115 trusted attribute on the VF. It is preferable to enable the Trusted setting on a
116 VF before starting applications.
117 However, the BNXT PMD handles dynamic changes in trusted settings as well.
119 Note that control commands, e.g., ethtool, will work via the kernel PF driver,
120 *not* via the trusted VF driver.
122 Operations supported by trusted VF:
124 * MAC address configuration
127 Operations *not* supported by trusted VF:
130 * Promiscuous mode setting
Unlike a VF, when the BNXT PMD runs on a PF there are no restrictions placed on the
features which the PF can enable or request. In a multiport NIC, each port will
have a corresponding PF. Also, depending on the configuration of the NIC, there
can be more than one PF associated with a port.
A sysadmin can load the kernel driver on one PF and run the BNXT PMD on the other
PF, or run the PMD on both PFs. In such cases, the firmware picks one of the PFs
as the owner of resources shared between them.
As in the trusted VF case, the DPDK application must be *trusted* and is expected
to be *well-behaved*.
149 The BNXT PMD supports the following features:
154 * Flow Control and Autoneg
157 * Multicast MAC Filter
163 * Checksum Offload (IPv4, TCP, and UDP)
164 * Multi-Queue (TSS and RSS)
165 * Segmentation and Reassembly (TSO and LRO)
168 * Generic Flow Offload
**Port MTU**: The BNXT PMD supports an MTU (Maximum Transmission Unit) of up to 9,574 bytes:
176 .. code-block:: console
178 testpmd> port config mtu (port_id) mtu_value
179 testpmd> show port info (port_id)
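For applications that configure the MTU programmatically rather than through testpmd,
a minimal sketch using the generic ethdev API (``rte_eth_dev_set_mtu``) might look like
the following; the port id and MTU value are illustrative only.

.. code-block:: c

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Illustrative only: set and read back the MTU on a port. */
    static int
    set_port_mtu(uint16_t port_id, uint16_t new_mtu)
    {
        uint16_t cur_mtu;
        int ret;

        ret = rte_eth_dev_set_mtu(port_id, new_mtu);
        if (ret != 0)
            return ret; /* e.g. -EINVAL if the value exceeds the PMD's limit */

        ret = rte_eth_dev_get_mtu(port_id, &cur_mtu);
        if (ret == 0)
            printf("port %u MTU is now %u bytes\n", port_id, cur_mtu);
        return ret;
    }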
**LED**: The application turns on (or off) a port LED, typically for port identification:
184 .. code-block:: console
186 int rte_eth_led_on (uint16_t port_id)
187 int rte_eth_led_off (uint16_t port_id)
**Flow Control and Autoneg**: The application turns on (or off) flow control and/or
auto-negotiation on a port:
192 .. code-block:: console
194 testpmd> set flow_ctrl rx (on|off) (port_id)
195 testpmd> set flow_ctrl tx (on|off) (port_id)
196 testpmd> set flow_ctrl autoneg (on|off) (port_id)
Note that the BNXT PMD does *not* support some flow control options and ignores them if requested.
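For applications that configure flow control programmatically, a minimal sketch using the
generic ``rte_eth_dev_flow_ctrl_set`` API is shown below; the values are illustrative, and
options the PMD does not support are simply ignored.

.. code-block:: c

    #include <rte_ethdev.h>

    /* Illustrative only: enable full (RX + TX) pause-frame flow control. */
    static int
    enable_flow_control(uint16_t port_id)
    {
        struct rte_eth_fc_conf fc_conf;
        int ret;

        /* Start from the current settings so other fields keep their values. */
        ret = rte_eth_dev_flow_ctrl_get(port_id, &fc_conf);
        if (ret != 0)
            return ret;

        fc_conf.mode = RTE_FC_FULL; /* RTE_FC_NONE/RTE_FC_RX_PAUSE/RTE_FC_TX_PAUSE also possible */
        fc_conf.autoneg = 1;        /* request auto-negotiation of pause capabilities */

        return rte_eth_dev_flow_ctrl_set(port_id, &fc_conf);
    }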
210 Applications control the packet-forwarding behaviors with packet filters.
212 The BNXT PMD supports hardware-based packet filtering:
* UC (Unicast) MAC Filters
   * No unicast packets are forwarded to an application except the ones with
     a DMAC address added to the port
   * At initialization, the station MAC address is added to the port
* MC (Multicast) MAC Filters
   * No multicast packets are forwarded to an application except the ones with
     an MC address added to the port
   * When the application listens to a multicast group, it adds the MC address
     to the port
* VLAN Filtering Mode
   * When enabled, no packets are forwarded to an application except the ones
     with the VLAN tag assigned to the port
* Allmulticast Mode
   * When enabled, every multicast packet received on the port is forwarded to
     the application
   * Typical usage is routing applications
* Promiscuous Mode
   * When enabled, every packet received on the port is forwarded to the
     application
The application can add (or remove) MAC addresses to enable (or disable)
filtering on the MAC addresses used to accept packets.
240 .. code-block:: console
242 testpmd> show port (port_id) macs
243 testpmd> mac_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
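The same can be done from application code with the ethdev MAC filter API; a minimal
sketch follows, with an arbitrarily chosen MAC address.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Illustrative only: accept unicast packets sent to an extra MAC address. */
    static int
    add_uc_mac_filter(uint16_t port_id)
    {
        struct rte_ether_addr mac = {
            .addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 }
        };

        /* Pool 0 is used here; remove with rte_eth_dev_mac_addr_remove(). */
        return rte_eth_dev_mac_addr_add(port_id, &mac, 0);
    }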
The application can add (or remove) multicast addresses to enable (or disable)
filtering on the multicast MAC addresses used to accept packets.
251 .. code-block:: console
253 testpmd> show port (port_id) mcast_macs
254 testpmd> mcast_addr (add|remove) (port_id) (XX:XX:XX:XX:XX:XX)
256 Application adds (or removes) Multicast addresses to enable (or disable)
257 allowlist filtering to accept packets.
Note that the BNXT PMD supports up to 16 MC MAC filters. If the user adds more
than 16 MC MACs, the BNXT PMD puts the port into Allmulticast mode.
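Programmatically, the multicast allowlist is usually set in one call with
``rte_eth_dev_set_mc_addr_list``; a minimal sketch, with example addresses only:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Illustrative only: allow two multicast groups on the port. */
    static int
    set_mc_filter(uint16_t port_id)
    {
        struct rte_ether_addr mc_addrs[2] = {
            { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 } },
            { .addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0xfb } },
        };

        /* Replaces any previously configured multicast allowlist. */
        return rte_eth_dev_set_mc_addr_list(port_id, mc_addrs, 2);
    }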
The application enables (or disables) VLAN filtering mode. When the mode is
enabled, no packets are forwarded to an application except the ones with a VLAN
tag assigned to the application.
269 .. code-block:: console
271 testpmd> vlan set filter (on|off) (port_id)
272 testpmd> rx_vlan (add|rm) (vlan_id) (port_id)
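The equivalent ethdev call is ``rte_eth_dev_vlan_filter`` (one call per VLAN id), with the
VLAN filter offload enabled on the port; a minimal sketch, with an arbitrary VLAN id:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Illustrative only: accept packets tagged with VLAN 100 on the port.
     * Assumes DEV_RX_OFFLOAD_VLAN_FILTER was enabled in rxmode.offloads
     * when the port was configured. */
    static int
    add_vlan_filter(uint16_t port_id)
    {
        return rte_eth_dev_vlan_filter(port_id, 100, 1 /* on */);
    }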
277 The application enables (or disables) the allmulticast mode. When the mode is
278 enabled, every multicast packet received is forwarded to the application.
280 .. code-block:: console
282 testpmd> show port info (port_id)
283 testpmd> set allmulti (port_id) (on|off)
288 The application enables (or disables) the promiscuous mode. When the mode is
enabled on a port, every packet received on the port is forwarded to the application.
292 .. code-block:: console
294 testpmd> show port info (port_id)
testpmd> set promisc (port_id) (on|off)
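Both allmulticast and promiscuous modes can also be toggled from application code; a
minimal sketch using the ethdev API (assuming a DPDK release where these calls return
``int``):

.. code-block:: c

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* Illustrative only: toggle allmulticast and promiscuous modes on a port. */
    static int
    set_rx_modes(uint16_t port_id, bool allmulti, bool promisc)
    {
        int ret;

        ret = allmulti ? rte_eth_allmulticast_enable(port_id)
                       : rte_eth_allmulticast_disable(port_id);
        if (ret != 0)
            return ret;

        return promisc ? rte_eth_promiscuous_enable(port_id)
                       : rte_eth_promiscuous_disable(port_id);
    }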
Like Linux, DPDK can offload some stateless processing of the stack (such as
checksum calculation) to hardware, relieving the CPU from burning cycles on
every packet.
304 Listed below are the stateless offloads supported by the BNXT PMD:
* CRC offload (for both TX and RX packets)
* Checksum Offload (for both TX and RX packets)
   * IPv4 Checksum Offload
   * TCP Checksum Offload
   * UDP Checksum Offload
* Segmentation/Reassembly Offloads
   * TCP Segmentation Offload (TSO)
   * Large Receive Offload (LRO)
* Multi-Queue
   * Transmit Side Scaling (TSS)
   * Receive Side Scaling (RSS)
Also, the BNXT PMD supports stateless offloads on the inner frames of tunneled
packets, for example VXLAN- and GRE-encapsulated traffic.
325 Note that enabling (or disabling) stateless offloads requires applications to
326 stop DPDK before changing configuration.
331 The FCS (Frame Check Sequence) in the Ethernet frame is a four-octet CRC (Cyclic
332 Redundancy Check) that allows detection of corrupted data within the entire
333 frame as received on the receiver side.
335 The BNXT PMD supports hardware-based CRC offload:
337 * TX: calculate and insert CRC
338 * RX: check and remove CRC, notify the application on CRC error
340 Note that the CRC offload is always turned on.
345 The application enables hardware checksum calculation for IPv4, TCP, and UDP.
347 .. code-block:: console
349 testpmd> port stop (port_id)
350 testpmd> csum set (ip|tcp|udp|outer-ip|outer-udp) (sw|hw) (port_id)
351 testpmd> set fwd csum
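Outside testpmd, checksum offload is enabled through the port offload flags and signalled
per packet through mbuf fields; a minimal TX-side sketch follows, where the offload flags
and header lengths are illustrative only.

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>

    /* Illustrative only: request hardware IPv4 + TCP checksum for one mbuf.
     * Assumes DEV_TX_OFFLOAD_IPV4_CKSUM and DEV_TX_OFFLOAD_TCP_CKSUM were set
     * in rte_eth_conf.txmode.offloads before rte_eth_dev_configure(). */
    static void
    request_tx_csum(struct rte_mbuf *m)
    {
        m->l2_len = sizeof(struct rte_ether_hdr);
        m->l3_len = sizeof(struct rte_ipv4_hdr);
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_CKSUM;
    }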
356 Multi-Queue, also known as TSS (Transmit Side Scaling) or RSS (Receive Side
357 Scaling), is a common networking technique that allows for more efficient load
358 balancing across multiple CPU cores.
360 The application enables multiple TX and RX queues when it is started.
362 .. code-block:: console
dpdk-testpmd -l 1,3,5 --main-lcore 1 -- --txq=2 --rxq=2 --nb-cores=2
TSS distributes network transmit processing across several hardware-based
transmit queues, allowing outbound network traffic to be processed by multiple
CPU cores.
374 RSS distributes network receive processing across several hardware-based receive
375 queues, allowing inbound network traffic to be processed by multiple CPU cores.
377 The application can select the RSS mode, i.e. select the header fields that are
378 included for hash calculation. The BNXT PMD supports the RSS mode of
379 ``default|ip|tcp|udp|none``, where default mode is L3 and L4.
For tunneled packets, the RSS hash is calculated over the inner frame header fields.
Applications may want to select the tunnel header fields for hash calculation;
this will be supported in the 20.08 release using the RSS level attribute.
385 .. code-block:: console
387 testpmd> port config (port_id) rss (all|default|ip|tcp|udp|none)
389 // note that the testpmd defaults the RSS mode to ip
390 // ensure to issue the command below to enable L4 header (TCP or UDP) along with IPv4 header
391 testpmd> port config (port_id) rss default
393 // to check the current RSS configuration, such as RSS function and RSS key
394 testpmd> show port (port_id) rss-hash key
396 // RSS is enabled by default. However, application can disable RSS as follows
397 testpmd> port config (port_id) rss none
The application can change the flow distribution, i.e. remap the received traffic to
specific queues (and hence CPU cores), using the RSS RETA (Redirection Table).
402 .. code-block:: console
404 // application queries the current RSS RETA configuration
405 testpmd> show port (port_id) rss reta size (mask0, mask1)
407 // application changes the RSS RETA configuration
408 testpmd> port config (port_id) rss reta (hash, queue) [, (hash, queue)]
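A programmatic equivalent of the RETA remap uses ``rte_eth_dev_rss_reta_update``; a minimal
sketch that spreads all RETA entries across ``nb_queues`` queues (values illustrative):

.. code-block:: c

    #include <errno.h>
    #include <string.h>
    #include <rte_ethdev.h>

    /* Illustrative only: spread the whole redirection table over nb_queues queues. */
    static int
    remap_rss_reta(uint16_t port_id, uint16_t nb_queues)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_rss_reta_entry64 reta_conf[8]; /* up to 512 RETA entries */
        uint16_t i;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;
        if (dev_info.reta_size > 8 * RTE_RETA_GROUP_SIZE)
            return -EINVAL;

        memset(reta_conf, 0, sizeof(reta_conf));
        for (i = 0; i < dev_info.reta_size; i++) {
            reta_conf[i / RTE_RETA_GROUP_SIZE].mask |= 1ULL << (i % RTE_RETA_GROUP_SIZE);
            reta_conf[i / RTE_RETA_GROUP_SIZE].reta[i % RTE_RETA_GROUP_SIZE] = i % nb_queues;
        }

        return rte_eth_dev_rss_reta_update(port_id, reta_conf, dev_info.reta_size);
    }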
TSO (TCP Segmentation Offload), also known as LSO (Large Send Offload), enables
the TCP/IP stack to pass to the NIC a larger datagram than the MTU (Maximum
Transmission Unit). The NIC breaks it into multiple segments before sending it to
the network.
418 The BNXT PMD supports hardware-based TSO.
420 .. code-block:: console
422 // display the status of TSO
423 testpmd> tso show (port_id)
425 // enable/disable TSO
426 testpmd> port config (port_id) tx_offload tcp_tso (on|off)
428 // set TSO segment size
429 testpmd> tso set segment_size (port_id)
431 The BNXT PMD also supports hardware-based tunneled TSO.
433 .. code-block:: console
435 // display the status of tunneled TSO
436 testpmd> tunnel_tso show (port_id)
438 // enable/disable tunneled TSO
439 testpmd> port config (port_id) tx_offload vxlan_tnl_tso|gre_tnl_tso (on|off)
441 // set tunneled TSO segment size
442 testpmd> tunnel_tso set segment_size (port_id)
444 Note that the checksum offload is always assumed to be enabled for TSO.
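On the TX path, TSO is requested per packet through mbuf fields once the ``tcp_tso``
offload is enabled on the port; a minimal sketch, where the header lengths and segment
size are illustrative:

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_tcp.h>

    /* Illustrative only: mark one mbuf (chain) for TCP segmentation offload.
     * Assumes DEV_TX_OFFLOAD_TCP_TSO is enabled on the port. */
    static void
    request_tso(struct rte_mbuf *m, uint16_t mss)
    {
        m->l2_len = sizeof(struct rte_ether_hdr);
        m->l3_len = sizeof(struct rte_ipv4_hdr);
        m->l4_len = sizeof(struct rte_tcp_hdr);
        m->tso_segsz = mss; /* payload bytes per segment produced by the NIC */
        /* PKT_TX_TCP_SEG implies the TCP checksum offload. */
        m->ol_flags |= PKT_TX_IPV4 | PKT_TX_IP_CKSUM | PKT_TX_TCP_SEG;
    }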
LRO (Large Receive Offload) enables the NIC to aggregate multiple incoming TCP/IP
packets from a single stream into a larger buffer before passing it up to the
application.
453 The BNXT PMD supports hardware-based LRO.
455 .. code-block:: console
457 // display the status of LRO
458 testpmd> show port (port_id) rx_offload capabilities
459 testpmd> show port (port_id) rx_offload configuration
461 // enable/disable LRO
462 testpmd> port config (port_id) rx_offload tcp_lro (on|off)
464 // set max LRO packet (datagram) size
465 testpmd> port config (port_id) max-lro-pkt-size (max_size)
467 The BNXT PMD also supports tunneled LRO.
469 Some applications, such as routing, should *not* change the packet headers as
470 they pass through (i.e. received from and sent back to the network). In such a
471 case, GRO (Generic Receive Offload) should be used instead of LRO.
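LRO can also be enabled programmatically through the RX offload flags at configure time;
a minimal sketch (the 9,000-byte aggregate size is illustrative, and the
``max_lro_pkt_size`` field assumes a DPDK release that provides it):

.. code-block:: c

    #include <string.h>
    #include <rte_ethdev.h>

    /* Illustrative only: configure a port with TCP LRO enabled. */
    static int
    configure_with_lro(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
    {
        struct rte_eth_conf port_conf;

        memset(&port_conf, 0, sizeof(port_conf));
        port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
        port_conf.rxmode.max_lro_pkt_size = 9000; /* max aggregated packet size */

        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
    }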
A DPDK application can offload VLAN insert/strip to improve performance. The BNXT PMD
supports hardware-based VLAN insert/strip offload for both single and double VLAN
packets.
The application configures the VLAN TPID (Tag Protocol ID). By default, the TPID is
0x8100 (the IEEE 802.1Q Ethertype).
487 .. code-block:: console
489 // configure outer TPID value for a port
490 testpmd> vlan set outer tpid (tpid_value) (port_id)
Setting the inner TPID will be rejected, as the BNXT PMD supports inserting only an
outer VLAN. Note that when a packet has a single VLAN, the tag is considered
outer, i.e. the inner VLAN is relevant only when a packet is double-tagged.
The BNXT PMD supports a limited set of TPID values; any other value will be rejected.
The BNXT PMD supports the VLAN insert offload on a per-packet basis. The application
provides the TCI (Tag Control Info) for a packet via the mbuf. In turn, the BNXT PMD
inserts the VLAN tag (via hardware) using the provided TCI along with the configured
TPID.
510 .. code-block:: console
512 // enable VLAN insert offload
testpmd> port config (port_id) tx_offload vlan_insert|qinq_insert (on|off)

if (mbuf->ol_flags & PKT_TX_QINQ)      // case-1: insert VLAN to single-tagged packet
    tci_value = mbuf->vlan_tci_outer
else if (mbuf->ol_flags & PKT_TX_VLAN) // case-2: insert VLAN to untagged packet
    tci_value = mbuf->vlan_tci
523 The application configures the per-port VLAN strip offload.
525 .. code-block:: console
527 // enable VLAN strip on a port
testpmd> port config (port_id) rx_offload vlan_strip (on|off)
// the PMD notifies the application of the stripped VLAN via mbuf
mbuf->ol_flags |= PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED // outer VLAN is found and stripped
mbuf->vlan_tci = tci_value                           // TCI of the stripped VLAN
537 System operators may run a PTP (Precision Time Protocol) client application to
538 synchronize the time on the NIC (and optionally, on the system) to a PTP master.
540 The BNXT PMD supports a PTP client application to communicate with a PTP master
541 clock using DPDK IEEE1588 APIs. Note that the PTP client application needs to
542 run on PF and vector mode needs to be disabled.
544 .. code-block:: console
546 testpmd> set fwd ieee1588 // enable IEEE 1588 mode
548 When enabled, the BNXT PMD configures hardware to insert IEEE 1588 timestamps to
549 the outgoing PTP packets and reports IEEE 1588 timestamps from the incoming PTP
550 packets to application via mbuf.
552 .. code-block:: console
554 // RX packet completion will indicate whether the packet is PTP
555 mbuf->ol_flags |= PKT_RX_IEEE1588_PTP
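A PTP client built on the DPDK IEEE1588 API typically enables timesync on the port and
then reads the hardware RX/TX timestamps for PTP packets; a minimal sketch with error
handling trimmed:

.. code-block:: c

    #include <time.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Illustrative only: read the hardware RX timestamp of a received PTP packet.
     * rte_eth_timesync_enable(port_id) must have been called beforehand. */
    static int
    read_ptp_rx_timestamp(uint16_t port_id, struct rte_mbuf *m, struct timespec *ts)
    {
        if (!(m->ol_flags & PKT_RX_IEEE1588_TMST))
            return -1; /* packet was not timestamped by the NIC */

        /* m->timesync carries the timestamp register index reported by the PMD, if any. */
        return rte_eth_timesync_read_rx_timestamp(port_id, ts, m->timesync);
    }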
557 Statistics Collection
558 ~~~~~~~~~~~~~~~~~~~~~
In Linux, *ethtool -S* can be used to query NIC statistics. DPDK provides similar
functionality via rte_eth_stats and rte_eth_xstats.
563 The BNXT PMD supports both basic and extended stats collection:
571 The application collects per-port and per-queue stats using rte_eth_stats APIs.
573 .. code-block:: console
575 testpmd> show port stats (port_id)
587 By default, per-queue stats for 16 queues are supported. For more than 16
588 queues, BNXT PMD should be compiled with ``RTE_ETHDEV_QUEUE_STAT_CNTRS``
589 set to the desired number of queues.
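The same counters are available to applications through ``rte_eth_stats_get``; a minimal
sketch:

.. code-block:: c

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Illustrative only: print a few basic per-port counters. */
    static int
    print_basic_stats(uint16_t port_id)
    {
        struct rte_eth_stats stats;
        int ret;

        ret = rte_eth_stats_get(port_id, &stats);
        if (ret != 0)
            return ret;

        printf("port %u: rx %" PRIu64 " pkts, tx %" PRIu64 " pkts, "
               "rx missed %" PRIu64 ", rx errors %" PRIu64 "\n",
               port_id, stats.ipackets, stats.opackets,
               stats.imissed, stats.ierrors);

        /* Counters can be cleared with rte_eth_stats_reset(port_id). */
        return 0;
    }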
594 Unlike basic stats, the extended stats are vendor-specific, i.e. each vendor
595 provides its own set of counters.
597 The BNXT PMD provides a rich set of counters, including per-flow counters,
598 per-cos counters, per-priority counters, etc.
600 .. code-block:: console
602 testpmd> show port xstats (port_id)
604 Shown below is the elaborated sequence to retrieve extended stats:
606 .. code-block:: console
// application queries the number of xstats
len = rte_eth_xstats_get(port_id, NULL, 0);
// BNXT PMD returns the size of the xstats array (i.e. the number of entries)
// BNXT PMD returns 0 if the feature is compiled out or disabled

// application allocates memory for xstats
struct rte_eth_xstat_name *names; // each name is at most 64 characters
struct rte_eth_xstat *xstats;
names = calloc(len, sizeof(*names));
xstats = calloc(len, sizeof(*xstats));

// application retrieves xstats names and values
ret = rte_eth_xstats_get_names(port_id, names, len);
ret = rte_eth_xstats_get(port_id, xstats, len);

// application checks the xstats
// application may repeat the steps below:
ret = rte_eth_xstats_reset(port_id); // reset the xstats counters

// reset can be skipped if the application wants to see accumulated stats
// probably stop the traffic
// retrieve the xstats again (no need to retrieve the xstats names again)
Applications can benefit from offloading all or part of flow processing to
hardware. For example, applications can offload packet classification only
(partial offload) or the whole match-action processing (full offload).
640 DPDK offers the Generic Flow API (rte_flow API) to configure hardware to
641 perform flow processing.
The BNXT PMD supports the core rte_flow operations: flow validation, creation,
destruction, and flushing.
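As an illustration of partial offload through rte_flow, the sketch below validates and
creates a rule that steers packets with a given destination MAC to RX queue 1; the
pattern, queue index, and error handling are illustrative only.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Illustrative only: steer packets for one destination MAC to RX queue 1. */
    static struct rte_flow *
    create_dmac_to_queue_rule(uint16_t port_id)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item_eth eth_spec = {
            .dst.addr_bytes = { 0x00, 0x01, 0x02, 0x03, 0x00, 0x01 }
        };
        struct rte_flow_item_eth eth_mask = {
            .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff }
        };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &eth_spec, .mask = &eth_mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_queue queue = { .index = 1 };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        if (rte_flow_validate(port_id, &attr, pattern, actions, &error) != 0)
            return NULL;
        return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }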
650 Host Based Flow Table Management
651 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Starting with the 20.05 release, the BNXT PMD supports host-based flow table
management. This is a new mechanism that should allow higher flow scalability than
what is currently supported. This new approach also defines a new rte_flow parser
and mapper, which currently supports basic packet classification in the receive path.
658 The feature uses a newly implemented control-plane firmware interface which
659 optimizes flow insertions and deletions.
This is a tech preview feature, and is disabled by default. It can be enabled
using bnxt devargs, for example: ``-a 0000:0d:00.0,host-based-truflow=1``.
664 This feature is currently supported on Whitney+ and Stingray devices.
669 - On stopping a device port, all the flows created on a port by the
670 application will be flushed from the hardware and any tables maintained
671 by the PMD. After stopping the device port, all flows on the port become
672 invalid and are not represented in the system anymore.
Instead of destroying or flushing such flows, an application should discard
all references to these flows and re-create them as required after the port is
restarted.
677 - While an application is free to use the group id attribute to group flows
together using specific criteria, the BNXT PMD currently associates this
group id with a VNIC id. One such case is grouping of flows which are filtered
680 on the same source or destination MAC address. This allows packets of such
681 flows to be directed to one or more queues associated with the VNIC id.
682 This implementation is supported only when TRUFLOW functionality is disabled.
684 - An application can issue a VXLAN decap offload request using rte_flow API
685 either as a single rte_flow request or a combination of two stages.
686 The PMD currently supports the two stage offload design.
687 In this approach the offload request may come as two flow offload requests
688 Flow1 & Flow2. The match criteria for Flow1 is O_DMAC, O_SMAC, O_DST_IP,
689 O_UDP_DPORT and actions are COUNT, MARK, JUMP. The match criteria for Flow2
690 is O_SRC_IP, O_DST_IP, VNI and inner header fields.
Flow1 and Flow2 flow offload requests can come in any order. If the Flow2
offload request comes first, Flow2 cannot be offloaded as there is no O_DMAC
information in Flow2. In this case, Flow2 will be deferred until the Flow1
offload request arrives. When the Flow1 offload request is received, it will
carry the O_DMAC information. Using Flow1's O_DMAC, the driver creates an L2
context entry in the hardware as part of offloading Flow1. Flow2 will then use
Flow1's O_DMAC to get the L2 context id associated with this O_DMAC, together
with the other flow fields that were cached when Flow2 was deferred. Any Flow2
that arrives after Flow1 has been offloaded will be programmed directly and not
cached.
702 - PMD supports thread-safe rte_flow operations.
704 Note: A VNIC represents a virtual interface in the hardware. It is a resource
705 in the RX path of the chip and is used to setup various target actions such as
706 RSS, MAC filtering etc. for the physical function in use.
708 Virtual Function Port Representors
709 ----------------------------------
710 The BNXT PMD supports the creation of VF port representors for the control
711 and monitoring of BNXT virtual function devices. Each port representor
corresponds to a single virtual function of that device. When there is no
hardware flow offload, each packet transmitted by the VF
714 will be received by the corresponding representor. Similarly each packet that is
715 sent to a representor will be received by the VF. Applications can take
advantage of this feature when SR-IOV is enabled. The representor will allow the
717 first packet that is transmitted by the VF to be received by the DPDK
718 application which can then decide if the flow should be offloaded to the
719 hardware. Once the flow is offloaded in the hardware, any packet matching the
720 flow will be received by the VF while the DPDK application will not receive it
721 any more. The BNXT PMD supports creation and handling of the port representors
722 when the PMD is initialized on a PF or trusted-VF. The user can specify the list
723 of VF IDs of the VFs for which the representors are needed by using the
``devargs`` option ``representor``::
726 -a DBDF,representor=[0,1,4]
Note that currently hot-plugging of representor ports is not supported, so all
the required representors must be specified at the creation of the PF or the
trusted VF.
732 Representors on Stingray SoC
733 ----------------------------
A representor created on an x86 host typically represents a VF running in the same
x86 domain. But in the case of the SoC, the application can run on the CPU complex
inside the SoC. A representor can be created on the SoC to represent a PF or a
VF running in the x86 domain. Since representor creation requires passing the
bus:device.function of the PCI device endpoint, which is not necessarily in the
same host domain, additional devargs have been added to the PMD.
* rep_is_pf - false to indicate a VF representor, true to indicate a PF representor
743 * rep_based_pf - Physical index of the PF
744 * rep_q_r2f - Logical COS Queue index for the rep to endpoint direction
745 * rep_q_f2r - Logical COS Queue index for the endpoint to rep direction
746 * rep_fc_r2f - Flow control for the representor to endpoint direction
747 * rep_fc_f2r - Flow control for the endpoint to representor direction
749 The sample command line with the new ``devargs`` looks like this::
751 -a 0000:06:02.0,host-based-truflow=1,representor=[1],rep-based-pf=8,\
752 rep-is-pf=1,rep-q-r2f=1,rep-fc-r2f=0,rep-q-f2r=1,rep-fc-f2r=1
754 .. code-block:: console
756 dpdk-testpmd -l1-4 -n2 -a 0008:01:00.0,host-based-truflow=1,\
representor=[0],rep-based-pf=8,rep-is-pf=0,rep-q-r2f=1,rep-fc-r2f=1,\
758 rep-q-f2r=0,rep-fc-f2r=1 --log-level="pmd.*",8 -- -i --rxq=3 --txq=3
760 Number of flows supported
761 -------------------------
The number of flows that can be supported can be changed using the devargs
parameter ``max_num_kflows``. The default number of flows supported is 16K each
in the ingress and egress paths.
Broadcom devices can support filter creation in on-chip memory or external
memory, which is referred to as EM or EEM mode respectively.
770 The decision for internal/external EM support is based on the ``devargs``
771 parameter ``max_num_kflows``. If this is set by the user, external EM is used.
772 Otherwise EM support is enabled with flows created in internal memory.
The BNXT PMD allows the application to retrieve the firmware version.
782 .. code-block:: console
784 testpmd> show port info (port_id)
786 Note that the applications cannot update the firmware using BNXT PMD.
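Programmatically, the same information is available through ``rte_eth_dev_fw_version_get``;
a minimal sketch:

.. code-block:: c

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Illustrative only: print the running firmware version of a port. */
    static int
    print_fw_version(uint16_t port_id)
    {
        char fw_version[64];
        int ret;

        ret = rte_eth_dev_fw_version_get(port_id, fw_version, sizeof(fw_version));
        if (ret == 0)
            printf("port %u firmware: %s\n", port_id, fw_version);
        return ret;
    }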
791 When two or more DPDK applications (e.g., testpmd and dpdk-pdump) share a single
792 instance of DPDK, the BNXT PMD supports a single primary application and one or
793 more secondary applications. Note that the DPDK-layer (not the PMD) ensures
794 there is only one primary application.
800 * Application notifies whether it is primary or secondary using *proc-type* flag
801 * 1st process should be spawned with ``--proc-type=primary``
802 * All subsequent processes should be spawned with ``--proc-type=secondary``
806 * Application is using ``proc-type=auto`` flag
807 * A process is spawned as a secondary if a primary is already running
The BNXT PMD uses this information to skip device initialization, i.e. it performs
device initialization only when brought up by a primary application.
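An application can check which role the EAL assigned to it after ``rte_eal_init`` and
skip device setup in secondary processes; a minimal sketch:

.. code-block:: c

    #include <rte_eal.h>
    #include <rte_ethdev.h>

    /* Illustrative only: perform port configuration in the primary process only. */
    static int
    maybe_configure_port(uint16_t port_id, const struct rte_eth_conf *conf)
    {
        if (rte_eal_process_type() != RTE_PROC_PRIMARY)
            return 0; /* secondary process: device is already initialized */

        return rte_eth_dev_configure(port_id, 1, 1, conf);
    }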
815 Typically, a DPDK application allocates TX and RX queues statically: i.e. queues
816 are allocated at start. However, an application may want to increase (or
decrease) the number of queues dynamically for various reasons, e.g. power savings.
The BNXT PMD allows applications to increase or decrease the number of queues at runtime.
822 .. code-block:: console
824 testpmd> port config all (rxq|txq) (num_queues)
826 Note that a DPDK application must allocate default queues (one for TX and one
827 for RX at minimum) at initialization.
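When the PMD reports the runtime queue setup capability, additional queues can be set up
and started after ``rte_eth_dev_start``; a minimal RX-side sketch, where the descriptor
count and mempool are illustrative:

.. code-block:: c

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Illustrative only: add one RX queue while the port is already started. */
    static int
    add_rx_queue_at_runtime(uint16_t port_id, uint16_t queue_id,
                            struct rte_mempool *mb_pool)
    {
        struct rte_eth_dev_info dev_info;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;
        if (!(dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
            return -ENOTSUP;

        ret = rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                     rte_eth_dev_socket_id(port_id), NULL, mb_pool);
        if (ret != 0)
            return ret;

        return rte_eth_dev_rx_queue_start(port_id, queue_id);
    }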
832 Applications may use the descriptor status for various reasons, e.g. for power
833 savings. For example, an application may stop polling and change to interrupt
834 mode when the descriptor status shows no packets to service for a while.
The BNXT PMD allows the application to retrieve both TX and RX descriptor status:
839 .. code-block:: console
841 testpmd> show port (port_id) (rxq|txq) (queue_id) desc (desc_id) status
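The same information is available programmatically; a minimal sketch using
``rte_eth_rx_descriptor_status``, with illustrative queue and offset values:

.. code-block:: c

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* Illustrative only: check whether a packet is waiting at a given RX ring offset. */
    static bool
    rx_work_pending(uint16_t port_id, uint16_t queue_id, uint16_t offset)
    {
        int status = rte_eth_rx_descriptor_status(port_id, queue_id, offset);

        /* RTE_ETH_RX_DESC_DONE: a received packet is ready at this offset. */
        return status == RTE_ETH_RX_DESC_DONE;
    }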
846 DPDK implements a light-weight library to allow PMDs to be bonded together and provide a single logical PMD to the application.
848 .. code-block:: console
dpdk-testpmd -l 0-3 -n4 --vdev 'net_bonding0,mode=0,slave=<PCI B:D.F device 1>,slave=<PCI B:D.F device 2>,mac=XX:XX:XX:XX:XX:XX' -- --socket-num=1 -i --port-topology=chained
(ex) dpdk-testpmd -l 1,3,5,7,9 -n4 --vdev 'net_bonding0,mode=0,slave=0000:82:00.0,slave=0000:82:00.1,mac=00:1e:67:1d:fd:1d' -- --socket-num=1 -i --port-topology=chained
Vector processing provides significantly improved performance over scalar
processing.
859 The BNXT PMD supports the vector processing using SSE (Streaming SIMD
860 Extensions) instructions on x86 platforms. It also supports NEON intrinsics for
861 vector processing on ARM CPUs. The BNXT vPMD (vector mode PMD) is available for
862 Intel/AMD and ARM CPU architectures.
864 This improved performance comes from several optimizations:
867 * TX: processing completions in bulk
868 * RX: allocating mbufs in bulk
869 * Chained mbufs are *not* supported, i.e. a packet should fit a single mbuf
870 * Some stateless offloads are *not* supported with vector processing
871 * TX: no offloads will be supported
872 * RX: reduced RX offloads (listed below) will be supported::
874 DEV_RX_OFFLOAD_VLAN_STRIP
875 DEV_RX_OFFLOAD_KEEP_CRC
876 DEV_RX_OFFLOAD_JUMBO_FRAME
877 DEV_RX_OFFLOAD_IPV4_CKSUM
878 DEV_RX_OFFLOAD_UDP_CKSUM
879 DEV_RX_OFFLOAD_TCP_CKSUM
880 DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM
881 DEV_RX_OFFLOAD_RSS_HASH
882 DEV_RX_OFFLOAD_VLAN_FILTER
884 The BNXT Vector PMD is enabled in DPDK builds by default.
886 However, a decision to enable vector mode will be made when the port transitions
887 from stopped to started. Any TX offloads or some RX offloads (other than listed
888 above) will disable the vector mode.
Offload configuration changes that impact vector mode must be made when the port
is stopped.
Note that TX (or RX) vector mode can be enabled independently from RX (or TX)
vector mode.
Also, vector mode is allowed when jumbo frames are enabled
896 as long as the MTU setting does not require scattered Rx.
901 Supported Chipsets and Adapters
902 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
904 BCM5730x NetXtreme-C® Family of Ethernet Network Controllers
905 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
907 Information about Ethernet adapters in the NetXtreme family of adapters can be
908 found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
910 * ``M150c ... Single-port 40/50 Gigabit Ethernet Adapter``
911 * ``P150c ... Single-port 40/50 Gigabit Ethernet Adapter``
912 * ``P225c ... Dual-port 10/25 Gigabit Ethernet Adapter``
914 BCM574xx/575xx NetXtreme-E® Family of Ethernet Network Controllers
915 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
917 Information about Ethernet adapters in the NetXtreme family of adapters can be
918 found in the `NetXtreme® Brand section <https://www.broadcom.com/products/ethernet-connectivity/network-adapters/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
920 * ``M125P .... Single-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
921 * ``M150P .... Single-port OCP 2.0 50 Gigabit Ethernet Adapter``
922 * ``M150PM ... Single-port OCP 2.0 Multi-Host 50 Gigabit Ethernet Adapter``
923 * ``M210P .... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
924 * ``M210TP ... Dual-port OCP 2.0 10 Gigabit Ethernet Adapter``
925 * ``M1100G ... Single-port OCP 2.0 10/25/50/100 Gigabit Ethernet Adapter``
926 * ``N150G .... Single-port OCP 3.0 50 Gigabit Ethernet Adapter``
927 * ``M225P .... Dual-port OCP 2.0 10/25 Gigabit Ethernet Adapter``
928 * ``N210P .... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
929 * ``N210TP ... Dual-port OCP 3.0 10 Gigabit Ethernet Adapter``
930 * ``N225P .... Dual-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
931 * ``N250G .... Dual-port OCP 3.0 50 Gigabit Ethernet Adapter``
932 * ``N410SG ... Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
933 * ``N410SGBT . Quad-port OCP 3.0 10 Gigabit Ethernet Adapter``
934 * ``N425G .... Quad-port OCP 3.0 10/25 Gigabit Ethernet Adapter``
935 * ``N1100G ... Single-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
936 * ``N2100G ... Dual-port OCP 3.0 10/25/50/100 Gigabit Ethernet Adapter``
937 * ``N2200G ... Dual-port OCP 3.0 10/25/50/100/200 Gigabit Ethernet Adapter``
938 * ``P150P .... Single-port 50 Gigabit Ethernet Adapter``
939 * ``P210P .... Dual-port 10 Gigabit Ethernet Adapter``
940 * ``P210TP ... Dual-port 10 Gigabit Ethernet Adapter``
941 * ``P225P .... Dual-port 10/25 Gigabit Ethernet Adapter``
942 * ``P410SG ... Quad-port 10 Gigabit Ethernet Adapter``
943 * ``P410SGBT . Quad-port 10 Gigabit Ethernet Adapter``
944 * ``P425G .... Quad-port 10/25 Gigabit Ethernet Adapter``
945 * ``P1100G ... Single-port 10/25/50/100 Gigabit Ethernet Adapter``
946 * ``P2100G ... Dual-port 10/25/50/100 Gigabit Ethernet Adapter``
947 * ``P2200G ... Dual-port 10/25/50/100/200 Gigabit Ethernet Adapter``
949 BCM588xx NetXtreme-S® Family of SmartNIC Network Controllers
950 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
952 Information about the Stingray family of SmartNIC adapters can be found in the
953 `Stingray® Brand section <https://www.broadcom.com/products/ethernet-connectivity/smartnic/>`_ of the `Broadcom website <http://www.broadcom.com/>`_.
955 * ``PS225 ... Dual-port 25 Gigabit Ethernet SmartNIC``
957 BCM5873x StrataGX® Family of Communications Processors
958 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
960 These ARM-based processors target a broad range of networking applications,
961 including virtual CPE (vCPE) and NFV appliances, 10G service routers and
gateways, control plane processing for Ethernet switches, and network-attached
storage (NAS).
965 * ``StrataGX BCM58732 ... Octal-Core 3.0GHz 64-bit ARM®v8 Cortex®-A72 based SoC``