..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2020 The DPDK contributors

.. include:: <isonum.txt>

DPDK Release 20.05
==================

New Features
------------

* **Added Trace Library and Tracepoints.**

  Added a native implementation of the "common trace format" (CTF) based trace
  library. This allows the user to add tracepoints in an application/library to
  get runtime trace/debug information for control and fast APIs with minimum
  impact on fast path performance. Typical trace overhead is ~20 cycles and
  instrumentation overhead is 1 cycle. Added tracepoints in ``EAL``,
  ``ethdev``, ``cryptodev``, ``eventdev`` and ``mempool`` libraries for
  important functions.
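
  As a minimal sketch of the usage model (the tracepoint name
  ``app_trace_string`` and the CTF event name ``app.trace.string`` are
  hypothetical; the define/register split follows the trace library
  documentation):

  .. code-block:: c

     /* my_tracepoint.h: define the tracepoint and the payload it emits. */
     #include <rte_trace_point.h>

     RTE_TRACE_POINT(
             app_trace_string,
             RTE_TRACE_POINT_ARGS(const char *str),
             rte_trace_point_emit_string(str);
     )

     /* my_tracepoint.c: register the tracepoint under a CTF event name. */
     #include <rte_trace_point_register.h>
     #include "my_tracepoint.h"

     RTE_TRACE_POINT_REGISTER(app_trace_string, app.trace.string)

     /* Call site, e.g. in the application fast path: */
     app_trace_string("hello world");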

* **Added APIs for RCU defer queues.**

  Added APIs to create and delete defer queues. Additional APIs are provided
  to enqueue a deleted resource and reclaim the resource in the future.
  These APIs help an application use lock-free data structures with
  less effort.
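
  A sketch of the intended flow, assuming the parameter fields declared in
  ``rte_rcu_qsbr.h`` (the names ``my_dq``, ``free_resource``, ``qsv`` and
  ``resource`` are illustrative; exact struct fields and the reclaim
  signature should be checked against the header):

  .. code-block:: c

     #include <rte_rcu_qsbr.h>

     extern struct rte_rcu_qsbr *qsv;   /* QSBR variable already in use */
     extern void free_resource(void *p, void *e, unsigned int n);

     struct rte_rcu_qsbr_dq_parameters params = {
             .name = "my_dq",           /* defer queue name */
             .size = 1024,              /* number of entries */
             .esize = sizeof(void *),   /* size of one entry */
             .free_fn = free_resource,  /* called once reclaim is safe */
             .v = qsv,                  /* associated QSBR variable */
     };
     struct rte_rcu_qsbr_dq *dq = rte_rcu_qsbr_dq_create(&params);

     /* Writer: logically delete a resource, then defer freeing it. */
     rte_rcu_qsbr_dq_enqueue(dq, &resource);

     /* Later: reclaim entries whose grace period has elapsed. */
     unsigned int freed, pending, available;
     rte_rcu_qsbr_dq_reclaim(dq, 32, &freed, &pending, &available);

     rte_rcu_qsbr_dq_delete(dq);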

* **Added new API for rte_ring.**

  * Introduced new synchronization modes for ``rte_ring``.

    Introduced new optional MT synchronization modes for ``rte_ring``:
    Relaxed Tail Sync (RTS) mode and Head/Tail Sync (HTS) mode.
    With these modes selected, ``rte_ring`` shows significant improvements for
    average enqueue/dequeue times on overcommitted systems.
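
    The new modes are selected per side at ring creation time via flags; a
    minimal sketch (flag names are from ``rte_ring.h``; the ring name and size
    are illustrative):

    .. code-block:: c

       #include <rte_lcore.h>
       #include <rte_ring.h>

       /* Multi-producer enqueue in RTS mode, multi-consumer dequeue in
        * HTS mode; the two sides are configured independently. */
       struct rte_ring *r = rte_ring_create("sync_ring", 1024,
               rte_socket_id(),
               RING_F_MP_RTS_ENQ | RING_F_MC_HTS_DEQ);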

  * Added peek style API for ``rte_ring``.

    For rings with the producer/consumer in ``RTE_RING_SYNC_ST`` or
    ``RTE_RING_SYNC_MT_HTS`` mode, this provides the ability to split an
    enqueue/dequeue operation into two phases (enqueue/dequeue start and
    enqueue/dequeue finish). This allows the user to inspect
    objects in the ring without removing them (aka MT safe peek).
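
    A sketch of the two-phase dequeue (the ring ``r`` and the
    ``keep_in_ring()`` predicate are illustrative; the peek API lives in
    ``rte_ring_peek.h``):

    .. code-block:: c

       #include <rte_ring.h>
       #include <rte_ring_peek.h>

       void *objs[2];
       unsigned int n;

       /* Phase 1: objects are returned but not yet removed. */
       n = rte_ring_dequeue_bulk_start(r, objs, 2, NULL);
       if (n != 0) {
               if (keep_in_ring(objs, n))
                       /* Phase 2a: abandon, objects stay in the ring. */
                       rte_ring_dequeue_finish(r, 0);
               else
                       /* Phase 2b: commit the removal. */
                       rte_ring_dequeue_finish(r, n);
       }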

* **Added flow aging support.**

  Added flow aging support to detect and report aged-out flows, including:

  * Added new action: ``RTE_FLOW_ACTION_TYPE_AGE`` to set the timeout
    and the application flow context for each flow.
  * Added new event: ``RTE_ETH_EVENT_FLOW_AGED`` for the driver to report
    that there are new aged-out flows.
  * Added new query: ``rte_flow_get_aged_flows`` to get the aged-out flow
    contexts from the port.
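
  A sketch of how the pieces fit together (the 30 second timeout, the context
  pointer, the port id and the array size are illustrative):

  .. code-block:: c

     #include <rte_common.h>
     #include <rte_flow.h>

     uint16_t port_id = 0;            /* illustrative */
     void *my_flow_ctx = NULL;        /* per-flow context, illustrative */

     /* Attach an AGE action when creating the flow rule. */
     struct rte_flow_action_age age = {
             .timeout = 30,           /* seconds */
             .context = my_flow_ctx,  /* handed back when the flow ages out */
     };

     /* In the RTE_ETH_EVENT_FLOW_AGED event callback, drain the contexts. */
     void *contexts[64];
     struct rte_flow_error error;
     int n = rte_flow_get_aged_flows(port_id, contexts,
             RTE_DIM(contexts), &error);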

* **ethdev: Added a new value to link speed for 200Gbps.**

  Added a new ethdev value for a link speed of 200Gbps.

* **Updated the Amazon ena driver.**

  Updated the ena PMD with new features and improvements, including:

  * Added support for large LLQ (Low-latency queue) headers.
  * Added Tx drops as a new extended driver statistic.
  * Added support for accelerated LLQ mode.
  * Handling of the 0 length descriptors on the Rx path.

* **Updated Broadcom bnxt driver.**

  Updated the Broadcom bnxt driver with new features and improvements, including:

  * Added support for host based flow table management.
  * Added flow counters to extended stats.
  * Added PCI function stats to extended stats.

* **Updated Cisco enic driver.**

  Updated Cisco enic driver GENEVE tunneling support:

  * Added support to control GENEVE tunneling via UCSM/CIMC and removed devarg.
  * Added GENEVE port number configuration.

* **Updated Hisilicon hns3 driver.**

  Updated Hisilicon hns3 driver with new features and improvements, including:

  * Added support for TSO.
  * Added support for configuring promiscuous and allmulticast mode for VF.

* **Added a new driver for Intel Foxville I225 devices.**

  Added the new ``igc`` net driver for Intel Foxville I225 devices. See the
  :doc:`../nics/igc` NIC guide for more details on this new driver.

* **Updated Intel i40e driver.**

  Updated i40e PMD with new features and improvements, including:

  * Enabled MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
  * Added support for RSS using L3/L4 source/destination only.
  * Added support for setting hash function in rte flow.

* **Updated the Intel iavf driver.**

  Updated the Intel iavf driver with new features and improvements, including:

  * Added generic filter support.
  * Added advanced iavf with FDIR capability.
  * Added advanced RSS configuration for VFs.

* **Updated the Intel ice driver.**

  Updated the Intel ice driver with new features and improvements, including:

  * Added support for DCF (Device Config Function) feature.
  * Added switch filter support for Intel DCF.

* **Updated Marvell OCTEON TX2 ethdev driver.**

  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
  including:

  * Hierarchical Scheduling with DWRR and SP.
  * Single rate - Two color, Two rate - Three color shaping.

* **Updated Mellanox mlx5 driver.**

  Updated Mellanox mlx5 driver with new features and improvements, including:

  * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
  * Added support for creating Relaxed Ordering Memory Regions.
  * Added support for configuring Hairpin queue data buffer size.
  * Added support for jumbo frame size (9K MTU) in Multi-Packet RQ mode.
  * Removed flow rules caching for memory saving and compliance with ethdev API.
  * Optimized the memory consumption of flows.
  * Added support for flow aging based on hardware counters.
  * Added support for flow patterns with wildcard VLAN items (without VID value).
  * Updated support for matching on GTP headers, added match on GTP flags.

* **Added Chacha20-Poly1305 algorithm to Cryptodev API.**

  Added support for the Chacha20-Poly1305 AEAD algorithm in Cryptodev.
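
  A sketch of an AEAD transform selecting the new algorithm (the key, IV
  offset and lengths shown are the standard Chacha20-Poly1305 sizes; ``key``
  and ``IV_OFFSET`` are illustrative):

  .. code-block:: c

     #include <rte_crypto_sym.h>

     struct rte_crypto_sym_xform xform = {
             .type = RTE_CRYPTO_SYM_XFORM_AEAD,
             .aead = {
                     .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
                     .algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
                     .key = { .data = key, .length = 32 },  /* 256-bit key */
                     .iv = { .offset = IV_OFFSET, .length = 12 },
                     .digest_length = 16,                   /* Poly1305 tag */
                     .aad_length = 0,
             },
     };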

* **Updated the AESNI MB crypto PMD.**

  * Added support for intel-ipsec-mb version 0.54.
  * Updated the AESNI MB PMD with AES-256 DOCSIS algorithm.
  * Added support for synchronous Crypto burst API.

* **Updated the AESNI GCM crypto PMD.**

  Added support for intel-ipsec-mb version 0.54.

* **Updated the ZUC crypto PMD.**

  * Added support for intel-ipsec-mb version 0.54.
  * Updated the PMD to support Multi-buffer ZUC-EIA3,
    improving performance significantly when using
    intel-ipsec-mb version 0.54.

* **Updated the SNOW3G crypto PMD.**

  Added support for intel-ipsec-mb version 0.54.

* **Updated the KASUMI crypto PMD.**

  Added support for intel-ipsec-mb version 0.54.

* **Updated the QuickAssist Technology (QAT) Crypto PMD.**

  * Added handling of mixed crypto algorithms in QAT PMD for GEN2.

    Enabled handling of mixed algorithms in encrypted digest hash-cipher
    (generation) and cipher-hash (verification) requests in QAT PMD when
    running on GEN2 QAT hardware with particular firmware versions (GEN3
    support was added in DPDK 20.02).

  * Added plain SHA-1, 224, 256, 384, 512 support to QAT PMD.

    Added support for plain SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512
    hashes to QAT PMD.

  * Added AES-GCM/GMAC J0 support to QAT PMD.

    Added support for AES-GCM/GMAC J0 to the Intel QuickAssist Technology
    PMD. The user can use this feature by passing a zero length IV in the
    appropriate xform. For more information refer to the doxygen comments in
    ``rte_crypto_sym.h`` for ``J0``.
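
    A sketch of selecting J0 in an AEAD transform, assuming (per the
    ``rte_crypto_sym.h`` doxygen) that a zero-length IV means the application
    supplies the full 16-byte J0 block at the IV offset (``key`` and
    ``IV_OFFSET`` are illustrative):

    .. code-block:: c

       #include <rte_crypto_sym.h>

       struct rte_crypto_sym_xform xform = {
               .type = RTE_CRYPTO_SYM_XFORM_AEAD,
               .aead = {
                       .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
                       .algo = RTE_CRYPTO_AEAD_AES_GCM,
                       .key = { .data = key, .length = 16 },
                       /* length 0 selects J0: the 16-byte J0 block is
                        * provided at iv.offset instead of a 12-byte IV. */
                       .iv = { .offset = IV_OFFSET, .length = 0 },
                       .digest_length = 16,
                       .aad_length = 0,
               },
       };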

  * Updated the QAT PMD for AES-256 DOCSIS.

    Added AES-256 DOCSIS algorithm support to the QAT PMD.

* **Updated the QuickAssist Technology (QAT) Compression PMD.**

  Added special buffer handling for the case where the internal QAT
  intermediate buffer is too small for the Huffman dynamic compression
  operation. Instead of falling back to fixed compression, the operation is
  now split into multiple smaller dynamic compression requests (which QAT can
  execute) and their results are then combined and copied into the output
  buffer. This is not possible if any checksum calculation was requested; in
  such cases the code falls back to fixed compression as before.

* **Updated the turbo_sw bbdev PMD.**

  Added support for large size code blocks which do not fit in one mbuf
  segment.

* **Added Intel FPGA_5GNR_FEC bbdev PMD.**

  Added a new ``fpga_5gnr_fec`` bbdev driver for the Intel\ |reg| FPGA PAC
  (Programmable Acceleration Card) N3000. See the
  :doc:`../bbdevs/fpga_5gnr_fec` BBDEV guide for more details on this new driver.

* **Updated the DSW event device.**

  Updated the DSW PMD with new features and improvements, including:

  * Improved flow migration mechanism, allowing faster and more
    accurate load balancing.
  * Improved behavior on high-core count systems.
  * Reduced latency in low-load situations.
  * Extended DSW xstats with migration and load-related statistics.

* **Updated ipsec-secgw sample application.**

  Updated the ``ipsec-secgw`` sample application with the following features:

  * Updated the application to add event based packet processing. The worker
    thread(s) receive events and submit them back to the event device after
    processing. In this way, multicore scaling and HW assisted scheduling are
    achieved by making use of the event device capabilities. The event mode
    currently only supports inline IPsec protocol offload.

  * Updated the application to support key sizes for the AES-192-CBC,
    AES-192-GCM and AES-256-GCM algorithms.

  * Added IPsec inbound load-distribution support for the application using
    the NIC load distribution feature (Flow Director).

* **Updated Telemetry Library.**

  The updated Telemetry library has been significantly improved in relation to
  the original version to make it more accessible and scalable:

  * It now enables DPDK libraries and applications to provide their own
    specific telemetry information, rather than being limited to what could be
    reported through the metrics library.

  * It is no longer dependent on the external Jansson library, which allows
    Telemetry to be enabled by default.

  * The socket handling has been simplified, making it easier for clients to
    connect and retrieve information.

* **Added the rte_graph library.**

  The graph architecture abstracts data processing functions as ``nodes`` and
  ``links`` them together to create a complex ``graph`` of reusable/modular
  data processing functions. The graph library provides APIs for graph
  operations such as create, lookup, dump and destroy, and for node operations
  such as clone, edge update and edge shrink. The API also allows the creation
  of a stats cluster to monitor per-graph and per-node statistics.
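
  A sketch of creating and running a graph from registered nodes, following
  the pattern used by the ``l3fwd-graph`` example (the node name patterns,
  graph name and ``quit`` flag are illustrative):

  .. code-block:: c

     #include <stdbool.h>

     #include <rte_common.h>
     #include <rte_graph.h>
     #include <rte_graph_worker.h>

     static volatile bool quit;

     static const char *patterns[] = {
             "ethdev_rx-*", "ip4*", "ethdev_tx-*", "pkt_drop",
     };

     struct rte_graph_param prm = {
             .socket_id = SOCKET_ID_ANY,
             .nb_node_patterns = RTE_DIM(patterns),
             .node_patterns = patterns,
     };

     /* Create the graph, then look it up for the worker's fast path. */
     rte_graph_t id = rte_graph_create("worker0", &prm);
     struct rte_graph *graph = rte_graph_lookup("worker0");

     while (!quit)           /* per-worker dataplane loop */
             rte_graph_walk(graph);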

* **Added the rte_node library.**

  Added the ``rte_node`` library that consists of nodes used by the
  ``rte_graph`` library. Each node performs a specific packet processing
  function based on the application configuration.

  The following nodes are added:

  * Null node: A skeleton node that defines the general structure of a node.
  * Ethernet device node: Consists of Ethernet Rx/Tx nodes as well as Ethernet
    control APIs for Ethernet devices.
  * IPv4 lookup node: Consists of IPv4 extract and LPM lookup node. Routes can
    be configured by the application through the ``rte_node_ip4_route_add``
    function.
  * IPv4 rewrite node: Consists of IPv4 and Ethernet header rewrite
    functionality that can be configured through the
    ``rte_node_ip4_rewrite_add`` function.
  * Packet drop node: Frees the packets received to their respective mempool.
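
  A sketch of programming the IPv4 nodes (the addresses, next-hop index,
  rewrite data and port numbers are illustrative):

  .. code-block:: c

     #include <rte_ip.h>
     #include <rte_node_ip4_api.h>

     /* Route 10.0.2.0/24 to next-hop index 2, then go to the rewrite node. */
     rte_node_ip4_route_add(RTE_IPV4(10, 0, 2, 0), 24, 2,
             RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);

     /* New Ethernet header for next-hop 2, transmitted on port 1. */
     uint8_t rewrite_data[14] = { 0 };  /* dst MAC, src MAC, ethertype */
     rte_node_ip4_rewrite_add(2, rewrite_data, sizeof(rewrite_data), 1);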

* **Added new l3fwd-graph sample application.**

  Added an example application ``l3fwd-graph``. This demonstrates the usage of
  the graph library and node library for packet processing. In addition to
  demonstrating the library usage, this application can be used for
  performance comparison of the existing ``l3fwd`` (static code without any
  nodes) with the modular ``l3fwd-graph`` approach.

* **Updated the testpmd application.**

  Added a new cmdline option ``--rx-mq-mode`` which can be used to test a
  PMD's behaviour when handling Rx multi-queue (mq) mode.

* **Added support for GCC 10.**

  Added support for building with GCC 10.1.

API Changes
-----------

* mempool: The API of ``rte_mempool_populate_iova()`` and
  ``rte_mempool_populate_virt()`` changed to return 0 instead of ``-EINVAL``
  when there is not enough room to store one object.
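
  A sketch of the caller-side implication (``mp``, ``vaddr``, ``iova``,
  ``len`` and the ``try_larger_chunk()`` handler are illustrative; the return
  value is the number of objects populated, or a negative errno on error):

  .. code-block:: c

     int n = rte_mempool_populate_iova(mp, vaddr, iova, len, NULL, NULL);
     if (n < 0)
             rte_panic("populate failed: %d\n", n);  /* real errors */
     else if (n == 0)
             /* Chunk too small for even one object: previously -EINVAL,
              * now reported as zero objects populated. */
             try_larger_chunk();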

ABI Changes
-----------

* No ABI change that would break compatibility with DPDK 20.02 and 19.11.

Tested Platforms
----------------

* Intel\ |reg| platforms with Broadcom\ |reg| NICs combinations

  * CPU:

    * Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2650 v2 @ 2.60GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2667 v3 @ 3.20GHz
    * Intel\ |reg| Xeon\ |reg| Gold 6142 CPU @ 2.60GHz
    * Intel\ |reg| Xeon\ |reg| Silver 4110 CPU @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 8.1
    * Red Hat Enterprise Linux Server release 7.6
    * Red Hat Enterprise Linux Server release 7.5

  * NICs:

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P225p (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Firmware version: 214.4.81.0 and above

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P425p (4x25G)

      * Host interface: PCI Express 3.0 x16
      * Firmware version: 216.4.259.0 and above

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P2100G (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Firmware version: 216.1.259.0 and above

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P425p (4x25G)

      * Host interface: PCI Express 4.0 x16
      * Firmware version: 216.1.259.0 and above

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P2100G (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Firmware version: 216.1.259.0 and above

* Intel\ |reg| platforms with Intel\ |reg| NICs combinations

  * CPU:

    * Intel\ |reg| Atom\ |trade| CPU C3758 @ 2.20GHz
    * Intel\ |reg| Atom\ |trade| CPU C3858 @ 2.00GHz
    * Intel\ |reg| Atom\ |trade| CPU C3958 @ 2.00GHz
    * Intel\ |reg| Xeon\ |reg| CPU D-1541 @ 2.10GHz
    * Intel\ |reg| Xeon\ |reg| CPU D-1553N @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2680 0 @ 2.70GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2680 v2 @ 2.80GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2699 v3 @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @ 2.20GHz
    * Intel\ |reg| Xeon\ |reg| Gold 5218N CPU @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| Gold 6139 CPU @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| Gold 6252N CPU @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| Platinum 8180 CPU @ 2.50GHz
    * Intel\ |reg| Xeon\ |reg| Platinum 8280M CPU @ 2.70GHz

  * OS:

    * Red Hat Enterprise Linux Server release 8.0
    * Red Hat Enterprise Linux Server release 7.7

  * NICs:

    * Intel\ |reg| 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 5.6.5 (ixgbe)

    * Intel\ |reg| Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800003e7
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 5.1.0-k (ixgbe)

    * Intel\ |reg| Corporation Ethernet Controller 10G X550T

      * Firmware version: 0x80000482
      * Device id (pf): 8086:1563
      * Driver version: 5.6.5 (ixgbe)

    * Intel\ |reg| Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 7.20 0x800079e8 1.2585.0
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 2.11.29 (i40e)

    * Intel\ |reg| Corporation Ethernet Connection X722 for 10GbE SFP+ (4x10G)

      * Firmware version: 4.11 0x80001def 1.1999.0
      * Device id (pf/vf): 8086:37d0 / 8086:37cd
      * Driver version: 2.11.29 (i40e)

    * Intel\ |reg| Corporation Ethernet Connection X722 for 10GBASE-T (2x10G)

      * Firmware version: 4.10 0x80001a7a
      * Device id (pf/vf): 8086:37d2 / 8086:37cd
      * Driver version: 2.11.29 (i40e)

    * Intel\ |reg| Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

      * Firmware version: 7.30 0x800080a2 1.2658.0
      * Device id (pf/vf): 8086:158b / 8086:154c
      * Driver version: 2.11.27_rc13 (i40e)

    * Intel\ |reg| Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

      * Firmware version: 7.30 0x800080ab 1.2658.0
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 2.11.27_rc13 (i40e)

    * Intel\ |reg| Corporation I350 Gigabit Network Connection

      * Firmware version: 1.63, 0x80000cbc
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.4.0-k (igb)

    * Intel\ |reg| Corporation I210 Gigabit Network Connection

      * Firmware version: 3.25, 0x800006eb
      * Device id (pf): 8086:1533
      * Driver version: 5.6.5 (igb)

    * Intel\ |reg| Ethernet Controller 10-Gigabit X540-AT2

      * Firmware version: 0x800005f9
      * Device id (pf): 8086:1528
      * Driver version: 5.1.0-k (ixgbe)

    * Intel\ |reg| Ethernet Converged Network Adapter X710-T2L

      * Firmware version: 7.30 0x80008061 1.2585.0
      * Device id (pf): 8086:15ff
      * Driver version: 2.11.27_rc13 (i40e)

* Intel\ |reg| platforms with Mellanox\ |reg| NICs combinations

  * CPU:

    * Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2697A v4 @ 2.60GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2697 v3 @ 2.60GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2680 v2 @ 2.80GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2650 v4 @ 2.20GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2640 @ 2.50GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2620 v4 @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.5 (Maipo)
    * Red Hat Enterprise Linux Server release 7.4 (Maipo)
    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)

  * OFED:

    * MLNX_OFED 4.7-3.2.9.0
    * MLNX_OFED 5.0-2.1.8.0 and above

  * upstream kernel:

    * Linux 5.7.0-rc5 and above

  * rdma-core:

    * rdma-core-29.0-1 and above

  * NICs:

    * Mellanox\ |reg| ConnectX\ |reg|-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox\ |reg| ConnectX\ |reg|-3 Pro 40G MCX354A-FCCT (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 50G MCX4131A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX516A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-EDAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:101d
      * Firmware version: 22.27.2008 and above

* IBM Power 9 platforms with Mellanox\ |reg| NICs combinations

  * CPU:

    * POWER9 2.2 (pvr 004e 1202) 2300MHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.6

  * NICs:

    * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.27.2008

    * Mellanox\ |reg| ConnectX\ |reg|-6 Dx 100G MCX623106AN-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:101d
      * Firmware version: 22.27.2008

  * OFED:

    * MLNX_OFED 5.0-2.1.8.0

* ARMv8 SoC combinations from Marvell (with integrated NICs)

  * SoC:

    * CN83xx, CN96xx, CN93xx

  * OS (Based on Marvell OCTEON TX SDK-10.3.2.0-PR12):