..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2018 The DPDK contributors

* **Added function to allow releasing internal EAL resources on exit.**

  During ``rte_eal_init()`` EAL allocates memory from hugepages to enable its
  core libraries to perform their tasks. The ``rte_eal_cleanup()`` function
  releases these resources, ensuring that no hugepage memory is leaked. It is
  expected that all DPDK applications call ``rte_eal_cleanup()`` before
  exiting. Not calling this function could result in leaking hugepages, leading
  to failure during initialization of secondary processes.

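  A minimal sketch of the expected call pattern (the application body is
  elided and error handling kept to a minimum):

  .. code-block:: c

     #include <rte_eal.h>

     int
     main(int argc, char **argv)
     {
         if (rte_eal_init(argc, argv) < 0)
             return -1;

         /* ... application work using EAL resources ... */

         /* Release hugepage memory and other internal EAL resources
          * before exiting, so later secondary processes can initialize. */
         rte_eal_cleanup();
         return 0;
     }
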
* **Added igb, ixgbe and i40e ethernet drivers to support RSS with flow API.**

  Added support for igb, ixgbe and i40e NICs with existing RSS configuration
  using the ``rte_flow`` API.

  Also enabled queue region configuration using the ``rte_flow`` API for i40e.

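  As an illustration, a rule that spreads all ingress IPv4/UDP traffic across
  queues 0-3 could be created as sketched below. This is a sketch assuming the
  ``rte_flow_action_rss`` layout of this release (an ``rte_eth_rss_conf``
  pointer plus a variable-length queue array), and assuming that
  ``rss_conf = NULL`` reuses the port's existing RSS configuration:

  .. code-block:: c

     #include <rte_flow.h>

     static struct rte_flow *
     ipv4_udp_rss_flow(uint16_t port_id, struct rte_flow_error *err)
     {
         struct rte_flow_attr attr = { .ingress = 1 };
         /* Match every IPv4/UDP packet. */
         struct rte_flow_item pattern[] = {
             { .type = RTE_FLOW_ITEM_TYPE_ETH },
             { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
             { .type = RTE_FLOW_ITEM_TYPE_UDP },
             { .type = RTE_FLOW_ITEM_TYPE_END },
         };
         /* rte_flow_action_rss ends in a flexible queue[] array, so give
          * it aligned backing storage through a raw buffer. */
         uint64_t buf[(sizeof(struct rte_flow_action_rss) +
                       4 * sizeof(uint16_t) + 7) / 8];
         struct rte_flow_action_rss *rss = (struct rte_flow_action_rss *)buf;
         uint16_t q;

         rss->rss_conf = NULL; /* keep the port's RSS key and hash types */
         rss->num = 4;
         for (q = 0; q < 4; q++)
             rss->queue[q] = q;

         struct rte_flow_action actions[] = {
             { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = rss },
             { .type = RTE_FLOW_ACTION_TYPE_END },
         };

         if (rte_flow_validate(port_id, &attr, pattern, actions, err) != 0)
             return NULL;
         return rte_flow_create(port_id, &attr, pattern, actions, err);
     }
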
* **Updated i40e driver to support PPPoE/PPPoL2TP.**

  Updated the i40e PMD to support PPPoE/PPPoL2TP using PPPoE/PPPoL2TP
  supporting profiles which can be programmed by the dynamic device
  personalization (DDP) process.

* **Added MAC loopback support for i40e.**

  Added MAC loopback support for i40e in order to support test tasks requested
  by users. It will set up a ``Tx -> Rx`` loopback link according to the device
  configuration.

* **Added support for run-time determination of the number of queues per i40e VF.**

  The number of queues per VF is determined by its host PF. If the PCI address
  of an i40e PF is ``aaaa:bb.cc``, the number of queues per VF can be
  configured with an EAL parameter like ``-w aaaa:bb.cc,queue-num-per-vf=n``.
  The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
  number of queues per VF is 4 by default.

* **Updated mlx5 driver.**

  Updated the mlx5 driver including the following changes:

  * Enabled compilation as a plugin, thus removing the mandatory dependency on
    rdma-core. With this compilation mode, the rdma-core libraries are loaded
    only when a Mellanox device is in use. The PMD can therefore be enabled
    when creating binaries without requiring every end user to install
    rdma-core.
  * Improved multi-segment packet performance.
  * Changed driver name to use the PCI address to be compatible with OVS-DPDK APIs.
  * Extended statistics for physical port packet/byte counters.
  * Converted to the new offloads API.
  * Supported device removal check operation.

* **Updated mlx4 driver.**

  Updated the mlx4 driver including the following changes:

  * Enabled compilation as a plugin, thus removing the mandatory dependency on
    rdma-core. With this compilation mode, the rdma-core libraries are loaded
    only when a Mellanox device is in use. The PMD can therefore be enabled
    when creating binaries without requiring every end user to install
    rdma-core.
  * Improved data path performance.
  * Converted to the new offloads API.
  * Supported device removal check operation.

* **Added NVGRE, VXLAN and GENEVE tunnels support in Solarflare network PMD.**

  Added support for NVGRE, VXLAN and GENEVE tunnels.

  * Added support for UDP tunnel ports configuration.
  * Added tunneled packets classification.
  * Added inner checksum offload.

* **Added AVF (Adaptive Virtual Function) net PMD.**

  Added a new net PMD called AVF (Adaptive Virtual Function), which supports
  Intel® Ethernet Adaptive Virtual Function (AVF) with features such as:

  * SSE vectorized Rx/Tx burst
  * Jumbo frame and MTU setting
  * Rx/Tx descriptor status
  * Link status update/event

* **Added feature support for live migration from vhost-net to vhost-user.**

  Added support for features in vhost-user that make live migration from
  vhost-net to vhost-user possible. The features include:

  * ``VIRTIO_F_ANY_LAYOUT``
  * ``VIRTIO_F_EVENT_IDX``
  * ``VIRTIO_NET_F_GUEST_ECN``, ``VIRTIO_NET_F_HOST_ECN``
  * ``VIRTIO_NET_F_GUEST_UFO``, ``VIRTIO_NET_F_HOST_UFO``
  * ``VIRTIO_NET_F_GSO``

  Also added ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature support in the virtio PMD.
  In a scenario where the vhost backend doesn't have the ability to generate
  RARP packets, a VM running the virtio PMD can still be live migrated if the
  ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature is negotiated.

* **Updated the AESNI-MB PMD.**

  The AESNI-MB PMD has been updated with additional support for:

* **Updated the DPAA_SEC crypto driver to support rte_security.**

  Updated the ``dpaa_sec`` crypto PMD to support ``rte_security`` lookaside
  protocol offload for IPsec.

* **Added Wireless Base Band Device (bbdev) abstraction.**

  The Wireless Baseband Device library is an acceleration abstraction
  framework for 3GPP Layer 1 processing functions that provides a common
  programming interface for seamless operation on integrated or discrete
  hardware accelerators or using optimized software libraries for signal
  processing.

  The current release only supports 3GPP CRC, Turbo Coding and Rate
  Matching operations, as specified in 3GPP TS 36.212.

  See the :doc:`../prog_guide/bbdev` programmer's guide for more details.

* **Added new eventdev Ordered Packet Distribution Library (OPDL) PMD.**

  The OPDL (Ordered Packet Distribution Library) eventdev is a specific
  implementation of the eventdev API. It is particularly suited to packet
  processing workloads that have high throughput and low latency requirements.
  All packets follow the same path through the device. The order in which
  packets follow is determined by the order in which queues are set up.
  Events are left on the ring until they are transmitted. As a result packets
  do not go out of order.

  With this change, applications can use the OPDL PMD via the eventdev API.

* **Added new pipeline use case for dpdk-test-eventdev application.**

  Added a new "pipeline" use case for the ``dpdk-test-eventdev`` application.
  The pipeline case can be used to simulate various stages in a real world
  application from packet receive to transmit while maintaining the packet
  ordering. It can also be used to measure the performance of the event device
  across the stages of the pipeline.

  The pipeline use case has been made generic to work with all the event
  devices based on the capabilities.

* **Updated Eventdev sample application to support event devices based on capability.**

  Updated the Eventdev pipeline sample application to support various types of
  pipelines based on the capabilities of the attached event and ethernet
  devices. Also, renamed the application from the software PMD specific
  ``eventdev_pipeline_sw_pmd`` to the more generic ``eventdev_pipeline``.

* **Added Rawdev, a generic device support library.**

  The Rawdev library provides support for integrating any generic device type with
  the DPDK framework. Generic devices are those which do not have a pre-defined
  type within DPDK, for example, ethernet, crypto, event etc.

  A set of northbound APIs have been defined which encompass a generic set of
  operations by allowing applications to interact with a device using opaque
  structures/buffers. Also, southbound APIs provide a means of integrating devices
  either as part of a physical bus (PCI, FSLMC etc.) or through ``vdev``.

  See the :doc:`../prog_guide/rawdev` programmer's guide for more details.

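  A minimal sketch of enumerating probed raw devices, assuming the
  ``rte_rawdev_count()``/``rte_rawdev_info_get()`` helpers of this release:

  .. code-block:: c

     #include <stdio.h>
     #include <rte_rawdev.h>

     /* Print the driver name of every probed raw device. The opaque
      * dev_private field is left unused here. */
     static void
     list_rawdevs(void)
     {
         uint8_t i, count = rte_rawdev_count();

         for (i = 0; i < count; i++) {
             struct rte_rawdev_info info = { .dev_private = NULL };

             if (rte_rawdev_info_get(i, &info) == 0)
                 printf("rawdev %u: driver %s\n", i, info.driver_name);
         }
     }
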
* **Added new multi-process communication channel.**

  Added a generic channel in EAL for multi-process (primary/secondary) communication.
  Consumers of this channel need to register an action with an action name in order
  to respond to a received message; the actions will be identified by the action
  name and executed in the context of a new dedicated thread for this channel.
  The list of new APIs:

  * ``rte_mp_register`` and ``rte_mp_unregister`` are for action (un)registration.
  * ``rte_mp_sendmsg`` is for sending a message without blocking for a response.
  * ``rte_mp_request`` is for sending a request message and will block until
    it gets a reply message which is sent from the peer by ``rte_mp_reply``.

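  A minimal sketch of one consumer of the channel is shown below. The action
  name ``"my_app"`` and the payload are illustrative; note that in the EAL
  headers the (un)registration calls appear as ``rte_mp_action_register()``
  and ``rte_mp_action_unregister()``.

  .. code-block:: c

     #include <stdio.h>
     #include <string.h>
     #include <rte_eal.h>

     /* Executed in the channel's dedicated thread whenever a message
      * named "my_app" arrives from a peer process. */
     static int
     my_app_action(const struct rte_mp_msg *msg, const void *peer)
     {
         (void)peer;
         printf("received %d bytes of parameters\n", msg->len_param);
         return 0;
     }

     static int
     register_and_send(void)
     {
         struct rte_mp_msg msg;

         if (rte_mp_action_register("my_app", my_app_action) < 0)
             return -1;

         memset(&msg, 0, sizeof(msg));
         strcpy(msg.name, "my_app"); /* routes to the peer's action */
         msg.len_param = 5;
         memcpy(msg.param, "hello", 5);

         /* Fire-and-forget: does not block waiting for a response. */
         return rte_mp_sendmsg(&msg);
     }
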
* **Added GRO support for VxLAN-tunneled packets.**

  Added GRO support for VxLAN-tunneled packets. Supported VxLAN packets
  must contain an outer IPv4 header and inner TCP/IPv4 headers. VxLAN
  GRO doesn't check if input packets have correct checksums and doesn't
  update checksums for output packets. Additionally, it assumes the
  packets are complete (i.e., ``MF==0 && frag_off==0``) even when IP
  fragmentation is possible (i.e., ``DF==0``).

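  For example, lightweight (per-burst) GRO with VxLAN support enabled might
  look like the sketch below; the flow-table sizing values are illustrative.

  .. code-block:: c

     #include <rte_gro.h>
     #include <rte_lcore.h>
     #include <rte_mbuf.h>

     /* Merge a received burst in place; returns the packet count after
      * reassembly. Enables plain TCP/IPv4 GRO and VxLAN-tunneled
      * TCP/IPv4 GRO. */
     static uint16_t
     gro_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
     {
         struct rte_gro_param param = {
             .gro_types = RTE_GRO_TCP_IPV4 | RTE_GRO_IPV4_VXLAN_TCP_IPV4,
             .max_flow_num = 64,
             .max_item_per_flow = 32,
             .socket_id = (uint16_t)rte_socket_id(),
         };

         return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
     }
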
* **Increased default Rx and Tx ring size in sample applications.**

  Increased the default ``RX_RING_SIZE`` and ``TX_RING_SIZE`` to 1024 entries
  in testpmd and the sample applications to give better performance in the
  general case. The user should experiment with various Rx and Tx ring sizes
  for their specific application to get the best performance.

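  For reference, these defaults feed directly into queue setup. A minimal
  sketch (``dev_conf`` and ``mbuf_pool`` are assumed to be prepared elsewhere):

  .. code-block:: c

     #include <rte_ethdev.h>

     #define RX_RING_SIZE 1024
     #define TX_RING_SIZE 1024

     static int
     setup_queues(uint16_t port, const struct rte_eth_conf *dev_conf,
                  struct rte_mempool *mbuf_pool)
     {
         int ret = rte_eth_dev_configure(port, 1, 1, dev_conf);

         if (ret != 0)
             return ret;
         /* One Rx queue with the new, larger default ring size. */
         ret = rte_eth_rx_queue_setup(port, 0, RX_RING_SIZE,
                 rte_eth_dev_socket_id(port), NULL, mbuf_pool);
         if (ret != 0)
             return ret;
         /* Matching Tx queue. */
         return rte_eth_tx_queue_setup(port, 0, TX_RING_SIZE,
                 rte_eth_dev_socket_id(port), NULL);
     }
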
* **Added new DPDK build system using the tools "meson" and "ninja" [EXPERIMENTAL].**

  Added support for building DPDK using ``meson`` and ``ninja``, which gives
  additional features, such as automatic build-time configuration, over the
  current build system using ``make``. For instructions on how to do a DPDK build
  using the new system, see the instructions in ``doc/build-sdk-meson.txt``.

  This new build system support is incomplete at this point and is added
  as experimental in this release. The existing build system using ``make``
  is unaffected by these changes, and can continue to be used for this
  and subsequent releases until such time as its deprecation is announced.

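  As a quick orientation (``doc/build-sdk-meson.txt`` remains the
  authoritative reference), a build with the new system typically boils
  down to:

  .. code-block:: console

     meson build
     ninja -C build
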
Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.

     librte_bitratestats.so.2
     librte_bus_fslmc.so.1
     librte_cryptodev.so.4
     librte_distributor.so.1
     librte_flow_classify.so.1
     librte_latencystats.so.1
     librte_pmd_ixgbe.so.2
     librte_pmd_softnic.so.1
     librte_pmd_vhost.so.2

Tested Platforms
----------------

* Intel(R) platforms with Intel(R) NICs combinations

   * CPU

      * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
      * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
      * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
      * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
      * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
      * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
      * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
      * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
      * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
      * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz

   * OS:

      * Red Hat Enterprise Linux Server release 7.3
      * SUSE Enterprise Linux 12

   * NICs:

      * Intel(R) 82599ES 10 Gigabit Ethernet Controller

         * Firmware version: 0x61bf0001
         * Device id (pf/vf): 8086:10fb / 8086:10ed
         * Driver version: 5.2.3 (ixgbe)

      * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

         * Firmware version: 0x800003e7
         * Device id (pf/vf): 8086:15ad / 8086:15a8
         * Driver version: 4.4.6 (ixgbe)

      * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

         * Firmware version: 6.01 0x80003221
         * Device id (pf/vf): 8086:1572 / 8086:154c
         * Driver version: 2.4.3 (i40e)

      * Intel(R) Corporation Ethernet Connection X722 for 10GBASE-T

         * Firmware version: 6.01 0x80003221
         * Device id (pf/vf): 8086:37d2 / 8086:154c
         * Driver version: 2.4.3 (i40e)

      * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

         * Firmware version: 6.01 0x80003221
         * Device id (pf/vf): 8086:158b / 8086:154c
         * Driver version: 2.4.3 (i40e)

      * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2x40G)

         * Firmware version: 6.01 0x8000321c
         * Device id (pf/vf): 8086:1583 / 8086:154c
         * Driver version: 2.4.3 (i40e)

      * Intel(R) Corporation I350 Gigabit Network Connection

         * Firmware version: 1.63, 0x80000dda
         * Device id (pf/vf): 8086:1521 / 8086:1520
         * Driver version: 5.3.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

   * CPU:

      * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
      * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
      * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
      * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
      * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
      * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

   * OS:

      * Red Hat Enterprise Linux Server release 7.5 Beta (Maipo)
      * Red Hat Enterprise Linux Server release 7.4 (Maipo)
      * Red Hat Enterprise Linux Server release 7.3 (Maipo)
      * Red Hat Enterprise Linux Server release 7.2 (Maipo)

   * MLNX_OFED: 4.2-1.0.0.0
   * MLNX_OFED: 4.3-0.1.6.0

   * NICs:

      * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1007
         * Firmware version: 2.42.5000

      * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 100G MCX415A-CCAT (1x100G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1015
         * Firmware version: 14.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1015
         * Firmware version: 14.21.1000 and above

      * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1017
         * Firmware version: 16.21.1000 and above

      * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

         * Host interface: PCI Express 4.0 x16
         * Device ID: 15b3:1019
         * Firmware version: 16.21.1000 and above

* ARM platforms with Mellanox(R) NICs combinations

   * CPU:

      * Qualcomm ARM 1.1 2500MHz

   * MLNX_OFED: 4.2-1.0.0.0

   * NICs:

      * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1015
         * Firmware version: 14.21.1000

      * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1017
         * Firmware version: 16.21.1000