.. **Read this first.**

   The text in the sections below explains how to update the release notes.

   Use proper spelling, capitalization and punctuation in all sections.

   Variable and config names should be quoted as fixed width text:
   ``LIKE_THIS``.

   Build the docs and view the output file to ensure the changes are correct::

      make doc-guides-html

      xdg-open build/doc/html/guides/rel_notes/release_18_02.html

New Features
------------

.. This section should contain new features added in this release. Sample
   format:

   * **Add a title in the past tense with a full stop.**

     Add a short 1-2 sentence description in the past tense. The description
     should be enough to allow someone scanning the release notes to
     understand the new feature.

     If the feature adds a lot of sub-features you can use a bullet list like
     this:

     * Added feature foo to do something.
     * Enhanced feature bar to do something else.

     Refer to the previous release notes for examples.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **Added function to allow releasing internal EAL resources on exit.**

  During ``rte_eal_init()`` EAL allocates memory from hugepages to enable its
  core libraries to perform their tasks. The ``rte_eal_cleanup()`` function
  releases these resources, ensuring that no hugepage memory is leaked. It is
  expected that all DPDK applications call ``rte_eal_cleanup()`` before
  exiting. Not calling this function could result in leaking hugepages,
  leading to failure during initialization of secondary processes.
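
  As a minimal sketch (error handling trimmed for brevity), an application
  simply pairs the two calls around its main logic::

     #include <rte_eal.h>

     int
     main(int argc, char **argv)
     {
         /* Reserves hugepage memory and starts EAL services. */
         if (rte_eal_init(argc, argv) < 0)
             return -1;

         /* ... application work ... */

         /* Releases hugepages and other internal EAL resources so
          * that later secondary processes can initialize cleanly.
          */
         rte_eal_cleanup();
         return 0;
     }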

* **Updated the igb, ixgbe and i40e ethernet drivers to support RSS with the flow API.**

  Added support for configuring RSS on igb, ixgbe and i40e NICs through the
  existing ``rte_flow`` API.

  Also enabled queue region configuration using the ``rte_flow`` API for i40e.
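
  A sketch of creating such a rule, using the 18.02 ``rte_flow_action_rss``
  layout; passing a NULL ``rss_conf`` to keep the port's existing RSS settings
  is an assumption of this sketch, and drivers may require a more specific
  pattern::

     #include <stdlib.h>
     #include <string.h>
     #include <rte_flow.h>

     /* Spread all ingress Ethernet traffic over queues 0-3. */
     static struct rte_flow *
     create_rss_rule(uint16_t port_id, struct rte_flow_error *err)
     {
         static const uint16_t queues[] = { 0, 1, 2, 3 };
         struct rte_flow_attr attr = { .ingress = 1 };
         struct rte_flow_item pattern[] = {
             { .type = RTE_FLOW_ITEM_TYPE_ETH },
             { .type = RTE_FLOW_ITEM_TYPE_END },
         };
         struct rte_flow_action_rss *rss;

         /* rte_flow_action_rss ends in a flexible queue[] array. */
         rss = malloc(sizeof(*rss) + sizeof(queues));
         if (rss == NULL)
             return NULL;
         rss->rss_conf = NULL; /* assumed: reuse the port's RSS config */
         rss->num = 4;
         memcpy(rss->queue, queues, sizeof(queues));

         struct rte_flow_action actions[] = {
             { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = rss },
             { .type = RTE_FLOW_ACTION_TYPE_END },
         };
         return rte_flow_create(port_id, &attr, pattern, actions, err);
     }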

* **Updated i40e driver to support PPPoE/PPPoL2TP.**

  Updated the i40e PMD to support PPPoE/PPPoL2TP with PPPoE/PPPoL2TP
  supporting profiles which can be programmed by the dynamic device
  personalization (DDP) process.

* **Added MAC loopback support for i40e.**

  Added MAC loopback support for i40e in order to support test tasks requested
  by users. It will set up a ``Tx -> Rx`` loopback link according to the
  device configuration.

* **Added support for run-time determination of the number of queues per i40e VF.**

  The number of queues per VF is determined by its host PF. If the PCI address
  of an i40e PF is ``aaaa:bb.cc``, the number of queues per VF can be
  configured with an EAL parameter like ``-w aaaa:bb.cc,queue-num-per-vf=n``.
  The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
  number of queues per VF defaults to 4.
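
  For instance, an illustrative sketch of passing this device argument through
  ``rte_eal_init()``, where ``0000:02:00.0`` is a hypothetical PF address::

     #include <rte_eal.h>

     int
     main(void)
     {
         /* Each VF of this (hypothetical) PF gets 8 queue pairs
          * instead of the default 4.
          */
         char *eal_args[] = {
             "app", "-w", "0000:02:00.0,queue-num-per-vf=8",
         };

         if (rte_eal_init(3, eal_args) < 0)
             return -1;
         /* ... */
         return 0;
     }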

* **Updated mlx5 driver.**

  Updated the mlx5 driver including the following changes:

  * Enabled compilation as a plugin, removing the hard dependency on
    rdma-core. When built this way, the rdma-core libraries are loaded at run
    time only if a Mellanox device is in use, so binaries can ship with the
    PMD enabled without requiring every end user to install rdma-core.
  * Improved multi-segment packet performance.
  * Changed driver name to use the PCI address to be compatible with OVS-DPDK
    APIs.
  * Extended statistics for physical port packet/byte counters.
  * Converted to the new offloads API.
  * Added support for the device removal check operation.

* **Updated mlx4 driver.**

  Updated the mlx4 driver including the following changes:

  * Enabled compilation as a plugin, removing the hard dependency on
    rdma-core. When built this way, the rdma-core libraries are loaded at run
    time only if a Mellanox device is in use, so binaries can ship with the
    PMD enabled without requiring every end user to install rdma-core.
  * Improved data path performance.
  * Converted to the new offloads API.
  * Added support for the device removal check operation.

* **Added NVGRE and UDP tunnels support to the Solarflare network PMD.**

  Added support for the NVGRE, VXLAN and GENEVE tunnels:

  * Added support for UDP tunnel port configuration.
  * Added classification of tunneled packets.
  * Added inner checksum offload.
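
  As an illustration, UDP tunnel ports are registered through the generic
  ethdev call; this sketch uses the IANA-assigned VXLAN port, and the helper
  name is ours::

     #include <rte_ethdev.h>

     /* Register the standard VXLAN UDP port so that incoming
      * encapsulated packets are recognized as tunneled.
      */
     static int
     register_vxlan_port(uint16_t port_id)
     {
         struct rte_eth_udp_tunnel tunnel = {
             .udp_port = 4789,
             .prot_type = RTE_TUNNEL_TYPE_VXLAN,
         };

         return rte_eth_dev_udp_tunnel_port_add(port_id, &tunnel);
     }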

* **Added AVF (Adaptive Virtual Function) net PMD.**

  Added a new net PMD called AVF (Adaptive Virtual Function), which supports
  Intel(R) Ethernet Adaptive Virtual Function (AVF) with features such as:

  * SSE vectorized Rx/Tx burst
  * Jumbo frame and MTU setting
  * Rx/Tx descriptor status
  * Link status update/event

* **Added feature support for live migration from vhost-net to vhost-user.**

  Added support in vhost-user for the features needed to make live migration
  from vhost-net to vhost-user possible:

  * ``VIRTIO_F_ANY_LAYOUT``
  * ``VIRTIO_F_EVENT_IDX``
  * ``VIRTIO_NET_F_GUEST_ECN``, ``VIRTIO_NET_F_HOST_ECN``
  * ``VIRTIO_NET_F_GUEST_UFO``, ``VIRTIO_NET_F_HOST_UFO``
  * ``VIRTIO_NET_F_GSO``

  Also added ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature support to the virtio
  PMD. In a scenario where the vhost backend doesn't have the ability to
  generate RARP packets, a VM running the virtio PMD can still be live
  migrated if the ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature is negotiated.

* **Updated the AESNI-MB PMD.**

  The AESNI-MB PMD has been updated with additional support for:

* **Updated the DPAA_SEC crypto driver to support rte_security.**

  Updated the ``dpaa_sec`` crypto PMD to support ``rte_security`` lookaside
  protocol offload for IPsec.

* **Added Wireless Base Band Device (bbdev) abstraction.**

  The Wireless Baseband Device library is an acceleration abstraction
  framework for 3GPP Layer 1 processing functions that provides a common
  programming interface for seamless operation on integrated or discrete
  hardware accelerators, or using optimized software libraries for signal
  processing.

  The current release only supports 3GPP CRC, Turbo Coding and Rate
  Matching operations, as specified in 3GPP TS 36.212.

  See the :doc:`../prog_guide/bbdev` programmer's guide for more details.

* **Added new eventdev Ordered Packet Distribution Library (OPDL) PMD.**

  The OPDL (Ordered Packet Distribution Library) eventdev is a specific
  implementation of the eventdev API. It is particularly suited to packet
  processing workloads that have high throughput and low latency requirements.
  All packets follow the same path through the device, and the order in which
  packets follow one another is determined by the order in which queues are
  set up. Events are left on the ring until they are transmitted. As a result,
  packets do not go out of order.

  With this change, applications can use the OPDL PMD via the eventdev API.

* **Added new pipeline use case for dpdk-test-eventdev application.**

  Added a new "pipeline" use case for the ``dpdk-test-eventdev`` application.
  The pipeline case can be used to simulate various stages in a real-world
  application, from packet receive to transmit, while maintaining the packet
  ordering. It can also be used to measure the performance of the event device
  across the stages of the pipeline.

  The pipeline use case has been made generic to work with all the event
  devices based on their capabilities.

* **Updated the eventdev sample application to support event devices based on capability.**

  Updated the eventdev pipeline sample application to support various types of
  pipelines based on the capabilities of the attached event and Ethernet
  devices. Also, renamed the application from the software-PMD-specific
  ``eventdev_pipeline_sw_pmd`` to the more generic ``eventdev_pipeline``.

* **Added Rawdev, a generic device support library.**

  The Rawdev library provides support for integrating any generic device type
  with the DPDK framework. Generic devices are those which do not fit one of
  DPDK's pre-defined types, for example ethernet, crypto or event devices.

  A set of northbound APIs has been defined which encompasses a generic set of
  operations by allowing applications to interact with devices using opaque
  structures/buffers. Southbound APIs provide a means of integrating devices
  either as part of a physical bus (PCI, FSLMC, etc.) or through ``vdev``.

  See the :doc:`../prog_guide/rawdev` programmer's guide for more details.

* **Added new multi-process communication channel.**

  Added a generic channel in EAL for multi-process (primary/secondary)
  communication. Consumers of this channel register an action, identified by
  an action name, to be run when a message for that name is received; the
  actions are executed in the context of a new dedicated thread for this
  channel. The new APIs are:

  * ``rte_mp_register`` and ``rte_mp_unregister`` are for action
    (un)registration.
  * ``rte_mp_sendmsg`` is for sending a message without blocking for a
    response.
  * ``rte_mp_request`` is for sending a request message and will block until
    it gets a reply message, which is sent from the peer by ``rte_mp_reply``.
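
  A minimal sketch of one-way messaging, assuming the 18.02 prototypes; the
  action name ``app_hello`` and the payload are ours::

     #include <stdio.h>
     #include <string.h>
     #include <rte_eal.h>

     /* Runs in the channel's dedicated thread of the receiving process. */
     static int
     hello_action(const struct rte_mp_msg *msg, const void *peer)
     {
         (void)peer; /* only needed when replying via rte_mp_reply() */
         printf("received %d payload bytes for %s\n",
                msg->len_param, msg->name);
         return 0;
     }

     /* Receiver (e.g. the primary), after rte_eal_init(): */
     static void
     receiver_setup(void)
     {
         rte_mp_register("app_hello", hello_action);
     }

     /* Sender (e.g. a secondary): */
     static void
     sender(void)
     {
         struct rte_mp_msg msg;

         memset(&msg, 0, sizeof(msg));
         strcpy(msg.name, "app_hello");
         msg.len_param = (int)sizeof("ping");
         memcpy(msg.param, "ping", sizeof("ping"));
         rte_mp_sendmsg(&msg);
     }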

* **Added GRO support for VxLAN-tunneled packets.**

  Added GRO support for VxLAN-tunneled packets. Supported VxLAN packets
  must contain an outer IPv4 header and inner TCP/IPv4 headers. VxLAN
  GRO doesn't check if input packets have correct checksums and doesn't
  update checksums for output packets. Additionally, it assumes the
  packets are complete (i.e., ``MF==0 && frag_off==0``), when IP
  fragmentation is possible (i.e., ``DF==0``).
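
  A sketch of enabling it in lightweight mode, assuming the
  ``RTE_GRO_IPV4_VXLAN_TCP_IPV4`` type flag and the 18.02 GRO API; the
  flow/item limits below are arbitrary::

     #include <rte_mbuf.h>
     #include <rte_gro.h>

     /* Merge a received burst in place; returns the new packet count. */
     static uint16_t
     gro_vxlan_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
     {
         struct rte_gro_param param = {
             .gro_types = RTE_GRO_TCP_IPV4 | RTE_GRO_IPV4_VXLAN_TCP_IPV4,
             .max_flow_num = 64,
             .max_item_per_flow = 32,
         };

         return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
     }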

* **Increased default Rx and Tx ring size in sample applications.**

  Increased the default ``RX_RING_SIZE`` and ``TX_RING_SIZE`` to 1024 entries
  in testpmd and the sample applications to give better performance in the
  general case. Users should experiment with various Rx and Tx ring sizes for
  their specific application to get the best performance.
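
  The ring size is the descriptor-count argument of the standard queue setup
  calls, so trying other values is a one-line change. A sketch, with
  ``port_id`` and ``mbuf_pool`` assumed to be set up elsewhere::

     #include <rte_ethdev.h>
     #include <rte_mempool.h>

     /* Configure queue 0 of a port with the new 1024-entry default. */
     static int
     setup_queues(uint16_t port_id, struct rte_mempool *mbuf_pool)
     {
         int socket_id = rte_eth_dev_socket_id(port_id);
         int ret;

         ret = rte_eth_rx_queue_setup(port_id, 0, 1024, socket_id,
                                      NULL, mbuf_pool);
         if (ret != 0)
             return ret;
         return rte_eth_tx_queue_setup(port_id, 0, 1024, socket_id, NULL);
     }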

* **Added new DPDK build system using the tools "meson" and "ninja" [EXPERIMENTAL].**

  Added support for building DPDK using ``meson`` and ``ninja``, which gives
  additional features, such as automatic build-time configuration, over the
  current build system using ``make``. For instructions on how to do a DPDK
  build using the new system, see the instructions in
  ``doc/build-sdk-meson.txt``.

  .. note::

     This new build system support is incomplete at this point and is added
     as experimental in this release. The existing build system using ``make``
     is unaffected by these changes, and can continue to be used for this
     and subsequent releases until such time as its deprecation is announced.

Shared Library Versions
-----------------------

.. Update any library version updated in this release and prepend with a ``+``
   sign, like this:

   + librte_cfgfile.so.2

   This section is a comment. Do not overwrite or remove it.
   =========================================================

The libraries prepended with a plus sign were incremented in this version.

     librte_bitratestats.so.2
     librte_bus_fslmc.so.1
     librte_cryptodev.so.4
     librte_distributor.so.1
     librte_flow_classify.so.1
     librte_latencystats.so.1
     librte_pmd_ixgbe.so.2
     librte_pmd_softnic.so.1
     librte_pmd_vhost.so.2

Tested Platforms
----------------

.. This section should contain a list of platforms that were tested with this
   release.

   The format is:

   * <vendor> platform with <vendor> <type of devices> combinations

     * Other relevant details...

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* Intel(R) platforms with Intel(R) NICs combinations

  * CPU:

    * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
    * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
    * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
    * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
    * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
    * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.3
    * SUSE Enterprise Linux 12

  * NIC:

    * Intel(R) 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 5.2.3 (ixgbe)

    * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800003e7
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 4.4.6 (ixgbe)

    * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 6.01 0x80003221
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 2.4.3 (i40e)

    * Intel(R) Corporation Ethernet Connection X722 for 10GBASE-T

      * Firmware version: 6.01 0x80003221
      * Device id (pf/vf): 8086:37d2 / 8086:154c
      * Driver version: 2.4.3 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

      * Firmware version: 6.01 0x80003221
      * Device id (pf/vf): 8086:158b / 8086:154c
      * Driver version: 2.4.3 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2x40G)

      * Firmware version: 6.01 0x8000321c
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 2.4.3 (i40e)

    * Intel(R) Corporation I350 Gigabit Network Connection

      * Firmware version: 1.63, 0x80000dda
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.3.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

  * CPU:

    * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
    * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
    * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.5 Beta (Maipo)
    * Red Hat Enterprise Linux Server release 7.4 (Maipo)
    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)

  * MLNX_OFED: 4.2-1.0.0.0
  * MLNX_OFED: 4.3-0.1.6.0

  * NIC:

    * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000 and above

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.21.1000 and above

    * Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.21.1000 and above

* ARM platforms with Mellanox(R) NICs combinations

  * CPU:

    * Qualcomm ARM 1.1 2500MHz

  * MLNX_OFED: 4.2-1.0.0.0

  * NIC:

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.21.1000