1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright 2017 The DPDK contributors
10 * **Reorganized mbuf structure.**
12 The mbuf structure has been reorganized as follows:
14 * Align fields to facilitate the writing of ``data_off``, ``refcnt``, and
15 ``nb_segs`` in one operation.
16 * Use 2 bytes for port and number of segments.
17 * Move the sequence number to the second cache line.
18 * Add a timestamp field.
  * Set default values for ``refcnt``, ``next`` and ``nb_segs`` at mbuf free.
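  A minimal sketch of the fields affected by this reorganization, assuming the
  layout described above (the helper name is illustrative only):

  .. code-block:: c

     #include <rte_mbuf.h>

     /* Illustrative only: read the fields whose size or location changed. */
     static void
     inspect_mbuf(const struct rte_mbuf *m)
     {
         uint16_t in_port = m->port;     /* now 2 bytes */
         uint16_t segs = m->nb_segs;     /* now 2 bytes */
         uint64_t ts = m->timestamp;     /* new field */

         (void)in_port; (void)segs; (void)ts;
     }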
21 * **Added mbuf raw free API.**
  Moved the ``rte_mbuf_raw_free()`` and ``rte_pktmbuf_prefree_seg()`` functions
  to the public API.
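  A hedged sketch of the driver-style free path these now-public functions
  enable (the helper name is illustrative):

  .. code-block:: c

     #include <rte_mbuf.h>

     /* Free one transmitted segment the way a PMD typically does:
      * rte_pktmbuf_prefree_seg() returns the mbuf only when the last
      * reference is dropped, in which case it can go straight back to
      * its pool via rte_mbuf_raw_free().
      */
     static inline void
     free_tx_seg(struct rte_mbuf *m)
     {
         m = rte_pktmbuf_prefree_seg(m);
         if (m != NULL)
             rte_mbuf_raw_free(m);
     }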
26 * **Added free Tx mbuf on demand API.**
28 Added a new function ``rte_eth_tx_done_cleanup()`` which allows an
29 application to request the driver to release mbufs that are no longer in use
  from a Tx ring, independent of whether or not the ``tx_rs_thresh`` has been
  exceeded.
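  A minimal usage sketch; the port, queue and count values are placeholders:

  .. code-block:: c

     #include <rte_ethdev.h>

     /* Ask the driver on port 0, Tx queue 0 to release up to 32 mbufs that
      * have already been transmitted.  Returns the number of freed mbufs,
      * or a negative value if the driver does not support the operation.
      */
     static int
     reclaim_tx_mbufs(void)
     {
         return rte_eth_tx_done_cleanup(0, 0, 32);
     }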
33 * **Added device removal interrupt.**
35 Added a new ethdev event ``RTE_ETH_DEV_INTR_RMV`` to signify
36 the sudden removal of a device.
37 This event can be advertised by PCI drivers and enabled accordingly.
39 * **Added EAL dynamic log framework.**
41 Added new APIs to dynamically register named log types, and control
42 the level of each type independently.
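  A hedged sketch of the new APIs; the log type name ``my_app`` and the helper
  names are placeholders:

  .. code-block:: c

     #include <rte_log.h>

     static int my_logtype;

     static void
     setup_logging(void)
     {
         /* Register a named log type and control its level independently. */
         my_logtype = rte_log_register("my_app");
         if (my_logtype >= 0)
             rte_log_set_level(my_logtype, RTE_LOG_DEBUG);
     }

     static void
     log_something(void)
     {
         rte_log(RTE_LOG_INFO, my_logtype, "application initialized\n");
     }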
44 * **Added descriptor status ethdev API.**
46 Added a new API to get the status of a descriptor.
  For Rx, it is similar to the ``rx_descriptor_done`` API, except that it
  differentiates descriptors which are held by the driver and not returned
  to the hardware. For Tx, it is a new API.
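  A minimal sketch of the Rx variant; the port, queue and offset values are
  placeholders:

  .. code-block:: c

     #include <rte_ethdev.h>

     static void
     check_rx_progress(void)
     {
         /* Status of the 16th descriptor after the one the driver will
          * process next, on port 0 / Rx queue 0.
          */
         int st = rte_eth_rx_descriptor_status(0, 0, 16);

         if (st == RTE_ETH_RX_DESC_DONE) {
             /* filled by hardware, at least 16 packets are pending */
         } else if (st == RTE_ETH_RX_DESC_AVAIL) {
             /* still owned by hardware, no packet there yet */
         } else if (st == RTE_ETH_RX_DESC_UNAVAIL) {
             /* held by the driver, not yet returned to the hardware */
         } else if (st < 0) {
             /* e.g. -ENOTSUP when the driver does not implement it */
         }
     }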
52 * **Increased number of next hops for LPM IPv6 to 2^21.**
  The ``next_hop`` field has been extended from 8 bits to 21 bits for IPv6.
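  A hedged sketch using the widened field; the table and address are assumed
  to be set up elsewhere:

  .. code-block:: c

     #include <rte_lpm6.h>

     /* With 21-bit next hops, IDs above the old 8-bit limit are now valid. */
     static void
     add_wide_next_hop(struct rte_lpm6 *lpm, uint8_t ip[16])
     {
         uint32_t nh = 100000;   /* would not fit in the old 8-bit field */
         uint32_t found;

         if (rte_lpm6_add(lpm, ip, 64, nh) == 0 &&
                 rte_lpm6_lookup(lpm, ip, &found) == 0) {
             /* found == 100000 */
         }
     }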
56 * **Added VFIO hotplug support.**
58 Added hotplug support for VFIO in addition to the existing UIO support.
* **Added PowerPC support to PCI probing for vfio-pci devices.**

  Enabled sPAPR IOMMU based PCI probing for vfio-pci devices.
64 * **Kept consistent PMD batching behavior.**
66 Removed the limit of fm10k/i40e/ixgbe Tx burst size and vhost Rx/Tx burst size
  in order to support the same policy of "make a best effort to Rx/Tx pkts"
  for PMDs.
70 * **Updated the ixgbe base driver.**
72 Updated the ixgbe base driver, including the following changes:
74 * Add link block check for KR.
75 * Complete HW initialization even if SFP is not present.
76 * Add VF xcast promiscuous mode.
78 * **Added PowerPC support for i40e and its vector PMD.**
80 Enabled i40e PMD and its vector PMD by default in PowerPC.
82 * **Added VF max bandwidth setting in i40e.**
84 Enabled capability to set the max bandwidth for a VF in i40e.
86 * **Added VF TC min and max bandwidth setting in i40e.**
  Enabled capability to set the min and max allocated bandwidth for a TC on a
  VF in i40e.
91 * **Added TC strict priority mode setting on i40e.**
  There are two Tx scheduling modes supported for TCs by the i40e hardware:
  round robin mode and strict priority mode. By default, the round robin mode
  is used. It is now possible to change the Tx scheduling mode for a TC. This
  is a global setting on a physical port.
98 * **Added i40e dynamic device personalization support.**
100 * Added dynamic device personalization processing to i40e firmware.
102 * **Updated i40e driver to support MPLSoUDP/MPLSoGRE.**
  Updated the i40e PMD to support MPLSoUDP/MPLSoGRE with MPLSoUDP/MPLSoGRE
  supporting profiles, which can be programmed through the dynamic device
  personalization process.
108 * **Added Cloud Filter for QinQ steering to i40e.**
110 * Added a QinQ cloud filter on the i40e PMD, for steering traffic to a VM
    using both VLAN tags. Note that this feature is not supported in vector
    mode.
113 * **Updated mlx5 PMD.**
115 Updated the mlx5 driver, including the following changes:
117 * Added Generic flow API support for classification according to ether type.
  * Extended Generic flow API support for classification of IPv6 flows
    according to VTC flow, protocol and hop limit.
120 * Added Generic flow API support for FLAG action.
121 * Added Generic flow API support for RSS action.
122 * Added support for TSO for non-tunneled and VXLAN packets.
123 * Added support for hardware Tx checksum offloads for VXLAN packets.
124 * Added support for user space Rx interrupt mode.
125 * Improved ConnectX-5 single core and maximum performance.
127 * **Updated mlx4 PMD.**
129 Updated the mlx4 driver, including the following changes:
131 * Added support for Generic flow API basic flow items and actions.
132 * Added support for device removal event.
134 * **Updated the sfc_efx driver.**
136 * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP
137 pattern items with QUEUE action for ingress traffic.
139 * Added support for virtual functions (VFs).
141 * **Added LiquidIO network PMD.**
143 Added poll mode driver support for Cavium LiquidIO II server adapter VFs.
145 * **Added Atomic Rules Arkville PMD.**
147 Added a new poll mode driver for the Arkville family of
148 devices from Atomic Rules. The net/ark PMD supports line-rate
149 agnostic, multi-queue data movement on Arkville core FPGA instances.
151 * **Added support for NXP DPAA2 - FSLMC bus.**
153 Added the new bus "fslmc" driver for NXP DPAA2 devices. See the
154 "Network Interface Controller Drivers" document for more details of this new
157 * **Added support for NXP DPAA2 Network PMD.**
159 Added the new "dpaa2" net driver for NXP DPAA2 devices. See the
160 "Network Interface Controller Drivers" document for more details of this new
163 * **Added support for the Wind River Systems AVP PMD.**
  Added a new networking driver for the AVP device type. These devices are
166 specific to the Wind River Systems virtualization platforms.
168 * **Added vmxnet3 version 3 support.**
170 Added support for vmxnet3 version 3 which includes several
  performance enhancements such as configurable Tx data ring, receive data
  ring, and the ability to register memory regions.
174 * **Updated the TAP driver.**
176 Updated the TAP PMD to:
178 * Support MTU modification.
179 * Support packet type for Rx.
180 * Support segmented packets on Rx and Tx.
181 * Speed up Rx on TAP when no packets are available.
182 * Support capturing traffic from another netdevice.
183 * Dynamically change link status when the underlying interface state changes.
184 * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and
    TCP pattern items with DROP, QUEUE and PASSTHRU actions for ingress
    traffic.
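  A hedged Generic flow API sketch using only item and action types listed
  above, installing a rule that drops all ingress Ethernet traffic; the port
  number is a placeholder:

  .. code-block:: c

     #include <rte_flow.h>

     static struct rte_flow *
     install_drop_all_rule(void)
     {
         struct rte_flow_attr attr = { .ingress = 1 };
         struct rte_flow_item pattern[] = {
             { .type = RTE_FLOW_ITEM_TYPE_ETH },
             { .type = RTE_FLOW_ITEM_TYPE_END },
         };
         struct rte_flow_action actions[] = {
             { .type = RTE_FLOW_ACTION_TYPE_DROP },
             { .type = RTE_FLOW_ACTION_TYPE_END },
         };
         struct rte_flow_error err;

         /* Returns NULL and fills "err" if the PMD rejects the rule. */
         return rte_flow_create(0, &attr, pattern, actions, &err);
     }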
188 * **Added MTU feature support to Virtio and Vhost.**
190 Implemented new Virtio MTU feature in Vhost and Virtio:
192 * Add ``rte_vhost_mtu_get()`` API to Vhost library.
193 * Enable Vhost PMD's MTU get feature.
  * Get the max MTU value from the host in the Virtio PMD.
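  A minimal sketch of the new Vhost API; the wrapper is illustrative only:

  .. code-block:: c

     #include <rte_vhost.h>

     /* Retrieve the MTU negotiated by the guest for vhost device "vid".
      * Returns 0 on success and a negative value on failure (for example
      * when the MTU feature has not been negotiated).
      */
     static int
     get_guest_mtu(int vid, uint16_t *mtu)
     {
         return rte_vhost_mtu_get(vid, mtu);
     }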
196 * **Added interrupt mode support for virtio-user.**
198 Implemented Rxq interrupt mode and LSC support for virtio-user as a virtual
199 device. Supported cases:
201 * Rxq interrupt for virtio-user + vhost-user as the backend.
202 * Rxq interrupt for virtio-user + vhost-kernel as the backend.
203 * LSC interrupt for virtio-user + vhost-user as the backend.
205 * **Added event driven programming model library (rte_eventdev).**
207 This API introduces an event driven programming model.
209 In a polling model, lcores poll ethdev ports and associated
  Rx queues directly to look for a packet. By contrast, in an event
  driven model, lcores call the scheduler, which selects packets for
212 them based on programmer-specified criteria. The Eventdev library
213 adds support for an event driven programming model, which offers
214 applications automatic multicore scaling, dynamic load balancing,
215 pipelining, packet ingress order maintenance and
216 synchronization services to simplify application packet processing.
218 By introducing an event driven programming model, DPDK can support
219 both polling and event driven programming models for packet processing,
220 and applications are free to choose whatever model
221 (or combination of the two) best suits their needs.
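  A hedged sketch of an event-driven worker loop; the device, port and queue
  IDs are placeholders assumed to be configured elsewhere:

  .. code-block:: c

     #include <rte_eventdev.h>

     static void
     worker_loop(void)
     {
         struct rte_event ev[32];

         for (;;) {
             /* Ask the scheduler for work instead of polling NIC queues. */
             uint16_t n = rte_event_dequeue_burst(0, 0, ev, 32, 0);

             for (uint16_t i = 0; i < n; i++) {
                 /* process ev[i].mbuf ... */
                 ev[i].queue_id = 1;              /* next pipeline stage */
                 ev[i].op = RTE_EVENT_OP_FORWARD;
             }
             if (n > 0)
                 rte_event_enqueue_burst(0, 0, ev, n);
         }
     }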
223 * **Added Software Eventdev PMD.**
225 Added support for the software eventdev PMD. The software eventdev is a
226 software based scheduler device that implements the eventdev API. This
227 PMD allows an application to configure a pipeline using the eventdev
228 library, and run the scheduling workload on a CPU core.
230 * **Added Cavium OCTEONTX Eventdev PMD.**
232 Added the new octeontx ssovf eventdev driver for OCTEONTX devices. See the
233 "Event Device Drivers" document for more details on this new driver.
235 * **Added information metrics library.**
237 Added a library that allows information metrics to be added and updated
238 by producers, typically other libraries, for later retrieval by
239 consumers such as applications. It is intended to provide a
  reporting mechanism that is independent of other libraries such as ethdev.
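  A hedged producer-side sketch; the metric name and helper functions are
  placeholders:

  .. code-block:: c

     #include <rte_lcore.h>
     #include <rte_metrics.h>

     static int my_key;

     static void
     metrics_setup(void)
     {
         /* Reserve the shared storage and register a metric name once. */
         rte_metrics_init(rte_socket_id());
         my_key = rte_metrics_reg_name("my_app_counter");
     }

     static void
     publish_counter(uint64_t value)
     {
         /* RTE_METRICS_GLOBAL publishes a value not tied to any port. */
         if (my_key >= 0)
             rte_metrics_update_value(RTE_METRICS_GLOBAL, my_key, value);
     }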
243 * **Added bit-rate calculation library.**
245 Added a library that can be used to calculate device bit-rates. Calculated
246 bitrates are reported using the metrics library.
248 * **Added latency stats library.**
250 Added a library that measures packet latency. The collected statistics are
  jitter and latency. For latency, the minimum, average, and maximum values
  are measured.
254 * **Added NXP DPAA2 SEC crypto PMD.**
256 A new "dpaa2_sec" hardware based crypto PMD for NXP DPAA2 devices has been
  added. See the "Crypto Device Drivers" document for more details on this
  new driver.
260 * **Updated the Cryptodev Scheduler PMD.**
  * Added a packet-size based distribution mode, which distributes the enqueued
    crypto operations between two slaves, based on their data lengths.
264 * Added fail-over scheduling mode, which enqueues crypto operations to a
265 primary slave first. Then, any operation that cannot be enqueued is
266 enqueued to a secondary slave.
  * Added mode-specific option support, so that each scheduling mode can now
    be configured individually via the new API.
270 * **Updated the QAT PMD.**
272 The QAT PMD has been updated with additional support for:
274 * AES DOCSIS BPI algorithm.
275 * DES DOCSIS BPI algorithm.
276 * ZUC EEA3/EIA3 algorithms.
278 * **Updated the AESNI MB PMD.**
280 The AESNI MB PMD has been updated with additional support for:
282 * AES DOCSIS BPI algorithm.
284 * **Updated the OpenSSL PMD.**
286 The OpenSSL PMD has been updated with additional support for:
288 * DES DOCSIS BPI algorithm.
294 * **l2fwd-keepalive: Fixed unclean shutdowns.**
296 Added clean shutdown to l2fwd-keepalive so that it can free up
297 stale resources used for inter-process communication.
303 * **LSC interrupt doesn't work for virtio-user + vhost-kernel.**
  The LSC interrupt cannot be detected when the backend tap device is set up
  or down, as there is currently no way to monitor such an event.
312 * The LPM ``next_hop`` field is extended from 8 bits to 21 bits for IPv6
313 while keeping ABI compatibility.
315 * **Reworked rte_ring library.**
317 The rte_ring library has been reworked and updated. The following changes
318 have been made to it:
320 * Removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``.
321 * Removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``.
322 * Removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``.
323 * Removed the function ``rte_ring_set_water_mark`` as part of a general
324 removal of watermarks support in the library.
325 * Added an extra parameter to the burst/bulk enqueue functions to
326 return the number of free spaces in the ring after enqueue. This can
327 be used by an application to implement its own watermark functionality.
328 * Added an extra parameter to the burst/bulk dequeue functions to return
    the number of elements remaining in the ring after dequeue (see the sketch
    below).
330 * Changed the return value of the enqueue and dequeue bulk functions to
331 match that of the burst equivalents. In all cases, ring functions which
332 operate on multiple packets now return the number of elements enqueued
333 or dequeued, as appropriate. The updated functions are:
335 - ``rte_ring_mp_enqueue_bulk``
336 - ``rte_ring_sp_enqueue_bulk``
337 - ``rte_ring_enqueue_bulk``
338 - ``rte_ring_mc_dequeue_bulk``
339 - ``rte_ring_sc_dequeue_bulk``
340 - ``rte_ring_dequeue_bulk``
  NOTE: the above functions all have different parameters as well as
  different return values, due to the other changes listed above. This
  means that all instances of these functions in existing code will be
  flagged by the compiler. The return value usage should be checked
  while fixing the compiler errors caused by the extra parameter.
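  A minimal sketch of the updated burst calls with the extra output
  parameters described above:

  .. code-block:: c

     #include <rte_ring.h>

     static void
     move_objects(struct rte_ring *r, void **objs, unsigned int n)
     {
         unsigned int free_space, available;

         /* The extra argument reports how much room is left after enqueue,
          * which an application can use as a watermark replacement.
          */
         unsigned int sent = rte_ring_enqueue_burst(r, objs, n, &free_space);
         if (free_space < 64) {
             /* ring nearly full: apply back-pressure here */
         }

         /* Likewise, "available" reports entries still left after dequeue. */
         unsigned int got = rte_ring_dequeue_burst(r, objs, n, &available);

         (void)sent; (void)got; (void)available;
     }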
348 * **Reworked rte_vhost library.**
  The rte_vhost library has been reworked to make it generic enough so that
  users can build other vhost-user drivers on top of it. To achieve this,
  the following changes have been made:
354 * The following vhost-pmd APIs are removed:
356 * ``rte_eth_vhost_feature_disable``
357 * ``rte_eth_vhost_feature_enable``
358 * ``rte_eth_vhost_feature_get``
360 * The vhost API ``rte_vhost_driver_callback_register(ops)`` is reworked to
361 be per vhost-user socket file. Thus, it takes one more argument:
362 ``rte_vhost_driver_callback_register(path, ops)``.
  * The vhost API ``rte_vhost_get_queue_num`` is deprecated;
    ``rte_vhost_get_vring_num`` should be used instead.
  * The following macros have been removed from ``rte_virtio_net.h``:
  * The following net-specific header file inclusions have been removed from
    ``rte_virtio_net.h``:
375 * ``linux/virtio_net.h``
  * The vhost struct ``virtio_net_device_ops`` is renamed to
    ``vhost_device_ops``.
383 * The vhost API ``rte_vhost_driver_session_start`` is removed. Instead,
    ``rte_vhost_driver_start`` should be used, and there is no need to create
    a thread to call it.
  * The vhost public header file ``rte_virtio_net.h`` is renamed to
    ``rte_vhost.h``.
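  A hedged sketch of the per-socket flow after this rework; the socket path
  and callbacks are placeholders:

  .. code-block:: c

     #include <rte_vhost.h>

     static int
     on_new_device(int vid)
     {
         (void)vid;
         return 0;
     }

     static void
     on_destroy_device(int vid)
     {
         (void)vid;
     }

     static const struct vhost_device_ops ops = {
         .new_device = on_new_device,
         .destroy_device = on_destroy_device,
     };

     static int
     start_vhost_socket(void)
     {
         const char *path = "/tmp/vhost-user.sock";

         if (rte_vhost_driver_register(path, 0) != 0)
             return -1;
         /* Callbacks are now registered per vhost-user socket file. */
         rte_vhost_driver_callback_register(path, &ops);
         /* rte_vhost_driver_start() replaces the old session-start call;
          * the application no longer needs to create a dedicated thread.
          */
         return rte_vhost_driver_start(path);
     }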
394 * **Reorganized the mbuf structure.**
396 The order and size of the fields in the ``mbuf`` structure changed,
397 as described in the `New Features`_ section.
399 * The ``rte_cryptodev_info.sym`` structure has a new field ``max_nb_sessions_per_qp``
400 to support drivers which may support a limited number of sessions per queue_pair.
406 * KNI vhost support has been removed.
408 * The dpdk_qat sample application has been removed.
410 Shared Library Versions
411 -----------------------
413 The libraries prepended with a plus sign were incremented in this version.
418 + librte_bitratestats.so.1
421 librte_cryptodev.so.2
422 librte_distributor.so.1
425 + librte_eventdev.so.1
431 + librte_latencystats.so.1
436 + librte_metrics.so.1
455 * Intel(R) platforms with Intel(R) NICs combinations
459 * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
460 * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
461 * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
462 * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
463 * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
464 * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
465 * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
466 * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
473 * Red Hat Enterprise Linux Server release 7.3
474 * SUSE Enterprise Linux 12
481 * Intel(R) 82599ES 10 Gigabit Ethernet Controller
483 * Firmware version: 0x61bf0001
484 * Device id (pf/vf): 8086:10fb / 8086:10ed
485 * Driver version: 4.0.1-k (ixgbe)
487 * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T
489 * Firmware version: 0x800001cf
490 * Device id (pf/vf): 8086:15ad / 8086:15a8
491 * Driver version: 4.2.5 (ixgbe)
493 * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)
495 * Firmware version: 5.05
496 * Device id (pf/vf): 8086:1572 / 8086:154c
497 * Driver version: 1.5.23 (i40e)
499 * Intel(R) Ethernet Converged Network Adapter X710-DA2 (2x10G)
501 * Firmware version: 5.05
502 * Device id (pf/vf): 8086:1572 / 8086:154c
503 * Driver version: 1.5.23 (i40e)
505 * Intel(R) Ethernet Converged Network Adapter XL710-QDA1 (1x40G)
507 * Firmware version: 5.05
508 * Device id (pf/vf): 8086:1584 / 8086:154c
509 * Driver version: 1.5.23 (i40e)
511 * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)
513 * Firmware version: 5.05
514 * Device id (pf/vf): 8086:1583 / 8086:154c
515 * Driver version: 1.5.23 (i40e)
517 * Intel(R) Corporation I350 Gigabit Network Connection
519 * Firmware version: 1.48, 0x800006e7
520 * Device id (pf/vf): 8086:1521 / 8086:1520
521 * Driver version: 5.2.13-k (igb)
523 * Intel(R) platforms with Mellanox(R) NICs combinations
527 * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
528 * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
529 * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
530 * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
534 * Red Hat Enterprise Linux Server release 7.3 (Maipo)
535 * Red Hat Enterprise Linux Server release 7.2 (Maipo)
540 * MLNX_OFED: 4.0-2.0.0.0
544 * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)
546 * Host interface: PCI Express 3.0 x8
547 * Device ID: 15b3:1007
548 * Firmware version: 2.40.5030
550 * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
552 * Host interface: PCI Express 3.0 x8
553 * Device ID: 15b3:1013
554 * Firmware version: 12.18.2000
556 * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
558 * Host interface: PCI Express 3.0 x8
559 * Device ID: 15b3:1013
560 * Firmware version: 12.18.2000
562 * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
564 * Host interface: PCI Express 3.0 x8
565 * Device ID: 15b3:1013
566 * Firmware version: 12.18.2000
568 * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
570 * Host interface: PCI Express 3.0 x8
571 * Device ID: 15b3:1013
572 * Firmware version: 12.18.2000
574 * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)
576 * Host interface: PCI Express 3.0 x8
577 * Device ID: 15b3:1013
578 * Firmware version: 12.18.2000
580 * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
582 * Host interface: PCI Express 3.0 x16
583 * Device ID: 15b3:1013
584 * Firmware version: 12.18.2000
586 * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)
588 * Host interface: PCI Express 3.0 x8
589 * Device ID: 15b3:1013
590 * Firmware version: 12.18.2000
592 * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
594 * Host interface: PCI Express 3.0 x8
595 * Device ID: 15b3:1013
596 * Firmware version: 12.18.2000
598 * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)
600 * Host interface: PCI Express 3.0 x16
601 * Device ID: 15b3:1013
602 * Firmware version: 12.18.2000
604 * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)
606 * Host interface: PCI Express 3.0 x16
607 * Device ID: 15b3:1013
608 * Firmware version: 12.18.2000
610 * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
612 * Host interface: PCI Express 3.0 x16
613 * Device ID: 15b3:1013
614 * Firmware version: 12.18.2000
616 * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)
618 * Host interface: PCI Express 3.0 x8
619 * Device ID: 15b3:1015
620 * Firmware version: 14.18.2000
622 * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
624 * Host interface: PCI Express 3.0 x8
625 * Device ID: 15b3:1015
626 * Firmware version: 14.18.2000
628 * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
630 * Host interface: PCI Express 3.0 x16
631 * Device ID: 15b3:1017
632 * Firmware version: 16.19.1200
634 * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)
636 * Host interface: PCI Express 4.0 x16
637 * Device ID: 15b3:1019
638 * Firmware version: 16.19.1200
640 * IBM(R) Power8(R) with Mellanox(R) NICs combinations
644 * Processor: POWER8E (raw), AltiVec supported
645 * type-model: 8247-22L
646 * Firmware FW810.21 (SV810_108)
648 * OS: Ubuntu 16.04 LTS PPC le
650 * MLNX_OFED: 4.0-2.0.0.0
654 * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
656 * Host interface: PCI Express 3.0 x8
657 * Device ID: 15b3:1013
658 * Firmware version: 12.18.2000
660 * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
662 * Host interface: PCI Express 3.0 x8
663 * Device ID: 15b3:1013
664 * Firmware version: 12.18.2000
666 * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
668 * Host interface: PCI Express 3.0 x8
669 * Device ID: 15b3:1013
670 * Firmware version: 12.18.2000
672 * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
674 * Host interface: PCI Express 3.0 x8
675 * Device ID: 15b3:1013
676 * Firmware version: 12.18.2000
678 * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)
680 * Host interface: PCI Express 3.0 x8
681 * Device ID: 15b3:1013
682 * Firmware version: 12.18.2000
684 * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
686 * Host interface: PCI Express 3.0 x16
687 * Device ID: 15b3:1013
688 * Firmware version: 12.18.2000
690 * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)
692 * Host interface: PCI Express 3.0 x8
693 * Device ID: 15b3:1013
694 * Firmware version: 12.18.2000
696 * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
698 * Host interface: PCI Express 3.0 x8
699 * Device ID: 15b3:1013
700 * Firmware version: 12.18.2000
702 * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)
704 * Host interface: PCI Express 3.0 x16
705 * Device ID: 15b3:1013
706 * Firmware version: 12.18.2000
708 * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)
710 * Host interface: PCI Express 3.0 x16
711 * Device ID: 15b3:1013
712 * Firmware version: 12.18.2000
714 * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
716 * Host interface: PCI Express 3.0 x16
717 * Device ID: 15b3:1013
718 * Firmware version: 12.18.2000