.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2017 The DPDK contributors

* **Added support for representing buses in EAL.**

  The ``rte_bus`` structure was introduced into the EAL. This allows
  devices to be represented by the buses they are connected to. A new bus can
  be added to DPDK by extending the ``rte_bus`` structure and implementing the
  scan and probe functions. Once a new bus is registered using the provided
  APIs, new devices can be detected and initialized using bus scan and probe.

  With this change, devices other than PCI or VDEV type can be represented
  in the DPDK framework.
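
  As an illustrative sketch of how a bus driver plugs into this framework
  (the callback bodies are placeholders and ``my_bus`` is a hypothetical
  name; see ``rte_bus.h`` for the exact structure layout)::

      #include <rte_bus.h>

      /* Enumerate devices present on the bus. */
      static int
      my_bus_scan(void)
      {
              return 0; /* placeholder: add discovered devices to a list */
      }

      /* Match scanned devices against registered drivers. */
      static int
      my_bus_probe(void)
      {
              return 0; /* placeholder */
      }

      static struct rte_bus my_bus = {
              .scan = my_bus_scan,
              .probe = my_bus_probe,
      };

      RTE_REGISTER_BUS(my_bus, my_bus);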

* **Added generic EAL API for I/O device memory read/write operations.**

  This API introduces 8-bit, 16-bit, 32-bit and 64-bit I/O device
  memory read/write operations along with "relaxed" versions.

  Weakly-ordered architectures like ARM need an additional I/O barrier for
  device memory read/write access over the PCI bus. By introducing this EAL
  abstraction for I/O device memory access, drivers can access
  I/O device memory in an architecture-agnostic manner. The relaxed versions
  omit the additional I/O memory barrier, which is useful when accessing
  the device registers of integrated controllers, where access is implicitly
  strongly ordered with respect to memory access.
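
  As a hedged sketch of the two flavors (accessors from ``rte_io.h``; the
  register pointers here are hypothetical)::

      #include <rte_io.h>

      /* Ordered write: includes the I/O barrier that weakly-ordered CPUs
       * (e.g. ARM) need for device memory behind a PCI bus. */
      static void
      ring_doorbell(volatile void *db_reg, uint32_t tail)
      {
              rte_write32(tail, db_reg);
      }

      /* Relaxed read: no extra barrier; suitable for registers of
       * integrated controllers that are already strongly ordered. */
      static uint32_t
      read_status(const volatile void *status_reg)
      {
              return rte_read32_relaxed(status_reg);
      }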

* **Added generic flow API (rte_flow).**

  This API provides a generic means to configure hardware to match specific
  ingress or egress traffic, alter its behavior and query related counters
  according to any number of user-defined rules.

  In order to expose a single interface with unambiguous behavior that is
  common to all poll-mode drivers (PMDs), the ``rte_flow`` API is slightly
  higher-level than the legacy filtering framework, which it encompasses and
  supersedes (including all functions and filter types).

  See the :doc:`../prog_guide/rte_flow` documentation for more information.
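
  As a condensed sketch, a rule steering all ingress UDP-over-IPv4 traffic
  to Rx queue 1 could be built as follows (error handling abbreviated;
  ``port_id`` is assumed to be a configured port)::

      #include <rte_flow.h>

      static struct rte_flow *
      steer_udp_to_queue(uint8_t port_id)
      {
              struct rte_flow_attr attr = { .ingress = 1 };
              struct rte_flow_item pattern[] = {
                      { .type = RTE_FLOW_ITEM_TYPE_ETH },
                      { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
                      { .type = RTE_FLOW_ITEM_TYPE_UDP },
                      { .type = RTE_FLOW_ITEM_TYPE_END },
              };
              struct rte_flow_action_queue queue = { .index = 1 };
              struct rte_flow_action actions[] = {
                      { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
                      { .type = RTE_FLOW_ACTION_TYPE_END },
              };
              struct rte_flow_error err;

              /* Check PMD support first, then create the rule. */
              if (rte_flow_validate(port_id, &attr, pattern, actions, &err))
                      return NULL;
              return rte_flow_create(port_id, &attr, pattern, actions, &err);
      }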

* **Added firmware version get API.**

  Added a new function ``rte_eth_dev_fw_version_get()`` to fetch the firmware
  version for a given device.
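
  A minimal usage sketch (in this release the port identifier is a
  ``uint8_t``; the buffer size is an arbitrary choice)::

      #include <stdio.h>
      #include <rte_ethdev.h>

      static void
      print_fw_version(uint8_t port_id)
      {
              char fw_version[128];

              /* Returns 0 on success; fills the buffer with a
               * driver-formatted version string. */
              if (rte_eth_dev_fw_version_get(port_id, fw_version,
                                             sizeof(fw_version)) == 0)
                      printf("port %u firmware: %s\n", port_id, fw_version);
      }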

* **Added APIs for MACsec offload support to the ixgbe PMD.**

  Six new APIs have been added to the ixgbe PMD for MACsec offload support.
  The declarations for the APIs can be found in ``rte_pmd_ixgbe.h``.

* **Added I219 NICs support.**

  Added support for Intel I219 1GbE NICs.

* **Added VF Daemon (VFD) for i40e. - EXPERIMENTAL**

  This is an EXPERIMENTAL feature to enhance the capability of the DPDK PF, as
  many VF management features are not currently supported by the kernel PF
  driver. Some new private APIs are implemented directly in the PMD without an
  abstraction layer. They can be used directly by users who need them.

  The new APIs to control VFs directly from the PF include:

  * Set VF MAC anti-spoofing.
  * Set VF VLAN anti-spoofing.
  * Set VF unicast promiscuous mode.
  * Set VF multicast promiscuous mode.
  * Set VF VLAN stripping.
  * Set VF broadcast mode.

  VFD also includes VF-to-PF mailbox message management from an application.
  When the PF receives mailbox messages from the VF, the PF should call the
  callback provided by the application to determine whether they are
  permitted.

  As an EXPERIMENTAL feature, please be aware it can be changed or even
  removed without prior notice.

* **Updated the i40e base driver.**

  Updated the i40e base driver, including the following changes:

  * Replace existing legacy ``memcpy()`` calls with ``i40e_memcpy()`` calls.
  * Use the ``BIT()`` macro instead of bit fields.
  * Add a "clear all WoL filters" implementation.
  * Add broadcast promiscuous control per VLAN.
  * Remove the unused ``X722_SUPPORT`` and ``I40E_NDIS_SUPPORT`` macros.

* **Updated the enic driver.**

  * Set new Rx checksum flags in mbufs to indicate unknown, good or bad
    checksums.
  * Fix set/remove of MAC addresses. Allow up to 64 addresses per device.
  * Enable TSO on outer headers.

* **Added Solarflare libefx-based network PMD.**

  Added a new network PMD which supports the Solarflare SFN7xxx and SFN8xxx
  families of 10/40 Gbps adapters.

* **Updated the mlx4 driver.**

  * Addressed a few bugs.

* **Added support for Mellanox ConnectX-5 adapters (mlx5).**

  Added support for the Mellanox ConnectX-5 family of 10/25/40/50/100 Gbps
  adapters to the existing mlx5 PMD.

* **Updated the mlx5 driver.**

  * Improved Tx performance by using vector logic.
  * Improved RSS balancing when the number of queues is not a power of two.
  * Generic flow API support for Ethernet, IPv4, IPv6, UDP, TCP, VLAN and
    VXLAN pattern items with DROP and QUEUE actions.
  * Support for extended statistics.
  * Addressed several data path bugs.
  * As of MLNX_OFED 4.0-1.0.1.0, the Toeplitz RSS hash function is no longer
    symmetric, for consistency with other PMDs.

* **Added virtio-user with vhost-kernel as another exception path.**

  Previously, we upstreamed a virtual device, virtio-user, with vhost-user as
  the backend as a way of enabling IPC (Inter-Process Communication) and user
  space container networking.

  Virtio-user with vhost-kernel as the backend is a solution for the exception
  path, such as KNI, which exchanges packets with the kernel networking stack.
  This solution is very promising in:

  * Maintenance: vhost and vhost-net (kernel) are upstreamed and extensively
    tested modules.
  * Features: vhost-net is designed to be a networking solution, which has
    lots of networking-related features, like multi-queue, TSO and multi-seg
    mbuf support.
  * Performance: similar to KNI, this solution would use one or more
    kthreads to send/receive packets from user space DPDK applications,
    which has little impact on the user space polling thread (except that
    it might enter kernel space to wake up those kthreads if necessary).

* **Added virtio Rx interrupt support.**

  Added a feature to enable Rx interrupt mode for virtio PCI net devices
  bound to VFIO (noiommu mode) and driven by the virtio PMD.

  With this feature, the virtio PMD can switch between polling mode and
  interrupt mode, to achieve the best performance while at the same time
  saving power. It can work on both legacy and modern virtio devices. In this
  mode, each ``rxq`` is mapped to an exclusive MSI-X interrupt.

  See the :ref:`Virtio Interrupt Mode <virtio_interrupt_mode>` documentation
  for more information.
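
  From the application side, the switch is driven through the generic ethdev
  Rx-interrupt API rather than anything virtio-specific; a sketch (the port
  must be configured with ``intr_conf.rxq = 1``)::

      #include <rte_ethdev.h>

      static void
      wait_for_traffic(uint8_t port_id, uint16_t queue_id)
      {
              /* Arm the per-queue interrupt before sleeping ... */
              rte_eth_dev_rx_intr_enable(port_id, queue_id);

              /* ... block on the interrupt event here, e.g. via an epoll
               * fd registered with rte_eth_dev_rx_intr_ctl_q() ... */

              /* ... then disarm it and return to pure polling. */
              rte_eth_dev_rx_intr_disable(port_id, queue_id);
      }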

* **Added ARMv8 crypto PMD.**

  A new crypto PMD has been added, which provides combined mode cryptographic
  operations optimized for ARMv8 processors. The driver can be used to enhance
  performance in processing chained operations such as cipher + HMAC.

* **Updated the QAT PMD.**

  The QAT PMD has been updated with additional support for:

  * Scatter-gather lists (SGL).

* **Updated the AESNI MB PMD.**

  * The Intel(R) Multi Buffer Crypto for IPsec library used in the
    AESNI MB PMD has been moved to a new repository on GitHub.
  * Support has been added for single operations (cipher only and
    authentication only).

* **Updated the AES-NI GCM PMD.**

  The AES-NI GCM PMD was migrated from the Multi Buffer library to the ISA-L
  library. The migration entailed adding additional support for:

  * 256-bit cipher keys.
  * Out-of-place processing.
  * Scatter-gather support for chained mbufs (out-of-place only; the
    destination mbuf must be contiguous).

* **Added crypto performance test application.**

  Added a new performance test application for measuring performance
  parameters of PMDs available in the crypto tree.

* **Added Elastic Flow Distributor library (rte_efd).**

  Added a new library which uses perfect hashing to determine a target/value
  for a given incoming flow key.

  The library does not store the key itself for lookup operations, and
  therefore lookup performance is not dependent on the key size. Also, the
  target/value can be any arbitrary value (8 bits by default). Finally, the
  storage requirement is much smaller than a hash-based flow table, and
  therefore it can better fit in CPU cache and scale to millions of flow
  keys.

  See the :ref:`Elastic Flow Distributor Library <Efd_Library>` documentation
  in the Programmer's Guide for more information.
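
  A hedged sketch of the workflow (parameter order follows ``rte_efd.h``;
  the table name, sizes and key here are arbitrary examples, and note that
  a lookup always returns *some* value, which is meaningful only for keys
  that were actually inserted)::

      #include <rte_efd.h>

      static void
      efd_example(unsigned int socket_id)
      {
              struct rte_efd_table *table;
              uint64_t flow_key = 0x0102030405060708ULL;
              efd_value_t target = 3; /* e.g. a destination core or queue */

              table = rte_efd_create("flow_table", 1024, sizeof(flow_key),
                                     1ULL << socket_id, socket_id);
              if (table == NULL)
                      return;

              /* Insert (or update) the key -> value binding ... */
              rte_efd_update(table, socket_id, &flow_key, target);

              /* ... and look it up; the key itself is never stored. */
              efd_value_t v = rte_efd_lookup(table, socket_id, &flow_key);
              (void)v;
      }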

* **net/virtio: Fixed multiple process support.**

  Fixed a few regressions introduced in recent releases that broke the virtio
  multiple process support.

* **examples/ethtool: Fixed crash with non-PCI devices.**

  Fixed an issue where querying a non-PCI device dereferenced non-existent
  PCI data, resulting in a segmentation fault.

* **Moved five APIs for VF management from the ethdev to the ixgbe PMD.**

  The following five APIs for VF management from the PF have been removed from
  the ethdev, renamed, and added to the ixgbe PMD::

      rte_eth_dev_set_vf_rate_limit()
      rte_eth_dev_set_vf_rx()
      rte_eth_dev_set_vf_rxmode()
      rte_eth_dev_set_vf_tx()
      rte_eth_dev_set_vf_vlan_filter()

  The APIs have been renamed to the following::

      rte_pmd_ixgbe_set_vf_rate_limit()
      rte_pmd_ixgbe_set_vf_rx()
      rte_pmd_ixgbe_set_vf_rxmode()
      rte_pmd_ixgbe_set_vf_tx()
      rte_pmd_ixgbe_set_vf_vlan_filter()

  The declarations for the APIs can be found in ``rte_pmd_ixgbe.h``.
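
  A migration sketch for one call site (the renamed functions keep their
  parameters; only the prefix and header change — consult
  ``rte_pmd_ixgbe.h`` for the exact prototypes)::

      #include <rte_pmd_ixgbe.h>

      static int
      enable_vf_vlan(uint8_t port_id, uint16_t vlan_id, uint64_t vf_mask)
      {
              /* Previously:
               * rte_eth_dev_set_vf_vlan_filter(port_id, vlan_id,
               *                                vf_mask, 1); */
              return rte_pmd_ixgbe_set_vf_vlan_filter(port_id, vlan_id,
                                                      vf_mask, 1);
      }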

Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.

     librte_cryptodev.so.2
     librte_distributor.so.1

This release has been tested with the below list of CPU/device/firmware/OS
combinations. Each section describes a different set of combinations.

* Intel(R) platforms with Mellanox(R) NICs combinations

  * Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
  * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
  * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz

  * Red Hat Enterprise Linux 7.2
  * SUSE Enterprise Linux 12

  * MLNX_OFED: 4.0-1.0.1.0

  * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1007
    * Firmware version: 2.40.5030

  * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1015
    * Firmware version: 14.18.1000

  * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1015
    * Firmware version: 14.18.1000

  * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1017
    * Firmware version: 16.18.1000

  * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

    * Host interface: PCI Express 4.0 x16
    * Device ID: 15b3:1019
    * Firmware version: 16.18.1000

* IBM(R) Power8(R) with Mellanox(R) NICs combinations

  * Processor: POWER8E (raw), AltiVec supported

    * type-model: 8247-22L
    * Firmware FW810.21 (SV810_108)

  * OS: Ubuntu 16.04 LTS PPC le

  * MLNX_OFED: 4.0-1.0.1.0

  * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1013
    * Firmware version: 12.18.1000

  * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1015
    * Firmware version: 14.18.1000

  * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

    * Host interface: PCI Express 3.0 x8
    * Device ID: 15b3:1015
    * Firmware version: 14.18.1000

  * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:1017
    * Firmware version: 16.18.1000

* Intel(R) platforms with Intel(R) NICs combinations

  * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
  * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
  * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
  * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
  * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
  * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
  * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz

  * Red Hat Enterprise Linux Server release 7.3
  * SUSE Enterprise Linux 12

  * Intel(R) 82599ES 10 Gigabit Ethernet Controller

    * Firmware version: 0x61bf0001
    * Device ID (pf/vf): 8086:10fb / 8086:10ed
    * Driver version: 4.0.1-k (ixgbe)

  * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

    * Firmware version: 0x800001cf
    * Device ID (pf/vf): 8086:15ad / 8086:15a8
    * Driver version: 4.2.5 (ixgbe)

  * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

    * Firmware version: 5.05
    * Device ID (pf/vf): 8086:1572 / 8086:154c
    * Driver version: 1.5.23 (i40e)

  * Intel(R) Ethernet Converged Network Adapter X710-DA2 (2x10G)

    * Firmware version: 5.05
    * Device ID (pf/vf): 8086:1572 / 8086:154c
    * Driver version: 1.5.23 (i40e)

  * Intel(R) Ethernet Converged Network Adapter XL710-QDA1 (1x40G)

    * Firmware version: 5.05
    * Device ID (pf/vf): 8086:1584 / 8086:154c
    * Driver version: 1.5.23 (i40e)

  * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2x40G)

    * Firmware version: 5.05
    * Device ID (pf/vf): 8086:1583 / 8086:154c
    * Driver version: 1.5.23 (i40e)

  * Intel(R) Corporation I350 Gigabit Network Connection

    * Firmware version: 1.48, 0x800006e7
    * Device ID (pf/vf): 8086:1521 / 8086:1520
    * Driver version: 5.2.13-k (igb)