..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2016 The DPDK contributors
* **Added software parser for packet type.**

  * Added a new function ``rte_pktmbuf_read()`` to read the packet data from an
    mbuf chain, linearizing if required.
  * Added a new function ``rte_net_get_ptype()`` to parse an Ethernet packet
    in an mbuf chain and retrieve its packet type in software.
  * Added new functions ``rte_get_ptype_*()`` to dump a packet type as a string.
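  As a minimal sketch, assuming a valid ``struct rte_mbuf *m`` received from
  any PMD (function and buffer names are illustrative, not part of the API):

  .. code-block:: c

     #include <stdio.h>
     #include <rte_mbuf.h>
     #include <rte_net.h>

     static void
     dump_ptype(struct rte_mbuf *m)
     {
         struct rte_net_hdr_lens hdr_lens;
         char name[256], first_bytes[32];
         const void *data;
         uint32_t ptype;

         /* Read the first bytes of the packet, linearizing into
          * first_bytes[] only if the mbuf chain is segmented.
          */
         data = rte_pktmbuf_read(m, 0, sizeof(first_bytes), first_bytes);
         if (data == NULL)
             return; /* packet shorter than the requested length */

         /* Parse the headers in software and get the packet type. */
         ptype = rte_net_get_ptype(m, &hdr_lens, RTE_PTYPE_ALL_MASK);

         /* Dump the packet type as a string. */
         rte_get_ptype_name(ptype, name, sizeof(name));
         printf("ptype: %s (l3_len=%u)\n", name, hdr_lens.l3_len);
     }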
* **Improved offloads support in mbuf.**

  * Added a new function ``rte_raw_cksum_mbuf()`` to process the checksum of
    data embedded in an mbuf chain.
  * Added new Rx checksum flags in mbufs to describe more states: unknown,
    good, bad, or not present (useful for virtual drivers). This modification
    was done for IP and L4.
  * Added a new Rx LRO mbuf flag, used when packets are coalesced. This
    flag indicates that the segment size of the original packets is known.
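  For illustration, a minimal sketch of how an application might inspect the
  new IP checksum states on a received mbuf (the L4 flags follow the same
  pattern):

  .. code-block:: c

     #include <rte_mbuf.h>

     static void
     check_rx_cksum(const struct rte_mbuf *m)
     {
         /* The IP checksum state is a multi-bit field, so mask first. */
         switch (m->ol_flags & PKT_RX_IP_CKSUM_MASK) {
         case PKT_RX_IP_CKSUM_GOOD:
             /* Checksum verified by the driver/hardware. */
             break;
         case PKT_RX_IP_CKSUM_BAD:
             /* Checksum is wrong; drop or count the packet. */
             break;
         case PKT_RX_IP_CKSUM_NONE:
             /* Checksum not present (e.g. virtual drivers). */
             break;
         default: /* PKT_RX_IP_CKSUM_UNKNOWN */
             /* Driver could not tell; verify in software if needed. */
             break;
         }
     }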
* **Added vhost-user dequeue zero copy support.**

  The copy in the dequeue path is avoided in order to improve the performance.
  In the VM2VM case, the boost is quite impressive. The bigger the packet size,
  the bigger the performance boost you may get. However, for the VM2NIC case,
  there are some limitations, so the boost is not as impressive as in the VM2VM
  case. It may even drop quite a bit for small packets.

  For that reason, this feature is disabled by default. It can be enabled when
  the ``RTE_VHOST_USER_DEQUEUE_ZERO_COPY`` flag is set, as sketched below.
  Check the VHost section of the Programming Guide for more information.
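  As a minimal sketch, assuming a vhost-user socket path of
  ``/tmp/vhost.sock`` (the path is a placeholder):

  .. code-block:: c

     #include <rte_virtio_net.h>

     static int
     register_zero_copy_vhost(void)
     {
         /* Passing the flag at registration time enables dequeue
          * zero copy for this vhost-user socket only.
          */
         return rte_vhost_driver_register("/tmp/vhost.sock",
                 RTE_VHOST_USER_DEQUEUE_ZERO_COPY);
     }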
* **Added vhost-user indirect descriptors support.**

  If the indirect descriptor feature is enabled, each packet sent by the guest
  will take exactly one slot in the enqueue virtqueue. Without this feature, as
  in the current version, even 64-byte packets take two slots with the Virtio
  PMD on the guest.

  The main impact is better performance for 0% packet loss use cases, as it
  behaves as if the virtqueue size was enlarged, so more packets can be buffered
  in the case of system perturbations. On the downside, small performance
  degradations were measured when running micro-benchmarks.
* **Added vhost PMD xstats.**

  Added extended statistics to the vhost PMD from a per-port perspective.
* **Supported offloads with virtio.**

  Added support for the following offloads in virtio:
* **Added virtio NEON support for ARM.**

  Added NEON support for ARM-based virtio.
* **Updated the ixgbe base driver.**

  Updated the ixgbe base driver, including the following changes:

  * Added X550em_a 10G PHY support.
  * Added support for flow control auto negotiation for X550em_a 1G PHY.
  * Added X550em_a FW ALEF support.
  * Increased mailbox version to ``ixgbe_mbox_api_13``.
  * Added two MAC operations for Hyper-V support.
* **Added APIs for VF management to the ixgbe PMD.**

  Eight new APIs have been added to the ixgbe PMD for VF management from the PF.
  The declarations for the APIs can be found in ``rte_pmd_ixgbe.h``, as
  sketched below.
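  For illustration, a minimal sketch assuming the declarations in
  ``rte_pmd_ixgbe.h`` (the port and VF indices, and the helper name, are
  placeholders):

  .. code-block:: c

     #include <rte_pmd_ixgbe.h>

     static void
     configure_vf(uint8_t port, uint16_t vf)
     {
         /* Enable MAC and VLAN anti-spoofing for the given VF. */
         rte_pmd_ixgbe_set_vf_mac_anti_spoof(port, vf, 1);
         rte_pmd_ixgbe_set_vf_vlan_anti_spoof(port, vf, 1);

         /* Enable VLAN stripping on all queues of the VF. */
         rte_pmd_ixgbe_set_vf_vlan_stripq(port, vf, 1);
     }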
* **Updated the enic driver.**

  * Added an update to use interrupts for link status checking instead of
    polling.
  * Added more flow director modes on UCS Blade with firmware version >= 2.0(13e).
  * Added full support for MTU updates.
  * Added support for the ``rte_eth_rx_queue_count`` function (see the sketch
    after this list).
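  As a brief illustration of the generic ethdev call now supported by enic
  (queue 0 and the helper name are illustrative):

  .. code-block:: c

     #include <rte_ethdev.h>

     /* Return the number of used descriptors on Rx queue 0 of a port. */
     static int
     used_rx_descs(uint8_t port_id)
     {
         return rte_eth_rx_queue_count(port_id, 0);
     }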
* **Updated the mlx5 driver.**

  * Added support for RSS hash results.
  * Added several performance improvements.
  * Added several bug fixes.
* **Updated the QAT PMD.**

  The QAT PMD was updated with additional support for:

  * MD5_HMAC algorithm.
  * SHA224-HMAC algorithm.
  * SHA384-HMAC algorithm.
  * KASUMI (F8 and F9) algorithm.
* **Added openssl PMD.**

  A new crypto PMD has been added, which provides several ciphering and hashing
  algorithms. All cryptography operations use the OpenSSL library crypto API.
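  As a minimal usage sketch, assuming the vdev name ``crypto_openssl`` under
  the new naming convention described in the API changes below (the
  application, core mask, and port mask are illustrative):

  .. code-block:: console

     ./build/l2fwd-crypto -c 0x3 -n 4 --vdev crypto_openssl -- -p 0x1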
* **Updated the IPsec example.**

  Updated the IPsec example with the following support:

  * Configuration file support.
  * AES CBC IV generation with cipher forward function.
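  A brief sketch of how a configuration file might be passed, assuming the
  ``-f`` option added with this feature (the path and other options are
  placeholders):

  .. code-block:: console

     ./build/ipsec-secgw -c 0x3 -n 4 -- -p 0x3 -P -u 0x2 -f /path/to/ipsec.cfg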
* **Added support for new gcc -march option.**

  The GCC 4.9 ``-march`` option supports the Intel processor code names.
  The config option ``RTE_MACHINE`` can be used to pass code names to the
  compiler via the ``-march`` flag.
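  For example, a hypothetical build-time setting (the value must be a code
  name the installed compiler accepts):

  .. code-block:: console

     # In the target build configuration (e.g. the generated .config):
     CONFIG_RTE_MACHINE="broadwell"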
* **enic: Fixed several flow director issues.**

* **enic: Fixed inadvertent setting of L4 checksum ptype on ICMP packets.**

* **enic: Fixed high driver overhead when servicing Rx queues beyond the first.**
* **L3fwd-power app does not work properly when Rx vector is enabled.**

  The L3fwd-power app doesn't work properly with some drivers in vector mode
  since the queue monitoring works differently between scalar and vector modes,
  leading to incorrect frequency scaling. In addition, the L3fwd-power
  application requires the mbuf to have the correct packet type set, but in
  some drivers the vector mode must be disabled for this.

  Therefore, in order to use L3fwd-power, vector mode should be disabled, as
  sketched below.
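  As an illustrative example for ixgbe, assuming the vector Rx path is
  controlled by the build configuration (other drivers have analogous
  options):

  .. code-block:: console

     # In the target build configuration:
     CONFIG_RTE_IXGBE_INC_VECTOR=n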
* **Digest address must be supplied for crypto auth operation on QAT PMD.**

  The cryptodev API specifies that if the ``rte_crypto_sym_op.digest.data``
  field, and by inference the ``digest.phys_addr`` field which points to the
  same location, is not set for an auth operation, the driver is to understand
  that the digest result is located immediately following the region over which
  the digest is computed. The QAT PMD does not correctly handle this case and
  reads from and writes to an incorrect location.

  Callers can work around this by always supplying the digest virtual and
  physical address fields in the ``rte_crypto_sym_op`` for an auth operation,
  as sketched below.
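  A minimal sketch of the workaround, assuming the 16.11 ``auth.digest``
  layout of ``struct rte_crypto_sym_op`` and a digest stored at offset
  ``digest_off`` in the mbuf (the helper name and parameters are
  illustrative):

  .. code-block:: c

     #include <rte_crypto.h>
     #include <rte_mbuf.h>

     static void
     set_digest(struct rte_crypto_op *op, struct rte_mbuf *m,
             uint32_t digest_off, uint16_t digest_len)
     {
         /* Always supply both the virtual and the physical digest
          * address so the QAT PMD reads/writes the intended location.
          */
         op->sym->auth.digest.data =
             rte_pktmbuf_mtod_offset(m, uint8_t *, digest_off);
         op->sym->auth.digest.phys_addr =
             rte_pktmbuf_mtophys_offset(m, digest_off);
         op->sym->auth.digest.length = digest_len;
     }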
* The driver naming convention has been changed to make the names more
  consistent. It especially impacts ``--vdev`` arguments. For example,
  ``eth_pcap`` becomes ``net_pcap`` and ``cryptodev_aesni_mb_pmd`` becomes
  ``crypto_aesni_mb``.

  For backward compatibility, an alias feature has been enabled to support the
  original names.
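  For example, a testpmd invocation using the new vdev name (the core mask
  and pcap arguments are illustrative; the alias keeps the old ``eth_pcap0``
  form working):

  .. code-block:: console

     ./testpmd -c 0x3 -n 4 --vdev 'net_pcap0,iface=lo' -- -i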
* The log history has been removed.
* The ``rte_ivshmem`` feature (including library and EAL code) has been removed
  in 16.11 because it had some design issues which were not planned to be fixed.
* The ``file_name`` data type of ``struct rte_port_source_params`` and
  ``struct rte_port_sink_params`` has been changed from ``char *`` to
  ``const char *``.
* **Improved device/driver hierarchy and generalized hotplugging.**

  The device and driver relationship has been restructured by introducing
  generic classes. This paves the way for having PCI, VDEV and other device
  types as instantiated objects rather than classes in themselves. Hotplugging
  has also been generalized into EAL so that Ethernet or crypto devices can use
  the common infrastructure.

  * Removed ``pmd_type`` as a way of segregation of devices.
  * Moved ``numa_node`` and ``devargs`` into ``rte_driver`` from
    ``rte_pci_driver``. These can now be used by any instantiated object of
    ``rte_device``.
  * Added an ``rte_device`` class; all PCI and VDEV devices inherit from it.
  * Renamed devinit/devuninit handlers to probe/remove to make them more
    semantically correct with respect to the device <=> driver relationship.
  * Moved hotplugging support to EAL. Hereafter, PCI and vdev can use the
    APIs ``rte_eal_dev_attach`` and ``rte_eal_dev_detach`` (see the sketch
    after this list).
  * Renamed helpers and support macros to make them more synonymous
    with their device types
    (e.g. ``PMD_REGISTER_DRIVER`` => ``RTE_PMD_REGISTER_PCI``).
  * Device naming functions have been generalized from ethdev and cryptodev
    to EAL. ``rte_eal_pci_device_name`` has been introduced for obtaining a
    unique device name from the PCI Domain-BDF description.
  * Virtual device registration APIs have been added: ``rte_eal_vdrv_register``
    and ``rte_eal_vdrv_unregister``.
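  As a minimal sketch of the generalized hotplug API, assuming the pcap PMD is
  built in and the ``lo`` interface exists (the device name, devargs, and
  helper name are illustrative):

  .. code-block:: c

     #include <rte_dev.h>

     static int
     hotplug_pcap_port(void)
     {
         /* Plug in a virtual pcap device through the common EAL API... */
         if (rte_eal_dev_attach("net_pcap0", "iface=lo") != 0)
             return -1;

         /* ...and remove it again when done. */
         return rte_eal_dev_detach("net_pcap0");
     }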
Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.
   + librte_cryptodev.so.2
     librte_distributor.so.1
   - Processor: Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
   - Processor: Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
   - Onboard NIC: Intel(R) X552/X557-AT (2x10G)
   - Firmware version: 0x800001cf
   - Device ID (PF/VF): 8086:15ad / 8086:15a8
   - Kernel driver version: 4.2.5 (ixgbe)
   - Processor: Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
#. Intel(R) Server board S2600GZ

   - BIOS: SE5C600.86B.02.02.0002.122320131210
   - Processor: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

#. Intel(R) Server board W2600CR

   - BIOS: SE5C600.86B.02.01.0002.082220131453
   - Processor: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

#. Intel(R) Server board S2600CWT

   - BIOS: SE5C610.86B.01.01.0009.060120151350
   - Processor: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz

#. Intel(R) Server board S2600WTT

   - BIOS: SE5C610.86B.01.01.0005.101720141054
   - Processor: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz

#. Intel(R) Server board S2600WTT

   - BIOS: SE5C610.86B.11.01.0044.090120151156
   - Processor: Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz

#. Intel(R) Server board S2600WTT

   - Processor: Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
   - Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
   - Machine type-model: 8247-22L
   - Firmware FW810.21 (SV810_108)
   - Processor: POWER8E (raw), AltiVec supported
#. Intel(R) Ethernet Controller X540-AT2

   - Firmware version: 0x80000389
   - Device id (pf): 8086:1528
   - Driver version: 3.23.2 (ixgbe)

#. Intel(R) 82599ES 10 Gigabit Ethernet Controller

   - Firmware version: 0x61bf0001
   - Device id (pf/vf): 8086:10fb / 8086:10ed
   - Driver version: 4.0.1-k (ixgbe)

#. Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

   - Firmware version: 0x800001cf
   - Device id (pf/vf): 8086:15ad / 8086:15a8
   - Driver version: 4.2.5 (ixgbe)

#. Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

   - Firmware version: 5.05
   - Device id (pf/vf): 8086:1572 / 8086:154c
   - Driver version: 1.5.23 (i40e)

#. Intel(R) Ethernet Converged Network Adapter X710-DA2 (2x10G)

   - Firmware version: 5.05
   - Device id (pf/vf): 8086:1572 / 8086:154c
   - Driver version: 1.5.23 (i40e)

#. Intel(R) Ethernet Converged Network Adapter XL710-QDA1 (1x40G)

   - Firmware version: 5.05
   - Device id (pf/vf): 8086:1584 / 8086:154c
   - Driver version: 1.5.23 (i40e)

#. Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

   - Firmware version: 5.05
   - Device id (pf/vf): 8086:1583 / 8086:154c
   - Driver version: 1.5.23 (i40e)

#. Intel(R) Corporation I350 Gigabit Network Connection

   - Firmware version: 1.48, 0x800006e7
   - Device id (pf/vf): 8086:1521 / 8086:1520
   - Driver version: 5.2.13-k (igb)

#. Intel(R) Ethernet Multi-host Controller FM10000

   - Firmware version: N/A
   - Device id (pf/vf): 8086:15d0
   - Driver version: 0.17.0.9 (fm10k)
#. Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

   * Host interface: PCI Express 3.0 x16
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

   * Host interface: PCI Express 3.0 x16
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

   * Host interface: PCI Express 3.0 x16
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

   * Host interface: PCI Express 3.0 x16
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1015
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 14.17.1010

#. Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1015
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 14.17.1010
* Red Hat Enterprise Linux Server release 6.7 (Santiago)
* Red Hat Enterprise Linux Server release 7.0 (Maipo)
* Red Hat Enterprise Linux Server release 7.2 (Maipo)
* SUSE Enterprise Linux 12
* Wind River Linux 6.0.0.26