1 .. SPDX-License-Identifier: BSD-3-Clause
2 Copyright 2018 The DPDK contributors
DPDK Release 18.05
==================

New Features
------------

* **Reworked memory subsystem.**

  The memory subsystem has been reworked to support new functionality.

  On Linux, support for reserving/unreserving hugepage memory at runtime has
  been added, so applications no longer need to pre-reserve memory at startup.
  Due to the reorganized internal workings of the memory subsystem, any memory
  allocated through ``rte_malloc()`` or ``rte_memzone_reserve()`` is no longer
  guaranteed to be IOVA-contiguous.
20 This functionality has introduced the following changes:
22 * ``rte_eal_get_physmem_layout()`` was removed.
  * A new flag for memzone reservation (``RTE_MEMZONE_IOVA_CONTIG``) was added
    to ensure reserved memory will be IOVA-contiguous, for use with device
    drivers and other cases requiring such memory (see the sketch below).
  * New callbacks for memory allocation/deallocation events, allowing users
    (or drivers) to be notified of new memory being allocated or deallocated.
  * New callbacks for validating memory allocations above a specified limit,
    allowing the user to permit or deny memory allocations.
30 * A new command-line switch ``--legacy-mem`` to enable EAL behavior similar to
31 how older versions of DPDK worked (memory segments that are IOVA-contiguous,
32 but hugepages are reserved at startup only, and can never be released).
33 * A new command-line switch ``--single-file-segments`` to put all memory
34 segments within a segment list in a single file.
  * A set of convenience function calls to look up and iterate over allocated
    memory segments.
  * ``-m`` and ``--socket-mem`` command-line arguments now carry an additional
    meaning and mark pre-reserved hugepages as "unfree-able", thereby acting as
    a mechanism guaranteeing minimum availability of hugepage memory to the
    application.
42 Reserving/unreserving memory at runtime is not currently supported on FreeBSD.
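  Below is a minimal sketch of allocating under the new model: it reserves a
  memzone guaranteed to be IOVA-contiguous via the new flag. The zone name and
  size are illustrative only.

  .. code-block:: c

     #include <stdio.h>
     #include <rte_memzone.h>
     #include <rte_lcore.h>

     /* Reserve memory for device use; without RTE_MEMZONE_IOVA_CONTIG the
      * reservation is no longer guaranteed to be IOVA-contiguous. */
     static const struct rte_memzone *
     reserve_dma_area(void)
     {
         const struct rte_memzone *mz;

         mz = rte_memzone_reserve("example_dma_area", /* illustrative name */
                                  1 << 20,            /* 1 MiB */
                                  rte_socket_id(),
                                  RTE_MEMZONE_IOVA_CONTIG);
         if (mz == NULL)
             printf("no IOVA-contiguous memory available\n");
         return mz;
     }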
44 * **Added bucket mempool driver.**
  Added a bucket mempool driver which provides a way to allocate contiguous
  blocks of objects.
  The number of objects in a block depends on how many objects fit in the
  ``RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB`` memory chunk, which is a build-time
  option. The number may be obtained using the ``rte_mempool_ops_get_info()``
  API, and contiguous blocks may be allocated using the
  ``rte_mempool_get_contig_blocks()`` API, as shown in the sketch below.
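  The following sketch shows how the two new APIs fit together, assuming
  ``mp`` is a mempool created with the bucket driver; the block count of 4 is
  arbitrary.

  .. code-block:: c

     #include <rte_mempool.h>

     static int
     use_contig_blocks(struct rte_mempool *mp)
     {
         struct rte_mempool_info info;
         void *first_obj[4]; /* one pointer per contiguous block */

         /* query how many objects one contiguous block holds */
         if (rte_mempool_ops_get_info(mp, &info) < 0 ||
                 info.contig_block_size == 0)
             return -1; /* driver does not support contiguous blocks */

         /* each returned entry points to info.contig_block_size objects
          * laid out contiguously in memory */
         return rte_mempool_get_contig_blocks(mp, first_obj, 4);
     }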
53 * **Added support for port representors.**
55 Added DPDK port representors (also known as "VF representors" in the specific
56 context of VFs), which are to DPDK what the Ethernet switch device driver
  model (**switchdev**) is to Linux, and which can be thought of as a software
58 "patch panel" front-end for applications. DPDK port representors are
59 implemented as additional virtual Ethernet device (**ethdev**) instances,
60 spawned on an as-needed basis through configuration parameters passed to the
61 driver of the underlying device using devargs.
63 * **Added support for VXLAN and NVGRE tunnel endpoint.**
  New action types have been added to support encapsulation and decapsulation
  operations for a tunnel endpoint. The new action types are
  ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP``, ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP``
  and ``RTE_FLOW_ACTION_TYPE_JUMP``. A new pattern item type
  ``RTE_FLOW_ITEM_TYPE_MARK`` has been added to match a flow against a
  previously marked flow. A shared counter has also been
70 introduced to the flow API to count a group of flows.
72 * **Added PMD-recommended Tx and Rx parameters.**
74 Applications can now query drivers for device-tuned values of
75 ring sizes, burst sizes, and number of queues.
77 * **Added RSS hash and key update to CXGBE PMD.**
79 Added support for updating the RSS hash and key to the CXGBE PMD.
81 * **Added CXGBE VF PMD.**
83 CXGBE VF Poll Mode Driver has been added to run DPDK over Chelsio
84 T5/T6 NIC VF instances.
86 * **Updated mlx5 driver.**
88 Updated the mlx5 driver including the following changes:
90 * Introduced Multi-packet Rx to enable 100Gb/sec with 64B frames.
91 * Support for being run by non-root users given a reduced set of capabilities
92 ``CAP_NET_ADMIN``, ``CAP_NET_RAW`` and ``CAP_IPC_LOCK``.
93 * Support for TSO and checksum for generic UDP and IP tunnels.
  * Support for inner checksum and RSS for GRE, VXLAN-GPE, MPLSoGRE and
    MPLSoUDP tunnel packets.
  * Accommodate the new memory hotplug model.
  * Support for mempools that are not virtually contiguous.
  * Support for MAC address adding, along with allmulti and promiscuous modes,
    from a VF.
  * Support for the Mellanox BlueField SoC device.
  * Support for PMD defaults for queue number and depth to improve
    out-of-the-box performance.
103 * **Updated mlx4 driver.**
105 Updated the mlx4 driver including the following changes:
  * Support for being run by non-root users given a reduced set of
    capabilities ``CAP_NET_ADMIN``, ``CAP_NET_RAW`` and ``CAP_IPC_LOCK``.
  * Support for CRC strip toggling.
  * Accommodate the new memory hotplug model.
  * Support for mempools that are not virtually contiguous.
112 * Dropped support for Mellanox OFED 4.2.
114 * **Updated Solarflare network PMD.**
116 Updated the sfc_efx driver including the following changes:
118 * Added support for Solarflare XtremeScale X2xxx family adapters.
119 * Added support for NVGRE, VXLAN and GENEVE filters in flow API.
120 * Added support for DROP action in flow API.
121 * Added support for equal stride super-buffer Rx mode (X2xxx only).
122 * Added support for MARK and FLAG actions in flow API (X2xxx only).
124 * **Added Ethernet poll mode driver for AMD XGBE devices.**
126 Added the new ``axgbe`` ethernet poll mode driver for AMD XGBE devices.
  See the :doc:`../nics/axgbe` NIC driver guide for more details on this
  new driver.
130 * **Updated szedata2 PMD.**
  Added support for the new NFB-200G2QL card.
  A new API was introduced in the libsze2 library, on which the szedata2 PMD
  depends, so a newer version of that library is required. New versions of
  the packages are available.
138 * **Added support for Broadcom NetXtreme-S (BCM58800) family of controllers (aka Stingray).**
140 Added support for the Broadcom NetXtreme-S (BCM58800) family of controllers
141 (aka Stingray). The BCM58800 devices feature a NetXtreme E-Series advanced
142 network controller, a high-performance ARM CPU block, PCI Express (PCIe)
143 Gen3 interfaces, key accelerators for compute offload and a high-speed
144 memory subsystem including L3 cache and DDR4 interfaces, all interconnected
145 by a coherent Network-on-chip (NOC) fabric.
147 The ARM CPU subsystem features eight ARMv8 Cortex-A72 CPUs at 3.0 GHz,
148 arranged in a multi-cluster configuration.
150 * **Added vDPA in vhost-user lib.**
  Added support for selective datapath in the vhost-user library. vDPA stands
  for vhost Data Path Acceleration. It allows virtio-ring-compatible devices
  to serve the virtio driver directly, enabling datapath acceleration.
156 * **Added IFCVF vDPA driver.**
  Added the IFCVF vDPA driver to support Intel FPGA 100G VF devices. IFCVF
  works as a hardware vhost data path accelerator; it supports live migration
  and is compatible with virtio 0.95 and 1.0. The driver registers the ifcvf
  vDPA driver with the vhost library when virtio connects. With the help of
  the registered vDPA driver, the assigned VF gets configured to Rx/Tx
  directly to the VM's virtio driver.
165 * **Added support for vhost dequeue interrupt mode.**
  Added support for vhost dequeue interrupt mode, releasing CPUs for other
  work when there is no data to transmit. Applications can register an epoll
  event file descriptor to associate Rx queues with interrupt vectors.
171 * **Added support for virtio-user server mode.**
173 In a container environment if the vhost-user backend restarts, there's no way
174 for it to reconnect to virtio-user. To address this, support for server mode
175 has been added. In this mode the socket file is created by virtio-user, which the
176 backend connects to. This means that if the backend restarts, it can reconnect
177 to virtio-user and continue communications.
179 * **Added crypto workload support to vhost library.**
181 New APIs have been introduced in the vhost library to enable virtio crypto support
182 including session creation/deletion handling and translating virtio-crypto
183 requests into DPDK crypto operations. A sample application has also been introduced.
185 * **Added virtio crypto PMD.**
187 Added a new Poll Mode Driver for virtio crypto devices, which provides
188 AES-CBC ciphering and AES-CBC with HMAC-SHA1 algorithm-chaining. See the
  :doc:`../cryptodevs/virtio` crypto driver guide for more details on
  this new driver.
192 * **Added AMD CCP Crypto PMD.**
194 Added the new ``ccp`` crypto driver for AMD CCP devices. See the
  :doc:`../cryptodevs/ccp` crypto driver guide for more details on
  this new driver.
198 * **Updated AESNI MB PMD.**
200 The AESNI MB PMD has been updated with additional support for:
202 * AES-CMAC (128-bit key).
204 * **Added the Compressdev Library, a generic compression service library.**
206 Added the Compressdev library which provides an API for offload of compression and
207 decompression operations to hardware or software accelerator devices.
* **Added a new compression poll mode driver using Intel's ISA-L.**
211 Added the new ``ISA-L`` compression driver, for compression and decompression
212 operations in software. See the :doc:`../compressdevs/isal` compression driver
213 guide for details on this new driver.
215 * **Added the Event Timer Adapter Library.**
217 The Event Timer Adapter Library extends the event-based model by introducing
218 APIs that allow applications to arm/cancel event timers that generate
219 timer expiry events. This new type of event is scheduled by an event device
220 along with existing types of events.
222 * **Added OcteonTx TIM Driver (Event timer adapter).**
  The OcteonTx Timer block enables software to schedule events for a future
  time; it is exposed to applications via the Event timer adapter library.

  See the :doc:`../eventdevs/octeontx` guide for more details.
229 * **Added Event Crypto Adapter Library.**
231 Added the Event Crypto Adapter Library. This library extends the
232 event-based model by introducing APIs that allow applications to
  enqueue/dequeue crypto operations to/from cryptodev as events scheduled
  by an event device.
236 * **Added Ifpga Bus, a generic Intel FPGA Bus library.**
  Added the Ifpga Bus library which provides support for integrating any
  Intel FPGA device with the DPDK framework. It provides Intel FPGA Partial
  Bitstream AFU (Accelerated Function Unit) scanning and driver probing.
242 * **Added IFPGA (Intel FPGA) Rawdev Driver.**
  Added a new rawdev driver called the IFPGA (Intel FPGA) Rawdev Driver,
  which cooperates with OPAE (Open Programmable Acceleration Engine) shared
  code to provide common FPGA management operations.
248 See the :doc:`../rawdevs/ifpga` programmer's guide for more details.
250 * **Added DPAA2 QDMA Driver (in rawdev).**
  The DPAA2 QDMA is an implementation of the rawdev API that provides a means
  of initiating a DMA transaction from the CPU. The initiated DMA is performed
  without the CPU being involved in the actual DMA transaction.
256 See the :doc:`../rawdevs/dpaa2_qdma` guide for more details.
258 * **Added DPAA2 Command Interface Driver (in rawdev).**
  The DPAA2 CMDIF is an implementation of the rawdev API that provides
  communication between the GPP and NXP's QorIQ-based AIOP block (firmware).
  The Advanced I/O Processor (AIOP) is a cluster of programmable RISC engines
  optimized for flexible networking and I/O operations. Communication between
  the GPP and the AIOP is achieved via DPCI devices exposed by the MC for
  GPP <--> AIOP interaction.
267 See the :doc:`../rawdevs/dpaa2_cmdif` guide for more details.
269 * **Added device event monitor framework.**
  Added a general device event monitor framework to the EAL for dynamic device
  management, to facilitate device hotplug awareness and associated actions.
  The list of new APIs is:

  * ``rte_dev_event_monitor_start`` and ``rte_dev_event_monitor_stop`` for
    enabling and disabling the event monitor.
  * ``rte_dev_event_callback_register`` and ``rte_dev_event_callback_unregister``
    for registering and unregistering user callbacks, as sketched after this
    list.
280 Linux uevent is supported as a backend of this device event notification framework.
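  A hedged usage sketch follows; the callback prototype shown is an
  assumption and should be checked against ``rte_dev.h``, as is the use of a
  ``NULL`` device name to receive events for all devices.

  .. code-block:: c

     #include <stdio.h>
     #include <rte_dev.h>

     /* Assumed callback signature; consult rte_dev.h for the exact type. */
     static void
     on_dev_event(char *device_name, enum rte_dev_event_type event, void *arg)
     {
         (void)arg;
         printf("device %s %s\n", device_name,
                event == RTE_DEV_EVENT_ADD ? "added" : "removed");
     }

     static int
     enable_hotplug_monitoring(void)
     {
         /* NULL device name: receive events for all devices (assumption) */
         if (rte_dev_event_callback_register(NULL, on_dev_event, NULL) < 0)
             return -1;
         return rte_dev_event_monitor_start();
     }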
282 * **Added support for procinfo and pdump on eth vdev.**
  With this feature, stats and xstats of Ethernet virtual devices (such as TAP
  and PCAP) can now be retrieved from a secondary process via shared memory,
  and packets on those virtual devices can be captured with pdump.
288 * **Enhancements to the Packet Framework Library.**
  Added new API functions to the Packet Framework library that implement a
  common set of actions, such as traffic metering, packet encapsulation,
  network address translation and TTL update, for pipeline tables and input
  ports, to speed up application development. The API functions include
  creating action profiles, registering actions to the profiles, and
  instantiating action profiles for pipeline tables and input ports.
297 * **Added the BPF Library.**
  The BPF library provides the ability to load and execute extended Berkeley
  Packet Filter (eBPF) programs within user-space DPDK applications. It also
  introduces a basic framework to load/unload BPF-based filters on ethdev
  devices (currently only via software Rx/Tx callbacks), and adds a
  dependency on libelf.
API Changes
-----------

* service cores: No longer marked as experimental.
311 The service cores functions are no longer marked as experimental, and have
312 become part of the normal DPDK API and ABI. Any future ABI changes will be
313 announced at least one release before the ABI change is made. There are no
314 ABI breaking changes planned.
316 * eal: The ``rte_lcore_has_role()`` return value changed.
  This function now returns true or false, rather than 0 or < 0 for success
  or failure, which makes use of the function more intuitive, as in the
  sketch below.
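  For example, the boolean return value can now be used directly in a
  condition:

  .. code-block:: c

     #include <rte_lcore.h>

     static void
     skip_service_lcores(void)
     {
         unsigned int lcore_id;

         RTE_LCORE_FOREACH(lcore_id) {
             if (rte_lcore_has_role(lcore_id, ROLE_SERVICE))
                 continue; /* reads naturally with boolean semantics */
             /* ... launch work on lcore_id ... */
         }
     }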
322 * mempool: The capability flags and related functions have been removed.
324 Flags ``MEMPOOL_F_CAPA_PHYS_CONTIG`` and
325 ``MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS`` were used by octeontx mempool
326 driver to customize generic mempool library behavior.
  The new driver callbacks ``calc_mem_size`` and ``populate`` may now be
  used to achieve this without requiring specific knowledge in the generic
  code.
330 * mempool: The following xmem functions have been deprecated:
332 - ``rte_mempool_xmem_create``
333 - ``rte_mempool_xmem_size``
334 - ``rte_mempool_xmem_usage``
335 - ``rte_mempool_populate_iova_tab``
337 * mbuf: The control mbuf API has been removed in v18.05. The impacted
338 functions and macros are:
340 - ``rte_ctrlmbuf_init()``
341 - ``rte_ctrlmbuf_alloc()``
342 - ``rte_ctrlmbuf_free()``
343 - ``rte_ctrlmbuf_data()``
344 - ``rte_ctrlmbuf_len()``
345 - ``rte_is_ctrlmbuf()``
348 The packet mbuf API should be used as a replacement.
350 * meter: API updated to accommodate configuration profiles.
352 The meter API has been changed to support meter configuration profiles. The
353 configuration profile represents the set of configuration parameters
354 for a given meter object, such as the rates and sizes for the token
355 buckets. These configuration parameters were previously part of the meter
  object's internal data structure. Separating the configuration parameters
  from the meter object data structure reduces its memory footprint, which
  helps cache utilization when a large number of meter objects are used.
  A sketch of the resulting two-step setup follows.
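  A minimal sketch of the two-step setup, using the srTCM variant with
  illustrative rates (the other meter types follow the same pattern):

  .. code-block:: c

     #include <rte_meter.h>

     static struct rte_meter_srtcm_profile profile; /* shared by all meters */

     static int
     setup_meters(struct rte_meter_srtcm *meters, unsigned int n)
     {
         struct rte_meter_srtcm_params params = {
             .cir = 1000000, /* committed information rate, bytes/sec */
             .cbs = 2048,    /* committed burst size, bytes */
             .ebs = 2048,    /* excess burst size, bytes */
         };
         unsigned int i;

         /* configure the shared profile once... */
         if (rte_meter_srtcm_profile_config(&profile, &params) != 0)
             return -1;

         /* ...and reference it from each (now smaller) meter object */
         for (i = 0; i < n; i++)
             if (rte_meter_srtcm_config(&meters[i], &profile) != 0)
                 return -1;
         return 0;
     }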
* ethdev: The function ``rte_eth_dev_count()``, often misused to iterate
  over ports, is deprecated and replaced by ``rte_eth_dev_count_avail()``.
  There is also a new function ``rte_eth_dev_count_total()`` to get the
  total number of allocated ports, available or not.
  Hotplug-proof applications should use ``RTE_ETH_FOREACH_DEV`` or
  ``RTE_ETH_FOREACH_DEV_OWNED_BY`` as port iterators, as in the sketch below.
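  A sketch of the new, hotplug-proof enumeration (port IDs may be sparse, so
  counting up to ``rte_eth_dev_count_avail()`` is not a valid iteration
  method):

  .. code-block:: c

     #include <stdio.h>
     #include <rte_ethdev.h>

     static void
     list_ports(void)
     {
         uint16_t port_id;

         printf("%u of %u allocated ports are available\n",
                rte_eth_dev_count_avail(), rte_eth_dev_count_total());

         RTE_ETH_FOREACH_DEV(port_id)
             printf("port %u is available\n", port_id);
     }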
* ethdev: In ``struct rte_eth_dev_info``, the field ``rte_pci_device *pci_dev``
  has been replaced with the field ``struct rte_device *device``.
371 * ethdev: Changes to the semantics of ``rte_eth_dev_configure()`` parameters.
373 If both the ``nb_rx_q`` and ``nb_tx_q`` parameters are zero,
374 ``rte_eth_dev_configure()`` will now use PMD-recommended queue sizes, or if
375 recommendations are not provided by the PMD the function will use ethdev
376 fall-back values. Previously setting both of the parameters to zero would
377 have resulted in ``-EINVAL`` being returned.
379 * ethdev: Changes to the semantics of ``rte_eth_rx_queue_setup()`` parameters.
381 If the ``nb_rx_desc`` parameter is zero, ``rte_eth_rx_queue_setup`` will
382 now use the PMD-recommended Rx ring size, or in the case where the PMD
383 does not provide a recommendation, will use an ethdev-provided
384 fall-back value. Previously, setting ``nb_rx_desc`` to zero would have
385 resulted in an error.
387 * ethdev: Changes to the semantics of ``rte_eth_tx_queue_setup()`` parameters.
389 If the ``nb_tx_desc`` parameter is zero, ``rte_eth_tx_queue_setup`` will
390 now use the PMD-recommended Tx ring size, or in the case where the PMD
391 does not provide a recommendation, will use an ethdev-provided
392 fall-back value. Previously, setting ``nb_tx_desc`` to zero would have
393 resulted in an error.
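  The three changes above combine as in this sketch, where zero values select
  PMD-recommended (or ethdev fall-back) defaults; it assumes the recommended
  configuration includes at least one Rx queue:

  .. code-block:: c

     #include <rte_ethdev.h>

     static int
     configure_with_defaults(uint16_t port_id, const struct rte_eth_conf *conf,
                             struct rte_mempool *mb_pool)
     {
         int ret;

         /* nb_rx_q = nb_tx_q = 0: PMD-recommended queue counts */
         ret = rte_eth_dev_configure(port_id, 0, 0, conf);
         if (ret < 0)
             return ret;

         /* nb_rx_desc = 0: PMD-recommended Rx ring size */
         return rte_eth_rx_queue_setup(port_id, 0, 0,
                                       rte_eth_dev_socket_id(port_id),
                                       NULL /* default Rx conf */, mb_pool);
     }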
395 * ethdev: Several changes were made to the flow API.
397 * The unused DUP action was removed.
  * Actions semantics in flow rules: list order now matters ("first
    to last" instead of "all simultaneously"), repeated actions are now
    all performed, and they no longer individually have (non-)terminating
    properties.
  * Flow rules are now always terminating unless a ``PASSTHRU`` action is
    present.
404 * C99-style flexible arrays were replaced with standard pointers in RSS
405 action and in RAW pattern item structures due to compatibility issues.
  * The RSS action was modified to no longer rely on the external
    ``struct rte_eth_rss_conf`` and to instead expose its own, more
    appropriately named configuration fields directly
    (``rss_conf->rss_key`` => ``key``,
    ``rss_conf->rss_key_len`` => ``key_len``,
    ``rss_conf->rss_hf`` => ``types``,
    ``num`` => ``queue_num``), with the addition of missing RSS parameters
    (``func`` for the RSS hash function to apply and ``level`` for the
    encapsulation level); see the sketch after this list.
415 * The VLAN pattern item (``struct rte_flow_item_vlan``) was modified to
416 include inner EtherType instead of outer TPID. Its default mask was also
417 modified to cover the VID part (lower 12 bits) of TCI only.
418 * A new transfer attribute was added to ``struct rte_flow_attr`` in order
419 to clarify the behavior of some pattern items.
  * PF and VF pattern items are now only accepted by PMDs that implement
    them (bnxt and i40e) when the transfer attribute is also present, for
    consistency.
  * Pattern item PORT was renamed PHY_PORT to avoid confusion with DPDK port
    IDs.
425 * An action counterpart to the PHY_PORT pattern item was added in order to
426 redirect matching traffic to a specific physical port.
427 * PORT_ID pattern item and actions were added to match and target DPDK
428 port IDs at a higher level than PHY_PORT.
429 * ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP`` action items were added to support
430 tunnel encapsulation operation for VXLAN and NVGRE type tunnel endpoint.
431 * ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP`` action items were added to support
432 tunnel decapsulation operation for VXLAN and NVGRE type tunnel endpoint.
  * The ``RTE_FLOW_ACTION_TYPE_JUMP`` action was added to allow a matched
    flow to be redirected to a specific group.
  * The ``RTE_FLOW_ITEM_TYPE_MARK`` pattern item has been added to match a
    flow against a previously marked flow.
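  As an illustration of the reworked RSS action, here is a sketch with
  illustrative queue and type values (no key, outermost level, default hash
  function):

  .. code-block:: c

     #include <rte_ethdev.h>
     #include <rte_flow.h>

     static const uint16_t rss_queues[] = { 0, 1, 2, 3 };

     static const struct rte_flow_action_rss rss = {
         .func = RTE_ETH_HASH_FUNCTION_DEFAULT, /* new field */
         .level = 0,                /* new field: outermost encapsulation */
         .types = ETH_RSS_IP,       /* was rss_conf->rss_hf */
         .key_len = 0,              /* was rss_conf->rss_key_len */
         .key = NULL,               /* was rss_conf->rss_key */
         .queue_num = 4,            /* was num; matches rss_queues[] */
         .queue = rss_queues,
     };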
* ethdev: Changes to the flow API regarding the count action:

  * The count action in ``rte_flow_create()`` now requires a
    ``struct rte_flow_action_count`` configuration.
  * The ``rte_flow_query()`` parameter changed from an action type to an
    action structure, as sketched below.
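  A sketch of the new query call, assuming ``flow`` was created with this
  counter action attached:

  .. code-block:: c

     #include <rte_flow.h>

     static int
     read_counter(uint16_t port_id, struct rte_flow *flow)
     {
         struct rte_flow_action_count count = { .shared = 0, .id = 0 };
         struct rte_flow_action action = {
             .type = RTE_FLOW_ACTION_TYPE_COUNT,
             .conf = &count,
         };
         struct rte_flow_query_count result = { .reset = 0 };
         struct rte_flow_error error;

         /* the action structure, not just its type, is now passed in */
         if (rte_flow_query(port_id, flow, &action, &result, &error) != 0)
             return -1;
         /* result.hits/result.bytes are valid when the *_set bits are set */
         return 0;
     }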
* ethdev: Changes to the offload API.

  A pure per-port offload no longer needs to be repeated in
  ``[rt]x_conf->offloads`` when calling ``rte_eth_[rt]x_queue_setup()``.
  Any offload enabled in ``rte_eth_dev_configure()`` can no longer be
  disabled by ``rte_eth_[rt]x_queue_setup()``, and any newly requested
  offload that has not been enabled in ``rte_eth_dev_configure()`` must be
  a per-queue offload; otherwise ``rte_eth_[rt]x_queue_setup()`` triggers an
  error log.
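  A sketch of the resulting usage; ``DEV_RX_OFFLOAD_CHECKSUM`` and the
  512-descriptor ring are illustrative choices:

  .. code-block:: c

     #include <rte_ethdev.h>

     static int
     configure_port_checksum(uint16_t port_id, struct rte_mempool *mb_pool)
     {
         struct rte_eth_conf conf = {
             /* port-level offload, set once at configure time */
             .rxmode = { .offloads = DEV_RX_OFFLOAD_CHECKSUM },
         };
         struct rte_eth_rxconf rxq_conf = {
             /* no need to repeat the port-level offload here; only
              * additional per-queue offloads would go in this field */
             .offloads = 0,
         };
         int ret = rte_eth_dev_configure(port_id, 1, 1, &conf);

         if (ret < 0)
             return ret;
         return rte_eth_rx_queue_setup(port_id, 0, 512,
                                       rte_eth_dev_socket_id(port_id),
                                       &rxq_conf, mb_pool);
     }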
* ethdev: Runtime queue setup.

  ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup`` can be called
  after ``rte_eth_dev_start`` if the device supports runtime queue setup.
  The device driver can expose this capability through
  ``rte_eth_dev_info_get``. An Rx or Tx queue set up at runtime needs to be
  started explicitly by ``rte_eth_dev_rx_queue_start`` or
  ``rte_eth_dev_tx_queue_start``, as in the sketch below.
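  A sketch of adding and starting an Rx queue on a running port; the
  capability flag name (``RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP``) is
  assumed to be the one introduced for this feature:

  .. code-block:: c

     #include <errno.h>
     #include <rte_ethdev.h>

     static int
     add_rx_queue_at_runtime(uint16_t port_id, uint16_t queue_id,
                             struct rte_mempool *mb_pool)
     {
         struct rte_eth_dev_info dev_info;
         int ret;

         rte_eth_dev_info_get(port_id, &dev_info);
         if (!(dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
             return -ENOTSUP; /* port must be stopped for queue setup */

         ret = rte_eth_rx_queue_setup(port_id, queue_id, 0 /* default size */,
                                      rte_eth_dev_socket_id(port_id),
                                      NULL, mb_pool);
         if (ret < 0)
             return ret;

         /* a queue set up at runtime must be started explicitly */
         return rte_eth_dev_rx_queue_start(port_id, queue_id);
     }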
ABI Changes
-----------

* ring: The alignment constraints on the ring structure have been relaxed
  to one cache line instead of two, and an empty cache line of padding is
  added between the producer and consumer structures. The size of the
  structure and the offsets of the fields remain the same on platforms
  with a 64B cache line, but change on other platforms.
469 * mempool: Some ops have changed.
471 A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
472 to allow customization of the required memory size calculation.
473 A new callback ``populate`` has been added to ``rte_mempool_ops``
474 to allow customized object population.
  Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
  since its features are covered by the ``calc_mem_size`` and ``populate``
  callbacks.
  Callback ``register_memory_area`` has been removed from ``rte_mempool_ops``
  since the new callback ``populate`` may be used instead of it.
481 * ethdev: Additional fields in rte_eth_dev_info.
483 The ``rte_eth_dev_info`` structure has had two extra entries appended to the
484 end of it: ``default_rxportconf`` and ``default_txportconf``. Each of these
485 in turn are ``rte_eth_dev_portconf`` structures containing three fields of
486 type ``uint16_t``: ``burst_size``, ``ring_size``, and ``nb_queues``. These
  are parameter values recommended for use by the PMD, as illustrated in the
  sketch below.
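  A sketch of consuming these recommendations, falling back to an application
  default when a field is zero (zero is taken here to mean "no
  recommendation"):

  .. code-block:: c

     #include <rte_ethdev.h>

     static uint16_t
     pick_rx_ring_size(uint16_t port_id, uint16_t app_default)
     {
         struct rte_eth_dev_info dev_info;

         rte_eth_dev_info_get(port_id, &dev_info);
         if (dev_info.default_rxportconf.ring_size != 0)
             return dev_info.default_rxportconf.ring_size;
         return app_default;
     }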
489 * ethdev: ABI for all flow API functions was updated.
491 This includes functions ``rte_flow_copy``, ``rte_flow_create``,
492 ``rte_flow_destroy``, ``rte_flow_error_set``, ``rte_flow_flush``,
493 ``rte_flow_isolate``, ``rte_flow_query`` and ``rte_flow_validate``, due to
494 changes in error type definitions (``enum rte_flow_error_type``), removal
495 of the unused DUP action (``enum rte_flow_action_type``), modified
496 behavior for flow rule actions (see API changes), removal of C99 flexible
497 array from RAW pattern item (``struct rte_flow_item_raw``), complete
498 rework of the RSS action definition (``struct rte_flow_action_rss``),
499 sanity fix in the VLAN pattern item (``struct rte_flow_item_vlan``) and
500 new transfer attribute (``struct rte_flow_attr``).
* bbdev: New parameter added to ``rte_bbdev_op_cap_turbo_dec``.
504 A new parameter ``max_llr_modulus`` has been added to
505 ``rte_bbdev_op_cap_turbo_dec`` structure to specify maximal LLR (likelihood
506 ratio) absolute value.
508 * bbdev: Queue Groups split into UL/DL Groups.
510 Queue Groups have been split into UL/DL Groups in the Turbo Software Driver.
  They are independent for Decode/Encode. ``rte_bbdev_driver_info`` reflects
  this change.

Known Issues
------------
518 * **Secondary process launch is not reliable.**
520 Recent memory hotplug patches have made multiprocess startup less reliable
521 than it was in past releases. A number of workarounds are known to work depending
522 on the circumstances. As such it isn't recommended to use the secondary
523 process mechanism for critical systems. The underlying issues will be
524 addressed in upcoming releases.
526 The issue is explained in more detail, including potential workarounds,
527 in the Bugzilla entry referenced below.
529 Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=50
531 * **pdump is not compatible with old applications.**
  pdump negotiation now uses generic multi-process communication instead of
  the previous dedicated Unix socket. As a result, pdump applications,
  including the dpdk-pdump example and any other applications using
  ``librte_pdump``, will not work with primary processes running older
  versions of DPDK.
539 * **rte_abort takes a long time on FreeBSD.**
  DPDK processes now allocate a large area of virtual memory address space.
542 As a result ``rte_abort`` on FreeBSD now dumps the contents of the
543 whole reserved memory range, not just the used portion, to a core dump file.
544 Writing this large core file can take a significant amount of time, causing
545 processes to appear to hang on the system.
  The workaround for the issue is to set the system resource limits for core
548 dumps before running any tests, e.g. ``limit coredumpsize 0``. This will
549 effectively disable core dumps on FreeBSD. If they are not to be completely
550 disabled, a suitable limit, e.g. 1G might be specified instead of 0. This
551 needs to be run per-shell session, or before every test run. This change
552 can also be made persistent by adding ``kern.coredump=0`` to ``/etc/sysctl.conf``.
554 Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=53
556 * **ixgbe PMD crash on hotplug detach when no VF created.**
  The ixgbe PMD uninit path causes a NULL pointer dereference, due to port
  representor cleanup, when the number of VFs is zero.
561 Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=57
563 * **Bonding PMD may fail to accept new slave ports in certain conditions.**
565 In certain conditions when using testpmd,
566 bonding may fail to register new slave ports.
  Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=52
570 * **Unexpected performance regression in Vhost library.**
572 Patches fixing CVE-2018-1059 were expected to introduce a small performance
573 drop. However, in some setups, bigger performance drops have been measured
574 when running micro-benchmarks.
576 Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=48
579 Shared Library Versions
580 -----------------------
582 The libraries prepended with a plus sign were incremented in this version.
588 librte_bitratestats.so.2
591 librte_bus_fslmc.so.1
596 + librte_common_octeontx.so.1
597 + librte_compressdev.so.1
598 librte_cryptodev.so.4
599 librte_distributor.so.1
602 + librte_eventdev.so.4
603 librte_flow_classify.so.1
611 librte_latencystats.so.1
614 + librte_mempool.so.4
624 librte_pmd_ixgbe.so.2
625 + librte_pmd_dpaa2_cmdif.so.1
626 + librte_pmd_dpaa2_qdma.so.1
628 librte_pmd_softnic.so.1
629 librte_pmd_vhost.so.2
Tested Platforms
----------------

* Intel(R) platforms with Intel(R) NICs combinations
649 * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
650 * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
651 * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
652 * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
653 * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
654 * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
655 * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
656 * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
657 * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz
666 * Red Hat Enterprise Linux Server release 7.3
667 * SUSE Enterprise Linux 12
676 * Intel(R) 82599ES 10 Gigabit Ethernet Controller
678 * Firmware version: 0x61bf0001
679 * Device id (pf/vf): 8086:10fb / 8086:10ed
680 * Driver version: 5.2.3 (ixgbe)
682 * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T
684 * Firmware version: 0x800003e7
685 * Device id (pf/vf): 8086:15ad / 8086:15a8
686 * Driver version: 4.4.6 (ixgbe)
688 * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)
690 * Firmware version: 6.01 0x80003221
691 * Device id (pf/vf): 8086:1572 / 8086:154c
692 * Driver version: 2.4.6 (i40e)
694 * Intel Corporation Ethernet Connection X722 for 10GbE SFP+ (4x10G)
696 * Firmware version: 3.33 0x80000fd5 0.0.0
697 * Device id (pf/vf): 8086:37d0 / 8086:37cd
698 * Driver version: 2.4.3 (i40e)
700 * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)
702 * Firmware version: 6.01 0x80003221
703 * Device id (pf/vf): 8086:158b / 8086:154c
704 * Driver version: 2.4.6 (i40e)
706 * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)
708 * Firmware version: 6.01 0x8000321c
709 * Device id (pf/vf): 8086:1583 / 8086:154c
710 * Driver version: 2.4.6 (i40e)
712 * Intel(R) Corporation I350 Gigabit Network Connection
714 * Firmware version: 1.63, 0x80000dda
715 * Device id (pf/vf): 8086:1521 / 8086:1520
716 * Driver version: 5.4.0-k (igb)
718 * Intel(R) platforms with Mellanox(R) NICs combinations
722 * Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz
723 * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
724 * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
725 * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
726 * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
727 * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
728 * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
732 * Red Hat Enterprise Linux Server release 7.5 (Maipo)
733 * Red Hat Enterprise Linux Server release 7.4 (Maipo)
734 * Red Hat Enterprise Linux Server release 7.3 (Maipo)
735 * Red Hat Enterprise Linux Server release 7.2 (Maipo)
740 * SUSE Linux Enterprise Server 15
742 * MLNX_OFED: 4.2-1.0.0.0
743 * MLNX_OFED: 4.3-2.0.2.0
747 * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)
749 * Host interface: PCI Express 3.0 x8
750 * Device ID: 15b3:1007
751 * Firmware version: 2.42.5000
753 * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
755 * Host interface: PCI Express 3.0 x8
756 * Device ID: 15b3:1013
757 * Firmware version: 12.21.1000 and above
759 * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
761 * Host interface: PCI Express 3.0 x8
762 * Device ID: 15b3:1013
763 * Firmware version: 12.21.1000 and above
765 * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
767 * Host interface: PCI Express 3.0 x8
768 * Device ID: 15b3:1013
769 * Firmware version: 12.21.1000 and above
771 * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
773 * Host interface: PCI Express 3.0 x8
774 * Device ID: 15b3:1013
775 * Firmware version: 12.21.1000 and above
777 * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)
779 * Host interface: PCI Express 3.0 x8
780 * Device ID: 15b3:1013
781 * Firmware version: 12.21.1000 and above
783 * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
785 * Host interface: PCI Express 3.0 x16
786 * Device ID: 15b3:1013
787 * Firmware version: 12.21.1000 and above
789 * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)
791 * Host interface: PCI Express 3.0 x8
792 * Device ID: 15b3:1013
793 * Firmware version: 12.21.1000 and above
795 * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
797 * Host interface: PCI Express 3.0 x8
798 * Device ID: 15b3:1013
799 * Firmware version: 12.21.1000 and above
801 * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)
803 * Host interface: PCI Express 3.0 x16
804 * Device ID: 15b3:1013
805 * Firmware version: 12.21.1000 and above
  * Mellanox(R) ConnectX(R)-4 100G MCX415A-CCAT (1x100G)
810 * Host interface: PCI Express 3.0 x16
811 * Device ID: 15b3:1013
812 * Firmware version: 12.21.1000 and above
814 * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
816 * Host interface: PCI Express 3.0 x16
817 * Device ID: 15b3:1013
818 * Firmware version: 12.21.1000 and above
820 * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)
822 * Host interface: PCI Express 3.0 x8
823 * Device ID: 15b3:1015
824 * Firmware version: 14.21.1000 and above
826 * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
828 * Host interface: PCI Express 3.0 x8
829 * Device ID: 15b3:1015
830 * Firmware version: 14.21.1000 and above
832 * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
834 * Host interface: PCI Express 3.0 x16
835 * Device ID: 15b3:1017
836 * Firmware version: 16.21.1000 and above
838 * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)
840 * Host interface: PCI Express 4.0 x16
841 * Device ID: 15b3:1019
842 * Firmware version: 16.21.1000 and above
844 * ARM platforms with Mellanox(R) NICs combinations
848 * Qualcomm ARM 1.1 2500MHz
852 * Red Hat Enterprise Linux Server release 7.5 (Maipo)
856 * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
858 * Host interface: PCI Express 3.0 x8
859 * Device ID: 15b3:1015
860 * Firmware version: 14.22.0428
862 * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
864 * Host interface: PCI Express 3.0 x16
865 * Device ID: 15b3:1017
866 * Firmware version: 16.22.0428
868 * ARM SoC combinations from Cavium (with integrated NICs)
877 * Ubuntu 16.04.2 LTS with Cavium SDK-6.2.0-Patch2 release support package.
879 * ARM SoC combinations from NXP (with integrated NICs)
883 * NXP/Freescale QorIQ LS1046A with ARM Cortex A72
884 * NXP/Freescale QorIQ LS2088A with ARM Cortex A72
888 * Ubuntu 16.04.3 LTS with NXP QorIQ LSDK 1803 support packages