DPDK Release 18.05
==================

.. **Read this first.**

   The text in the sections below explains how to update the release notes.

   Use proper spelling, capitalization, and punctuation in all sections.

   Variable and config names should be quoted as fixed-width text:
   ``LIKE_THIS``.

   Build the docs and view the output file to ensure the changes are correct::

      make doc-guides-html
      xdg-open build/doc/html/guides/rel_notes/release_18_05.html

New Features
------------

.. This section should contain new features added in this release. Sample
   format:

   * **Add a title in the past tense with a full stop.**

     Add a short 1-2 sentence description in the past tense. The description
     should be enough to allow someone scanning the release notes to
     understand the new feature.

     If the feature adds a lot of sub-features you can use a bullet list like
     this:

     * Added feature foo to do something.
     * Enhanced feature bar to do something else.

     Refer to the previous release notes for examples.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **Reworked memory subsystem.**

  The memory subsystem has been reworked to support new functionality.

  On Linux, support for reserving/unreserving hugepage memory at runtime has
  been added, so applications no longer need to pre-reserve memory at startup.
  Due to the reorganized internal workings of the memory subsystem, any memory
  allocated through ``rte_malloc()`` or ``rte_memzone_reserve()`` is no longer
  guaranteed to be IOVA-contiguous.

  This functionality has introduced the following changes:

  * ``rte_eal_get_physmem_layout()`` was removed.
  * A new flag for memzone reservation (``RTE_MEMZONE_IOVA_CONTIG``) was added
    to ensure reserved memory will be IOVA-contiguous, for use with device
    drivers and other cases requiring such memory.
  * New callbacks for memory allocation/deallocation events, allowing users
    (or drivers) to be notified of new memory being allocated or deallocated.
  * New callbacks for validating memory allocations above a specified limit,
    allowing users to permit or deny memory allocations.
  * A new command-line switch ``--legacy-mem`` to enable EAL behavior similar
    to how older versions of DPDK worked (memory segments that are
    IOVA-contiguous, but hugepages are reserved at startup only and can never
    be released).
  * A new command-line switch ``--single-file-segments`` to put all memory
    segments within a segment list in a single file.
  * A set of convenience function calls to look up and iterate over allocated
    memory segments.
  * The ``-m`` and ``--socket-mem`` command-line arguments now carry an
    additional meaning and mark pre-reserved hugepages as "unfree-able",
    thereby acting as a mechanism guaranteeing minimum availability of
    hugepage memory to the application.

  Reserving/unreserving memory at runtime is not currently supported on
  FreeBSD.

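  As a brief sketch of the new flag (the function and zone name below are
  illustrative, not taken from the release notes), a driver needing
  IOVA-contiguous DMA memory can now request it explicitly:

  ```c
  #include <rte_memzone.h>
  #include <rte_lcore.h>

  /* Illustrative sketch: reserve a 1 MB memzone that is guaranteed to be
   * IOVA-contiguous. Without RTE_MEMZONE_IOVA_CONTIG this guarantee no
   * longer holds as of 18.05. The zone name is a made-up example. */
  static const struct rte_memzone *
  reserve_dma_zone(void)
  {
          return rte_memzone_reserve("example_dma_zone", 1 << 20,
                                     rte_socket_id(),
                                     RTE_MEMZONE_IOVA_CONTIG);
  }
  ```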
* **Added bucket mempool driver.**

  Added a bucket mempool driver which provides a way to allocate contiguous
  blocks of objects.
  The number of objects in a block depends on how many objects fit in the
  ``RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB`` memory chunk, which is a build-time
  option.
  The number may be obtained using the ``rte_mempool_ops_get_info()`` API.
  Contiguous blocks may be allocated using the
  ``rte_mempool_get_contig_blocks()`` API.

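  A minimal usage sketch of the two APIs named above (the mempool ``mp`` is
  assumed to have been created with the bucket driver's ops):

  ```c
  #include <rte_mempool.h>

  /* Sketch: query the contiguous block size, then allocate one block.
   * On success, info.contig_block_size objects start at *first_obj. */
  static int
  get_one_contig_block(struct rte_mempool *mp, void **first_obj)
  {
          struct rte_mempool_info info;

          if (rte_mempool_ops_get_info(mp, &info) < 0)
                  return -1;
          return rte_mempool_get_contig_blocks(mp, first_obj, 1);
  }
  ```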
* **Added support for port representors.**

  DPDK port representors (also known as "VF representors" in the specific
  context of VFs) are to DPDK what the Ethernet switch device driver model
  (**switchdev**) is to Linux; they can be thought of as a software
  "patch panel" front-end for applications. DPDK port representors are
  implemented as additional virtual Ethernet device (**ethdev**) instances,
  spawned on an as-needed basis through configuration parameters passed to
  the driver of the underlying device using devargs.

* **Added support for VXLAN and NVGRE tunnel endpoint.**

  New action types have been added to support encapsulation and decapsulation
  operations for a tunnel endpoint. The new action types are
  ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP``,
  ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP`` and
  ``RTE_FLOW_ACTION_TYPE_JUMP``. A new item type
  ``RTE_FLOW_ACTION_TYPE_MARK`` has been added to match a flow against a
  previously marked flow. A shared counter has also been introduced to the
  flow API to count a group of flows.

* **Added PMD-recommended Tx and Rx parameters.**

  Applications can now query drivers for device-tuned values of
  ring sizes, burst sizes, and number of queues.

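  For example, an application might prefer the driver's recommendation when
  one is provided (the fallback value below is an arbitrary application
  choice, not a DPDK default):

  ```c
  #include <rte_ethdev.h>

  /* Sketch: use the PMD-recommended Rx ring size when the driver reports
   * one via dev_info.default_rxportconf; 512 is a made-up app fallback. */
  static uint16_t
  pick_rx_ring_size(uint16_t port_id)
  {
          struct rte_eth_dev_info dev_info;

          rte_eth_dev_info_get(port_id, &dev_info);
          if (dev_info.default_rxportconf.ring_size != 0)
                  return dev_info.default_rxportconf.ring_size;
          return 512;
  }
  ```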
* **Added RSS hash and key update to CXGBE PMD.**

  Support for updating the RSS hash and key has been added to the CXGBE PMD.

* **Added CXGBE VF PMD.**

  CXGBE VF Poll Mode Driver has been added to run DPDK over Chelsio
  T5/T6 NIC VF instances.

* **Updated mlx5 driver.**

  Updated the mlx5 driver including the following changes:

  * Introduced Multi-packet Rx, with which 100Gb/sec can be achieved with
    64B frames.
  * Added support for being run by non-root users given a reduced set of
    capabilities: ``CAP_NET_ADMIN``, ``CAP_NET_RAW`` and ``CAP_IPC_LOCK``.
  * Added support for TSO and checksum for generic UDP and IP tunnels.
  * Added support for inner checksum and RSS for GRE, VXLAN-GPE, MPLSoGRE
    and MPLSoUDP tunnels.
  * Accommodated the new memory hotplug model.
  * Added support for non-virtually-contiguous mempools.
  * Added support for MAC adding along with allmulti and promiscuous modes
    from VF.
  * Added support for the Mellanox BlueField SoC device.
  * Added PMD defaults for queue number and depth to improve the
    out-of-the-box performance.

* **Updated mlx4 driver.**

  Updated the mlx4 driver including the following changes:

  * Added support for being run by non-root users given a reduced set of
    capabilities: ``CAP_NET_ADMIN``, ``CAP_NET_RAW`` and ``CAP_IPC_LOCK``.
  * Added support for CRC strip toggling.
  * Accommodated the new memory hotplug model.
  * Added support for non-virtually-contiguous mempools.
  * Dropped support for Mellanox OFED 4.2.

* **Updated Solarflare network PMD.**

  Updated the sfc_efx driver including the following changes:

  * Added support for Solarflare XtremeScale X2xxx family adapters.
  * Added support for NVGRE, VXLAN and GENEVE filters in flow API.
  * Added support for DROP action in flow API.
  * Added support for equal stride super-buffer Rx mode (X2xxx only).
  * Added support for MARK and FLAG actions in flow API (X2xxx only).

* **Added Ethernet poll mode driver for AMD XGBE devices.**

  Added the new ``axgbe`` Ethernet poll mode driver for AMD XGBE devices.
  See the :doc:`../nics/axgbe` NIC driver guide for more details on this
  new driver.

* **Updated szedata2 PMD.**

  Added support for the new NFB-200G2QL card.
  A new API was introduced in the libsze2 library, on which the szedata2 PMD
  depends, so a new version of the library was needed.
  New versions of the packages are available and the minimum required version
  is 4.4.1.

* **Added support for Broadcom NetXtreme-S (BCM58800) family of controllers (aka Stingray).**

  The BCM58800 devices feature a NetXtreme E-Series advanced network
  controller, a high-performance ARM CPU block, PCI Express (PCIe) Gen3
  interfaces, key accelerators for compute offload and a high-speed memory
  subsystem including L3 cache and DDR4 interfaces, all interconnected by a
  coherent Network-on-chip (NOC) fabric.

  The ARM CPU subsystem features eight ARMv8 Cortex-A72 CPUs at 3.0 GHz,
  arranged in a multi-cluster configuration.

* **Added vDPA in vhost-user lib.**

  Added support for selective datapath in the vhost-user lib. vDPA stands for
  vhost Data Path Acceleration. It enables virtio ring compatible devices to
  serve the virtio driver directly, enabling datapath acceleration.

* **Added IFCVF vDPA driver.**

  Added the IFCVF vDPA driver to support the Intel FPGA 100G VF device. IFCVF
  works as a HW vhost data path accelerator; it supports live migration and is
  compatible with virtio 0.95 and 1.0. This driver registers the ifcvf vDPA
  driver with the vhost lib; when virtio is connected, with the help of the
  registered vDPA driver the assigned VF gets configured to Rx/Tx directly to
  the VM's virtio driver.

* **Added support for vhost dequeue interrupt mode.**

  Added support for vhost dequeue interrupt mode to release CPUs to other
  tasks when there is no data to transmit. Applications can register an epoll
  event fd to associate Rx queues with interrupt vectors.

* **Added support for virtio-user server mode.**

  In a container environment, if the vhost-user backend restarts, there's no
  way for it to reconnect to virtio-user. To address this, support for server
  mode has been added. In this mode the socket file is created by virtio-user,
  and the backend connects to it. This means that if the backend restarts, it
  can reconnect to virtio-user and continue communications.

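  As an illustrative invocation (the socket path and core list are examples,
  not from the release notes), server mode is requested through the
  ``server`` devarg:

  ```shell
  # virtio-user creates /tmp/sock0 itself and waits for the vhost-user
  # backend to connect; the backend can reconnect after a restart.
  ./testpmd -l 1-2 --vdev=virtio_user0,path=/tmp/sock0,server=1 -- -i
  ```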
* **Added crypto workload support to vhost library.**

  New APIs are introduced in the vhost library to enable virtio crypto
  support, including session creation/deletion handling and translating
  virtio-crypto requests into DPDK crypto operations. A sample application
  is also introduced.

* **Added virtio crypto PMD.**

  Added a new poll mode driver for virtio crypto devices, which provides
  AES-CBC ciphering and AES-CBC with HMAC-SHA1 algorithm-chaining. See the
  :doc:`../cryptodevs/virtio` crypto driver guide for more details on
  this new driver.

* **Added AMD CCP Crypto PMD.**

  Added the new ``ccp`` crypto driver for AMD CCP devices. See the
  :doc:`../cryptodevs/ccp` crypto driver guide for more details on
  this new driver.

* **Updated AESNI MB PMD.**

  The AESNI MB PMD has been updated with additional support for:

  * AES-CMAC (128-bit key).

* **Added Compressdev Library, a generic compression service library.**

  The compressdev library provides an API for the offload of compression and
  decompression operations to hardware or software accelerator devices.

* **Added a new compression poll mode driver using Intel's ISA-L.**

  Added the new ``ISA-L`` compression driver, for compression and
  decompression operations in software. See the :doc:`../compressdevs/isal`
  compression driver guide for details on this new driver.

* **Added the Event Timer Adapter Library.**

  The Event Timer Adapter Library extends the event-based model by introducing
  APIs that allow applications to arm/cancel event timers that generate
  timer expiry events. This new type of event is scheduled by an event device
  along with existing types of events.

* **Added OcteonTx TIM Driver (Event timer adapter).**

  The OcteonTx Timer block enables software to schedule events for a future
  time; it is exposed to applications via the event timer adapter library.

  See the :doc:`../eventdevs/octeontx` guide for more details.

* **Added Event Crypto Adapter Library.**

  Added the Event Crypto Adapter Library. This library extends the
  event-based model by introducing APIs that allow applications to
  enqueue/dequeue crypto operations to/from cryptodev as events scheduled
  by an event device.

* **Added Ifpga Bus, a generic Intel FPGA Bus library.**

  The Ifpga Bus library provides support for integrating any Intel FPGA device
  with the DPDK framework. It provides Intel FPGA Partial Bit Stream AFU
  (Accelerated Function Unit) scanning and driver probing.

* **Added IFPGA (Intel FPGA) Rawdev Driver.**

  Added a new rawdev driver called the IFPGA (Intel FPGA) Rawdev Driver, which
  cooperates with the OPAE (Open Programmable Acceleration Engine) shared code
  to provide common FPGA management ops for FPGA operation.

  See the :doc:`../rawdevs/ifpga_rawdev` programmer's guide for more details.

* **Added DPAA2 QDMA Driver (in rawdev).**

  The DPAA2 QDMA is an implementation of the rawdev API that provides a means
  to initiate a DMA transaction from the CPU. The initiated DMA is performed
  without the CPU being involved in the actual DMA transaction.

  See the :doc:`../rawdevs/dpaa2_qdma` guide for more details.

* **Added DPAA2 Command Interface Driver (in rawdev).**

  The DPAA2 CMDIF is an implementation of the rawdev API that provides
  communication between the GPP and NXP's QorIQ based AIOP Block (Firmware).
  The Advanced IO Processor (AIOP) is a cluster of programmable RISC engines
  optimized for flexible networking and I/O operations. Communication between
  the GPP and the AIOP is achieved using DPCI devices exposed by the MC for
  GPP <--> AIOP interaction.

  See the :doc:`../rawdevs/dpaa2_cmdif` guide for more details.

* **Added device event monitor framework.**

  Added a general device event monitor framework to the EAL for dynamic device
  management, such as device hotplug awareness and the actions adopted
  accordingly. The list of new APIs:

  * ``rte_dev_event_monitor_start`` and ``rte_dev_event_monitor_stop`` for
    enabling and disabling the event monitor.
  * ``rte_dev_event_callback_register`` and ``rte_dev_event_callback_unregister``
    for registering and unregistering the user's callbacks.

  Linux uevent is supported as a backend of this device event notification
  framework.

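  A hedged sketch of how these APIs fit together (the callback body is
  illustrative):

  ```c
  #include <stdio.h>
  #include <rte_dev.h>

  /* Called by the EAL when a device is hotplugged in or out. */
  static void
  on_dev_event(const char *device_name, enum rte_dev_event_type event,
               void *cb_arg)
  {
          (void)cb_arg;
          if (event == RTE_DEV_EVENT_ADD)
                  printf("device %s attached\n", device_name);
          else if (event == RTE_DEV_EVENT_REMOVE)
                  printf("device %s detached\n", device_name);
  }

  static int
  enable_hotplug_monitoring(void)
  {
          /* A NULL device name registers the callback for all devices. */
          if (rte_dev_event_callback_register(NULL, on_dev_event, NULL) < 0)
                  return -1;
          return rte_dev_event_monitor_start();
  }
  ```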
* **Added support for procinfo and pdump on eth vdev.**

  With this feature, for Ethernet virtual devices (like TAP and PCAP), a
  secondary process can get stats/xstats via shared memory and also pdump
  packets on those virtual devices.

* **Advancement to Packet Framework Library.**

  Added new API functions to the Packet Framework library that implement a
  common set of actions, such as traffic metering, packet encapsulation,
  network address translation and TTL update, for pipeline tables and input
  ports, to speed up application development. The API functions include
  creating action profiles, registering actions to the profiles, and
  instantiating action profiles for pipeline tables and input ports.

* **Added the BPF Library.**

  The BPF library provides the ability to load and execute
  extended Berkeley Packet Filter (eBPF) bytecode within user-space DPDK
  applications.
  It also introduces a basic framework to load/unload BPF-based filters
  on eth devices (currently only via SW Rx/Tx callbacks).
  It also adds a dependency on libelf.

API Changes
-----------

.. This section should contain API changes. Sample format:

   * Add a short 1-2 sentence description of the API change. Use fixed width
     quotes for ``rte_function_names`` or ``rte_struct_names``. Use the past
     tense.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* service cores: no longer marked as experimental.

  The service cores functions are no longer marked as experimental, and have
  become part of the normal DPDK API and ABI. Any future ABI changes will be
  announced at least one release before the ABI change is made. There are no
  ABI breaking changes planned.

* eal: ``rte_lcore_has_role()`` return value changed.

  This function now returns true or false, respectively,
  rather than 0 or < 0 for success or failure,
  which makes use of the function more intuitive.

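  In practice the return value can now be used directly as a condition, as in
  this small sketch:

  ```c
  #include <rte_lcore.h>

  /* Sketch: the result is now a plain boolean rather than 0 / negative. */
  static int
  is_service_lcore(unsigned int lcore_id)
  {
          return rte_lcore_has_role(lcore_id, ROLE_SERVICE);
  }
  ```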
* mempool: capability flags and related functions have been removed.

  The flags ``MEMPOOL_F_CAPA_PHYS_CONTIG`` and
  ``MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS`` were used by the octeontx mempool
  driver to customize generic mempool library behaviour.
  Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
  used to achieve this without specific knowledge in the generic code.

* mempool: xmem functions have been deprecated:

  - ``rte_mempool_xmem_create``
  - ``rte_mempool_xmem_size``
  - ``rte_mempool_xmem_usage``
  - ``rte_mempool_populate_iova_tab``

* mbuf: The control mbuf API has been removed in v18.05. The impacted
  functions and macros are:

  - ``rte_ctrlmbuf_init()``
  - ``rte_ctrlmbuf_alloc()``
  - ``rte_ctrlmbuf_free()``
  - ``rte_ctrlmbuf_data()``
  - ``rte_ctrlmbuf_len()``
  - ``rte_is_ctrlmbuf()``

  The packet mbuf API should be used as a replacement.

* meter: updated to accommodate configuration profiles.

  The meter API has changed to support meter configuration profiles. A
  configuration profile represents the set of configuration parameters
  for a given meter object, such as the rates and sizes for the token
  buckets. These configuration parameters were previously part of the meter
  object's internal data structure. Separating the configuration parameters
  from the meter object data structure reduces its memory footprint, which
  helps cache utilization when a large number of meter objects are used.

* ethdev: The function ``rte_eth_dev_count``, often misused to iterate
  over ports, is deprecated and replaced by ``rte_eth_dev_count_avail``.
  There is also a new function, ``rte_eth_dev_count_total``, to get the
  total number of allocated ports, available or not.
  Hotplug-proof applications should use ``RTE_ETH_FOREACH_DEV`` or
  ``RTE_ETH_FOREACH_DEV_OWNED_BY`` as port iterators.

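  A minimal sketch of the recommended iteration pattern:

  ```c
  #include <rte_ethdev.h>

  /* Count available ports the hotplug-proof way; equivalent to
   * rte_eth_dev_count_avail(), shown only to illustrate the macro. */
  static unsigned int
  count_avail_ports(void)
  {
          uint16_t port_id;
          unsigned int nb = 0;

          RTE_ETH_FOREACH_DEV(port_id)
                  nb++;
          return nb;
  }
  ```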
* ethdev: In struct ``rte_eth_dev_info``, the field ``rte_pci_device *pci_dev``
  has been replaced with the field ``struct rte_device *device``.

* **Changes to semantics of rte_eth_dev_configure() parameters.**

  If both the ``nb_rx_q`` and ``nb_tx_q`` parameters are zero,
  ``rte_eth_dev_configure`` will now use PMD-recommended queue sizes, or if
  recommendations are not provided by the PMD the function will use ethdev
  fall-back values. Previously setting both of the parameters to zero would
  have resulted in ``-EINVAL`` being returned.

* **Changes to semantics of rte_eth_rx_queue_setup() parameters.**

  If the ``nb_rx_desc`` parameter is zero, ``rte_eth_rx_queue_setup`` will
  now use the PMD-recommended Rx ring size, or in the case where the PMD
  does not provide a recommendation, will use an ethdev-provided
  fall-back value. Previously, setting ``nb_rx_desc`` to zero would have
  resulted in an error.

* **Changes to semantics of rte_eth_tx_queue_setup() parameters.**

  If the ``nb_tx_desc`` parameter is zero, ``rte_eth_tx_queue_setup`` will
  now use the PMD-recommended Tx ring size, or in the case where the PMD
  does not provide a recommendation, will use an ethdev-provided
  fall-back value. Previously, setting ``nb_tx_desc`` to zero would have
  resulted in an error.

* ethdev: several changes were made to the flow API.

  * The unused DUP action was removed.
  * Action semantics in flow rules: list order now matters ("first
    to last" instead of "all simultaneously"), repeated actions are now
    all performed, and they no longer individually have (non-)terminating
    properties.
  * Flow rules are now always terminating unless a PASSTHRU action is
    present.
  * C99-style flexible arrays were replaced with standard pointers in the RSS
    action and in RAW pattern item structures due to compatibility issues.
  * The RSS action was modified to no longer rely on the external
    ``struct rte_eth_rss_conf`` and to instead expose its own, more
    appropriately named configuration fields directly
    (``rss_conf->rss_key`` => ``key``,
    ``rss_conf->rss_key_len`` => ``key_len``,
    ``rss_conf->rss_hf`` => ``types``,
    ``num`` => ``queue_num``), with the addition of missing RSS parameters
    (``func`` for the RSS hash function to apply and ``level`` for the
    encapsulation level).
  * The VLAN pattern item (``struct rte_flow_item_vlan``) was modified to
    include the inner EtherType instead of the outer TPID. Its default mask
    was also modified to cover only the VID part (lower 12 bits) of the TCI.
  * A new transfer attribute was added to ``struct rte_flow_attr`` in order
    to clarify the behavior of some pattern items.
  * PF and VF pattern items are now only accepted by PMDs that implement
    them (bnxt and i40e) when the transfer attribute is also present.
  * Pattern item PORT was renamed PHY_PORT to avoid confusion with DPDK port
    IDs.
  * An action counterpart to the PHY_PORT pattern item was added in order to
    redirect matching traffic to a specific physical port.
  * PORT_ID pattern item and actions were added to match and target DPDK
    port IDs at a higher level than PHY_PORT.
  * ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP`` action items were added to
    support the tunnel encapsulation operation for VXLAN and NVGRE type
    tunnel endpoints.
  * ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP`` action items were added to
    support the tunnel decapsulation operation for VXLAN and NVGRE type
    tunnel endpoints.
  * The ``RTE_FLOW_ACTION_TYPE_JUMP`` action item was added to support
    redirecting a matched flow to a specific group.
  * The ``RTE_FLOW_ACTION_TYPE_MARK`` item type has been added to match a
    flow against a previously marked flow.

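  As a sketch of the new JUMP action (port and group numbers below are
  illustrative), a rule in group 0 can redirect all ingress traffic to
  group 1:

  ```c
  #include <rte_flow.h>

  static struct rte_flow *
  jump_all_to_group_1(uint16_t port_id, struct rte_flow_error *error)
  {
          struct rte_flow_attr attr = { .group = 0, .ingress = 1 };
          struct rte_flow_item pattern[] = {
                  { .type = RTE_FLOW_ITEM_TYPE_ETH }, /* any Ethernet frame */
                  { .type = RTE_FLOW_ITEM_TYPE_END },
          };
          struct rte_flow_action_jump jump = { .group = 1 };
          struct rte_flow_action actions[] = {
                  { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
                  { .type = RTE_FLOW_ACTION_TYPE_END },
          };

          return rte_flow_create(port_id, &attr, pattern, actions, error);
  }
  ```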
* ethdev: changes to flow APIs regarding the count action:

  * The ``rte_flow_create()`` API count action now requires the
    ``struct rte_flow_action_count`` configuration.
  * The ``rte_flow_query()`` API parameter changed from action type to
    action structure.

* ethdev: changes to the offload API.

  A pure per-port offload no longer needs to be repeated in the
  ``[rt]x_conf->offloads`` argument of ``rte_eth_[rt]x_queue_setup()``.
  Any offload enabled in ``rte_eth_dev_configure()`` can no longer be
  disabled by ``rte_eth_[rt]x_queue_setup()``. Any newly added offload that
  has not been enabled in ``rte_eth_dev_configure()`` and is requested in
  ``rte_eth_[rt]x_queue_setup()`` must be a per-queue type offload,
  otherwise an error log is triggered.

* ethdev: runtime queue setup.

  ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup`` can now be called
  after ``rte_eth_dev_start`` if the device supports runtime queue setup.
  Device drivers can expose this capability through ``rte_eth_dev_info_get``.
  An Rx or Tx queue set up at runtime needs to be started explicitly by
  ``rte_eth_dev_rx_queue_start`` or ``rte_eth_dev_tx_queue_start``.

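  A sketch of the runtime setup sequence, assuming the new ``dev_capa``
  capability bits in ``rte_eth_dev_info`` (the ring size and NUMA handling
  are simplified):

  ```c
  #include <errno.h>
  #include <rte_ethdev.h>
  #include <rte_lcore.h>

  /* Add an Rx queue after rte_eth_dev_start(), if the PMD allows it. */
  static int
  add_rx_queue_at_runtime(uint16_t port_id, uint16_t qid,
                          struct rte_mempool *mp)
  {
          struct rte_eth_dev_info dev_info;

          rte_eth_dev_info_get(port_id, &dev_info);
          if (!(dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
                  return -ENOTSUP;
          if (rte_eth_rx_queue_setup(port_id, qid, 512, rte_socket_id(),
                                     NULL, mp) < 0)
                  return -1;
          /* queues set up at runtime must be started explicitly */
          return rte_eth_dev_rx_queue_start(port_id, qid);
  }
  ```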
ABI Changes
-----------

.. This section should contain ABI changes. Sample format:

   * Add a short 1-2 sentence description of the ABI change that was announced
     in the previous releases and made in this release. Use fixed width quotes
     for ``rte_function_names`` or ``rte_struct_names``. Use the past tense.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* ring: The alignment constraints on the ring structure have been relaxed
  to one cache line instead of two, and an empty cache line of padding is
  added between the producer and consumer structures. The size of the
  structure and the offset of the fields remain the same on platforms
  with a 64B cache line, but change on other platforms.

* mempool: ops have changed.

  A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
  to allow customization of the required memory size calculation.
  A new callback ``populate`` has been added to ``rte_mempool_ops``
  to allow customized object population.
  Callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
  since its features are covered by the ``calc_mem_size`` and ``populate``
  callbacks.
  Callback ``register_memory_area`` has been removed from ``rte_mempool_ops``
  since the new callback ``populate`` may be used instead.

* **Additional fields in rte_eth_dev_info.**

  The ``rte_eth_dev_info`` structure has had two extra entries appended to the
  end of it: ``default_rxportconf`` and ``default_txportconf``. Each of these
  in turn is an ``rte_eth_dev_portconf`` structure containing three fields of
  type ``uint16_t``: ``burst_size``, ``ring_size``, and ``nb_queues``. These
  are parameter values recommended for use by the PMD.

* ethdev: ABI for all flow API functions was updated.

  This includes functions ``rte_flow_copy``, ``rte_flow_create``,
  ``rte_flow_destroy``, ``rte_flow_error_set``, ``rte_flow_flush``,
  ``rte_flow_isolate``, ``rte_flow_query`` and ``rte_flow_validate``, due to
  changes in error type definitions (``enum rte_flow_error_type``), removal
  of the unused DUP action (``enum rte_flow_action_type``), modified
  behavior for flow rule actions (see API changes), removal of the C99
  flexible array from the RAW pattern item (``struct rte_flow_item_raw``),
  complete rework of the RSS action definition
  (``struct rte_flow_action_rss``), a sanity fix in the VLAN pattern item
  (``struct rte_flow_item_vlan``) and the new transfer attribute
  (``struct rte_flow_attr``).

* **New parameter added to rte_bbdev_op_cap_turbo_dec.**

  A new parameter ``max_llr_modulus`` has been added to the
  ``rte_bbdev_op_cap_turbo_dec`` structure to specify the maximal LLR
  (likelihood ratio) absolute value.

* **BBdev Queue Groups split into UL/DL Groups.**

  Queue Groups have been split into UL/DL Groups in the Turbo Software
  Driver. They are independent for Decode/Encode.
  ``rte_bbdev_driver_info`` reflects this change.

Removed Items
-------------

.. This section should contain removed items in this release. Sample format:

   * Add a short 1-2 sentence description of the removed item in the past
     tense.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

Known Issues
------------

.. This section should contain new known issues in this release. Sample
   format:

   * **Add title in present tense with full stop.**

     Add a short 1-2 sentence description of the known issue in the present
     tense. Add information on any known workarounds.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **Secondary process launch is not reliable.**

  Recent memory hotplug patches have made multiprocess startup less reliable
  than it was in the past. A number of workarounds are known to work depending
  on the circumstances. As such it isn't recommended to use the secondary
  process mechanism for critical systems. The underlying issues will be
  addressed in upcoming releases.

  The issue is explained in more detail, including potential workarounds,
  in the Bugzilla entry referenced below.

  Bugzilla entry: https://dpdk.org/tracker/show_bug.cgi?id=50

* **pdump is not compatible with old applications.**

  As pdump negotiation now uses generic multi-process communication instead
  of the previous dedicated unix socket method, pdump applications, including
  the dpdk-pdump example and any other applications using librte_pdump,
  cannot work with older-version DPDK primary applications.

* **rte_abort takes a long time on FreeBSD.**

  DPDK processes now allocate a large area of virtual memory address space.
  As a result, during ``rte_abort`` FreeBSD now dumps the contents of the
  whole reserved memory range, not just the used portion, to a core dump
  file. Writing this large core file can take a significant amount of time,
  causing processes to appear hung on the system.

  The workaround for the issue is to set the system resource limits for core
  dumps before running any tests, e.g. ``limit coredumpsize 0``. This will
  effectively disable core dumps on FreeBSD. If they are not to be completely
  disabled, a suitable limit, e.g. 1G, might be specified instead of 0. This
  needs to be run per-shell session, or before every test run. This change
  can also be made persistent by adding ``kern.coredump=0`` to
  ``/etc/sysctl.conf``.

  Bugzilla entry: https://dpdk.org/tracker/show_bug.cgi?id=53

* **ixgbe PMD crash on hotplug detach when no VF created.**

  The ixgbe PMD uninit path causes a null pointer dereference, due to port
  representor cleanup, when the number of VFs is zero.

  Bugzilla entry: https://dpdk.org/tracker/show_bug.cgi?id=57

* **Bonding PMD may fail to accept new slaves in certain conditions.**

  In certain conditions when using testpmd,
  bonding may fail to register new slave ports.

  Bugzilla entry: https://dpdk.org/tracker/show_bug.cgi?id=52

* **Unexpected performance regression in Vhost library.**

  Patches fixing CVE-2018-1059 were expected to introduce a small performance
  drop. However, in some setups, bigger performance drops have been measured
  when running micro-benchmarks.

  Bugzilla entry: https://dpdk.org/tracker/show_bug.cgi?id=48

Shared Library Versions
-----------------------

.. Update any library version updated in this release and prepend with a
   ``+`` sign, like this:

     librte_acl.so.2
   + librte_cfgfile.so.2
     librte_eal.so.6

   This section is a comment. Do not overwrite or remove it.
   =========================================================

The libraries prepended with a plus sign were incremented in this version.

     librte_bitratestats.so.2
     librte_bus_fslmc.so.1
   + librte_common_octeontx.so.1
   + librte_compressdev.so.1
     librte_cryptodev.so.4
     librte_distributor.so.1
   + librte_eventdev.so.4
     librte_flow_classify.so.1
     librte_latencystats.so.1
   + librte_mempool.so.4
     librte_pmd_ixgbe.so.2
   + librte_pmd_dpaa2_cmdif.so.1
   + librte_pmd_dpaa2_qdma.so.1
     librte_pmd_softnic.so.1
     librte_pmd_vhost.so.2

Tested Platforms
----------------

.. This section should contain a list of platforms that were tested with this
   release. Sample format:

   * <vendor> platform with <vendor> <type of devices> combinations

     * List of CPU
     * List of OS
     * List of devices
     * Other relevant details...

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* Intel(R) platforms with Intel(R) NICs combinations

  * CPU:

    * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
    * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
    * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
    * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
    * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.3
    * SUSE Enterprise Linux 12

  * NICs:

    * Intel(R) 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 5.2.3 (ixgbe)

    * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800003e7
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 4.4.6 (ixgbe)

    * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 6.01 0x80003221
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 2.4.6 (i40e)

    * Intel Corporation Ethernet Connection X722 for 10GbE SFP+ (4x10G)

      * Firmware version: 3.33 0x80000fd5 0.0.0
      * Device id (pf/vf): 8086:37d0 / 8086:37cd
      * Driver version: 2.4.3 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

      * Firmware version: 6.01 0x80003221
      * Device id (pf/vf): 8086:158b / 8086:154c
      * Driver version: 2.4.6 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

      * Firmware version: 6.01 0x8000321c
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 2.4.6 (i40e)

    * Intel(R) Corporation I350 Gigabit Network Connection

      * Firmware version: 1.63, 0x80000dda
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.4.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

  * CPU:

    * Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz
    * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
    * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
    * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.5 (Maipo)
    * Red Hat Enterprise Linux Server release 7.4 (Maipo)
    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)
    * SUSE Linux Enterprise Server 15

  * MLNX_OFED: 4.2-1.0.0.0
  * MLNX_OFED: 4.3-2.0.2.0

  * NICs:

    * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000 and above

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.21.1000 and above

    * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.21.1000 and above

* ARM platforms with Mellanox(R) NICs combinations

  * CPU:

    * Qualcomm ARM 1.1 2500MHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.5 (Maipo)

  * NICs:

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.22.0428

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.22.0428

* ARM SoC combinations from Cavium (with integrated NICs)

  * OS:

    * Ubuntu 16.04.2 LTS with Cavium SDK-6.2.0-Patch2 release support package.

* ARM SoC combinations from NXP (with integrated NICs)

  * SoC:

    * NXP/Freescale QorIQ LS1046A with ARM Cortex A72
    * NXP/Freescale QorIQ LS2088A with ARM Cortex A72

  * OS:

    * Ubuntu 16.04.3 LTS with NXP QorIQ LSDK 1803 support packages