Also, make sure to start the actual text at the margin.
=========================================================
+* **Reworked memory subsystem.**
+
+ The memory subsystem has been reworked to support new functionality.
+
+ On Linux, support for reserving and unreserving hugepage memory at runtime was
+ added, so applications no longer need to pre-reserve memory at startup. Due to
+ the reorganized internal workings of the memory subsystem, any memory allocated
+ through ``rte_malloc()`` or ``rte_memzone_reserve()`` is no longer guaranteed
+ to be IOVA-contiguous.
+
+ This functionality has introduced the following changes:
+
+ * ``rte_eal_get_physmem_layout()`` was removed
+ * A new flag for memzone reservation (``RTE_MEMZONE_IOVA_CONTIG``) was added
+   to ensure reserved memory will be IOVA-contiguous, for use with device
+   drivers and other cases requiring such memory (see the sketch after this list)
+ * New callbacks for memory allocation/deallocation events, allowing users (or
+   drivers) to be notified of new memory being allocated or deallocated
+ * New callbacks for validating memory allocations above a specified limit,
+   allowing the user to permit or deny memory allocations
+ * A new command-line switch ``--legacy-mem`` to enable EAL behavior similar to
+ how older versions of DPDK worked (memory segments that are IOVA-contiguous,
+ but hugepages are reserved at startup only, and can never be released)
+ * A new command-line switch ``--single-file-segments`` to put all memory
+ segments within a segment list in a single file
+ * A set of convenience function calls to look up and iterate over allocated
+ memory segments
+ * The ``-m`` and ``--socket-mem`` command-line arguments now carry an additional
+   meaning: they mark pre-reserved hugepages as "unfree-able", thereby acting as
+   a mechanism that guarantees a minimum availability of hugepage memory to the
+   application
+
+ Reserving/unreserving memory at runtime is not currently supported on FreeBSD.
+
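+ A minimal sketch of the new ``RTE_MEMZONE_IOVA_CONTIG`` flag follows; the
+ memzone name and size here are hypothetical, and error handling is abbreviated:
+
+ .. code-block:: c
+
+    #include <rte_memzone.h>
+    #include <rte_lcore.h>
+
+    static const struct rte_memzone *
+    reserve_hw_ring(void)
+    {
+        /* Without RTE_MEMZONE_IOVA_CONTIG the reservation may now span
+         * IOVA-discontiguous pages; with it, the reservation is either
+         * IOVA-contiguous or fails. */
+        return rte_memzone_reserve("hw_ring_mz", 4096,
+                rte_socket_id(), RTE_MEMZONE_IOVA_CONTIG);
+    }
+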
* **Added bucket mempool driver.**
Added a bucket mempool driver which provides a way to allocate contiguous
blocks of objects. The number of objects per block may be obtained using the
rte_mempool_ops_get_info() API, and contiguous blocks may be allocated using
the rte_mempool_get_contig_blocks() API, as sketched below.
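A minimal sketch of contiguous block allocation (assuming ``mp`` was created
with the bucket mempool ops; error handling is abbreviated):

.. code-block:: c

   #include <rte_mempool.h>

   /* Query the driver's contiguous block size, then allocate one block. */
   static int
   get_one_block(struct rte_mempool *mp, void **first_obj)
   {
       struct rte_mempool_info info;

       if (rte_mempool_ops_get_info(mp, &info) != 0)
           return -1;
       /* each block holds info.contig_block_size objects */
       return rte_mempool_get_contig_blocks(mp, first_obj, 1);
   }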
+* **Added support for port representors.**
+
+ Added DPDK port representors (also known as "VF representors" in the specific
+ context of VFs), which are to DPDK what the Ethernet switch device driver
+ model (**switchdev**) is to Linux, and which can be thought of as a software
+ "patch panel" front-end for applications. DPDK port representors are
+ implemented as additional virtual Ethernet device (**ethdev**) instances,
+ spawned on an as-needed basis through configuration parameters passed to the
+ driver of the underlying device using devargs.
+
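+ As an illustration only (the PCI address and representor IDs below are
+ hypothetical, and the exact devargs syntax is driver-dependent), representor
+ ports might be requested like this:
+
+ .. code-block:: console
+
+    testpmd -w 0000:82:00.0,representor=[0-3] -- -i
+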
+* **Added support for VXLAN and NVGRE tunnel endpoint.**
+
+ New action types have been added to support encapsulation and decapsulation
+ operations for a tunnel endpoint. The new action types are
+ ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP``, ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP``
+ and ``RTE_FLOW_ACTION_TYPE_JUMP``. A new item type, ``RTE_FLOW_ITEM_TYPE_MARK``,
+ has been added to match a flow against a previously marked flow. A shared
+ counter has also been introduced to the flow API to count a group of flows.
+
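+ A minimal sketch of a rule using the new actions (the pattern, group number
+ and omitted error handling are illustrative assumptions):
+
+ .. code-block:: c
+
+    #include <rte_flow.h>
+
+    /* Strip the VXLAN outer headers and continue matching in group 1. */
+    static struct rte_flow *
+    decap_and_jump(uint16_t port_id)
+    {
+        struct rte_flow_attr attr = { .ingress = 1 };
+        struct rte_flow_item pattern[] = {
+            { .type = RTE_FLOW_ITEM_TYPE_ETH },
+            { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
+            { .type = RTE_FLOW_ITEM_TYPE_UDP },
+            { .type = RTE_FLOW_ITEM_TYPE_VXLAN },
+            { .type = RTE_FLOW_ITEM_TYPE_END },
+        };
+        struct rte_flow_action_jump jump = { .group = 1 };
+        struct rte_flow_action actions[] = {
+            { .type = RTE_FLOW_ACTION_TYPE_VXLAN_DECAP },
+            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
+            { .type = RTE_FLOW_ACTION_TYPE_END },
+        };
+        struct rte_flow_error err;
+
+        return rte_flow_create(port_id, &attr, pattern, actions, &err);
+    }
+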
* **Added PMD-recommended Tx and Rx parameters.**
Applications can now query drivers for device-tuned values of
ring sizes, burst sizes, and so forth.
CXGBE VF Poll Mode Driver has been added to run DPDK over Chelsio
T5/T6 NIC VF instances.
+* **Updated mlx5 driver.**
+
+ Updated the mlx5 driver including the following changes:
+
+ * Introduced Multi-packet Rx, enabling 100 Gb/s with 64B frames.
+ * Added support for being run by non-root users given a reduced set of
+   capabilities: ``CAP_NET_ADMIN``, ``CAP_NET_RAW`` and ``CAP_IPC_LOCK``.
+ * Added TSO and checksum support for generic UDP and IP tunnels.
+ * Added inner checksum and RSS support for GRE, VXLAN-GPE, MPLSoGRE
+   and MPLSoUDP tunnels.
+ * Accommodated the new memory hotplug model.
+ * Added support for mempools that are not virtually contiguous.
+ * Added support for MAC address addition along with allmulti and promiscuous
+   modes from the VF.
+ * Added support for the Mellanox BlueField SoC device.
+ * Added PMD defaults for queue number and depth to improve out-of-the-box
+   performance.
+
+* **Updated mlx4 driver.**
+
+ Updated the mlx4 driver including the following changes:
+
+ * Added support for being run by non-root users given a reduced set of
+   capabilities: ``CAP_NET_ADMIN``, ``CAP_NET_RAW`` and ``CAP_IPC_LOCK``.
+ * Added support for CRC strip toggling.
+ * Accommodated the new memory hotplug model.
+ * Added support for mempools that are not virtually contiguous.
+ * Dropped support for Mellanox OFED 4.2.
+
* **Updated Solarflare network PMD.**
Updated the sfc_efx driver including the following changes:
The ARM CPU subsystem features eight ARMv8 Cortex-A72 CPUs at 3.0 GHz, arranged in a multi-cluster
configuration.
+* **Added vDPA in vhost-user lib.**
+
+ Added support for selective datapath in the vhost-user lib. vDPA stands for
+ vhost Data Path Acceleration. It allows virtio-ring-compatible devices to
+ serve the virtio driver directly, enabling datapath acceleration.
+
* **Added IFCVF vDPA driver.**
Added IFCVF vDPA driver to support Intel FPGA 100G VF device. IFCVF works
driver the assigned VF gets configured to Rx/Tx directly to VM's virtio
vrings.
+* **Added support for vhost dequeue interrupt mode.**
+
+ Added support for vhost dequeue interrupt mode, releasing CPUs to other tasks
+ when there is no data to transmit. Applications can register an epoll event fd
+ to associate Rx queues with interrupt vectors.
+
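+ A minimal sketch of the interrupt-driven dequeue path (``vid``, the mbuf pool
+ and the use of vring 1 as the guest TX ring are assumptions; error handling
+ is omitted):
+
+ .. code-block:: c
+
+    #include <sys/epoll.h>
+    #include <rte_vhost.h>
+
+    static void
+    wait_and_dequeue(int vid, struct rte_mempool *mbuf_pool)
+    {
+        struct rte_vhost_vring vring;
+        struct rte_mbuf *pkts[32];
+        struct epoll_event ev = { .events = EPOLLIN }, out;
+        int epfd = epoll_create(1);
+
+        rte_vhost_get_vhost_vring(vid, 1, &vring); /* vring 1: guest TX */
+        ev.data.fd = vring.kickfd;
+        epoll_ctl(epfd, EPOLL_CTL_ADD, vring.kickfd, &ev);
+        epoll_wait(epfd, &out, 1, -1); /* sleep until the guest kicks */
+        rte_vhost_dequeue_burst(vid, 1, mbuf_pool, pkts, 32);
+    }
+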
* **Added support for virtio-user server mode.**
In a container environment if the vhost-user backend restarts, there's no way
for it to reconnect to virtio-user. To address this, support for server mode
has been added.
* AES-CMAC (128-bit key).
+* **Added Compressdev Library, a generic compression service library.**
+
+ The compressdev library provides an API for offload of compression and
+ decompression operations to hardware or software accelerator devices.
+
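+ A minimal sketch of the burst-oriented API (device 0, queue pair 0 and the
+ pre-built ops are assumptions; op and xform setup are omitted, and ``n`` is
+ assumed to be at most 32):
+
+ .. code-block:: c
+
+    #include <rte_compressdev.h>
+
+    static void
+    run_burst(struct rte_comp_op **ops, uint16_t n)
+    {
+        struct rte_comp_op *done[32];
+        uint16_t enq, deq = 0;
+
+        /* hand a batch of pre-built compression ops to the device ... */
+        enq = rte_compressdev_enqueue_burst(0, 0, ops, n);
+        /* ... and poll until all accepted ops have completed */
+        while (deq < enq)
+            deq += rte_compressdev_dequeue_burst(0, 0,
+                    done + deq, enq - deq);
+    }
+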
+* **Added a new compression poll mode driver using Intel's ISA-L.**
+
+ Added the new ``ISA-L`` compression driver, for compression and decompression
+ operations in software. See the :doc:`../compressdevs/isal` compression driver
+ guide for details on this new driver.
+
* **Added the Event Timer Adapter Library.**
The Event Timer Adapter Library extends the event-based model by introducing
enqueue/dequeue crypto operations to/from cryptodev as events scheduled
by an event device.
+* **Added Ifpga Bus, a generic Intel FPGA Bus library.**
+
+ The Ifpga Bus library provides support for integrating any Intel FPGA device
+ with the DPDK framework. It provides Intel FPGA Partial Bit Stream AFU
+ (Accelerated Function Unit) scanning and driver probing.
+
+* **Added IFPGA (Intel FPGA) Rawdev Driver.**
+
+ Added a new Rawdev driver called the IFPGA (Intel FPGA) Rawdev Driver, which
+ cooperates with the OPAE (Open Programmable Acceleration Engine) shared code
+ to provide common FPGA management ops for FPGA operation.
+
+ See the :doc:`../rawdevs/ifpga_rawdev` programmer's guide for more details.
+
* **Added DPAA2 QDMA Driver (in rawdev).**
The DPAA2 QDMA is an implementation of the rawdev API that provides a means
to initiate a DMA transaction from the CPU.
stats/xstats on shared memory from secondary process, and also pdump packets on
those virtual devices.
+* **Advancement to Packet Framework Library.**
+
+ Added new API functions to the Packet Framework library that implement a
+ common set of actions, such as traffic metering, packet encapsulation,
+ network address translation and TTL update, for pipeline tables and input
+ ports, in order to speed up application development. The API functions
+ include creating action profiles, registering actions with the profiles, and
+ instantiating action profiles for pipeline tables and input ports.
+
+* **Added the BPF Library.**
+
+ The BPF Library provides the ability to load and execute Enhanced Berkeley
+ Packet Filter (eBPF) bytecode within user-space DPDK applications. It also
+ introduces a basic framework to load/unload BPF-based filters on ethdev
+ devices (currently only via SW Rx/Tx callbacks), and adds a dependency on
+ libelf.
+
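+ A minimal sketch of loading and running an eBPF program (the ELF file name,
+ section name and argument size are assumptions):
+
+ .. code-block:: c
+
+    #include <stddef.h>
+    #include <stdint.h>
+    #include <rte_bpf.h>
+
+    static void
+    run_filter(void *buf, size_t buf_len)
+    {
+        struct rte_bpf_prm prm = {
+            .prog_arg = { .type = RTE_BPF_ARG_PTR, .size = buf_len },
+        };
+        struct rte_bpf *bpf = rte_bpf_elf_load(&prm, "filter.o", ".text");
+
+        if (bpf != NULL) {
+            uint64_t rc = rte_bpf_exec(bpf, buf); /* filter verdict */
+            (void)rc;
+            rte_bpf_destroy(bpf);
+        }
+    }
+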
API Changes
-----------
redirect matching traffic to a specific physical port.
* PORT_ID pattern item and actions were added to match and target DPDK
port IDs at a higher level than PHY_PORT.
+ * ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP`` action items were added to
+   support the tunnel encapsulation operation for VXLAN and NVGRE tunnel
+   endpoints.
+ * ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP`` action items were added to
+   support the tunnel decapsulation operation for VXLAN and NVGRE tunnel
+   endpoints.
+ * The ``RTE_FLOW_ACTION_TYPE_JUMP`` action item was added to support
+   redirecting a matched flow to a specific group.
+ * The ``RTE_FLOW_ITEM_TYPE_MARK`` item type was added to match a flow against
+   a previously marked flow.
* ethdev: change flow APIs regarding count action:
* ``rte_flow_create()`` API count action now requires the ``struct rte_flow_action_count``.
* ``rte_flow_query()`` API parameter changed from action type to action structure.
+* ethdev: changes to offload API
+
+ A pure per-port offload no longer needs to be repeated in ``[rt]x_conf->offloads``
+ passed to ``rte_eth_[rt]x_queue_setup()``. Any offload enabled in
+ ``rte_eth_dev_configure()`` can no longer be disabled by
+ ``rte_eth_[rt]x_queue_setup()``. Any newly added offload which has not been
+ enabled in ``rte_eth_dev_configure()`` and is requested to be enabled in
+ ``rte_eth_[rt]x_queue_setup()`` must be a per-queue offload, otherwise an
+ error log is triggered.
+
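+ A minimal sketch of the intended usage (the port, queue count, descriptor
+ number and the 18.05-era ``ignore_offload_bitfield`` transition field are
+ assumptions):
+
+ .. code-block:: c
+
+    #include <rte_ethdev.h>
+
+    static int
+    configure_with_port_offloads(uint16_t port_id, struct rte_mempool *pool)
+    {
+        struct rte_eth_conf conf = { 0 };
+        struct rte_eth_rxconf rxq_conf = { 0 };
+
+        /* enable a pure per-port offload once, at configure time */
+        conf.rxmode.offloads = DEV_RX_OFFLOAD_CHECKSUM;
+        conf.rxmode.ignore_offload_bitfield = 1; /* use the offloads field */
+        if (rte_eth_dev_configure(port_id, 1, 1, &conf) != 0)
+            return -1;
+        /* rxq_conf.offloads stays 0: the port-level offload need not be
+         * repeated and cannot be disabled here */
+        return rte_eth_rx_queue_setup(port_id, 0, 512,
+                rte_eth_dev_socket_id(port_id), &rxq_conf, pool);
+    }
+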
+* ethdev: runtime queue setup:
+
+ ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup`` can now be called
+ after ``rte_eth_dev_start`` if the device supports runtime queue setup. The
+ device driver can expose this capability through ``rte_eth_dev_info_get``. An
+ Rx or Tx queue set up at runtime needs to be started explicitly by
+ ``rte_eth_dev_rx_queue_start`` or ``rte_eth_dev_tx_queue_start``.
+
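+ A minimal sketch (queue id, descriptor count and mempool are assumptions;
+ error handling is abbreviated):
+
+ .. code-block:: c
+
+    #include <errno.h>
+    #include <rte_ethdev.h>
+
+    static int
+    add_rx_queue_at_runtime(uint16_t port_id, uint16_t qid,
+            struct rte_mempool *pool)
+    {
+        struct rte_eth_dev_info info;
+
+        rte_eth_dev_info_get(port_id, &info);
+        if (!(info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP))
+            return -ENOTSUP;
+        /* the port is already started; set the queue up now ... */
+        if (rte_eth_rx_queue_setup(port_id, qid, 512,
+                rte_eth_dev_socket_id(port_id), NULL, pool) != 0)
+            return -1;
+        /* ... and start it explicitly */
+        return rte_eth_dev_rx_queue_start(port_id, qid);
+    }
+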
ABI Changes
-----------
sanity fix in the VLAN pattern item (``struct rte_flow_item_vlan``) and
new transfer attribute (``struct rte_flow_attr``).
+* **New parameter added to rte_bbdev_op_cap_turbo_dec.**
+
+ A new parameter ``max_llr_modulus`` has been added to the
+ ``rte_bbdev_op_cap_turbo_dec`` structure to specify the maximal LLR
+ (log-likelihood ratio) absolute value.
+
+* **BBdev Queue Groups split into UL/DL Groups.**
+
+ Queue Groups have been split into UL/DL Groups in the Turbo Software Driver.
+ They are independent for Decode/Encode. ``rte_bbdev_driver_info`` reflects
+ the introduced changes.
+
Removed Items
-------------
Also, make sure to start the actual text at the margin.
=========================================================
+* **Secondary process launch is not reliable.**
+
+ Recent memory hotplug patches have made multiprocess startup less reliable
+ than it was in the past. A number of workarounds are known to work depending
+ on the circumstances. As such, it is not recommended to use the secondary
+ process mechanism for critical systems. The underlying issues will be
+ addressed in upcoming releases.
+
+ The issue is explained in more detail, including potential workarounds,
+ in the Bugzilla entry referenced below.
+
+ Bugzilla entry: https://dpdk.org/tracker/show_bug.cgi?id=50
+
* **pdump is not compatible with old applications.**
As we changed to use generic multi-process communication for pdump negotiation
the dpdk-pdump example and any other applications using librte_pdump cannot
work with primary applications running older versions of DPDK.
+* **rte_abort takes a long time on FreeBSD.**
+
+ DPDK processes now allocate a large area of virtual memory address space.
+ With this change, during ``rte_abort()`` FreeBSD now dumps the contents of
+ the whole reserved memory range, not just the used portion, to a core dump
+ file. Writing this large core file can take a significant amount of time,
+ causing processes to appear hung on the system.
+
+ The workaround for the issue is to set the system resource limits for core
+ dumps before running any tests, e.g. ``limit coredumpsize 0``. This will
+ effectively disable core dumps on FreeBSD. If they are not to be completely
+ disabled, a suitable limit, e.g. 1G, might be specified instead of 0. This
+ needs to be run per-shell session, or before every test run. This change can
+ also be made persistent by adding ``kern.coredump=0`` to ``/etc/sysctl.conf``.
+
+ Bugzilla entry: https://dpdk.org/tracker/show_bug.cgi?id=53
+
+* **Bonding PMD may fail to accept new slaves in certain conditions.**
+
+ In certain conditions when using testpmd,
+ bonding may fail to register new slave ports.
+
+ Bugzilla entry: https://dpdk.org/tracker/show_bug.cgi?id=52.
+
Shared Library Versions
-----------------------
librte_acl.so.2
librte_bbdev.so.1
librte_bitratestats.so.2
+ + librte_bpf.so.1
librte_bus_dpaa.so.1
librte_bus_fslmc.so.1
librte_bus_pci.so.1
librte_cfgfile.so.2
librte_cmdline.so.2
+ + librte_common_octeontx.so.1
+ + librte_compressdev.so.1
librte_cryptodev.so.4
librte_distributor.so.1
+ + librte_eal.so.7
+ + librte_ethdev.so.9
- librte_eventdev.so.3
+ + librte_eventdev.so.4
librte_flow_classify.so.1
librte_gro.so.1
librte_gso.so.1
This section is a comment. Do not overwrite or remove it.
Also, make sure to start the actual text at the margin.
=========================================================
+
+* Intel(R) platforms with Intel(R) NICs combinations
+
+ * CPU
+
+ * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
+ * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
+ * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
+ * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
+ * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
+ * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
+ * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
+ * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
+ * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz
+
+ * OS:
+
+ * CentOS 7.4
+ * Fedora 25
+ * Fedora 27
+ * Fedora 28
+ * FreeBSD 11.1
+ * Red Hat Enterprise Linux Server release 7.3
+ * SUSE Enterprise Linux 12
+ * Wind River Linux 8
+ * Ubuntu 14.04
+ * Ubuntu 16.04
+ * Ubuntu 16.10
+ * Ubuntu 17.10
+
+ * NICs:
+
+ * Intel(R) 82599ES 10 Gigabit Ethernet Controller
+
+ * Firmware version: 0x61bf0001
+ * Device id (pf/vf): 8086:10fb / 8086:10ed
+ * Driver version: 5.2.3 (ixgbe)
+
+ * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T
+
+ * Firmware version: 0x800003e7
+ * Device id (pf/vf): 8086:15ad / 8086:15a8
+ * Driver version: 4.4.6 (ixgbe)
+
+ * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)
+
+ * Firmware version: 6.01 0x80003221
+ * Device id (pf/vf): 8086:1572 / 8086:154c
+ * Driver version: 2.4.6 (i40e)
+
+ * Intel Corporation Ethernet Connection X722 for 10GbE SFP+ (4x10G)
+
+ * Firmware version: 3.33 0x80000fd5 0.0.0
+ * Device id (pf/vf): 8086:37d0 / 8086:37cd
+ * Driver version: 2.4.3 (i40e)
+
+ * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)
+
+ * Firmware version: 6.01 0x80003221
+ * Device id (pf/vf): 8086:158b / 8086:154c
+ * Driver version: 2.4.6 (i40e)
+
+ * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)
+
+ * Firmware version: 6.01 0x8000321c
+ * Device id (pf/vf): 8086:1583 / 8086:154c
+ * Driver version: 2.4.6 (i40e)
+
+ * Intel(R) Corporation I350 Gigabit Network Connection
+
+ * Firmware version: 1.63, 0x80000dda
+ * Device id (pf/vf): 8086:1521 / 8086:1520
+ * Driver version: 5.4.0-k (igb)
+
+* Intel(R) platforms with Mellanox(R) NICs combinations
+
+ * CPU:
+
+ * Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz
+ * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
+ * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
+ * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
+ * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
+ * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
+ * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz
+
+ * OS:
+
+ * Red Hat Enterprise Linux Server release 7.5 (Maipo)
+ * Red Hat Enterprise Linux Server release 7.4 (Maipo)
+ * Red Hat Enterprise Linux Server release 7.3 (Maipo)
+ * Red Hat Enterprise Linux Server release 7.2 (Maipo)
+ * Ubuntu 18.04
+ * Ubuntu 17.10
+ * Ubuntu 16.10
+ * Ubuntu 16.04
+ * SUSE Linux Enterprise Server 15
+
+ * MLNX_OFED: 4.2-1.0.0.0
+ * MLNX_OFED: 4.3-2.0.2.0
+
+ * NICs:
+
+ * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1007
+ * Firmware version: 2.42.5000
+
+ * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)
+
+ * Host interface: PCI Express 3.0 x16
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)
+
+ * Host interface: PCI Express 3.0 x16
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 100G MCX415A-CCAT (1x100G)
+
+ * Host interface: PCI Express 3.0 x16
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)
+
+ * Host interface: PCI Express 3.0 x16
+ * Device ID: 15b3:1013
+ * Firmware version: 12.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1015
+ * Firmware version: 14.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1015
+ * Firmware version: 14.21.1000 and above
+
+ * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
+
+ * Host interface: PCI Express 3.0 x16
+ * Device ID: 15b3:1017
+ * Firmware version: 16.21.1000 and above
+
+ * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)
+
+ * Host interface: PCI Express 4.0 x16
+ * Device ID: 15b3:1019
+ * Firmware version: 16.21.1000 and above
+
+* ARM platforms with Mellanox(R) NICs combinations
+
+ * CPU:
+
+ * Qualcomm ARM 1.1 2500MHz
+
+ * OS:
+
+ * Red Hat Enterprise Linux Server release 7.5 (Maipo)
+
+ * NICs:
+
+ * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)
+
+ * Host interface: PCI Express 3.0 x8
+ * Device ID: 15b3:1015
+ * Firmware version: 14.22.0428
+
+ * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)
+
+ * Host interface: PCI Express 3.0 x16
+ * Device ID: 15b3:1017
+ * Firmware version: 16.22.0428
+
+* ARM SoC combinations from Cavium (with integrated NICs)
+
+ * SoC:
+
+ * Cavium CN81xx
+ * Cavium CN83xx
+
+ * OS:
+
+ * Ubuntu 16.04.2 LTS with Cavium SDK-6.2.0-Patch2 release support package.
+
+* ARM SoC combinations from NXP (with integrated NICs)
+
+ * SoC:
+
+ * NXP/Freescale QorIQ LS1046A with ARM Cortex A72
+ * NXP/Freescale QorIQ LS2088A with ARM Cortex A72
+
+ * OS:
+
+ * Ubuntu 16.04.3 LTS with NXP QorIQ LSDK 1803 support packages