The docs contain references to pmd but the proper usage is PMD.
Cc: stable@dpdk.org
Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
* ccp_auth_opt: Specify authentication operations to perform on CPU using openssl APIs.
-To validate ccp pmd, l2fwd-crypto example can be used with following command:
+To validate the ccp PMD, the l2fwd-crypto example can be used with the following command:
.. code-block:: console
Initialization
--------------
-User can use app/test application to check how to use this pmd and to verify
+Users can use the app/test application to check how to use this PMD and to verify
crypto processing.
Test name is cryptodev_openssl_autotest.
- "OOP SGL In SGL Out" feature flag stands for
"Out-of-place Scatter-gather list Input, Scatter-gather list Output",
- which means pmd supports different scatter-gather styled input and output buffers
+ which means the PMD supports different scatter-gather styled input and output buffers
(i.e. both can consists of multiple segments).
- "OOP SGL In LB Out" feature flag stands for
that the implementation can achieve such high throughput and low latency
The following list is a comprehensive outline of the what is supported and
-the limitations / restrictions imposed by the opdl pmd
+the limitations / restrictions imposed by the opdl PMD
- The order in which packets moved between queues is static and fixed \
(dynamic scheduling is not supported).
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OCTEON CN9K/CN10K SoC family NIC has inbuilt HW assisted external mempool manager.
-``net_cnxk`` pmd only works with ``mempool_cnxk`` mempool handler
+``net_cnxk`` PMD only works with ``mempool_cnxk`` mempool handler
as it is performance wise most effective way for packet allocation and Tx buffer
recycling on OCTEON TX2 SoC platform.
Initialization
--------------
-The OCTEON TX ethdev pmd is exposed as a vdev device which consists of a set
+The OCTEON TX ethdev PMD is exposed as a vdev device which consists of a set
of PKI and PKO PCIe VF devices. On EAL initialization,
PKI/PKO PCIe VF devices will be probed and then the vdev device can be created
from the application code, or from the EAL command line based on
Dependency
~~~~~~~~~~
-``eth_octeontx`` pmd is depend on ``event_octeontx`` eventdev device and
+``eth_octeontx`` PMD depends on the ``event_octeontx`` eventdev device and
``octeontx_fpavf`` external mempool handler.
Example:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The OCTEON TX2 SoC family NIC has inbuilt HW assisted external mempool manager.
-``net_octeontx2`` pmd only works with ``mempool_octeontx2`` mempool handler
+``net_octeontx2`` PMD only works with ``mempool_octeontx2`` mempool handler
as it is performance wise most effective way for packet allocation and Tx buffer
recycling on OCTEON TX2 SoC platform.
Multicast MAC filtering
~~~~~~~~~~~~~~~~~~~~~~~
-``net_octeontx2`` pmd supports multicast mac filtering feature only on physical
+``net_octeontx2`` PMD supports the multicast MAC filtering feature only on physical
function devices.
SDP interface support
Inline Protocol Processing
~~~~~~~~~~~~~~~~~~~~~~~~~~
-``net_octeontx2`` pmd doesn't support the following features for packets to be
+``net_octeontx2`` PMD doesn't support the following features for packets to be
inline protocol processed.
- TSO offload
- VLAN/QinQ offload
skip_data_bytes
~~~~~~~~~~~~~~~
This feature is used to create a hole between HEADROOM and actual data. Size of hole is specified
-in bytes as module param("skip_data_bytes") to pmd.
+in bytes as module param("skip_data_bytes") to the PMD.
This scheme is useful when application would like to insert vlan header without disturbing HEADROOM.
Example:
.. code-block:: console
- --vdev '<pmd name>,socket_id=0'
+ --vdev '<PMD name>,socket_id=0'
.. Note::
* pseudocode for stateless compression
*/
- uint8_t cdev_id = rte_compressdev_get_dev_id(<pmd name>);
+ uint8_t cdev_id = rte_compressdev_get_dev_id(<PMD name>);
/* configure the device. */
if (rte_compressdev_configure(cdev_id, &conf) < 0)
* pseudocode for stateful compression
*/
- uint8_t cdev_id = rte_compressdev_get_dev_id(<pmd name>);
+ uint8_t cdev_id = rte_compressdev_get_dev_id(<PMD name>);
/* configure the device. */
if (rte_compressdev_configure(cdev_id, &conf) < 0)
* ``VIRTIO_NET_F_GUEST_UFO``, ``VIRTIO_NET_F_HOST_UFO``
* ``VIRTIO_NET_F_GSO``
- Also added ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature support in virtio pmd.
+ Also added ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature support in virtio PMD.
In a scenario where the vhost backend doesn't have the ability to generate
- RARP packets, the VM running virtio pmd can still be live migrated if
+ RARP packets, the VM running virtio PMD can still be live migrated if
``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature is negotiated.
* **Updated the AESNI-MB PMD.**
* **Added fm10k jumbo frame support.**
Added support for jumbo frame less than 15K in both VF and PF functions in the
- fm10k pmd.
+ fm10k PMD.
* **Added fm10k mac vlan filtering support.**
* Option "builtin-net-driver" is incompatible with QEMU
QEMU vhost net device start will fail if protocol feature is not negotiated.
- DPDK virtio-user pmd can be the replacement of QEMU.
+ DPDK virtio-user PMD can be used as a replacement for QEMU.
* Device start fails when enabling "builtin-net-driver" without memory
pre-allocation
The builtin example doesn't support dynamic memory allocation. When vhost
backend enables "builtin-net-driver", "--socket-mem" option should be
- added at virtio-user pmd side as a startup item.
+ added at virtio-user PMD side as a startup item.
IFCVF's vendor ID and device ID are same as that of virtio net pci device,
with its specific subsystem vendor ID and device ID. To let the device be
probed by IFCVF driver, adding "vdpa=1" parameter helps to specify that this
-device is to be used in vDPA mode, rather than polling mode, virtio pmd will
+device is to be used in vDPA mode, rather than polling mode, virtio PMD will
skip when it detects this message. If no this parameter specified, device
-will not be used as a vDPA device, and it will be driven by virtio pmd.
+will not be used as a vDPA device, and it will be driven by virtio PMD.
Different VF devices serve different virtio frontends which are in different
VMs, so each VF needs to have its own DMA address translation service. During