X-Git-Url: http://git.droids-corp.org/?a=blobdiff_plain;f=doc%2Fguides%2Fprog_guide%2Fpoll_mode_drv.rst;h=68312898448cb857e45c7a18584022d8c40e677c;hb=34fd4373ce76efd0236e59397c495762c2ec9e64;hp=6fae39f9048a50120538c7c03ce25db38a92c1f3;hpb=5a10413c5812be50a0c03c5ea5622a7b3cd7dbad;p=dpdk.git

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 6fae39f904..6831289844 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -57,7 +57,7 @@ Whenever needed and appropriate, asynchronous communication should be introduced
 
 Avoiding lock contention is a key issue in a multi-core environment.
 To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -119,7 +119,7 @@ This is also true for the pipe-line model provided all logical cores used are lo
 
 Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
 
-If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
 concurrently on the same tx queue without SW lock. This PMD feature found in some NICs and useful in the following use cases:
 
 * Remove explicit spinlock in some applications where lcores are not mapped to Tx queues with 1:1 relation.
@@ -127,7 +127,7 @@ concurrently on the same tx queue without SW lock. This PMD feature found in som
 
 * In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus enables more scaling as all workers can send the packets.
 
-See `Hardware Offload`_ for ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
+See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.
 
 Device Identification, Ownership and Configuration
 --------------------------------------------------
@@ -311,7 +311,7 @@ The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get(
 The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
 Supported offloads can be either per-port or per-queue.
 
-Offloads are enabled using the existing ``DEV_TX_OFFLOAD_*`` or ``DEV_RX_OFFLOAD_*`` flags.
+Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
 Any requested offloading by an application must be within the device capabilities.
 Any offloading is disabled by default if it is not set in the parameter
 ``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` and
@@ -372,11 +372,19 @@ parameters to those ports.
 
 * ``representor`` for a device which supports the creation of representor ports
   this argument allows user to specify which switch ports to enable port
-  representors for.::
-
-     -w DBDF,representor=0
-     -w DBDF,representor=[0,4,6,9]
-     -w DBDF,representor=[0-31]
+  representors for. Multiple representors in one device argument is invalid::
+
+     -a DBDF,representor=vf0
+     -a DBDF,representor=vf[0,4,6,9]
+     -a DBDF,representor=vf[0-31]
+     -a DBDF,representor=vf[0,2-4,7,9-11]
+     -a DBDF,representor=sf0
+     -a DBDF,representor=sf[1,3,5]
+     -a DBDF,representor=sf[0-1023]
+     -a DBDF,representor=sf[0,2-4,7,9-11]
+     -a DBDF,representor=pf1vf0
+     -a DBDF,representor=pf[0-1]sf[0-127]
+     -a DBDF,representor=pf1
 
 Note: PMDs are not required to support the standard device arguments and users
 should consult the relevant PMD documentation to see support devargs.
@@ -600,9 +608,9 @@ thread safety all these operations should be called from the same thread.
 For example when PF is reset, the PF sends a message to notify VFs of
 this event and also trigger an interrupt to VFs. Then in the interrupt
 service routine the VFs detects this notification message and calls
-_rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
+rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
 This means that a PF reset triggers an RTE_ETH_EVENT_INTR_RESET
-event within VFs. The function _rte_eth_dev_callback_process() will
+event within VFs. The function rte_eth_dev_callback_process() will
 call the registered callback function. The callback function can
 trigger the application to handle all operations the VF reset requires
 including stopping Rx/Tx queues and calling rte_eth_dev_reset().
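For context on the ``representor`` hunk above: the examples switch from the deprecated ``-w`` (PCI whitelist) EAL option to its replacement ``-a`` (allow). A minimal sketch of how the new syntax would be used on the command line, assuming a hypothetical PCI address ``0000:03:00.0`` and a NIC whose PMD supports representor ports (this is a usage illustration, not part of the patch):

```shell
# Hypothetical invocation: probe device 0000:03:00.0 and create
# representor ports for VFs 0-3 (requires SR-IOV VFs to already exist).
dpdk-testpmd -a 0000:03:00.0,representor=vf[0-3] -- -i
```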