Introduce a new API to configure Rx offloads.
In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable a per-port offload, the offload must be set on both the device
configuration and the queue configuration. To enable a per-queue offload, it
is enough to set it on the queue configuration only.
Applications should set the ignore_offload_bitfield bit in the rxmode
structure in order to move to the new API.
The old Rx offloads API is kept for the time being, in order to enable a
smooth transition of PMDs and applications to the new API.
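For illustration, below is a minimal application-side sketch of the new API
(port_id, nb_rxd and mb_pool are placeholders, <rte_ethdev.h> is assumed to be
included, error handling is omitted, and which offloads appear in each
capability field is PMD specific):

	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf port_conf = { 0 };
	struct rte_eth_rxconf rxq_conf;

	rte_eth_dev_info_get(port_id, &dev_info);

	/* Opt in to the new API and request a per-port offload, guarded by
	 * the reported per-port capability.
	 */
	port_conf.rxmode.ignore_offload_bitfield = 1;
	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_CHECKSUM)
		port_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;

	rte_eth_dev_configure(port_id, 1, 1, &port_conf);

	/* A per-port offload must be repeated on every queue; a per-queue
	 * offload may be set on the queue configuration alone.
	 */
	rxq_conf = dev_info.default_rxconf;
	rxq_conf.offloads = port_conf.rxmode.offloads;
	if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
		rxq_conf.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;

	rte_eth_rx_queue_setup(port_id, 0, nb_rxd, rte_socket_id(),
			       &rxq_conf, mb_pool);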
Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Supports Rx jumbo frames.
-* **[uses] user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``,
``dev_conf.rxmode.max_rx_pkt_len``.
* **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
* **[related] API**: ``rte_eth_dev_set_mtu()``.
Supports receiving segmented mbufs.
-* **[uses] user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
* **[implements] datapath**: ``Scattered Rx function``.
* **[implements] rte_eth_dev_data**: ``scattered_rx``.
* **[provides] eth_dev_ops**: ``rxq_info_get:scattered_rx``.
Supports Large Receive Offload.
-* **[uses] user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
* **[implements] datapath**: ``LRO functionality``.
* **[implements] rte_eth_dev_data**: ``lro``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
.. _nic_features_tso:
Supports filtering of a VLAN Tag identifier.
-* **[uses] user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
* **[implements] eth_dev_ops**: ``vlan_filter_set``.
* **[related] API**: ``rte_eth_dev_vlan_filter()``.
Supports CRC stripping by hardware.
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
.. _nic_features_vlan_offload:
Supports VLAN offload to hardware.
-* **[uses] user config**: ``dev_conf.rxmode.hw_vlan_strip``,
- ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
* **[implements] eth_dev_ops**: ``vlan_offload_set``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
* **[related] API**: ``rte_eth_dev_set_vlan_offload()``,
``rte_eth_dev_get_vlan_offload()``.
Supports QinQ (stacked VLAN, 802.1ad) offload.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
Supports L3 checksum offload.
-* **[uses] user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
Supports L4 checksum offload.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
Supports MACsec.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
Supports inner packet L3 checksum.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
* **[uses] mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
* **[uses] mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
* **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
}
}
+/**
+ * Convert the rxmode bitfield configuration to the offloads API flags.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+ uint64_t *rx_offloads)
+{
+ uint64_t offloads = 0;
+
+ if (rxmode->header_split == 1)
+ offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+ if (rxmode->hw_ip_checksum == 1)
+ offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+ if (rxmode->hw_vlan_filter == 1)
+ offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+ if (rxmode->hw_vlan_strip == 1)
+ offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+ if (rxmode->hw_vlan_extend == 1)
+ offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+ if (rxmode->jumbo_frame == 1)
+ offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+ if (rxmode->hw_strip_crc == 1)
+ offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+ if (rxmode->enable_scatter == 1)
+ offloads |= DEV_RX_OFFLOAD_SCATTER;
+ if (rxmode->enable_lro == 1)
+ offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+ *rx_offloads = offloads;
+}
+
+/**
+ * Convert the offloads API flags back to the rxmode bitfield configuration.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+ struct rte_eth_rxmode *rxmode)
+{
+ if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+ rxmode->header_split = 1;
+ else
+ rxmode->header_split = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+ rxmode->hw_ip_checksum = 1;
+ else
+ rxmode->hw_ip_checksum = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+ rxmode->hw_vlan_filter = 1;
+ else
+ rxmode->hw_vlan_filter = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+ rxmode->hw_vlan_strip = 1;
+ else
+ rxmode->hw_vlan_strip = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+ rxmode->hw_vlan_extend = 1;
+ else
+ rxmode->hw_vlan_extend = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+ rxmode->jumbo_frame = 1;
+ else
+ rxmode->jumbo_frame = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+ rxmode->hw_strip_crc = 1;
+ else
+ rxmode->hw_strip_crc = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+ rxmode->enable_scatter = 1;
+ else
+ rxmode->enable_scatter = 0;
+ if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+ rxmode->enable_lro = 1;
+ else
+ rxmode->enable_lro = 0;
+}
+
int
rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
const struct rte_eth_conf *dev_conf)
{
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
+ struct rte_eth_conf local_conf = *dev_conf;
int diag;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
return -EBUSY;
}
+ /*
+ * Convert between the two offloads APIs so that a PMD only needs to
+ * support one of them.
+ */
+ if (dev_conf->rxmode.ignore_offload_bitfield == 0) {
+ rte_eth_convert_rx_offload_bitfield(
+ &dev_conf->rxmode, &local_conf.rxmode.offloads);
+ } else {
+ rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+ &local_conf.rxmode);
+ }
+
/* Copy the dev_conf parameter into the dev structure */
- memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+ memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
/*
* Check that the numbers of RX and TX queues are not greater
* If jumbo frames are enabled, check that the maximum RX packet
* length is supported by the configured device.
*/
- if (dev_conf->rxmode.jumbo_frame == 1) {
+ if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
if (dev_conf->rxmode.max_rx_pkt_len >
dev_info.max_rx_pktlen) {
RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
uint32_t mbp_buf_size;
struct rte_eth_dev *dev;
struct rte_eth_dev_info dev_info;
+ struct rte_eth_rxconf local_conf;
void **rxq;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
if (rx_conf == NULL)
rx_conf = &dev_info.default_rxconf;
+ local_conf = *rx_conf;
+ if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+ /*
+ * Reflect the port offloads to the queue offloads so that
+ * they are not discarded.
+ */
+ rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+ &local_conf.offloads);
+ }
+
ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
- socket_id, rx_conf, mp);
+ socket_id, &local_conf, mp);
if (!ret) {
if (!dev->data->min_rx_buf_size ||
dev->data->min_rx_buf_size > mbp_buf_size)
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
- if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+ if (!(dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_FILTER)) {
RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
return -ENOSYS;
}
/*check which option changed by application*/
cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
- org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+ org = !!(dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_STRIP);
if (cur != org) {
- dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+ if (cur)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_VLAN_STRIP;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_VLAN_STRIP;
mask |= ETH_VLAN_STRIP_MASK;
}
cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
- org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+ org = !!(dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_FILTER);
if (cur != org) {
- dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+ if (cur)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_VLAN_FILTER;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_VLAN_FILTER;
mask |= ETH_VLAN_FILTER_MASK;
}
cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
- org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+ org = !!(dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_EXTEND);
if (cur != org) {
- dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+ if (cur)
+ dev->data->dev_conf.rxmode.offloads |=
+ DEV_RX_OFFLOAD_VLAN_EXTEND;
+ else
+ dev->data->dev_conf.rxmode.offloads &=
+ ~DEV_RX_OFFLOAD_VLAN_EXTEND;
mask |= ETH_VLAN_EXTEND_MASK;
}
return ret;
RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+ /*
+ * Convert to the offload bitfield API just in case the underlying PMD
+ * still relies on it.
+ */
+ rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+ &dev->data->dev_conf.rxmode);
(*dev->dev_ops->vlan_offload_set)(dev, mask);
return ret;
RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
dev = &rte_eth_devices[port_id];
- if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+ if (dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_STRIP)
ret |= ETH_VLAN_STRIP_OFFLOAD;
- if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+ if (dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_FILTER)
ret |= ETH_VLAN_FILTER_OFFLOAD;
- if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+ if (dev->data->dev_conf.rxmode.offloads &
+ DEV_RX_OFFLOAD_VLAN_EXTEND)
ret |= ETH_VLAN_EXTEND_OFFLOAD;
return ret;
enum rte_eth_rx_mq_mode mq_mode;
uint32_t max_rx_pkt_len; /**< Only used if jumbo_frame enabled. */
uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/
+ /**
+ * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Only offloads set in the rx_offload_capa field of the
+ * rte_eth_dev_info structure are allowed to be set.
+ */
+ uint64_t offloads;
__extension__
+ /**
+ * The bitfield API below is obsolete. Applications should
+ * enable per-port offloads using the offloads field
+ * above.
+ */
uint16_t header_split : 1, /**< Header Split enable. */
hw_ip_checksum : 1, /**< IP/UDP/TCP checksum offload enable. */
hw_vlan_filter : 1, /**< VLAN filter enable. */
jumbo_frame : 1, /**< Jumbo Frame Receipt enable. */
hw_strip_crc : 1, /**< Enable CRC stripping by hardware. */
enable_scatter : 1, /**< Enable scatter packets rx handler */
- enable_lro : 1; /**< Enable LRO */
+ enable_lro : 1, /**< Enable LRO */
+ /**
+ * When set, the offload bitfield should be ignored.
+ * Instead, per-port Rx offloads should be set in the offloads
+ * field above.
+ * Per-queue offloads should be set in the rte_eth_rxconf
+ * structure.
+ * This bit is temporary, until the rxmode bitfield offloads API
+ * is deprecated.
+ */
+ ignore_offload_bitfield : 1;
};
/**
uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+ /**
+ * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+ * Only offloads set in the rx_queue_offload_capa or rx_offload_capa
+ * fields of the rte_eth_dev_info structure are allowed to be set.
+ */
+ uint64_t offloads;
};
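The rule documented above for rte_eth_rxconf means a queue offload request is
acceptable when every requested flag is advertised either per queue or per
port. A minimal sketch of such a check (the helper name is hypothetical and
<rte_ethdev.h> is assumed to be included):

	static int
	rxq_offloads_supported(const struct rte_eth_dev_info *dev_info,
			       uint64_t requested)
	{
		uint64_t supported = dev_info->rx_offload_capa |
				     dev_info->rx_queue_offload_capa;

		/* Every requested DEV_RX_OFFLOAD_* flag must be advertised. */
		return (requested & ~supported) == 0;
	}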
#define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
#define DEV_RX_OFFLOAD_QINQ_STRIP 0x00000020
#define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
#define DEV_RX_OFFLOAD_MACSEC_STRIP 0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT 0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER 0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND 0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME 0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP 0x00001000
+#define DEV_RX_OFFLOAD_SCATTER 0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+ DEV_RX_OFFLOAD_UDP_CKSUM | \
+ DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+ DEV_RX_OFFLOAD_VLAN_FILTER | \
+ DEV_RX_OFFLOAD_VLAN_EXTEND)
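On the PMD side, the per-queue and per-port Rx capabilities are reported in
separate rte_eth_dev_info fields. A minimal sketch of a dev_infos_get callback
(the PMD name and the exact flag split are hypothetical; <rte_ethdev.h> is
assumed to be included):

	static void
	example_dev_infos_get(struct rte_eth_dev *dev __rte_unused,
			      struct rte_eth_dev_info *dev_info)
	{
		/* Offloads that may be toggled on an individual queue. */
		dev_info->rx_queue_offload_capa = DEV_RX_OFFLOAD_VLAN_STRIP;
		/* Offloads that apply to the whole port. */
		dev_info->rx_offload_capa = DEV_RX_OFFLOAD_CHECKSUM |
					    DEV_RX_OFFLOAD_JUMBO_FRAME |
					    DEV_RX_OFFLOAD_CRC_STRIP |
					    DEV_RX_OFFLOAD_VLAN_FILTER;
	}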
/**
* TX offload capabilities of a device.
/** Maximum number of hash MAC addresses for MTA and UTA. */
uint16_t max_vfs; /**< Maximum number of VFs. */
uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
- uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+ uint64_t rx_offload_capa;
+ /**< Device per-port RX offload capabilities. */
uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+ uint64_t rx_queue_offload_capa;
+ /**< Device per-queue RX offload capabilities. */
uint16_t reta_size;
/**< Device redirection table size, the total number of entries. */
uint8_t hash_key_size; /**< Hash key size in bytes */
* each statically configurable offload hardware feature provided by
* Ethernet devices, such as IP checksum or VLAN tag stripping for
* example.
+ * The Rx offload bitfield API is obsolete and will be deprecated.
+ * Applications should set the ignore_offload_bitfield bit in the *rxmode*
+ * structure and use the offloads field to set per-port offloads instead.
* - the Receive Side Scaling (RSS) configuration when using multiple RX
* queues per port.
*
* The *rx_conf* structure contains an *rx_thresh* structure with the values
* of the Prefetch, Host, and Write-Back threshold registers of the receive
* ring.
+ * In addition it contains the hardware offload features to activate using
+ * the DEV_RX_OFFLOAD_* flags.
* @param mb_pool
* The pointer to the memory pool from which to allocate *rte_mbuf* network
* memory buffers to populate each descriptor of the receive ring.