Since the PMD enqueues a single event at a time, fix the issue by
passing 1 rather than nb_events, avoiding the out-of-bounds access
reported by Coverity.
Coverity issue: 358447
Fixes: 56a96aa42464 ("event/octeontx: add framework for Rx/Tx offloads")
Cc: stable@dpdk.org
Signed-off-by: Harman Kalra <hkalra@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
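
For context, a minimal sketch of the pattern fixed in the hunks below
(hypothetical names such as xmit_burst(); this is not the PMD's actual
code): the adapter enqueue receives nb_events but holds only a single
mbuf pointer on its stack, so forwarding nb_events to a burst-style
transmit routine lets it index past that one-element array whenever
nb_events > 1.

#include <stdint.h>

struct mbuf { int pkt; };

/* Stand-in for a burst transmit helper: reads nb_pkts entries of pkts[]. */
static uint16_t
xmit_burst(struct mbuf **pkts, uint16_t nb_pkts)
{
	uint16_t i;

	for (i = 0; i < nb_pkts; i++)
		(void)pkts[i]->pkt;	/* out of bounds if pkts[] is smaller */
	return nb_pkts;
}

/* Single-event enqueue: only one mbuf pointer lives on the stack. */
static uint16_t
tx_adapter_enqueue_sketch(struct mbuf *m, uint16_t nb_events)
{
	(void)nb_events;	/* mirrors RTE_SET_USED(nb_events) */

	/* Buggy form: xmit_burst(&m, nb_events) reads past &m when nb_events > 1. */
	return xmit_burst(&m, 1);	/* fixed form: exactly one event handed over */
}

int main(void)
{
	struct mbuf m = { .pkt = 0 };

	return tx_adapter_enqueue_sketch(&m, 4) == 1 ? 0 : 1;
}

Marking nb_events as used (RTE_SET_USED) keeps the enqueue prototype
required by the eventdev Tx adapter while silencing the now-unused
parameter warning.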
 	struct ssows *ws = port;
 	struct octeontx_txq *txq;
+	RTE_SET_USED(nb_events);
 	switch (ev->sched_type) {
 	case SSO_SYNC_ORDERED:
 		ssows_swtag_norm(ws, ev->event, SSO_SYNC_ATOMIC);
 	ethdev = &rte_eth_devices[port_id];
 	txq = ethdev->data->tx_queues[queue_id];
-	return __octeontx_xmit_pkts(txq, &m, nb_events, cmd, flag);
+	return __octeontx_xmit_pkts(txq, &m, 1, cmd, flag);
 }
 #define T(name, f3, f2, f1, f0, sz, flags) \