net/af_xdp: avoid deadlock due to empty fill queue
author    RongQing Li <lirongqing@baidu.com>
Fri, 18 Sep 2020 11:32:31 +0000 (19:32 +0800)
committer Ferruh Yigit <ferruh.yigit@intel.com>
Wed, 30 Sep 2020 17:19:09 +0000 (19:19 +0200)
While receiving packets, reserving the fill queue can fail because
the buffer ring is shared between Tx and Rx and may be temporarily
unavailable. As a result, both the fill queue and the Rx queue can
end up empty.

The kernel side then cannot receive packets because the fill queue
is empty, and DPDK cannot replenish the fill queue because it has no
packets to receive, so the two sides deadlock.

Fix this by moving the fill queue reservation before
xsk_ring_cons__peek().
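
For reference, below is a minimal sketch (not the driver code itself) of
the Rx ordering after the fix, assuming libbpf's <bpf/xsk.h> helpers and a
hypothetical replenish_fill_queue() stub in place of the driver's
reserve_fill_queue() and mbuf handling:

    #include <stdint.h>
    #include <bpf/xsk.h>   /* AF_XDP ring helpers (assumed available) */

    /* Hypothetical stand-in for the driver's reserve_fill_queue():
     * pushes free buffers from the shared buffer ring into the fill
     * queue.  Shown as a stub only so the sketch is self-contained. */
    static void replenish_fill_queue(struct xsk_ring_prod *fq)
    {
    	(void)fq;
    }

    /* Sketch of the Rx ordering after the fix: top up the fill queue
     * before peeking the Rx ring, so the kernel always has buffers to
     * receive into even when DPDK has nothing to harvest yet. */
    static uint16_t
    rx_burst_sketch(struct xsk_ring_prod *fq, struct xsk_ring_cons *rx,
    		uint32_t free_thresh, uint32_t nb_pkts)
    {
    	uint32_t idx_rx = 0;
    	uint32_t rcvd;

    	/* Previously this replenish ran only after packets had been
    	 * peeked; once both rings were empty it could never run. */
    	if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
    		replenish_fill_queue(fq);

    	rcvd = xsk_ring_cons__peek(rx, nb_pkts, &idx_rx);
    	if (rcvd == 0)
    		return 0;

    	/* ... copy descriptors into mbufs here ... */

    	xsk_ring_cons__release(rx, rcvd);
    	return (uint16_t)rcvd;
    }

The key point is that the fill queue is now replenished regardless of
whether xsk_ring_cons__peek() returns packets, which breaks the circular
wait described above.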

Cc: stable@dpdk.org
Signed-off-by: RongQing Li <lirongqing@baidu.com>
Signed-off-by: Dongsheng Rong <rongdongsheng@baidu.com>
Acked-by: Ciara Loftus <ciara.loftus@intel.com>
drivers/net/af_xdp/rte_eth_af_xdp.c

index b65ee44..00de671 100644
@@ -304,6 +304,10 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
        uint32_t free_thresh = fq->size >> 1;
        struct rte_mbuf *mbufs[ETH_AF_XDP_RX_BATCH_SIZE];
 
+       if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
+               (void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
+
+
        if (unlikely(rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, nb_pkts) != 0))
                return 0;
 
@@ -317,9 +321,6 @@ af_xdp_rx_cp(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
                goto out;
        }
 
-       if (xsk_prod_nb_free(fq, free_thresh) >= free_thresh)
-               (void)reserve_fill_queue(umem, ETH_AF_XDP_RX_BATCH_SIZE, NULL);
-
        for (i = 0; i < rcvd; i++) {
                const struct xdp_desc *desc;
                uint64_t addr;