From: Matan Azrad
Date: Mon, 24 Feb 2020 16:55:06 +0000 (+0000)
Subject: vdpa/mlx5: fix guest notification timing
X-Git-Url: http://git.droids-corp.org/?a=commitdiff_plain;h=d76a17f7b82085ca839c2ee757c87b7cfd0b8584;p=dpdk.git

vdpa/mlx5: fix guest notification timing

When the HW finishes consuming the guest Rx descriptors, it creates a
CQE in the CQ. The mlx5 driver arms the CQ to get a notification when a
specific CQE index is created - the index to be armed is the next CQE
index which should be polled by the driver.

The mlx5 driver configured the kernel driver to send a notification to
the guest callfd at the same time the CQE arrives to the mlx5 driver.
This means the guest was notified only for the first CQE in each poll
cycle, so if the driver polled the CQEs of all the virtio queue
available descriptors, the guest was not notified again for the rest
because no new poll cycle was triggered. Hence, the Rx queues might get
stuck when the guest did not work in poll mode.

Move the guest notification to after the driver consumes all the
SW-owned CQEs. This way, the guest is notified only after all the SW
CQEs are polled.

Also initialize the CQ to be HW-owned at the start.
Fixes: 8395927cdfaf ("vdpa/mlx5: prepare HW queues")

Signed-off-by: Matan Azrad
---

diff --git a/drivers/vdpa/mlx5/mlx5_vdpa.h b/drivers/vdpa/mlx5/mlx5_vdpa.h
index faeb54af5b..3324c9de3b 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa.h
+++ b/drivers/vdpa/mlx5/mlx5_vdpa.h
@@ -39,6 +39,7 @@ struct mlx5_vdpa_cq {
 	uint16_t log_desc_n;
 	uint32_t cq_ci:24;
 	uint32_t arm_sn:2;
+	int callfd;
 	rte_spinlock_t sl;
 	struct mlx5_devx_obj *cq;
 	struct mlx5dv_devx_umem *umem_obj;
diff --git a/drivers/vdpa/mlx5/mlx5_vdpa_event.c b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
index 677b56acdf..dd60150fee 100644
--- a/drivers/vdpa/mlx5/mlx5_vdpa_event.c
+++ b/drivers/vdpa/mlx5/mlx5_vdpa_event.c
@@ -4,6 +4,7 @@
 #include
 #include
 #include
+#include <sys/eventfd.h>

 #include
 #include
@@ -156,17 +157,9 @@ mlx5_vdpa_cq_create(struct mlx5_vdpa_priv *priv, uint16_t log_desc_n,
 		rte_errno = errno;
 		goto error;
 	}
-	/* Subscribe CQ event to the guest FD only if it is not in poll mode. */
-	if (callfd != -1) {
-		ret = mlx5_glue->devx_subscribe_devx_event_fd(priv->eventc,
-							      callfd,
-							      cq->cq->obj, 0);
-		if (ret) {
-			DRV_LOG(ERR, "Failed to subscribe CQE event fd.");
-			rte_errno = errno;
-			goto error;
-		}
-	}
+	cq->callfd = callfd;
+	/* Init CQ to ones to be in HW owner in the start. */
+	memset((void *)(uintptr_t)cq->umem_buf, 0xFF, attr.db_umem_offset);
 	/* First arming. */
 	mlx5_vdpa_cq_arm(priv, cq);
 	return 0;
@@ -231,6 +224,9 @@ mlx5_vdpa_interrupt_handler(void *cb_arg)
 	rte_spinlock_lock(&cq->sl);
 	mlx5_vdpa_cq_poll(priv, cq);
 	mlx5_vdpa_cq_arm(priv, cq);
+	if (cq->callfd != -1)
+		/* Notify guest for descriptors consuming. */
+		eventfd_write(cq->callfd, (eventfd_t)1);
 	rte_spinlock_unlock(&cq->sl);
 	DRV_LOG(DEBUG, "CQ %d event: new cq_ci = %u.",
 		cq->cq->id, cq->cq_ci);