In kni_allocate_mbufs(), we allocate mbufs for alloc_q with the
following code:
allocq_free = (kni->alloc_q->read - kni->alloc_q->write - 1)
		& (MAX_MBUF_BURST_NUM - 1);
The value of allocq_free may be zero. For example:
The ring size is 1024. After init, write = read = 0. Then we fill
kni->alloc_q until it is full. At this time, write = 1023, read = 0.
Then the kernel sends 32 packets to userspace. At this time,
write = 1023, read = 32. Userspace then receives these 32 packets
and tries to refill kni->alloc_q, but (32 - 1023 - 1) & 31 = 0, so
nothing is filled.
...
Then the kernel sends 32 packets to userspace. At this time,
write = 1023, read = 992. Userspace then receives these 32 packets
and tries to refill kni->alloc_q, but (992 - 1023 - 1) & 31 = 0, so
again nothing is filled.
Then the kernel sends 32 packets to userspace. kni->alloc_q holds
only 31 mbufs, so one packet is dropped.
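
For illustration, the following hypothetical standalone program (not
part of the patch) replays the arithmetic above and shows allocq_free
evaluating to 0 on every refill attempt:

#include <stdint.h>
#include <stdio.h>

#define MAX_MBUF_BURST_NUM 32

int
main(void)
{
	/* write stays at 1023 while read advances in steps of 32,
	 * exactly as in the scenario described above (ring size 1024).
	 */
	uint32_t write = 1023;
	uint32_t read;

	for (read = 32; read <= 992; read += 32) {
		/* Same expression as the old kni_allocate_mbufs() code. */
		uint32_t allocq_free = (read - write - 1) &
				(MAX_MBUF_BURST_NUM - 1);

		printf("read=%u write=%u -> allocq_free=%u\n",
				read, write, allocq_free);
	}
	return 0;
}
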
Admittedly, this is a corner case. Normally some mbufs are refilled
each time, but the amount may not be enough for the kernel to use.
In this patch, we always try to keep kni->alloc_q full so the kernel
has mbufs to use.
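
As a rough sketch of the idea (assuming the existing
kni_fifo_free_count() helper from rte_kni_fifo.h; see the actual diff
for the exact change), the refill count can be derived from the real
number of free slots in the ring and then capped at the burst size:

	/* Sketch only; the committed change in rte_kni.c may differ in
	 * details. Counting free slots over the whole ring means a
	 * nearly full alloc_q can no longer make the result collapse
	 * to zero.
	 */
	allocq_free = kni_fifo_free_count(kni->alloc_q);
	allocq_free = (allocq_free > MAX_MBUF_BURST_NUM) ?
			MAX_MBUF_BURST_NUM : allocq_free;

In the example above this yields allocq_free = 32 instead of 0, so the
queue is refilled and the kernel does not run short of mbufs.
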
Fixes: 49da4e82cf94 ("kni: allocate no more mbuf than empty slots in queue")
Cc: stable@dpdk.org
Signed-off-by: Cheng Liu <liucheng11@huawei.com>
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>