Harman Kalra [Wed, 3 Nov 2021 07:59:14 +0000 (13:29 +0530)]
common/cnxk: fix device MSI-X greater than default value
Handle the case where the number of MSI-X interrupts is greater
than the default value, i.e. PLT_MAX_RXTX_INTR_VEC_ID. On PCI probe
the device is queried for the supported MSI-X interrupt count, and the
respective interrupt resources are reallocated with this value. The same
MSI-X count should be used while registering new interrupt vectors.
Fixes: fa8f86a14e2e ("common/cnxk: add build infrastructre and HW definition")
Fixes: f6d567b03d28 ("common/cnxk: support NIX IRQ")
Fixes: 5e076b609f2a ("common/cnxk: add SE set key for crypto")
Cc: stable@dpdk.org
Signed-off-by: Tomasz Duszynski <tduszynski@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
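For illustration, a minimal C sketch (not the cnxk driver code; the helper name and capability walk are hypothetical) of where such a count comes from: the MSI-X table size read from the standard PCI capability, which would replace the PLT_MAX_RXTX_INTR_VEC_ID default when sizing interrupt vectors.

#include <stdint.h>
#include <rte_bus_pci.h>

#define PCI_CAPABILITY_LIST  0x34
#define PCI_CAP_ID_MSIX      0x11
#define PCI_MSIX_FLAGS       0x02
#define PCI_MSIX_FLAGS_QSIZE 0x07ff

static int
pci_msix_count(const struct rte_pci_device *pdev)
{
	uint8_t pos;
	uint16_t flags;

	/* Walk the PCI capability list looking for the MSI-X capability. */
	if (rte_pci_read_config(pdev, &pos, sizeof(pos), PCI_CAPABILITY_LIST) < 0)
		return -1;
	while (pos != 0) {
		uint8_t cap[2]; /* cap[0] = capability ID, cap[1] = next pointer */

		if (rte_pci_read_config(pdev, cap, sizeof(cap), pos) < 0)
			return -1;
		if (cap[0] == PCI_CAP_ID_MSIX) {
			if (rte_pci_read_config(pdev, &flags, sizeof(flags),
						pos + PCI_MSIX_FLAGS) < 0)
				return -1;
			/* Table size field is N-1 encoded. */
			return (flags & PCI_MSIX_FLAGS_QSIZE) + 1;
		}
		pos = cap[1];
	}
	return -1; /* no MSI-X capability found */
}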
common/cnxk: enable TM to listen on Rx pause frames
Enable the TM topology to listen on backpressure received when
Rx pause frames are enabled. Only one TM node in TL3/TL2 per
channel can listen on backpressure for that channel.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Chenbo Xia [Wed, 3 Nov 2021 05:00:26 +0000 (13:00 +0800)]
doc: remove deprecation notice for vhost
Ten vhost APIs were announced stable and promoted in the commit
below, so remove the related deprecation notice.
Fixes: 945ef8a04098 ("vhost: promote some APIs to stable")
Reported-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Junfeng Guo [Wed, 3 Nov 2021 04:40:03 +0000 (12:40 +0800)]
net/ice: enable protocol agnostic flow offloading in FDIR
This patch enables protocol agnostic flow offloading in Flow Director,
based on the Parser Library and using the existing rte_flow raw API.
Note that the raw flow requires:
1. byte string of raw target packet bits.
2. byte string of mask of target packet.
Here is an example:
FDIR matching ipv4 dst addr with 1.2.3.4 and redirect to queue 3:
flow create 0 ingress pattern raw \
pattern spec \
00000000000000000000000008004500001400004000401000000000000001020304 \
pattern mask \
000000000000000000000000000000000000000000000000000000000000ffffffff \
/ end actions queue index 3 / mark id 3 / end
Note that the mask of some key bits (e.g., 0x0800 to indicate the IPv4
proto) is optional in our cases. To avoid redundancy, we just omit the
mask of 0x0800 (with 0xFFFF) in the mask byte string example. The '0x'
prefix for the spec and mask byte (hex) strings is also omitted here.
Also update the ice feature list with rte_flow item raw.
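For reference, a hedged C sketch of the same rule built through the generic rte_flow raw item (the helper name is made up; the spec and mask bytes are copied from the CLI example above):

#include <rte_flow.h>

static const uint8_t raw_spec[] = {
	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
	0x08, 0x00,                                     /* EtherType: IPv4 */
	0x45, 0x00, 0x00, 0x14, 0x00, 0x00, 0x40, 0x00,
	0x40, 0x10, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
	0x01, 0x02, 0x03, 0x04,                         /* dst addr 1.2.3.4 */
};
static const uint8_t raw_mask[] = {
	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
	0x00, 0x00,
	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
	0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
	0xff, 0xff, 0xff, 0xff,                         /* match dst addr only */
};

static struct rte_flow *
create_raw_fdir_rule(uint16_t port_id, struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item_raw spec = {
		.length = sizeof(raw_spec), .pattern = raw_spec,
	};
	struct rte_flow_item_raw mask = {
		.length = sizeof(raw_mask), .pattern = raw_mask,
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_RAW, .spec = &spec, .mask = &mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 3 };
	struct rte_flow_action_mark mark = { .id = 3 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_MARK, .conf = &mark },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}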
The firmware returns the version in bytes; shifting an 8-bit
quantity here can lead to undefined behaviour or truncation.
The fix is to promote the bytes to 32 bits before shifting.
Bugzilla ID: 838
Fixes: 9a891c1764ea ("net/bnxt: update HWRM to version 1.9.2")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
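A minimal sketch of the pattern involved (field and helper names are hypothetical, not the bnxt code): each uint8_t would otherwise be promoted to a signed int, and left-shifting a value into the sign bit is undefined behaviour.

#include <stdint.h>

/* Promote each 8-bit version component to uint32_t before shifting so
 * the left shift is well-defined and no bits are truncated. */
static uint32_t
fw_version_pack(uint8_t major, uint8_t minor, uint8_t build, uint8_t rsvd)
{
	return ((uint32_t)major << 24) | ((uint32_t)minor << 16) |
	       ((uint32_t)build << 8) | (uint32_t)rsvd;
}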
Ivan Malov [Mon, 25 Oct 2021 11:04:13 +0000 (14:04 +0300)]
net/sfc: support represented port flow item
Add support for item REPRESENTED_PORT to match on traffic entering
the embedded switch from the entity represented by the given
ethdev (network port or VF).
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
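A hedged usage sketch of the item from an application's point of view (the helper name is made up and the forwarding action is only for illustration): match transfer traffic entering the embedded switch from the entity represented by ethdev rep_port_id.

#include <rte_flow.h>

static struct rte_flow *
match_represented_port(uint16_t port_id, uint16_t rep_port_id,
		       struct rte_flow_error *err)
{
	struct rte_flow_attr attr = { .transfer = 1 };
	struct rte_flow_item_ethdev rep = { .port_id = rep_port_id };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_REPRESENTED_PORT, .spec = &rep },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_port_id dst = { .id = port_id };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_PORT_ID, .conf = &dst },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}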
Ivan Malov [Mon, 25 Oct 2021 11:04:12 +0000 (14:04 +0300)]
net/sfc: assign correct m-ports to independent switch ports
In accordance with patches [1-4], the MAE admin ethdev represents a
network port and not the PF which it sits on. Rework how the
"ethdev" and "entity" m-ports are assigned in SW switch
port entries of independent ethdevs. Explain this in comments.
[1] commit 081e42dab11d ("ethdev: add port representor item to flow API")
[2] commit 49863ae2bf95 ("ethdev: add represented port item to flow API")
[3] commit 8edb6bc0263e ("ethdev: add port representor action to flow API")
[4] commit 88caad251c8d ("ethdev: add represented port action to flow API")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Ivan Malov [Mon, 25 Oct 2021 11:04:10 +0000 (14:04 +0300)]
net/sfc: rename ethdev m-port retrieval helper
The function in question has an unfortunate name that reads
like finding a SW switch port entry. In fact just one of
the two m-ports is retrieved from that entry.
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Ivan Malov [Mon, 25 Oct 2021 11:04:09 +0000 (14:04 +0300)]
net/sfc: do not allow flow rules to refer to VF representors
VF representors do not own dedicated m-ports and thus cannot
be referred to as traffic endpoints in flow items or actions.
Fixes: a62ec90522a6 ("net/sfc: add port representors infrastructure")
Fixes: f55b61cec94a ("net/sfc: support port representor flow item")
Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Dmitry Kozlyuk [Tue, 2 Nov 2021 17:01:35 +0000 (19:01 +0200)]
net/mlx5: preserve indirect actions on restart
The MLX5 PMD uses reference counting to manage Rx queue resources.
After port stop, shared RSS actions kept references to Rx queues,
preventing resource release. As a result, the internal PMD mempool
for such queues was exhausted after a number of port restarts.
Diagnostic message from rte_eth_dev_start():
Dereference Rx queues used by indirect actions on port stop (detach)
and restore the references on port start (attach) in order to allow Rx
queue resource release, while keeping indirect RSS across the port restart.
Replace the queue IDs in HW by the drop queue ID on detach and restore the
actual queue IDs on attach.
When the port is stopped, create indirect RSS in the detached state.
As a result, the MLX5 PMD is able to keep all its indirect actions
across a port restart. Advertise this capability.
Dmitry Kozlyuk [Tue, 2 Nov 2021 17:01:34 +0000 (19:01 +0200)]
net/mlx5: create drop queue using DevX
Drop queue creation and destruction were not implemented for DevX
flow engine and Verbs engine methods were used as a workaround.
Implement these methods for DevX so that there is a valid queue ID
that can be used regardless of queue configuration via API.
Dmitry Kozlyuk [Tue, 2 Nov 2021 17:01:33 +0000 (19:01 +0200)]
net/mlx5: discover max flow priority using DevX
Maximum available flow priority was discovered using Verbs API
regardless of the selected flow engine. This required some Verbs
objects to be initialized in order to use DevX engine. Make priority
discovery an engine method and implement it for DevX using its API.
Dmitry Kozlyuk [Tue, 2 Nov 2021 17:01:32 +0000 (19:01 +0200)]
drivers/net: advertise no support for keeping flow rules
When RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP capability bit is zero,
the specified behavior is the same as it had been before
this bit was introduced. Explicitly reset it in all PMDs
supporting rte_flow API in order to attract the attention
of maintainers, who should eventually choose to advertise
the new capability or not. It is already known that
mlx4 and mlx5 will not support this capability.
For RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP a similar action
is not performed, because no PMD except mlx5 supports indirect actions.
Any PMD that starts doing so will anyway have to consider
all relevant API, including this capability.
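A minimal sketch of what this looks like inside a PMD's dev_infos_get callback (the driver and function names here are hypothetical; only the capability bit comes from the ethdev API):

#include <rte_common.h>
#include <ethdev_driver.h>

static int
example_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
{
	RTE_SET_USED(dev);
	/* ... fill the other dev_info fields ... */
	/* State explicitly that flow rules are not kept across restart. */
	info->dev_capa &= ~RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP;
	return 0;
}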
Dmitry Kozlyuk [Tue, 2 Nov 2021 17:01:31 +0000 (19:01 +0200)]
ethdev: add capability to keep shared objects on restart
rte_flow_action_handle_create() did not mention what happens
with an indirect action when a device is stopped and started again.
It is natural for some indirect actions, like counter, to be persistent.
Keeping others at least saves application time and complexity.
However, not all PMDs can support it, or the support may be limited
by particular action kinds, that is, combinations of action type
and the value of the transfer bit in its configuration.
Add a device capability to indicate if at least some indirect actions
are kept across the above sequence. Without this capability the behavior
is still unspecified, and the application is required to destroy
the indirect actions before stopping the device.
In the future, indirect actions may not be the only type of objects
shared between flow rules. The capability bit intends to cover all
possible types of such objects, hence its name.
Declare that the application can test for the persistence
of a particular indirect action kind by attempting to create
an indirect action of that kind when the device is stopped
and checking for the specific error type.
This is logical because if the PMD can create an indirect action
when the device is not started and use it after the start happens,
it is natural that it can move its internal flow shared object
to the same state when the device is stopped and restore the state
when the device is started.
Indirect action persistence across reconfiguration is not required.
In case a PMD cannot keep the indirect actions across reconfiguration,
it is allowed to simply report an error.
The application must then flush the indirect actions before attempting
the reconfiguration.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
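A hedged sketch of the probing scheme described above, from the application side (the helper name is made up; the exact error value to test for is PMD-specific):

#include <stdbool.h>
#include <rte_flow.h>

/* Call with the port stopped: if creation succeeds, this indirect
 * action kind can be kept across stop/start on this port. */
static bool
indirect_action_persists(uint16_t port_id,
			 const struct rte_flow_indir_action_conf *conf,
			 const struct rte_flow_action *action)
{
	struct rte_flow_error err;
	struct rte_flow_action_handle *handle;

	handle = rte_flow_action_handle_create(port_id, conf, action, &err);
	if (handle == NULL)
		return false; /* inspect err.type / rte_errno for the reason */
	rte_flow_action_handle_destroy(port_id, handle, &err);
	return true;
}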
Dmitry Kozlyuk [Tue, 2 Nov 2021 17:01:30 +0000 (19:01 +0200)]
ethdev: add capability to keep flow rules on restart
Previously, it was not specified what happens to the flow rules
when the device is stopped, possibly reconfigured, then started.
If flow rules were kept, it could be convenient for application
developers, because they wouldn't need to save and restore them.
However, due to the number of flows and the possible creation rate it is
impractical to save all flow rules in the DPDK layer. This means that flow
rule persistence really depends on whether the PMD and HW can implement it
efficiently. It can also be limited by the rule item and action types,
and by the transfer bit of the rule attributes (a combination of an
item/action type and a value of the transfer bit is called a rule feature).
Add a device capability bit for PMDs that can keep at least some
of the flow rules across restart. Without this capability the behavior
is still unspecified, and it is declared that the application must
flush the rules before stopping the device.
Allow the application to test for persistence of rules using
a particular feature by attempting to create a flow rule
using that feature when the device is stopped
and checking for the specific error.
This is logical because if the PMD can create the flow rule
when the device is not started and use it after the start happens,
it is natural that it can move its internal flow rule object
to the same state when the device is stopped and restore the state
when the device is started.
Rule persistence across reconfiguration is not required,
because tracking all the rules and the configuration-dependent resources
they use may be infeasible. In case a PMD cannot keep the rules
across reconfiguration, it is allowed to simply report an error.
The application must then flush the rules before attempting the
reconfiguration.
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
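A hedged sketch of how an application might honor the capability before stopping a port (the helper name is made up; without the bit, the default described above is to flush the rules first):

#include <rte_ethdev.h>
#include <rte_flow.h>

static int
stop_port_keeping_rules_if_possible(uint16_t port_id)
{
	struct rte_eth_dev_info info;
	struct rte_flow_error err;
	int ret;

	ret = rte_eth_dev_info_get(port_id, &info);
	if (ret != 0)
		return ret;
	if (!(info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP)) {
		/* Rules are not kept by this port: flush them first. */
		ret = rte_flow_flush(port_id, &err);
		if (ret != 0)
			return ret;
	}
	return rte_eth_dev_stop(port_id);
}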
Ciara Loftus [Fri, 22 Oct 2021 10:42:53 +0000 (10:42 +0000)]
net/af_xdp: use BPF link for XDP programs
Since v0.4.0, if the underlying kernel supports it, libbpf uses 'bpf
link' to manage the programs on the interfaces of the xsks. This has two
repercussions for the PMD.
1. In the case where the PMD asks libbpf to load the default XDP
program, the PMD no longer needs to remove it on teardown. This is
because bpf link handles the unloading under the hood.
2. In the case where the PMD loads a custom program, libbpf expects this
program to be linked via bpf link prior to creating the socket.
This patch introduces probes for the libbpf version and kernel support
for bpf link and orchestrates the loading and unloading of
programs according to the capabilities of the kernel and libbpf. The
libbpf version is checked with meson and pkg-config. The probe for
kernel support mirrors how it is implemented in libbpf. A bpf_link is
created and looked up on the loopback device. If successful, bpf_link will
be used for the AF_XDP netdev.
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
Lior Margalit [Mon, 1 Nov 2021 06:38:41 +0000 (08:38 +0200)]
net/mlx5: fix RSS expansion with EtherType
The RSS expansion algorithm is using a graph to find the possible
expansion paths. A graph node with the 'explicit' flag will be skipped,
if it is not found in the flow pattern.
The current implementation misses a check for the explicit flag when
expanding the pattern according to ETH item with EtherType.
For example:
testpmd> flow create 0 ingress pattern eth / ipv6 / udp / vxlan / eth
type is 2048 / end actions rss level 2 types udp end / end
The "eth type is 2048" item in the pattern may be expanded to "ETH IPv4".
The ETH node in the expansion graph is followed by VLAN node marked as
explicit. The fix is to skip the VLAN node and continue the expansion
with its next nodes, IPv4 and IPv6.
The expansion paths for the above example will be:
ETH IPV6 UDP VXLAN ETH END
ETH IPV6 UDP VXLAN ETH IPV4 UDP END
Jiawei Wang [Mon, 1 Nov 2021 06:30:40 +0000 (08:30 +0200)]
net/mlx5: fix meter action pool protection
The ASO meter action with flow creation could be supported on
multiple threads. The meter pools were created to manage the meter
object resources; if there is no room in the current meter pool, the
pool is resized to the new size and the old one is freed.
There is a race condition when one thread resizes the meter pool and
frees the old pool resource while another thread queries a meter
object by index in the old pool; the returned value is invalid.
This patch adds a read-write lock to protect the pool resource during
resize and query.
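A generic sketch of the locking pattern (names are hypothetical, not the mlx5 code): the resize path takes the write lock while swapping the pool table, and query paths take the read lock while dereferencing it.

#include <stdlib.h>
#include <rte_rwlock.h>

struct obj_pools {
	rte_rwlock_t lock;
	void **table;      /* array of pool pointers */
	uint32_t n_pools;
};

static void *
pool_query(struct obj_pools *p, uint32_t idx)
{
	void *obj = NULL;

	rte_rwlock_read_lock(&p->lock);
	if (idx < p->n_pools)
		obj = p->table[idx];
	rte_rwlock_read_unlock(&p->lock);
	return obj;
}

static void
pool_resize(struct obj_pools *p, void **new_table, uint32_t new_n)
{
	void **old;

	rte_rwlock_write_lock(&p->lock);
	old = p->table;
	p->table = new_table;
	p->n_pools = new_n;
	rte_rwlock_write_unlock(&p->lock);
	free(old); /* safe: no reader can still hold the old table */
}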
Jiawei Wang [Mon, 1 Nov 2021 06:30:39 +0000 (08:30 +0200)]
net/mlx5: fix age action pool protection
The age action with flow creation could be supported on multiple
threads. The age pools were created to manage the age resources; if
there is no room in the current pool, the age pool is resized to the new
size and the old one is freed.
There is a race condition when one thread resizes the age pool and
frees the old pool resource while another thread queries the age action
value from the old pool, so the queried value is invalid.
This patch uses a read-write lock to protect the pool resource during
resize and query.
Huisong Li [Fri, 22 Oct 2021 09:20:04 +0000 (17:20 +0800)]
net/hns3: refactor multicast MAC address set for PF
Currently, when configuring a group of multicast MAC addresses, the PF
driver reorders the mc_addr array in the hw struct to remove multicast MAC
addresses that are not in the mc_addr_set array from the user and then adds
the new multicast MAC addresses. Actually, this can be simplified by
removing all previous MAC addresses and then adding the new ones.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Huisong Li [Fri, 22 Oct 2021 09:20:02 +0000 (17:20 +0800)]
net/hns3: unify MAC address add and remove
The code logic of adding and removing MAC addresses in PF and VF is the
same.
This patch extracts two common interfaces to add and remove them
separately.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Huisong Li [Fri, 22 Oct 2021 09:20:01 +0000 (17:20 +0800)]
net/hns3: unify MAC and multicast address configuration
Currently, the interface logic for adding and deleting all MAC addresses
and multicast addresses in the PF and VF drivers is the same. This patch
extracts two common interfaces to configure them separately.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Huisong Li [Fri, 22 Oct 2021 09:19:55 +0000 (17:19 +0800)]
net/hns3: extract common interface to check duplicates
Extract a common interface for PF and VF to check whether the configured
multicast MAC address from rte_eth_dev_mac_addr_add() is the same as the
multicast MAC address from rte_eth_dev_set_mc_addr_list().
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Kalesh AP [Sat, 30 Oct 2021 03:50:20 +0000 (09:20 +0530)]
net/bnxt: fix stat context allocation
stat_ctx_alloc is called within the context of each Rx/Tx ring,
i.e., from bnxt_alloc_hwrm_rx_ring() and bnxt_alloc_hwrm_tx_ring().
So there is no need to invoke bnxt_alloc_all_hwrm_stat_ctxs()
from bnxt_start_nic().
Fixes: 657c2a7f1dd4 ("net/bnxt: create aggregation rings when needed")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Kalesh AP [Sat, 30 Oct 2021 03:50:19 +0000 (09:20 +0530)]
net/bnxt: fix freeing aggregation rings
During port stop, we clear "eth_dev->data->scattered_rx" at the
beginning. As a result, in bnxt_free_hwrm_rx_ring() the check
bnxt_need_agg_ring() returns false and we end up not freeing
the Rx aggregation rings, which results in a resource leak in the FW.
Fixes: 657c2a7f1dd4 ("net/bnxt: create aggregation rings when needed")
Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Simei Su [Fri, 29 Oct 2021 12:56:03 +0000 (20:56 +0800)]
net/ice: fix performance for Rx timestamp
In the Rx data path, hardware registers are read per packet, resulting in
a big performance drop. This patch improves performance in two ways:
(1) replace the per-packet hardware register read with a per-burst read.
(2) reduce the number of hardware register reads from 3 to 2 when the low
value of the time is not close to overflow.
Meanwhile, this patch refines the "ice_timesync_read_rx_timestamp" and
"ice_timesync_read_tx_timestamp" APIs, in which
"ice_tstamp_convert_32b_64b" is also used.
Fixes: 953e74e6b73a ("net/ice: enable Rx timestamp on flex descriptor")
Fixes: 646dcbe6c701 ("net/ice: support IEEE 1588 PTP")
Suggested-by: Harry van Haaren <harry.van.haaren@intel.com>
Signed-off-by: Simei Su <simei.su@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
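A generic sketch of the per-burst technique (not the ice driver code; the helper name is made up): read the full 64-bit device time once per burst, then extend each packet's 32-bit timestamp against it, correcting for a wrap of the low word.

#include <stdint.h>

static uint64_t
tstamp_32b_to_64b(uint64_t phc_time, uint32_t ts_low)
{
	uint32_t phc_low = (uint32_t)phc_time;
	uint64_t high = phc_time & ~(uint64_t)UINT32_MAX;

	/* Detect a wrap of the 32-bit counter between the packet stamp
	 * and the per-burst register read, and adjust the epoch. */
	if (ts_low > phc_low && (ts_low - phc_low) > (UINT32_MAX / 2))
		high -= (uint64_t)1 << 32;
	else if (phc_low > ts_low && (phc_low - ts_low) > (UINT32_MAX / 2))
		high += (uint64_t)1 << 32;
	return high | ts_low;
}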
Build error:
In function ‘i40e_flow_parse_fdir_pattern’,
inlined from ‘i40e_flow_parse_fdir_filter’
at ../drivers/net/i40e/i40e_flow.c:3274:8:
../drivers/net/i40e/i40e_flow.c:3052:69:
error: writing 1 byte into a region of size 0
[-Werror=stringop-overflow=]
3052 | filter->input.flow_ext.flexbytes[j] =
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
3053 | raw_spec->pattern[i];
| ~~~~~~~~~~~~~~~~~~~~
In file included from ../drivers/net/i40e/i40e_flow.c:25:
../drivers/net/i40e/i40e_flow.c:
In function ‘i40e_flow_parse_fdir_filter’:
../drivers/net/i40e/i40e_ethdev.h:638:17:
note: at offset 16 into destination object ‘flexbytes’ of size 16
638 | uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN];
| ^~~~~~~~~
Fix by adding range checks.
Fixes: 6ced3dd72f5f ("net/i40e: support flexible payload parsing for FDIR")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
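A hedged sketch of the kind of bounded copy such a range check implies (names follow the build error above, not the exact i40e code; RTE_ETH_FDIR_MAX_FLEXLEN is 16, per the note in the error):

#include <stdint.h>
#include <stddef.h>
#include <rte_ethdev.h>

/* Stop writing into flexbytes[] once the destination is full instead of
 * trusting the raw pattern length; return how many bytes were copied. */
static size_t
copy_flexbytes(uint8_t flexbytes[RTE_ETH_FDIR_MAX_FLEXLEN],
	       const uint8_t *pattern, size_t pat_len)
{
	size_t i, j = 0;

	for (i = 0; i < pat_len; i++) {
		if (j >= RTE_ETH_FDIR_MAX_FLEXLEN)
			break;
		flexbytes[j++] = pattern[i];
	}
	return j;
}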
David Marchand [Thu, 14 Oct 2021 11:37:18 +0000 (13:37 +0200)]
net/mlx5: do not close stdin on error
If, for any reason, a socket could not be opened, mlx5_pmd_socket_init()
could close fd 0 (which is valid, and has a fair chance to be stdin),
since server_socket == 0 from the variable being in .bss.
Fixes: e6cdc54cc0ef ("net/mlx5: add socket server for external tools")
Cc: stable@dpdk.org
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
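An illustrative pattern for avoiding this class of bug (not the exact mlx5 fix; names are hypothetical): treat 0 as a valid fd, keep the sentinel at -1, and only publish the fd once initialization fully succeeds.

#include <errno.h>
#include <sys/socket.h>
#include <unistd.h>

static int server_socket = -1; /* -1 = "not open"; 0 is a valid fd */

static int
pmd_socket_init(void)
{
	int fd = socket(AF_UNIX, SOCK_STREAM, 0);

	if (fd < 0)
		return -errno;
	/* ... bind()/listen(); on failure: close(fd); return -errno; ... */
	server_socket = fd;
	return 0;
}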
Li Zhang [Wed, 27 Oct 2021 09:13:58 +0000 (12:13 +0300)]
doc: add metering limitation in mlx5 guide
A meter policy with an RSS/queue action is not supported when
dv_xmeta_en is enabled.
When dv_xmeta_en is enabled, legacy flow creation splits a flow into two
flows (one set_tag with jump flow and one RSS/queue action flow).
Since a meter policy is a termination table, its flow cannot be split,
so it cannot be supported when dv_xmeta_en is enabled.
Fixes: 51ec04dc7bcf ("net/mlx5: connect meter policy to created flows")
Cc: stable@dpdk.org
Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
The MODIFY_FIELD RTE action rejects copying to/from metadata
in case of the legacy mode extensive flow metadata support.
This is not consistent with the SET_META action, which has no such
restriction imposed. Registers A or B are used for META in
legacy mode. Allow meta modifications in legacy mode as well.
On the other hand, SET_META rejects actions in case register C
is not available, even though it is not needed in legacy mode.
Skip this check for legacy mode and allow setting META.
Fixes: edf325d421e8 ("net/mlx5: check extended metadata for meta modification")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
net/mlx5: fix Tx meta width for modify field flow rule
Register C is used for the metadata within NIC Rx domain.
And its width can vary from 0 to 32 bits depending on
its kernel usage. But it is not the case within NIC Tx domain,
register A is always 32 bits there. Fix metadata width detection
for the modify_field flow API within NIC Tx domain.
Fixes: 6d5735c1cba2 ("net/mlx5: fix meta register conversion for extensive mode")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Min Hu (Connor) [Thu, 28 Oct 2021 11:52:30 +0000 (19:52 +0800)]
net/hns3: fix mailbox communication with HW
The mailbox is the communication mechanism between SW and HW. There are
two approaches for SW to recognize a mailbox message from HW: one uses
match_id, the other compares the message code. The two approaches are
independent and used in different scenarios.
For the second approach, "next_to_use" should be updated and written
to the HW register. If this is not done, HW does not know the position SW
has reached, and the communication between SW and HW fails.
Fixes: dbbbad23e380 ("net/hns3: fix VF handling LSC event in secondary process")
Cc: stable@dpdk.org
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Volodymyr Fialko [Thu, 28 Oct 2021 22:14:46 +0000 (00:14 +0200)]
mempool/cnxk: postpone devargs parsing
Use the roc_npa_lf_init_cb_register() scheme to register a
callback for max_pools argument parsing.
This removes the dependency on the order in which PCI
devices are probed.
Signed-off-by: Volodymyr Fialko <vfialko@marvell.com>
Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Maxime Coquelin [Tue, 26 Oct 2021 16:29:04 +0000 (18:29 +0200)]
vhost: increase number of async IO vectors
This patch increases the number of IO vectors for the
asynchronous data path from 512 to 2048. Starvation of IO vectors
was reported during testing with an iperf benchmark using 64KB
packet size.
As there is no direct relationship between
VHOST_MAX_ASYNC_VEC and BUF_VECTOR_MAX, this patch also
assigns the VHOST_MAX_ASYNC_VEC value directly instead of deriving it
as a multiple of BUF_VECTOR_MAX.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>
Maxime Coquelin [Tue, 26 Oct 2021 16:29:03 +0000 (18:29 +0200)]
vhost: merge sync and async mbuf to descriptor filling
This patch merges copy_mbuf_to_desc(), used by the sync
path, with async_mbuf_to_desc(), used by the async path.
Most parts of these complex functions are identical, so merging
them will make maintenance easier.
In order not to degrade performance, the patch introduces
a boolean function parameter to specify whether it is called
in async context. This boolean is statically passed to this
always-inlined function, so the compiler will optimize it
out.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>
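A generic sketch of this technique (not the vhost code; names are made up): a single always-inlined helper takes a constant boolean, so each call site compiles down to only the branch it needs.

#include <stdbool.h>
#include <rte_common.h>

static __rte_always_inline int
fill_descs(void *vq, void *m, bool is_async)
{
	(void)vq;
	(void)m;
	if (is_async) {
		/* build IO vectors for the asynchronous (DMA) path */
		return 0;
	}
	/* synchronous copy into the descriptor buffers */
	return 0;
}

/* Call sites pass compile-time constants, so the dead branch is removed. */
static int sync_enqueue(void *vq, void *m)  { return fill_descs(vq, m, false); }
static int async_enqueue(void *vq, void *m) { return fill_descs(vq, m, true); }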
Maxime Coquelin [Tue, 26 Oct 2021 16:29:02 +0000 (18:29 +0200)]
vhost: prepare sync for mbuf to descriptor refactoring
This patch extracts the descriptor buffer filling
from copy_mbuf_to_desc() into a dedicated function as a
preliminary step towards merging copy_mbuf_to_desc() and
async_mbuf_to_desc().
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>