Feifei Wang [Sun, 20 Sep 2020 11:48:54 +0000 (06:48 -0500)]
test/ring: validate single element enqueue/dequeue
Validate the return value of single element enqueue/dequeue operation in
the test.
Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Feifei Wang [Sun, 20 Sep 2020 11:48:53 +0000 (06:48 -0500)]
test/ring: check dequeued object for single element
Add check in test_ring_basic_ex and test_ring_with_exact_size for single
element enqueue and dequeue operations to validate the dequeued objects.
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Feifei Wang [Sun, 20 Sep 2020 11:48:52 +0000 (06:48 -0500)]
test/ring: fix dequeued object checks
When using the memcmp function to check data, the third parameter should be
the total size of all elements, rather than the number of elements.
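A minimal sketch of the corrected check (the 'src', 'dst', 'esize' and 'num'
names here are assumptions, not necessarily the test's actual variables):
    /* compare 'num' elements of 'esize' bytes each: pass the total byte size */
    if (memcmp(src, dst, (size_t)num * esize) != 0)
        return -1;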
Fixes:
a9fe152363e2 ("test/ring: add custom element size functional tests")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Feifei Wang [Sun, 20 Sep 2020 11:48:51 +0000 (06:48 -0500)]
test/ring: fix number of single element enqueue/dequeue
The ring capacity is (RING_SIZE - 1), thus only (RING_SIZE - 1) elements can
be enqueued into the ring.
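For illustration, a sketch of the expected behavior for a ring created with
RING_SIZE slots (variable names assumed):
    /* only RING_SIZE - 1 single-element enqueues can succeed on an empty ring */
    for (i = 0; i < RING_SIZE - 1; i++)
        if (rte_ring_enqueue(r, obj[i]) != 0)
            return -1;          /* unexpected failure: capacity not yet reached */
    /* the next enqueue is expected to fail with -ENOBUFS (ring full) */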
Fixes:
af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Feifei Wang [Sun, 20 Sep 2020 11:48:50 +0000 (06:48 -0500)]
test/ring: fix object reference for single element enqueue
When enqueuing a single element to the ring in the performance test, a pointer
to the object should be passed to the rte_ring_[sp|mp]_enqueue APIs, not a
pointer to a table of void * pointers.
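An illustrative sketch of the difference (the 'burst' table name is an
assumption):
    void *burst[MAX_BURST];
    /* wrong: enqueues the address of the pointer table itself */
    rte_ring_sp_enqueue(r, burst);
    /* right: enqueues the single object pointer stored in the table */
    rte_ring_sp_enqueue(r, burst[0]);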
Fixes:
a9fe152363e2 ("test/ring: add custom element size functional tests")
Cc: stable@dpdk.org
Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Hyong Youb Kim [Wed, 9 Sep 2020 14:00:06 +0000 (07:00 -0700)]
net/enic: support VXLAN decap action combined with VLAN pop
Flow Manager (flowman) provides the DECAP_STRIP operation, which decapsulates
the VXLAN header and then removes the VLAN header from the inner packet. Use
this operation to support vxlan_decap followed by of_pop_vlan.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 14:00:05 +0000 (07:00 -0700)]
net/enic: generate VXLAN src port if it is zero in template
When the VXLAN source port in the template is zero, the adapter is expected to
generate a value based on the inner packet flow when it performs
encapsulation. Flow Manager in the VIC adapter currently lacks this ability.
So, generate a random port when creating a flow if the port is zero, to avoid
transmitting packets with source port 0.
Fixes:
ea7768b5bba8 ("net/enic: add flow implementation based on Flow Manager API")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 14:00:04 +0000 (07:00 -0700)]
net/enic: ignore VLAN inner type when it is zero
When a VLAN pattern is present, the flow handler always copies its inner_type
to the match buffer regardless of its value (i.e. HW matches inner_type
against the packet's inner ethertype). When the inner_type spec and mask are
both 0, adding it to the match buffer is usually harmless but breaks the
following pattern used in some applications such as OVS-DPDK.
flow create 0 ingress ... pattern eth ... type is 0x0800 /
vlan tci spec 0x2 tci mask 0xefff / ipv4 / end actions count /
of_pop_vlan / ...
The VLAN pattern's inner_type is 0, and the outer eth pattern's type actually
specifies the inner ethertype. The outer ethertype (0x0800) is first copied to
the match buffer. Then, the driver copies inner_type (0) to the match buffer,
which overwrites the existing 0x0800 with 0 and breaks the usage above.
Simply ignore inner_type when it is 0, which is the correct behavior. As a
byproduct, the driver can support usages like the above.
Fixes:
ea7768b5bba8 ("net/enic: add flow implementation based on Flow Manager API")
Cc: stable@dpdk.org
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 14:00:03 +0000 (07:00 -0700)]
net/enic: support priorities for TCAM flows
Group 0 corresponds to TCAM which supports priorities. Accept non-zero
priorities for group 0 flows.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 14:00:02 +0000 (07:00 -0700)]
net/enic: support egress port id action
Use Flow Manager (flowman) to support egress PORT_ID action. It can
steer egress packets from PFs and VFs to any uplink port as long as
they are all on the same VIC adapter. It can also steer packets
between ports on the same VIC adapter (i.e. loopback).
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 14:00:01 +0000 (07:00 -0700)]
net/enic: remove obsolete code
The 'next' field in struct enic is unused. The comment in enic_cq_rq()
is out-of-date. Remove them.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 13:56:56 +0000 (06:56 -0700)]
net/enic: enable flow API for VF representor
Use Flow Manager (flowman) to support flow API for
representors. Representor's flow handlers simply invoke PF handlers
and pass the representor's flowman structure. The PF flowman handlers
are aware of representors and perform appropriate devcmds to create
flows on the NIC.
Also use flowman to create internal flows for implicit VF-representor
path. With that, representor Tx/Rx is now functional.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 13:56:55 +0000 (06:56 -0700)]
net/enic: extend flow handler to support VF representors
VF representor ports can create flows on VFs through the PF flowman
(Flow Manager) instance in the firmware. These flows match packets
egressing from VFs and apply flowman actions.
1. Make flow handler aware of VF representors
When a representor port invokes flow APIs, use the PF port's flowman
instance to perform flowman devcmd. If the port ID refers to a
representor, use VF handle instead of PF handle.
2. Serialize flow API calls
Multiple application threads may invoke flow APIs through PF and VF
representor ports simultaneously. This leads to races, as the ports all share
the same PF flowman instance. Use a lock to serialize API calls. The lock is
used only when representors exist.
3. Add functions to create flows for implicit representor paths
There is an implicit path between VF and its representor. The
functions below create flow rules to implement that path.
- enic_fm_add_rep2vf_flow()
- enic_fm_add_vf2rep_flow()
The flows created for representor paths are marked as internal. They are not
visible to the application, and the flush API does not destroy them. They are
automatically deleted when the representor port stops (enic_fm_destroy).
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 13:56:54 +0000 (06:56 -0700)]
net/enic: add single queue Tx and Rx to VF representor
A VF representor allocates queues from the PF's pool of queues and uses them
for its Tx and Rx. It supports one Tx queue and one Rx queue.
Implicit packet forwarding between the representor queues and the VF does not
yet exist. It will be enabled in subsequent commits using the flowman API.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 13:56:53 +0000 (06:56 -0700)]
net/enic: add minimal VF representor
Enable the minimal VF representor without Tx/Rx and flow API support.
1. Enable the standard devarg 'representor'
When the devarg is specified, create VF representor ports.
2. Initialize flowman early during PF probe
Representors require the flowman API from the firmware. Initialize it
before creating VF representors, so probe can detect the flowman
support and fail if not available.
3. Add enic_fm_allocate_switch_domain() to allocate switch domain ID
PFs and VFs on the same VIC adapter can forward packets to each other,
so the switch domain is the physical adapter.
4. Create a vnic_dev lock to serialize concurrent devcmd calls
PF and VF representor ports may invoke devcmd (e.g. dump stats)
simultaneously. As they all share a single PF devcmd instance in the
firmware, use a lock to serialize devcmd calls.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Hyong Youb Kim [Wed, 9 Sep 2020 13:56:52 +0000 (06:56 -0700)]
net/enic: extend VNIC dev API for VF representors
VF representors need to proxy devcmd through the PF vnic_dev
instance. Extend vnic_dev to accommodate them as follows.
1. Add vnic_vf_rep_register()
A VF representor creates its own vnic_dev instance via this function
and saves VF ID. When performing devcmd, vnic_dev uses the saved VF ID
to proxy devcmd through the PF vnic_dev instance.
2. Add vnic_register_lock()
As PF and VF representors appear as independent ports to the application, the
application's threads may invoke APIs on them simultaneously, leading to race
conditions on the PF vnic_dev. For example, thread A can query stats on the PF
port while thread B queries stats on a VF representor.
The PF port invokes this function to provide a lock to vnic_dev. This
lock is used to serialize devcmd calls from PF and VF representors.
3. Add utility functions to assist VF representor settings
vnic_dev_mtu() and vnic_dev_uif() retrieve vnic MTU and UIF number
(uplink index), respectively.
Signed-off-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: John Daley <johndale@cisco.com>
Chengchang Tang [Mon, 21 Sep 2020 13:22:38 +0000 (21:22 +0800)]
net/hns3: add Rx buffer size to Rx queue info
Report hns3 PMD configured Rx buffer size in Rx queue information query.
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Chengchang Tang [Mon, 21 Sep 2020 13:22:37 +0000 (21:22 +0800)]
ethdev: support getting Rx buffer size in Rx queue info
Add a field named rx_buf_size to rte_eth_rxq_info to report the buffer size
the HW uses for receiving packets.
Upper-layer users can then retrieve this information by calling
rte_eth_rx_queue_info_get.
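A minimal usage sketch of the new field (error handling elided):
    struct rte_eth_rxq_info qinfo;

    if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) == 0)
        printf("queue %u Rx buffer size: %u\n", queue_id, qinfo.rx_buf_size);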
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Reviewed-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Long Li [Fri, 18 Sep 2020 18:53:47 +0000 (11:53 -0700)]
net/netvsc: fix rndis packet addresses
The address should be calculated before type cast, not after.
Fixes:
cc0251813277 ("net/netvsc: split send buffers from Tx descriptors")
Cc: stable@dpdk.org
Reported-by: Souvik Dey <sodey@rbbn.com>
Signed-off-by: Long Li <longli@microsoft.com>
Qi Zhang [Mon, 21 Sep 2020 08:30:58 +0000 (16:30 +0800)]
net/iavf: fix iterator for RSS LUT
Change RSS LUT iterator from uint8_t to uint16_t since the
RSS LUT size could exceed 255.
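A hedged sketch of the fix pattern (the structure and field names are
assumptions, not the exact driver code):
    uint16_t i;     /* was uint8_t, which wraps around once the LUT size exceeds 255 */

    for (i = 0; i < vf->vf_res->rss_lut_size; i++)
        vf->rss_lut[i] = i % vf->num_queue_pairs;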
Fixes:
69dd4c3d0898 ("net/avf: enable queue and device")
Cc: stable@dpdk.org
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Ting Xu <ting.xu@intel.com>
Phil Yang [Fri, 11 Sep 2020 05:38:19 +0000 (13:38 +0800)]
net/memif: relax barrier for zero copy path
Using 'rte_mb' to synchronize the shared ring head/tail between producer and
consumer stalls the pipeline and hurts performance on weak memory model
platforms, such as aarch64.
Relaxing the expensive barrier to C11 atomics with explicit memory ordering
improves throughput by 3.6%.
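The general pattern is illustrated below (a sketch only, not the exact memif
code): the full barrier around the shared head/tail is replaced with an
acquire load on the reader side paired with a release store on the writer
side.
    /* reader side: before, this load was preceded by rte_mb() */
    uint16_t tail = __atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);
    /* ... process descriptors up to 'tail' ... */
    /* writer side: publish the new head so the peer sees completed slots */
    __atomic_store_n(&ring->head, head, __ATOMIC_RELEASE);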
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Jakub Grajciar <jgrajcia@cisco.com>
Chengchang Tang [Wed, 9 Sep 2020 09:23:39 +0000 (17:23 +0800)]
net/hns3: fix crash when Tx multiple buffer packets
Currently, segmentation faults may occur when sending packets whose payloads
are stored in multiple buffers on the hns3 network engine. The related core
dump information is as follows:
Program terminated with signal 11, Segmentation fault.
0 hns3_reassemble_tx_pkts
2512 temp = temp->next;
Missing separate debuginfos, use:
(gdb) bt
0 hns3_reassemble_tx_pkts
1 0x0000000000969c60 in hns3_check_non_tso_pkt
2 0x000000000096adbc in hns3_xmit_pkts
3 0x000000000050d4d0 in rte_eth_tx_burst
4 0x000000000050fca4 in pkt_burst_transmit
5 0x00000000004ca6b8 in run_pkt_fwd_on_lcore
6 0x00000000004ca7fc in start_pkt_forward_on_core
7 0x00000000006975a4 in eal_thread_loop
8 0x0000ffffa6f7fc48 in start_thread
9 0x0000ffffa6ed1600 in thread_start
The root cause is that the hns3 PMD driver invokes the rte_pktmbuf_free_seg
API function to release the same rte_mbuf multiple times. The rte_mbuf
pointer is not set to NULL in the internal function
hns3_rx_queue_release_mbufs, which is invoked during queue setup, stop and
close. As a result, the rte_mbufs in the Rx queues are repeatedly released
when the user application sets up queues or stops/starts the device multiple
times.
Probably for performance reasons, the DPDK mempool library does not check for
repeated rte_mbuf releases. The address of the released rte_mbuf is stored
directly into the per-lcore cache of the mempool. This allows the same
rte_mbufs to be obtained from the mempool repeatedly by the
rte_mempool_get_bulk API function, which ultimately causes a NULL pointer
access in the PMD driver.
This patch fixes the problem by setting the released mbuf pointer to NULL in
the internal function hns3_rx_queue_release_mbufs. The other internal
function, hns3_reassemble_tx_pkts, is also optimized to avoid a similar
problem.
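A hedged sketch of the fix pattern in the Rx queue mbuf release path (field
names assumed):
    for (i = 0; i < rxq->nb_rx_desc; i++) {
        if (rxq->sw_ring[i].mbuf != NULL) {
            rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
            rxq->sw_ring[i].mbuf = NULL;    /* prevent a later double free */
        }
    }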
Fixes:
bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org
Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Wei Hu (Xavier) [Wed, 9 Sep 2020 09:23:38 +0000 (17:23 +0800)]
net/hns3: add restriction on setting VF MTU
When Rx of scattered packets is off, the hns3 PMD driver may use the vector Rx
process function or the simple Rx function. If the MTU is increased and the
maximum length of received packets becomes greater than the length of one Rx
packet buffer, the hardware network engine needs multiple BDs and buffers to
store these packets, which causes problems when the vector or simple Rx
function is still used to receive packets. So, when Rx of scattered packets is
off and the device is started, it is not permitted to increase the MTU such
that the maximum length of Rx packets would exceed the Rx buffer length.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Wei Hu (Xavier) [Wed, 9 Sep 2020 09:23:37 +0000 (17:23 +0800)]
net/hns3: support NEON Rx
This patch adds NEON vector instructions to optimize Rx burst process.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Wei Hu (Xavier) [Wed, 9 Sep 2020 09:23:36 +0000 (17:23 +0800)]
net/hns3: support NEON Tx
This patch adds NEON vector instructions to optimize Tx burst process.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Wei Hu (Xavier) [Wed, 9 Sep 2020 09:23:35 +0000 (17:23 +0800)]
net/hns3: add simple Tx path
This patch adds a simple Tx process function, which can be used when
multi-segment packets are not needed, i.e. when the
DEV_TX_OFFLOAD_MBUF_FAST_FREE offload is not set.
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Wei Hu (Xavier) [Wed, 9 Sep 2020 09:23:34 +0000 (17:23 +0800)]
net/hns3: add simple Rx path
This patch adds a simple Rx process function and supports choosing the Rx
function based on the actual Rx offload capabilities.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Wei Hu (Xavier) [Wed, 9 Sep 2020 09:23:33 +0000 (17:23 +0800)]
net/hns3: reduce address calculation in Rx
This patch adds the internal function hns3_write_reg_opt to avoid the
performance loss from address calculation during register access in the
'.rx_pkt_burst' ops implementation function hns3_recv_pkts.
In addition, because the hardware of the hns3 network engine always accesses
registers in little-endian mode, the driver should call rte_cpu_to_le_32 to
convert data to little-endian before writing a register and rte_le_to_cpu_32
to convert data after reading from a register. The driver encapsulates these
conversions in the register read/write functions below:
hns3_write_reg
hns3_write_reg_opt
hns3_read_reg
Therefore, no conversion is required again when calling these functions.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Wei Hu (Xavier) [Wed, 9 Sep 2020 09:23:32 +0000 (17:23 +0800)]
net/hns3: report Rx free threshold
This patch reports the .rx_free_thresh value in the .dev_infos_get ops
implementation functions hns3_dev_infos_get and hns3vf_dev_infos_get.
In addition, a member variable of struct hns3_rx_queue is renamed and
comments are added to improve code readability.
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Ivan Dyukov [Tue, 15 Sep 2020 19:07:02 +0000 (22:07 +0300)]
examples: use new link status print format
Add usage of rte_eth_link_to_str function to example
applications.
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Ivan Dyukov [Tue, 15 Sep 2020 19:06:58 +0000 (22:06 +0300)]
app: use new link status print format
Add usage of rte_eth_link_to_str function to applications and docs.
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Ivan Dyukov [Tue, 15 Sep 2020 19:06:57 +0000 (22:06 +0300)]
ethdev: format link status text
A new link_speed value has been introduced: INT_MAX, which means that the
speed is unknown. To simplify processing of this value in applications, a new
function is added that converts link_speed to a string. Also, the DPDK
examples contain much duplicated code that formats the entire link status
structure as text.
This commit adds two functions:
* rte_eth_link_speed_to_str - format link_speed as a string
* rte_eth_link_to_str - convert the link status structure to a string
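Typical application usage looks roughly like this (a sketch; it assumes the
string-length macro introduced alongside these helpers,
RTE_ETH_LINK_MAX_STR_LEN):
    char text[RTE_ETH_LINK_MAX_STR_LEN];
    struct rte_eth_link link;

    rte_eth_link_get_nowait(port_id, &link);
    rte_eth_link_to_str(text, sizeof(text), &link);
    printf("Port %u: %s\n", port_id, text);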
Signed-off-by: Ivan Dyukov <i.dyukov@samsung.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
Ciara Loftus [Thu, 10 Sep 2020 09:06:47 +0000 (09:06 +0000)]
net/af_xdp: fix umem size
The kernel expects the start address of the UMEM to be page-size aligned.
Since the mempool is not guaranteed to have such alignment, we have been
aligning the address to the start of the page the mempool is on. However, when
passing the 'size' of the UMEM during its creation, we did not take this into
account.
This commit adds the amount by which the address was aligned to the size of
the UMEM.
Bugzilla ID: 532
Fixes:
d8a210774e1d ("net/af_xdp: support unaligned umem chunks")
Cc: stable@dpdk.org
Signed-off-by: Ciara Loftus <ciara.loftus@intel.com>
Vipul Ashri [Fri, 18 Sep 2020 09:55:04 +0000 (15:25 +0530)]
net/virtio: fix variable assignment in helper macro
Inside the macro ASSIGN_UNLESS_EQUAL(var, val), the assignment to var always
fails because the assignment is done through var_, which has local scope only.
This leads to Tx packets not going out, and they were found broken due to the
malfunctioning cleanup. This patch fixes the wrong variable assignment.
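A hedged sketch of the bug shape described above (not the exact macro text):
the broken form updates only the local copy var_, while the fixed form writes
back through the caller's variable.
    /* broken shape: only the local copy is modified */
    #define ASSIGN_UNLESS_EQUAL_BAD(var, val) do {          \
            typeof(var) var_ = (var);                       \
            if (var_ != (val))                              \
                    var_ = (val); /* lost at end of block */\
    } while (0)

    /* fixed shape: assign through a pointer to the caller's variable */
    #define ASSIGN_UNLESS_EQUAL_FIXED(var, val) do {        \
            typeof(var) *var_ = &(var);                     \
            if (*var_ != (val))                             \
                    *var_ = (val);                          \
    } while (0)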
Fixes:
57f90f894588 ("net/virtio: reuse packed ring functions")
Cc: stable@dpdk.org
Signed-off-by: Vipul Ashri <vipul.ashri@oracle.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Matan Azrad [Thu, 10 Sep 2020 07:20:34 +0000 (07:20 +0000)]
vdpa/mlx5: fix completion queue polling
The CQ polling is done in order to notify the guest about new traffic bursts
and to release FW resources for managing the next bursts.
When the HW is faster than the SW, all the FW resources may be busy in SW due
to late polling.
In this case, due to wrong WQE counter masking, the calculated number of
completions is 0 while the queue is full.
Change the WQE counter masking to 16-bit width instead of the CQ size mask, as
defined by the CQE format.
Fixes:
c5f714e50b0e ("vdpa/mlx5: optimize completion queue poll")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Matan Azrad [Wed, 2 Sep 2020 08:34:59 +0000 (08:34 +0000)]
vdpa/mlx5: fix completion queue assertion
The CQ configuration enables the collapse feature in HW, which causes the HW
to write all the completions into the first CQE.
When this feature is enabled, the HW does not switch the owner bit when it
starts a new cycle of the CQ, unlike when working without the collapse
feature.
The SW CQ polling wrongly added an assertion to validate the owner bit switch,
which causes a panic in debug mode.
Remove the aforementioned assertion.
Fixes:
c5f714e50b0e ("vdpa/mlx5: optimize completion queue poll")
Cc: stable@dpdk.org
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Eugenio Pérez [Mon, 31 Aug 2020 07:59:22 +0000 (09:59 +0200)]
vhost: fix IOTLB mempool single-consumer flag
The control thread (which handles IOTLB messages) and the forwarding thread
both use the IOTLB to translate addresses. The former may modify the same
mempool entry and cause a loop in the iotlb_pending_entries list.
Bugzilla ID: 523
Fixes:
d012d1f293f4 ("vhost: add IOTLB helper functions")
Cc: stable@dpdk.org
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Xueming Li [Tue, 25 Aug 2020 09:17:28 +0000 (09:17 +0000)]
vdpa/mlx5: fix event channel setup
During vDPA device setup, if some error happens, the event channel release
gets stuck polling the event channel.
The event channel fd is set to non-blocking in the CQE setup, so if any error
happens after the event channel is created and before this function, the
polling before releasing resources gets stuck.
This patch moves the event channel to non-blocking mode right after creation.
Fixes:
8395927cdfaf ("vdpa/mlx5: prepare HW queues")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Chenbo Xia [Mon, 10 Aug 2020 13:18:02 +0000 (13:18 +0000)]
vhost: add device reset status
The vhost library currently has no definition of the reset status. This patch
adds the reset status definition and updates the related log messages.
Signed-off-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Adrian Moreno [Wed, 5 Aug 2020 14:45:17 +0000 (16:45 +0200)]
net/virtio-user: enable feature checking
virtio 1.0 introduced a mechanism for the driver to verify that the feature
bits it sets are accepted by the device. This mechanism consists of setting
the VIRTIO_STATUS_FEATURE_OK status bit and re-reading it, which gives the
device a chance to clear it if the features were not accepted.
This is currently being done only in modern virtio-pci devices but since
the appropriate vhost-user messages have been added, it can also be done
in virtio-user (vhost-user only).
This patch activates this mechanism on virtio-user.
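The spec-level pattern looks roughly like this (a sketch; set_status and
get_status are hypothetical stand-ins for the transport-specific accessors,
and the bit is referred to here by DPDK's VIRTIO_CONFIG_STATUS_FEATURES_OK
name):
    uint8_t status = get_status(dev) | VIRTIO_CONFIG_STATUS_FEATURES_OK;

    set_status(dev, status);            /* driver declares feature negotiation done */
    if (!(get_status(dev) & VIRTIO_CONFIG_STATUS_FEATURES_OK))
        return -1;                      /* device cleared the bit: features rejected */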
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Adrian Moreno [Wed, 5 Aug 2020 14:45:16 +0000 (16:45 +0200)]
net/virtio-user: support vhost status getting
This patch adds support for the VHOST_USER_GET_STATUS request.
Only the vhost-user backend is supported for now.
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Maxime Coquelin [Wed, 5 Aug 2020 14:45:15 +0000 (16:45 +0200)]
net/virtio-user: support vhost status setting
This patch adds support for the VHOST_USER_SET_STATUS request. It is used to
make the backend aware of Virtio device status updates.
It is useful for the backend to know when the Virtio driver is done with the
Virtio device configuration.
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Adrian Moreno [Wed, 5 Aug 2020 14:45:14 +0000 (16:45 +0200)]
net/virtio: add device reset status bit
For the sake of completeness, add the definition of the missing status bit in
accordance with the virtio spec.
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Signed-off-by: Adrian Moreno <amorenoz@redhat.com>
Rahul Lakkireddy [Fri, 11 Sep 2020 23:52:10 +0000 (05:22 +0530)]
net/cxgbe: support RSS redirection table update
Implement eth_dev_ops to manipulate RSS redirection table.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Rahul Lakkireddy [Fri, 11 Sep 2020 23:52:09 +0000 (05:22 +0530)]
net/cxgbe: improve Rx congestion control
The Chelsio T6 NIC can support up to 8 priority channels to manage congestion.
So, increase the number of congestion channels to 8 for T6. Also, add an Rxq
state to avoid unnecessarily ringing the doorbell and polling the hardware for
more traffic when the Rxq is stopped.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Rahul Lakkireddy [Fri, 11 Sep 2020 23:52:08 +0000 (05:22 +0530)]
net/cxgbe: rework queue allocation between ports
Firmware returns the max queues that can be allocated on the entire PF. The
driver evenly distributes them across all the ports belonging to the PF.
However, some ports may need more queues than others, and this equal
distribution scheme prevents accessing the other ports' unused queues. So,
remove the equal distribution scheme and allow the ports to allocate as many
queues as they need.
Also remove the hardcoded max limit of 64 on queue allocation. Instead, use
the max limit given by the firmware.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Rahul Lakkireddy [Tue, 1 Sep 2020 17:16:26 +0000 (22:46 +0530)]
net/cxgbe: release port resources during port close
Enable RTE_ETH_DEV_CLOSE_REMOVE during PCI probe for all ports
enumerated under the PF. Free up the underlying port Virtual
Identifier (VI) and associated resources during port close.
Once all the ports under the PF are closed, free up the PF-wide
shared resources. Invoke port close function of all ports under
the PF, in PCI remove too.
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Rahul Lakkireddy [Tue, 1 Sep 2020 17:16:25 +0000 (22:46 +0530)]
net/cxgbe: fix queue DMA ring leaks during port close
Free up the DMA memzones properly for all the port's queues during port close.
So, rework the DMA ring allocation/free logic to use the
rte_eth_dma_zone_reserve()/rte_eth_dma_zone_free() helper functions for
allocating/freeing the memzones.
The firmware event queue doesn't have an associated freelist queue. So, remove
the check that tries to give a memzone name to a non-existent freelist queue.
Also, add a missing free for the control queue mempools.
Fixes:
0462d115441d ("cxgbe: add device related operations")
Cc: stable@dpdk.org
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Igor Romanov [Tue, 8 Sep 2020 09:20:22 +0000 (10:20 +0100)]
net/sfc/base: fix tunnel configuration
Tunnel configuration may fail because of insufficient access rights
on a virtual function. Ignore the failure if a tunnel configuration
with empty UDP ports is requested.
Fixes:
17551f6dffcc ("net/sfc/base: add API to control UDP tunnel ports")
Cc: stable@dpdk.org
Signed-off-by: Igor Romanov <igor.romanov@oktetlabs.ru>
Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Kiran Kumar K [Thu, 17 Sep 2020 02:07:35 +0000 (07:37 +0530)]
net/octeontx2: support RSS hash level
Add support to choose the RSS hash level from the ethdev RSS config.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Kiran Kumar K [Thu, 17 Sep 2020 02:07:34 +0000 (07:37 +0530)]
app/testpmd: support RSS level configuration
Add support to set the RSS level from the ethdev config.
level-default requests the default behavior.
level-outer requests RSS to be performed on the outermost packet encapsulation
level.
level-inner requests RSS to be performed on the specified inner packet
encapsulation level, from outermost to innermost.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kiran Kumar K [Thu, 17 Sep 2020 02:07:33 +0000 (07:37 +0530)]
ethdev: support encapsulation level for RSS offload
This patch reserves 2 bits as an input selection to choose the inner or outer
encapsulation level for RSS computation. They are combined with the existing
ETH_RSS_* flags to choose inner or outer layers.
This functionality already exists in rte_flow through the level parameter of
the RSS action configuration, rte_flow_action_rss.
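For illustration, the level bits are OR'ed into rss_hf together with the
existing type flags; assuming the ETH_RSS_LEVEL_* macro names added by this
patch, requesting inner-most RSS on IP could look like:
    struct rte_eth_rss_conf rss_conf = {
        .rss_hf = ETH_RSS_IP | ETH_RSS_LEVEL_INNERMOST,
    };

    rte_eth_dev_rss_hash_update(port_id, &rss_conf);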
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Haiyue Wang [Mon, 7 Sep 2020 01:56:50 +0000 (09:56 +0800)]
net: adjust header length parse size
Enlarge the L3 and tunnel header length fields from 8 bits to 16 bits to
handle bigger headers, and reorder the fields to avoid creating a structure
hole.
Signed-off-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Heinrich Kuhn [Thu, 3 Sep 2020 11:23:51 +0000 (13:23 +0200)]
doc: improve multiport PF in nfp guide
The Agilio CX family of SmartNICs generally has a 1:many mapping of PF to
physical ports. Elaborate on this mapping in the PF multiport section of the
NFP PMD documentation.
Fixes:
d625beafc8be ("doc: update NFP with PF support information")
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Rahul Lakkireddy [Tue, 1 Sep 2020 17:20:09 +0000 (22:50 +0530)]
net/cxgbe: fix crash when accessing empty Tx mbuf list
Ensure packets are available before accessing the mbuf list in Tx
burst function. Otherwise, just reclaim completed Tx descriptors and
exit.
Fixes:
b1df19e43e1d ("net/cxgbe: fix prefetch for non-coalesced Tx packets")
Cc: stable@dpdk.org
Reported-by: Brian Poole <brian90013@gmail.com>
Signed-off-by: Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
Yi Yang [Thu, 17 Sep 2020 02:12:49 +0000 (10:12 +0800)]
gso: fix payload unit size for UDP
The fragment offset in the IPv4 header is measured in units of 8 bytes. The
fragment offset of UDP fragments will be wrong after GSO if pyld_unit_size
isn't a multiple of 8. Say pyld_unit_size is 1500: the fragment offset of the
second UDP fragment will be 187 (i.e. 1500 / 8), which means 1496, resulting
in a 4-byte data loss (1500 - 1496 = 4). UDP GRO will then reassemble a wrong
packet.
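A minimal sketch of the fix idea (variable names assumed): round the
per-fragment payload size down to a multiple of 8 before segmenting.
    /* IPv4 fragment offsets count 8-byte blocks, so e.g. 1500 becomes 1496 */
    pyld_unit_size = gso_size - hdr_offset;
    pyld_unit_size = RTE_ALIGN_FLOOR(pyld_unit_size, 8);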
Fixes:
b166d4f30b66 ("gso: support UDP/IPv4 fragmentation")
Cc: stable@dpdk.org
Signed-off-by: Yi Yang <yangyi01@inspur.com>
Acked-by: Jiayu Hu <jiayu.hu@intel.com>
Qi Zhang [Thu, 17 Sep 2020 05:18:08 +0000 (13:18 +0800)]
net/ice: support new devices
Added support for the following new devices:
ICE_DEV_ID_E822L_BACKPLANE 0x1897
ICE_DEV_ID_E822L_SFP 0x1898
ICE_DEV_ID_E822L_10G_BASE_T 0x1899
ICE_DEV_ID_E822L_SGMII 0x189A
The patch also reorders the items in pci_id_ice_map to align with
ice_devids.h.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Qiming Yang <qiming.yang@intel.com>
Qi Zhang [Wed, 16 Sep 2020 06:26:33 +0000 (14:26 +0800)]
net/iavf: reject floating RSS attribute
If an RSS attribute does not have an associated RSS type, reject it.
Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
Acked-by: Junfeng Guo <junfeng.guo@intel.com>
Somnath Kotur [Fri, 11 Sep 2020 01:56:03 +0000 (18:56 -0700)]
net/bnxt: add separate mutex for FW health check
def_cp_lock was added to resolve a race between dev_configure and int_handler.
It should not also be used to synchronize the scheduling of the FW health
check between dev_start and the async event handler; use a separate mutex for
that.
Fixes:
a73b8e939f10 ("net/bnxt: fix race between start and interrupt handler")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Somnath Kotur [Fri, 11 Sep 2020 01:56:02 +0000 (18:56 -0700)]
net/bnxt: fix checking VNIC in shutdown path
Add a couple of NULL pointer checks in bnxt_free_all_filters() and
bnxt_free_vnics(), respectively, to guard against certain error
injection/recovery scenarios where the application was found to crash because
the bp->vnic_info pointer was NULL.
Fixes:
51fafb89a9a0 ("net/bnxt: get rid of ff pools and use VNIC info array")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:56:01 +0000 (18:56 -0700)]
net/bnxt: add locks in flow database
Added mutex protection for the flow database to prevent simultaneous access
and to protect flow creation and deletion.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Somnath Kotur [Fri, 11 Sep 2020 01:56:00 +0000 (18:56 -0700)]
net/bnxt: fix representor data path
1. The representor Rx ring producer index was not getting reset in the
ring-full case. Fix it by incrementing only in the success case.
2. Instead of calling the mbuf-specific routine to free the mbuf when the
representor ring is full, rte_free was being called, leading to 'invalid
memory' errors being logged.
3. Do not account the packet meant for the representor in the parent Rx ring's
array that is returned to the application.
Fixes:
6dc83230b43b ("net/bnxt: support port representor data path")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Sriharsha Basavapatna [Fri, 11 Sep 2020 01:55:59 +0000 (18:55 -0700)]
net/bnxt: provide switch info if VFR are configured
Some applications need the switch_info of the device to be returned as part of
eth_dev_info_get(). The offload logic in such applications could use this
info. Pass this info to the application when VF representors are configured.
Fixes:
322bd6e70272 ("net/bnxt: add port representor infrastructure")
Cc: stable@dpdk.org
Signed-off-by: Sriharsha Basavapatna <sriharsha.basavapatna@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:58 +0000 (18:55 -0700)]
net/bnxt: fix out of bound access in bit handling
Fix an out-of-bounds access in action bit handling.
The act_val is changed to an array to resolve the out-of-bounds access issue.
Fixes:
52799debdf1c ("net/bnxt: support action bitmap opcode")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:57 +0000 (18:55 -0700)]
net/bnxt: enable NAT action with tagged traffic
Added support for performing L3 or L4 rewrite for VLAN-tagged flows. The
outermost DMAC, SMAC and VLAN are used for overwriting when NAT operations are
performed.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:56 +0000 (18:55 -0700)]
net/bnxt: enable VXLAN IPv6 encapsulation
Add code to support VXLAN IPv6 tunnel encapsulation. The IPv6 flow traffic
class and flow label wildcard match can be ignored to support offload for some
applications.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Somnath Kotur [Fri, 11 Sep 2020 01:55:55 +0000 (18:55 -0700)]
net/bnxt: check and set initial counter ID
Instead of relying on the value of the flow counter ID to determine validity,
add an explicit boolean flag to check and set.
Fixes:
306c2d28e247 ("net/bnxt: support count action in flow query")
Fixes:
9cf9c8385df7 ("net/bnxt: add ULP flow counter manager")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:54 +0000 (18:55 -0700)]
net/bnxt: increase counter support from 8K to 16K
The number of internal stats counters is increased to 16K in both the egress
and ingress directions.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:53 +0000 (18:55 -0700)]
net/bnxt: remove VLAN pop action for egress flows
Whitney platform does not support VLAN pop action in the egress
direction. Hence the VLAN pop action is removed from the egress
action templates.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Randy Schacher [Fri, 11 Sep 2020 01:55:52 +0000 (18:55 -0700)]
net/bnxt: use direct HWRM message for interface table
Change interface tables to use direct or non-tunneled HWRM messaging
instead of tunneled messaging. Update HWRM API to a new version to
allow this change.
Signed-off-by: Randy Schacher <stuart.schacher@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Shahaji Bhosle [Fri, 11 Sep 2020 01:55:51 +0000 (18:55 -0700)]
net/bnxt: update resource settings
Update default resource configuration.
Resources include ENCAP records, TCAM, wild card, source property
functions and such.
Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Somnath Kotur [Fri, 11 Sep 2020 01:55:50 +0000 (18:55 -0700)]
net/bnxt: fix VFR cleanup during init failure
If a VF representor port add fails for some reason, the code was rolling back
all ports added so far. With some applications, there is no need to do that.
Just log a failure message for the VF representor port add and continue.
Also include the RTE_MAX_ETH_PORTS value in the bounds check, as one port will
be taken by the uplink port anyway.
Fixes:
6dc83230b43b ("net/bnxt: support port representor data path")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Somnath Kotur [Fri, 11 Sep 2020 01:55:49 +0000 (18:55 -0700)]
net/bnxt: fix crash in VFR queue select
Instead of bounds checking against max possible rings while selecting
queue index for the VF representor, do it against the number of rings
configured.
Fixes:
6dc83230b43b ("net/bnxt: support port representor data path")
Cc: stable@dpdk.org
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:48 +0000 (18:55 -0700)]
net/bnxt: refactor VFR port clean up
When parent VF or PF ports are cleaned up, the child VF representor ports also
need to be cleaned up. If they are not, deleting the parent VF leaves the
hardware rules in place and does not update the firmware about the VFR
removal. The issue can also occur when the application exits without deleting
the VFR ports.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:47 +0000 (18:55 -0700)]
net/bnxt: fix function id used in flow flush
The function ID being used in the flush was incorrect; fix the flush of the
flows to use the correct one.
Fixes:
74bcfc062489 ("net/bnxt: add session and function flow flush")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:46 +0000 (18:55 -0700)]
net/bnxt: modify default flow rule creation
Change default flow rule to use 8-byte encap.
The VFR conduit uses VLAN encap to send packets. So the encap record
is changed from 16B to 8B. That frees up 8B of encap records.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Shahaji Bhosle [Fri, 11 Sep 2020 01:55:45 +0000 (18:55 -0700)]
net/bnxt: add null check for resource manager
Verify the resource manager has been allocated prior to using it.
This can avoid potential segmentation faults.
Signed-off-by: Shahaji Bhosle <sbhosle@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Mike Baucom [Fri, 11 Sep 2020 01:55:44 +0000 (18:55 -0700)]
net/bnxt: free EM index on failure
When an Exact Match entry fails insertion, the allocated index needs to be
pushed back onto the allocation stack. This patch takes care of that.
Signed-off-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:43 +0000 (18:55 -0700)]
net/bnxt: fix coexistence of IPv4 and IPv6 ingress rules
The ingress rule that matched on both IPv4 and IPv6 is now split into two
rules so that both rules can coexist at the same time. The count action is
added only for ingress flows.
Fixes:
fe82f3e02701 ("net/bnxt: support exact match templates")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:42 +0000 (18:55 -0700)]
net/bnxt: reduce debug log messages
Removed the mark id log message since it is in the data path.
Also optimized the link status debug message.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:41 +0000 (18:55 -0700)]
net/bnxt: reject flow offload with invalid MAC
Reject offload flows that have broadcast or multicast
Ethernet addresses.
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:40 +0000 (18:55 -0700)]
net/bnxt: fix flow drop action to support count
Changed the action template to support the count action in addition to the
drop action.
Fixes:
fe82f3e02701 ("net/bnxt: support exact match templates")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Kishore Padmanabha [Fri, 11 Sep 2020 01:55:39 +0000 (18:55 -0700)]
net/bnxt: fix port stop process and cleanup resources
The port deinitialization now cleans up all the resources properly. If all the
ports are stopped, the ULP context is freed.
Also fixed updating the correct tfp pointer in the ULP context, following the
changes that support multiple control channels.
Fixes:
70e64b27af5b ("net/bnxt: support ULP session manager cleanup")
Cc: stable@dpdk.org
Signed-off-by: Kishore Padmanabha <kishore.padmanabha@broadcom.com>
Reviewed-by: Mike Baucom <michael.baucom@broadcom.com>
Reviewed-by: Shahaji Bhosle <sbhosle@broadcom.com>
Yunjian Wang [Tue, 15 Sep 2020 11:57:40 +0000 (19:57 +0800)]
bus/dpaa: fix fd check before close
The fd may be a negative value when it is passed as an argument to the close()
function. Fix the check on the fd.
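The fix pattern is simply (sketch):
    if (fd >= 0)
        close(fd);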
Fixes:
b9c94167904f ("bus/dpaa: decouple FQ portal alloc and init")
Cc: stable@dpdk.org
Signed-off-by: Yunjian Wang <wangyunjian@huawei.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Ting Xu [Wed, 16 Sep 2020 03:02:28 +0000 (11:02 +0800)]
net/ice: fix ptype parsing
The ptype mask for the flexible descriptor in the Rx function
ice_recv_pkts_vec has a reversed order, which leads to an incorrect value of
the final ptype. This patch fixes the mask to parse the correct ptype of Rx
packets.
Fixes:
c68a52b8b38c ("net/ice: support vector SSE in Rx")
Cc: stable@dpdk.org
Signed-off-by: Ting Xu <ting.xu@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Guinan Sun [Tue, 15 Sep 2020 06:52:25 +0000 (06:52 +0000)]
net/i40e: fix recreating flexible flow director rule
This patch fixes the failure to recreate a flexible FDIR rule.
The root cause is that the flex_mask_flag is not reset during flow destroy and
flow flush.
Fixes:
6ced3dd72f5f ("net/i40e: support flexible payload parsing for FDIR")
Cc: stable@dpdk.org
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Guinan Sun [Wed, 16 Sep 2020 03:10:02 +0000 (03:10 +0000)]
net/ice: remove devargs for flow mark
Currently, all data paths already support flow mark, so remove the devargs
"flow-mark-support". The FDIR matched ID is displayed in verbose mode when
packets match the created rule.
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Guinan Sun [Wed, 16 Sep 2020 03:10:01 +0000 (03:10 +0000)]
net/ice: support flow mark in SSE path
Support flow director mark ID parsing from flexible
Rx descriptor in SSE path.
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Guinan Sun [Wed, 16 Sep 2020 03:10:00 +0000 (03:10 +0000)]
net/ice: support flow mark in AVX path
Support flow director mark ID parsing from flexible
Rx descriptor in AVX path.
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Guinan Sun [Wed, 16 Sep 2020 03:09:59 +0000 (03:09 +0000)]
net/ice: add flow director enabled switch
The patch adds the fdir_enabled flag to identify whether to parse the flow
director mark ID from the flexible Rx descriptor.
Signed-off-by: Guinan Sun <guinanx.sun@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Junyu Jiang [Wed, 16 Sep 2020 03:09:58 +0000 (03:09 +0000)]
net/ice: support flex Rx descriptor RxDID22
This patch supports RxDID #22 by the following changes:
- add structure and macro definition for RxDID #22.
- support RxDID #22 format in normal path.
- change RSS hash parsing from RxDID #22 in AVX/SSE data path.
Signed-off-by: Junyu Jiang <junyux.jiang@intel.com>
Acked-by: Leyi Rong <leyi.rong@intel.com>
Michael Baum [Sun, 13 Sep 2020 19:05:22 +0000 (19:05 +0000)]
net/mlx5: fix hairpin dependency on destination DevX TIR
The PMD supports hairpin only if DevX is supported and DV flow is enabled.
When the destination DevX TIR is not supported, the PMD tries to create a TIR
action and fails.
Avoid advertising hairpin support when the destination DevX TIR is not
supported.
Fixes:
b6b3bf86bd1a ("net/mlx5: get hairpin capabilities")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Michael Baum [Sun, 13 Sep 2020 19:05:21 +0000 (19:05 +0000)]
net/mlx5: fix Rx objects creator selection
There are two creators for Rx objects: DevX and Verbs.
Some supported DR versions cannot create a DevX destination TIR flow action;
with these versions, the TIR object should be created by Verbs, which forces
all the Rx objects to be created by Verbs.
The selection of the Rx object creator wrongly did not take the destination
TIR action support into account, which caused a failure in the Rx flow
creation.
Select the Verbs creator when destination TIR action creation is not supported
by the DR version.
Fixes:
6deb19e1b2d2 ("net/mlx5: separate Rx queue object creations")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Wei Hu (Xavier) [Tue, 8 Sep 2020 12:28:07 +0000 (20:28 +0800)]
net/hns3: fix queue offload capability
Currently, offload capabilities are only enabled for all Rx/Tx queues in the
hns3 PF/VF PMD driver; an offload capability applied to only a single Rx/Tx
queue is not supported.
So this patch moves 'DEV_TX_OFFLOAD_MBUF_FAST_FREE' from
tx_queue_offload_capa to tx_offload_capa.
Fixes:
1f5ca0b460cd ("net/hns3: support some device operations")
Fixes:
a5475d61fa34 ("net/hns3: support VF")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Wei Hu (Xavier) <xavier.huwei@huawei.com>
Bruce Richardson [Wed, 2 Sep 2020 16:24:27 +0000 (17:24 +0100)]
app/testpmd: fix name of bitrate library in meson build
The bitrate library in DPDK is actually in a "bitratestats" directory,
so that is used by meson for the macro and library name.
Therefore, we need to update references to RTE_LIBRTE_BITRATE to
RTE_LIBRTE_BITRATESTATS in testpmd to have it found. Rather than
supporting both defines, since make is being removed, we can just
replace all instances of the former define with the latter.
To ensure testpmd links ok when this is done, we also need to add
bitratestats to the list of library dependencies.
Fixes:
5b9656b157d3 ("lib: build with meson")
Cc: stable@dpdk.org
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Heinrich Kuhn [Wed, 2 Sep 2020 11:52:27 +0000 (13:52 +0200)]
net/nfp: expand device info get
Report the Rx and Tx descriptor related limitations in the nfp dev_info_get
callback function. This commit also adds NFP_ALIGN_RING_DESC to replace a
static integer value used during Rx/Tx queue setup to validate descriptor
alignment.
Cc: stable@dpdk.org
Signed-off-by: Heinrich Kuhn <heinrich.kuhn@netronome.com>
Signed-off-by: Simon Horman <simon.horman@netronome.com>
Ophir Munk [Wed, 9 Sep 2020 08:43:17 +0000 (08:43 +0000)]
common/mlx5: fix aligned malloc
Before this commit, memalign was used for aligned allocations; however,
memalign is deprecated.
Based on (1), POSIX requires that memory from aligned allocations can be freed
using free. Some systems provide no way to reclaim memory allocated with
memalign (because one can only pass to free a pointer obtained from malloc,
while memalign would call malloc and then align the obtained value).
Another issue is that 64/32-bit architectures use a minimal alignment size, so
any requested alignment below the minimal system size can be satisfied by
simply calling malloc.
The glibc implementation allows memory obtained from posix_memalign to be
reclaimed with free. This commit replaces the memalign call with
posix_memalign. It also calls malloc when the requested alignment is below the
minimal system size.
(1) https://linux.die.net/man/3/memalign
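A hedged sketch of the described allocation strategy (not the exact mlx5
code):
    void *buf = NULL;

    if (align <= sizeof(void *)) {
        buf = malloc(size);             /* malloc already satisfies small alignments */
    } else if (posix_memalign(&buf, align, size) != 0) {
        buf = NULL;                     /* align must be a power-of-two multiple of sizeof(void *) */
    }
    /* in both cases the buffer can later be reclaimed with free(buf) */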
Fixes:
d38e3d526657 ("common/mlx5: add memory management functions")
Cc: stable@dpdk.org
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@mellanox.com>
Maxime Leroy [Thu, 16 Jul 2020 10:43:20 +0000 (12:43 +0200)]
net/mlx5: fix RSS RETA reset on start
The following sequence was working fine on mlx5:
rte_eth_dev_configure(portid, ...);
for (queueid = 0; queueid < nb_txq; queueid++)
rte_eth_tx_queue_setup(portid, queueid, ...);
for (queueid = 0; queueid < nb_rxq; queueid++)
rte_eth_rx_queue_setup(portid, queueid, ...);
// use a custom reta configuration
rte_eth_dev_rss_reta_update(portid, reta_conf, reta_size);
rte_eth_dev_start(portid);
We were able to configure a custom reta before starting the port.
The commit "net/mlx5: support RSS on hairpin" breaks this logic by moving the
code that initializes the RSS reta from rte_eth_dev_configure into
rte_eth_dev_start.
To fix the issue, skip_default_rss_reta is always set to 1 in
rte_eth_dev_rss_reta to avoid reconfiguring the RSS reta when the device is
started.
Fixes:
63bd16292c3a ("net/mlx5: support RSS on hairpin")
Cc: stable@dpdk.org
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Acked-by: Ori Kam <orika@nvidia.com>
Junfeng Guo [Tue, 15 Sep 2020 08:17:59 +0000 (16:17 +0800)]
net/iavf: support RSS for IPv6 64-bit prefix
RSS for IPv6 prefix 64bit fields are supported in this patch, so that
we can use prefix instead of full IPv6 address for RSS. The prefix
here only includes the first 64 bits of both SRC and DST IPv6 address.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>
Junfeng Guo [Tue, 15 Sep 2020 08:17:58 +0000 (16:17 +0800)]
net/iavf: replace function name with macro
Replace some function name with macro to shrink coding characters.
VIRTCHNL_DEL_PROTO_HDR_FIELD, VIRTCHNL_ADD_PROTO_HDR_FIELD
--> REFINE_PROTO_FLD.
Signed-off-by: Junfeng Guo <junfeng.guo@intel.com>
Acked-by: Qi Zhang <qi.z.zhang@intel.com>