Kalesh AP [Wed, 17 Jul 2019 10:41:36 +0000 (16:11 +0530)]
net/bnxt: check invalid VNIC in cleanup path
The cleanup/rollback operation run after an rte_eth_dev_start failure
might end up invoking an HWRM command even on an invalid vNIC,
needlessly logging error messages.
Fix by checking that the vNIC is valid before issuing the HWRM command.
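A minimal sketch of the guard (names hypothetical, not the actual bnxt code):

    /* Skip the HWRM call for a vNIC that was never configured
     * in firmware; there is nothing to clean up. */
    if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
        return 0;
    rc = bnxt_hwrm_vnic_free_sketch(bp, vnic);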
Kalesh AP [Wed, 17 Jul 2019 10:41:35 +0000 (16:11 +0530)]
net/bnxt: fix enabling/disabling interrupts
1. Disable interrupts in dev_stop_op()
2. Enable interrupts in dev_start_op()
3. Clean up the queue intr-vector mapping in dev_stop_op(), and thus
fix a possible memory leak.
Avoid a null pointer dereference when allocating an insulated
completion ring by basing NQ ring allocation on whether an
NQ ring was requested rather than on whether the device supports
NQ rings.
net/bnxt: fix doorbell register offset for Tx ring
For Tx ring 104 and higher, the doorbell register was configured
incorrectly, so the firmware could not receive the notification
of a packet to transmit.
With this fix, the user can run traffic on up to 256 rings.
Initialize the state of the completion valid indicator
when a completion ring is freed; otherwise completions may
not be processed when a new ring is allocated.
Fixes: 5735eb241947 ("net/bnxt: support Tx batching") Cc: stable@dpdk.org Signed-off-by: Lance Richardson <lance.richardson@broadcom.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Reviewed-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Kalesh AP [Wed, 17 Jul 2019 10:41:27 +0000 (16:11 +0530)]
net/bnxt: fix VF probe when MAC address is zero
The VF driver should not fail probe if the host PF driver has not
assigned a MAC address to the VF. Instead, it should generate a random
MAC address, configure it, and then continue probing the device.
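A minimal sketch of the fallback, using the rte_is_zero_ether_addr() and
rte_eth_random_addr() helpers from rte_ether.h (the surrounding field
names are hypothetical):

    /* PF assigned no MAC (all zeros): generate a random, locally
     * administered unicast address instead of failing the probe. */
    if (rte_is_zero_ether_addr(&bp->mac_addr)) {
        rte_eth_random_addr(bp->mac_addr.addr_bytes);
        /* then program the generated MAC via the usual HWRM call */
    }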
Fixes: be160484a48d ("net/bnxt: check if MAC address is all zeros") Cc: stable@dpdk.org Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Kalesh AP [Wed, 17 Jul 2019 10:41:25 +0000 (16:11 +0530)]
net/bnxt: fix extended port counter statistics
1. Refactor the stats allocation code into a new routine.
2. The check for extended statistics support depends on "hwrm_spec_code",
which is set in bnxt_hwrm_ver_get(), called later. Hence extended port
stats were never queried, as the flags field was not updated. Fix this
by moving the stats allocation after the call to bnxt_hwrm_ver_get().
3. The host address used for port statistics was incorrectly passed to
the PORT_QSTATS_EXT command. Fix this by passing the correct extended
stats address.
Fixes: f55e12f33416 ("net/bnxt: support extended port counters") Cc: stable@dpdk.org Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com> Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
With the latest published interfaces of
rte_eal_hotplug_[add,remove]() and rte_eth_dev_close(),
rte_eth_dev_close() cleans up all the data structures of the
port's eth dev, leaving the common device resources intact,
if RTE_ETH_DEV_CLOSE_REMOVE is set in the dev flags.
So a new command "detach device" (akin to hotplug remove), which
takes a device identifier like "port attach" does, is added
to be able to detach closed devices.
To display the currently probed devices, another command
"show device info <identifier>|all" is also added as
part of this change.
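A possible testpmd session (the detach command syntax is assumed from the
description above; check testpmd help for the exact form):

    testpmd> show device info all
    testpmd> device detach 0000:06:00.0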
Xiaoyu Min [Wed, 17 Jul 2019 12:27:08 +0000 (20:27 +0800)]
app/testpmd: support raw encap/decap actions
This patch intends to support the
raw_encap/raw_decap actions [1] in a generic and convenient way.
Two new commands, set raw_encap and set raw_decap, are introduced just
like the other encap/decap commands, e.g. set vxlan.
These two commands have corresponding global buffers
which can be used by the PMD as the input buffer for raw encap/decap.
The commands use the rte_flow pattern syntax to help the user build the
raw buffer in a convenient way.
A common way to use it:
- encap matched egress packet with VxLAN tunnel:
testpmd> set raw_encap eth src is 10:11:22:33:44:55 / vlan tci is 1
inner_type is 0x0800 / ipv4 / udp dst is 4789 / vxlan vni
is 2 / end_set
testpmd> flow create 0 egress pattern eth / ipv4 / end actions
raw_encap / end
- decap l2 header and encap GRE tunnel on matched egress packet:
testpmd> set raw_decap eth / end_set
testpmd> set raw_encap eth dst is 10:22:33:44:55:66 / ipv4 / gre
protocol is 0x0800 / end_set
testpmd> flow create 0 egress pattern eth / ipv4 / end actions
raw_decap / raw_encap / end
- decap VxLAN tunnel and encap l2 header on matched ingress packet:
testpmd> set raw_encap eth src is 10:11:22:33:44:55 type is 0x0800 /
end_set
testpmd> set raw_decap eth / ipv4 / udp / vxlan / end_set
testpmd> flow create 0 ingress pattern eth / ipv4 / udp dst is 250 /
vxlan vni is 0x1234 / ipv4 / end actions raw_decap /
raw_encap / queue index 1 / mark id 0x1234 / end
Xiaoyu Min [Wed, 17 Jul 2019 12:27:07 +0000 (20:27 +0800)]
app/testpmd: move VXLAN/NVGRE help in filters section
The help strings of set vxlan* and set nvgre* are in the "config"
section, but these commands do not actually alter the NIC's or testpmd's
configuration, and they are used by the "flow" command later.
Putting them in the "filters" section along with the "flow" command
seems more reasonable.
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com> Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Previously in the PCAP PMD, queues had to be defined as RxQ and TxQ
pairs, even if only Rx or only Tx was needed:
"--vdev net_pcap0,tx_pcap=tx.pcap,rx_pcap=rx.pcap"
The following commit enabled providing only the Rx queue; if the Tx
queue is not provided, the PMD drops the Tx packets automatically:
Commit a3f5252e5cbd ("net/pcap: enable infinitely Rx a pcap file")
"--vdev net_pcap0,rx_pcap=rx.pcap"
This commit enables the same for the Rx queue: the user no longer has
to provide an Rx queue (rx_iface or rx_pcap); in that case a dummy Rx
burst function is used which never returns any packets:
"--vdev net_pcap0,tx_pcap=tx.pcap"
This makes the use case of only saving packets to a pcap file easy.
When both Rx and Tx queues are missing, the PMD will return an error.
(A single interface is still supported: "--vdev net_pcap0,iface=eth0")
David Harton [Fri, 12 Jul 2019 17:35:43 +0000 (13:35 -0400)]
net/ena: fix admin CQ polling for 32-bit
Recent modifications to the admin command queue polling logic
did not support 32-bit applications. Update the driver to
work for both 32-bit and 64-bit applications.
Fixes: 3adcba9a8987 ("net/ena: update HAL to the newer version") Cc: stable@dpdk.org Signed-off-by: David Harton <dharton@cisco.com> Acked-by: Michal Krawczyk <mk@semihalf.com>
This patch updates the "show port info [port_id]" command to display
the tx_desc_lim.nb_seg_max and tx_desc_lim.nb_mtu_seg_max fields
of the rte_eth_dev_info structure.
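For reference, a minimal sketch of reading those fields through the public
ethdev API (the print wording is illustrative):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    show_tx_desc_limits(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;

        rte_eth_dev_info_get(port_id, &dev_info);
        printf("Max segments per TSO packet: %u\n",
               dev_info.tx_desc_lim.nb_seg_max);
        printf("Max segments per MTU-sized packet: %u\n",
               dev_info.tx_desc_lim.nb_mtu_seg_max);
    }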
In case the asynchronous devx commands are not supported in the RDMA
core, fall back to basic counter management.
Here, the PMD counter cache is redundant and the host thread doesn't
update it; hence, each counter operation goes to the FW and the
acceleration is lost.
All the DV counters are cached in the PMD memory and are contained in
pools, which are grouped into containers according to the counter
allocation type - batch or single.
Currently, the flow counter query is done synchronously at pool
resolution, meaning that on a user request a FW command is triggered to
read all the counters in the pool.
A new devX feature to asynchronously read a batch of flow counters
allows accelerating the user query operation.
Using the DPDK host thread, the PMD periodically triggers an asynchronous
query at pool resolution for all the counter pools, and an interrupt is
triggered by the FW when the values are updated.
In the interrupt handler the pool counter raw data is replaced
using a double-buffer algorithm (very fast).
On a user query, the PMD just returns the last query values from the
PMD cache - no system calls or FW commands are triggered from the user
control thread on the query operation!
More synchronization is added with the host thread:
Container resize uses a double-buffer algorithm.
Pool growth within a container uses an atomic operation.
Pool query buffer replacement uses a spinlock.
The pool minimum devX counter ID uses an atomic operation.
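A minimal sketch of the double-buffer swap idea (all structures and names
hypothetical, not the actual mlx5 code):

    #include <stdint.h>

    struct counter_raw { uint64_t hits, bytes; };

    struct counter_pool {
        struct counter_raw *raw;     /* snapshot served to user queries */
        struct counter_raw *raw_hw;  /* buffer the FW fills asynchronously */
    };

    /* Interrupt-handler side: publish the freshly queried raw data by
     * swapping the two buffer pointers, so a user query always reads a
     * complete snapshot without syscalls or FW commands. */
    static void
    pool_raw_swap(struct counter_pool *pool)
    {
        struct counter_raw *tmp = pool->raw;

        pool->raw = pool->raw_hw;
        pool->raw_hw = tmp;
    }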
The DevX interface exposes a new feature to the PMD that can allocate a
batch of counters with one FW command. It can improve the flow
transaction rate (with the count action).
Add a new counter pool mechanism to manage HW counters in the PMD.
So, for each flow created with a counter, the PMD will try to find a free
counter in the PMD pool containers, and only if there is no free
counter will it allocate a new DevX batch of counters.
Currently batch counters cannot be supported for group 0 flows, so
create two container types: one which allocates counters one by
one and one which allocates X counters via the batch feature.
The allocated counter objects are never released back to the HW,
on the assumption that the maximum number of flows will be close to the
actual number of flows.
Later this can be revisited, and a dynamic release mechanism can be added.
The counters are contained in pools, each pool with 512 counters.
The pools are contained in counter containers according to the
allocation resolution type - single or batch.
The cache memory of the counter statistics is saved as raw data per
pool.
All the raw data memory for all the containers is allocated in one
memory allocation and is managed by the counter_stats_mem_mng structure,
which registers all the raw memory to the HW.
Each pool points to one raw data structure.
The query operation is done at pool resolution, updating all the pool's
counter raw data in one operation.
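A rough sketch of the described layout (all names hypothetical; the real
structures live in the mlx5 PMD):

    #include <stdint.h>

    #define COUNTERS_PER_POOL 512

    struct counter_stats_raw { uint64_t hits, bytes; };

    struct counter_pool {
        uint32_t min_devx_id;            /* minimum devX counter ID */
        struct counter_stats_raw *raw;   /* this pool's slice of the one
                                          * HW-registered raw allocation */
    };

    /* One container per allocation type (single vs. batch); pools are
     * added to it as flows with counters are created. */
    struct counter_container {
        uint16_t n_pools;
        struct counter_pool **pools;
    };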
Michal Krawczyk [Tue, 16 Jul 2019 11:13:32 +0000 (13:13 +0200)]
net/ena: update version to 2.0.1
In ENA 2.0.1, there were patches for:
* assigning NUMA node to the IO queue
commit 4217cb0b7d2c ("net/ena: fix assigning NUMA node to IO queue")
* statistics counters (Rx checksum errors and per-queue number of the
Tx packets)
commit ef74b5f7b69b ("net/ena: fix Rx checksum errors statistics")
commit 5673e285a633 ("net/ena: fix Tx statistics")
* SMP support
commit 117ba4a60488 ("net/ena: get device info statically")
* setting Rx checksum support
commit ef538c1a7f56 ("net/ena: fix checksum feature flag")
The iavf driver crashes when forwarding packets with TSO
enabled. The reason is that the Tx context descriptor
configuration is not transferred to the Tx ring. This patch
adds that step.
Because of the commit mentioned below, the default case was changed, and
this broke single_iface support. This patch adds a check to fix
single_iface support.
In the eth_pcap_tx() and eth_pcap_tx_dumper() functions, mbufs were freed
without incrementing num_tx.
This may lead the application to also try to free or use an invalid mbuf.
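The shape of the fix, as a hedged sketch rather than the literal PMD code:

    #include <pcap/pcap.h>
    #include <rte_mbuf.h>

    /* Count every mbuf the PMD consumes so ownership is reported back
     * correctly and the application never frees an mbuf twice. */
    static uint16_t
    tx_burst_sketch(pcap_t *pcap, struct rte_mbuf **bufs, uint16_t nb_pkts)
    {
        uint16_t i, num_tx = 0;

        for (i = 0; i < nb_pkts; i++) {
            struct rte_mbuf *mbuf = bufs[i];

            pcap_sendpacket(pcap, rte_pktmbuf_mtod(mbuf, const u_char *),
                            rte_pktmbuf_pkt_len(mbuf));
            rte_pktmbuf_free(mbuf);
            num_tx++;   /* previously missing after the free */
        }
        return num_tx;
    }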
net/bnxt: create ring group array only when needed
Fix an overrun of the ring group array with BCM5750X-based
adapters by ensuring that the ring group array is not allocated
or accessed for adapters that do not support ring groups.
The conditional used to determine whether RSS contexts were being freed
for a thor vs. non-thor controller was reversed. Fix this; also reset
the number of active RSS contexts to zero after release in the thor
case.
John Daley [Tue, 16 Jul 2019 05:37:20 +0000 (22:37 -0700)]
net/enic: remove PMD log type references
Don't use RTE_LOGTYPE_PMD as it is too general.
Also, just use one log type for all of the enic PMD (pmd.net.enic).
Signed-off-by: John Daley <johndale@cisco.com> Reviewed-by: Hyong Youb Kim <hyonkim@cisco.com> Acked-by: Stephen Hemminger <stephen@networkplumber.org>
net/octeontx2: support flow API flags based extraction
Add support for flags-based extraction in the octeontx2 flow driver.
The patch supports extracting data greater than 32 bytes using lflags.
When flags-based extraction is enabled, the lower 4 bits (16 flags)
will be considered for indexing the flags and will be used
for extraction.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com>
The rte_vdev_driver is declared twice.
The first one is not necessary.
Fixes: 740feaf349b1 ("ethdev: remove driver name from device private data") Cc: stable@dpdk.org Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Bruce Richardson <bruce.richardson@intel.com>
The rte_vdev_drivers are declared twice.
The first one is not necessary.
Fixes: 740feaf349b1 ("ethdev: remove driver name from device private data") Fixes: 204d026a3922 ("net/tap: support tun") Cc: stable@dpdk.org Signed-off-by: Stephen Hemminger <stephen@networkplumber.org> Acked-by: Keith Wiles <keith.wiles@intel.com>
Xiaoyu Min [Wed, 10 Jul 2019 14:59:45 +0000 (22:59 +0800)]
net/mlx5: support IP-in-IP tunnel
Enabled IP-in-IP tunnel type support on DV/DR flow engine.
This includes the following combination:
- IPv4 over IPv4
- IPv4 over IPv6
- IPv6 over IPv4
- IPv6 over IPv6
The MLX5 NIC supports IP-in-IP tunnels via the FLEX parser, so
we need to make sure the FW is using FLEX parser profile 0:
mlxconfig -d <mst device> -y set FLEX_PARSER_PROFILE_ENABLE=0
The example testpmd commands would be:
- Match on IPv4 over IPv4 packets and do inner RSS:
testpmd> flow create 0 ingress pattern eth / ipv4 proto is 0x04 /
ipv4 / udp / end actions rss level 2 queues 0 1 2 3 end / end
- Match on IPv6 over IPv4 packets and do inner RSS:
testpmd> flow create 0 ingress pattern eth / ipv4 proto is 0x29 /
ipv6 / udp / end actions rss level 2 queues 0 1 2 3 end / end
Signed-off-by: Xiaoyu Min <jackmin@mellanox.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Qiming Yang [Mon, 15 Jul 2019 02:23:56 +0000 (10:23 +0800)]
net/ice: fix flow validation
ice_flow_valid_attr returns zero on success and a negative value
on error.
The current return value check logic is the opposite of the expected
behavior. This patch fixes the issue.
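The corrected pattern, sketched (a simplification, not the literal driver
code):

    int ret = ice_flow_valid_attr(attr, error);

    if (ret < 0)    /* negative return means invalid attributes */
        return ret;
    /* zero means success: continue building the flow */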
Július Milan [Fri, 12 Jul 2019 07:55:46 +0000 (09:55 +0200)]
net/af_xdp: fix handling of not supported feature
The xdp_get_channels_info procedure was returning error code -1 when the
SIOCETHTOOL ioctl command was not supported. This patch sets the return
value back to 0, as this is a valid case.
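A hedged sketch of the fix (simplified from the driver code):

    ret = ioctl(fd, SIOCETHTOOL, &ifr);
    if (ret) {
        if (errno == EOPNOTSUPP)
            ret = 0;        /* not an error: fall back to defaults */
        else
            ret = -errno;
    }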
Fixes: 339b88c6a91f ("net/af_xdp: support multi-queue") Signed-off-by: Július Milan <jmilan.dev@gmail.com> Reviewed-by: Xiaolong Ye <xiaolong.ye@intel.com>
David Marchand [Thu, 11 Jul 2019 08:18:49 +0000 (10:18 +0200)]
doc: fix example in AF_XDP guide
The queue= parameter does not exist.
It might have been the previous name of the queue_count parameter; in
any case, the default value of 1 for the number of queues works fine.
Fixes: f1debd77efaf ("net/af_xdp: introduce AF_XDP PMD") Cc: stable@dpdk.org Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Xiaolong Ye <xiaolong.ye@intel.com>
Ziyang Xuan [Fri, 5 Jul 2019 06:47:47 +0000 (14:47 +0800)]
net/hinic: replace spinlock with mutex
A spinlock was used to protect the critical resources
for sending mgmt messages. This causes high
CPU usage in rte_delay_ms when sending mgmt
messages frequently. We can use a mutex to protect
the critical resources and usleep to reduce CPU
usage while keeping everything functioning properly.
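A minimal sketch of the pattern (not the hinic code itself):

    #include <pthread.h>
    #include <unistd.h>

    static pthread_mutex_t mgmt_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile int mgmt_done;

    /* A mutex sleeps instead of spinning, and usleep() yields the CPU
     * in the completion-wait loop, unlike rte_delay_ms under a spinlock. */
    static void
    send_mgmt_msg_sketch(void)
    {
        pthread_mutex_lock(&mgmt_lock);
        /* ... issue the mgmt command ... */
        while (!mgmt_done)
            usleep(1000);   /* yield instead of busy-waiting */
        pthread_mutex_unlock(&mgmt_lock);
    }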
Signed-off-by: Ziyang Xuan <xuanziyang2@huawei.com>
Add PF and VF action support to the octeontx2 flow driver.
If RTE_FLOW_ACTION_TYPE_PF action is set from VF, then the packet
will be sent to the parent PF.
If RTE_FLOW_ACTION_TYPE_VF action is set and original is specified,
then the packet will be sent to the original VF, otherwise the packet
will be sent to the VF specified in the vf_id.
Signed-off-by: Kiran Kumar K <kirankumark@marvell.com> Acked-by: Jerin Jacob <jerinj@marvell.com>
For SRIOV, fastpath status blocks were not allocated, resulting in a
segfault. Separate fastpath DMA allocation/free from the rest of the
memory allocation/free; it is now done as part of NIC load/unload.
Also fix comment indentation in the bnx2x_alloc_hsi_mem() and
bnx2x_free_hsi_mem() APIs.
The logic to read the vf_id used by the ACQUIRE/TEARDOWN_Q/RELEASE TLVs
multiplexed the return value to convey both the vf_id value and the
status of the read vf_id API. This leads to a segfault at dev_start(),
as resources are not properly cleaned up and re-allocated.
Fix the read vf_id API to differentiate between the vf_id value and the
return status, and adjust the status checking accordingly.
Add a bnx2x_vf_teardown_queue() API and move the relevant code from
bnx2x_vf_unload() into the new API.
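The essence of the fix is to stop multiplexing status and value in a
single return; a hedged sketch of the reworked signature (helper names
hypothetical):

    #include <errno.h>
    #include <stdint.h>

    /* Return 0 or a negative errno; deliver the vf_id through an
     * out-parameter instead of overloading the return value. */
    static int
    bnx2x_read_vf_id_sketch(struct bnx2x_softc *sc, uint32_t *vf_id)
    {
        uint32_t val = read_vf_id_register(sc);    /* hypothetical */

        if (!vf_id_is_valid(val))                  /* hypothetical */
            return -EINVAL;

        *vf_id = val;
        return 0;
    }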
Replace the rte_intr_enable() API with rte_intr_ack()
for acking an interrupt in the interrupt handlers and
rx_queue_intr_enable() callbacks of the PMDs.
This is in line with the original intent of this change in the PMDs:
to ack interrupts after handling is completed, if the
device is backed by UIO, IGB_UIO or VFIO (with INTx).
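Typical use in a PMD interrupt handler, sketched (the handler body is
illustrative; rte_intr_ack() is the new API from this series):

    static void
    foo_dev_interrupt_handler(void *param)
    {
        struct rte_eth_dev *dev = param;

        /* ... read and handle the interrupt cause ... */

        /* Ack only after handling is done, instead of calling
         * rte_intr_enable(), which re-registers the irq on
         * UIO/VFIO-INTx backends. */
        rte_intr_ack(dev->intr_handle);
    }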
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Signed-off-by: Jerin Jacob <jerinj@marvell.com> Acked-by: Shahed Shaikh <shshaikh@marvell.com> Tested-by: Shahed Shaikh <shshaikh@marvell.com> Signed-off-by: David Marchand <david.marchand@redhat.com>
Add a new ack interrupt API to avoid using
VFIO_IRQ_SET_ACTION_TRIGGER (rte_intr_enable()) for
acking interrupts in VFIO-based interrupt handlers.
This implementation is specific to Linux.
Using rte_intr_enable() for acking interrupts has the following issues:
* It is time consuming to do for every interrupt received, as it
performs a free_irq() followed by a request_irq() and all the other
initializations.
* There is a race condition because of the window between free_irq() and
request_irq(), with packet reception still on and the device still
enabled, which throws warning messages like:
[158764.159833] do_IRQ: 9.34 No irq handler for vector
In this patch, rte_intr_ack() is a no-op for VFIO_MSIX/VFIO_MSI interrupts,
as they are edge triggered and the kernel does not mask the interrupt before
delivering the event to userspace, so there is no need to ack.
Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Signed-off-by: Jerin Jacob <jerinj@marvell.com> Tested-by: Shahed Shaikh <shshaikh@marvell.com> Signed-off-by: David Marchand <david.marchand@redhat.com>
The commit mentioned in the Fixes tag below moves the interrupt eventfd
setup to probe time but only enables one interrupt for all types of
interrupt handles, i.e. VFIO_MSI, VFIO_LEGACY, VFIO_MSIX, UIO.
It works fine in the default case but breaks the cases below, specifically
for MSI-X based interrupt handles:
* Applications like l3fwd-power that request rxq interrupts
during ethdev setup.
* Drivers that need more than one MSI-X interrupt to be configured for
their functionality to work.
VFIO PCI for MSI-X expects all the possible vectors to be set up
when using VFIO_IRQ_SET_ACTION_TRIGGER so that they can be
allocated from the kernel PCI subsystem. The only way to increase the
number of vectors later is to first free them all using
VFIO_IRQ_SET_DATA_NONE with the action trigger and then enable the new
vector count.
The commit changed the behavior of rte_intr_[enable|disable] to
only mask and unmask, unlike the earlier behavior, thereby
breaking the above two scenarios.
Fixes: 89aac60e0be9 ("vfio: fix interrupts race condition") Cc: stable@dpdk.org Signed-off-by: Nithin Dabilpuram <ndabilpuram@marvell.com> Signed-off-by: Jerin Jacob <jerinj@marvell.com> Tested-by: Stephen Hemminger <stephen@networkplumber.org> Tested-by: Shahed Shaikh <shshaikh@marvell.com> Tested-by: Lei Yao <lei.a.yao@intel.com> Acked-by: David Marchand <david.marchand@redhat.com>
Marcin Zapolski [Mon, 22 Jul 2019 11:47:01 +0000 (13:47 +0200)]
examples/ip_frag: fix stale content of ethdev info
The eth_dev_info struct was used with obsolete content. Add an update
of the struct content prior to use.
Fixes: 6b7780bfebe4 ("examples/ip_frag: fix use of ethdev internal device array") Cc: stable@dpdk.org Signed-off-by: Marcin Zapolski <marcinx.a.zapolski@intel.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Jerin Jacob [Mon, 22 Jul 2019 12:56:53 +0000 (14:56 +0200)]
eal/linux: select IOVA as VA mode for default case
When the bus layer reports the preferred mode as RTE_IOVA_DC, then
select the RTE_IOVA_VA mode:
- All drivers work in RTE_IOVA_VA mode, irrespective of physical
address availability.
- By default, a mempool asks for IOVA-contiguous memory using
RTE_MEMZONE_IOVA_CONTIG. This is slow in RTE_IOVA_PA mode and it
may affect the application boot time.
Signed-off-by: Jerin Jacob <jerinj@marvell.com> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com> Signed-off-by: David Marchand <david.marchand@redhat.com>
Jerin Jacob [Mon, 22 Jul 2019 12:56:52 +0000 (14:56 +0200)]
bus/pci: change IOVA as VA flag name
In order to align the name with other PCI driver flags such as
RTE_PCI_DRV_NEED_MAPPING, and to reflect its purpose, rename the
RTE_PCI_DRV_IOVA_AS_VA flag to RTE_PCI_DRV_NEED_IOVA_AS_VA.
Signed-off-by: Jerin Jacob <jerinj@marvell.com> Signed-off-by: David Marchand <david.marchand@redhat.com>
David Marchand [Mon, 22 Jul 2019 12:56:51 +0000 (14:56 +0200)]
eal: fix IOVA mode selection as VA for PCI drivers
The incriminated commit broke the use of RTE_PCI_DRV_IOVA_AS_VA, which
was intended to mean "driver only supports VA" but had been understood
as "driver supports both PA and VA" by most net drivers and was used to
let DPDK processes run as non-root (which do not have access to physical
addresses on recent kernels).
The check on physical addresses actually closed the gap for those
drivers. We don't need to mark them with RTE_PCI_DRV_IOVA_AS_VA, and this
flag can retain its intended meaning.
Document its meaning explicitly.
We can check that a driver's requirement with respect to the IOVA mode is
fulfilled before trying to probe a device.
Finally, document the heuristic used to select the IOVA mode and hope
that we won't break it again.
Fixes: 703458e19c16 ("bus/pci: consider only usable devices for IOVA mode") Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Jerin Jacob <jerinj@marvell.com> Tested-by: Jerin Jacob <jerinj@marvell.com> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
The PCI bus now reports DC when faced with a device bound to an unknown
driver and, in such a case, the IOVA mode is selected based on physical
address availability.
As a consequence, there is no reason for this special case for Mellanox
drivers.
Fixes: 703458e19c16 ("bus/pci: consider only usable devices for IOVA mode") Signed-off-by: David Marchand <david.marchand@redhat.com> Reviewed-by: Jerin Jacob <jerinj@marvell.com>
Add support for zero queue sizes for the traffic classes. The queues
which are not used can be set to zero size. This helps reduce the
memory footprint of the hierarchical scheduler.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com> Signed-off-by: Abraham Tovar <abrahamx.tovar@intel.com> Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
All higher-priority traffic classes contain only one queue, thus
remove the WRR function for them. The lowest-priority best-effort
traffic class continues to have multiple queues, and packets are
scheduled from its queues using the WRR function.
Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com> Signed-off-by: Abraham Tovar <abrahamx.tovar@intel.com> Signed-off-by: Lukasz Krakowiak <lukaszx.krakowiak@intel.com>
When building DPDK against different kernel headers by specifying
RTE_KERNELDIR, igb_uio is compiled into a directory named after the
kernel version running on the system instead of the one that DPDK
is actually compiled against. Fix this by replacing the hardcoded value
with the value from RTE_KERNELDIR.
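For example, building against a specific kernel tree (paths illustrative):

    make install T=x86_64-native-linuxapp-gcc \
        RTE_KERNELDIR=/usr/src/linux-headers-4.19.0-custom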
test/crypto: fix session init failure for wireless case
This patch adds support to handle failures in session
creation for the wireless-related cases. Otherwise a
segmentation fault can occur due to I/O on uninitialized sessions.
examples/ipsec-secgw: fix first packet with inline crypto
Inline crypto installs a flow rule in the NIC. This flow
rule must be installed before the first inbound packet is
received.
The create_session() function installs the flow rule; it has been
refactored into create_inline_session() and create_lookaside_session().
The create_inline_session() function uses the socket_ctx data and is now
called at initialisation in sa_add_rules().
The max_session_size() function has been added to calculate memory
requirements.
The cryptodevs_init() function has been refactored to drop the calls to
rte_mempool_create() and the calculation of memory requirements.
The main() function has been refactored to call max_session_size() and
to call session_pool_init() and session_priv_pool_init() earlier.
The ports are now started before adding a flow rule in main().
The sa_init(), sp4_init(), sp6_init() and rt_init() functions are
now called after the ports have been started.
The rte_ipsec_session_prepare() function is called in fill_ipsec_session()
for inline, which is called from the ipsec_sa_init() function.
Fixes: ec17993a145a ("examples/ipsec-secgw: support security offload") Fixes: d299106e8e31 ("examples/ipsec-secgw: add IPsec sample application") Cc: stable@dpdk.org Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com> Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com> Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
The asymmetric nature of the RSA algorithm suggests using an
additional field for the output. In-place operations
can still be done by setting the cipher and message pointers
to the same memory address.
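A hedged sketch of an in-place RSA encrypt with the separate output field
(buffer and session setup elided):

    #include <rte_crypto_asym.h>

    /* op is an allocated RTE_CRYPTO_OP_TYPE_ASYMMETRIC rte_crypto_op;
     * buf holds the plaintext on input and the ciphertext on output. */
    op->asym->rsa.op_type = RTE_CRYPTO_ASYM_OP_ENCRYPT;
    op->asym->rsa.message.data = buf;
    op->asym->rsa.message.length = msg_len;
    op->asym->rsa.cipher.data = buf;        /* same address: in-place */
    op->asym->rsa.cipher.length = buf_len;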
This patch adds a check of whether the device supports ZUC
algorithms before running the ZUC test cases.
It also removes unnecessary checks of the digest
appended space and fixes the wording of some comments.
Tomasz Jozwiak [Fri, 5 Jul 2019 17:15:51 +0000 (18:15 +0100)]
compress/qat: fix overflow status return
This patch fixes the failure status returned from the compression PMD
in case the destination buffer size is not big enough to store
all the data.
Fixes: 3dc9ef2d23fe ("compress/qat: fix returned status on overflow") Cc: stable@dpdk.org Signed-off-by: Tomasz Jozwiak <tomaszx.jozwiak@intel.com> Signed-off-by: Adam Dybkowski <adamx.dybkowski@intel.com> Acked-by: Adam Dybkowski <adamx.dybkowski@intel.com>