From: Natalie Samsonov
Date: Mon, 26 Mar 2018 14:38:50 +0000 (+0200)
Subject: net/mrvl: rename PMD as mvpp2
X-Git-Url: http://git.droids-corp.org/?a=commitdiff_plain;h=fe93968722afe38d1bade07dcf5df0ebd35c5eb6;p=dpdk.git

net/mrvl: rename PMD as mvpp2

The name "mrvl" for the Marvell PMD driver for the Marvell PPv2
(Packet Processor v2) 1/10 Gbps adapter is too generic and causes
problems when adding new PMD drivers for other Marvell devices.
Rename it to "mvpp2", which is specific to the Marvell PPv2 PMD.

This patch doesn't introduce any change except the renaming.

Signed-off-by: Natalie Samsonov
Acked-by: Ferruh Yigit
---

diff --git a/MAINTAINERS b/MAINTAINERS
index 75d3e92c87..d4c0cc1bc7 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -486,15 +486,15 @@ F: drivers/net/mlx5/
 F: doc/guides/nics/mlx5.rst
 F: doc/guides/nics/features/mlx5.ini
 
-Marvell mrvl
+Marvell mvpp2
 M: Jacek Siuda
 M: Tomasz Duszynski
 M: Dmitri Epshtein
 M: Natalie Samsonov
 M: Jianbo Liu
-F: drivers/net/mrvl/
-F: doc/guides/nics/mrvl.rst
-F: doc/guides/nics/features/mrvl.ini
+F: drivers/net/mvpp2/
+F: doc/guides/nics/mvpp2.rst
+F: doc/guides/nics/features/mvpp2.ini
 
 Microsoft vdev_netvsc - EXPERIMENTAL
 M: Matan Azrad
diff --git a/config/common_base b/config/common_base
index ee10b449b3..7abf7c6fcf 100644
--- a/config/common_base
+++ b/config/common_base
@@ -383,7 +383,7 @@ CONFIG_RTE_LIBRTE_PMD_FAILSAFE=y
 #
 # Compile Marvell PMD driver
 #
-CONFIG_RTE_LIBRTE_MRVL_PMD=n
+CONFIG_RTE_LIBRTE_MVPP2_PMD=n
 
 #
 # Compile virtual device driver for NetVSC on Hyper-V/Azure
diff --git a/doc/guides/cryptodevs/mrvl.rst b/doc/guides/cryptodevs/mrvl.rst
index 6a0b08c580..443ebcd35a 100644
--- a/doc/guides/cryptodevs/mrvl.rst
+++ b/doc/guides/cryptodevs/mrvl.rst
@@ -86,7 +86,7 @@ Currently there are two driver specific compilation options in
    Toggle display of debugging messages.
 
 For a list of prerequisites please refer to `Prerequisites` section in
-:ref:`MRVL Poll Mode Driver <mrvl_poll_mode_driver>` guide.
+:ref:`MVPP2 Poll Mode Driver <mvpp2_poll_mode_driver>` guide.
 
 MRVL CRYPTO PMD requires MUSDK built with EIP197 support thus following
 extra option must be passed to the library configuration script:
@@ -123,7 +123,7 @@ operation:
 
 .. code-block:: console
 
-   ./l2fwd-crypto --vdev=net_mrvl,iface=eth0 --vdev=crypto_mrvl -- \
+   ./l2fwd-crypto --vdev=eth_mvpp2,iface=eth0 --vdev=crypto_mrvl -- \
   --cipher_op ENCRYPT --cipher_algo aes-cbc \
   --cipher_key 00:01:02:03:04:05:06:07:08:09:0a:0b:0c:0d:0e:0f \
   --auth_op GENERATE --auth_algo sha1-hmac \
diff --git a/doc/guides/nics/features/mrvl.ini b/doc/guides/nics/features/mrvl.ini
deleted file mode 100644
index 8673a5678b..0000000000
--- a/doc/guides/nics/features/mrvl.ini
+++ /dev/null
@@ -1,25 +0,0 @@
-;
-; Supported features of the 'mrvl' network poll mode driver.
-;
-; Refer to default.ini for the full list of available PMD features.
-;
-[Features]
-Speed capabilities = Y
-Link status = Y
-MTU update = Y
-Jumbo frame = Y
-Promiscuous mode = Y
-Allmulticast mode = Y
-Unicast MAC filter = Y
-Multicast MAC filter = Y
-RSS hash = Y
-Flow control = Y
-VLAN filter = Y
-CRC offload = Y
-L3 checksum offload = Y
-L4 checksum offload = Y
-Packet type parsing = Y
-Basic stats = Y
-Extended stats = Y
-ARMv8 = Y
-Usage doc = Y
diff --git a/doc/guides/nics/features/mvpp2.ini b/doc/guides/nics/features/mvpp2.ini
new file mode 100644
index 0000000000..ef47546d1c
--- /dev/null
+++ b/doc/guides/nics/features/mvpp2.ini
@@ -0,0 +1,25 @@
+;
+; Supported features of the 'mvpp2' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+; +[Features] +Speed capabilities = Y +Link status = Y +MTU update = Y +Jumbo frame = Y +Promiscuous mode = Y +Allmulticast mode = Y +Unicast MAC filter = Y +Multicast MAC filter = Y +RSS hash = Y +Flow control = Y +VLAN filter = Y +CRC offload = Y +L3 checksum offload = Y +L4 checksum offload = Y +Packet type parsing = Y +Basic stats = Y +Extended stats = Y +ARMv8 = Y +Usage doc = Y diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst index 59419f432c..51c453d9ce 100644 --- a/doc/guides/nics/index.rst +++ b/doc/guides/nics/index.rst @@ -30,7 +30,7 @@ Network Interface Controller Drivers liquidio mlx4 mlx5 - mrvl + mvpp2 nfp octeontx qede diff --git a/doc/guides/nics/mrvl.rst b/doc/guides/nics/mrvl.rst deleted file mode 100644 index f9ec9d6839..0000000000 --- a/doc/guides/nics/mrvl.rst +++ /dev/null @@ -1,520 +0,0 @@ -.. BSD LICENSE - Copyright(c) 2017 Marvell International Ltd. - Copyright(c) 2017 Semihalf. - All rights reserved. - - Redistribution and use in source and binary forms, with or without - modification, are permitted provided that the following conditions - are met: - - * Redistributions of source code must retain the above copyright - notice, this list of conditions and the following disclaimer. - * Redistributions in binary form must reproduce the above copyright - notice, this list of conditions and the following disclaimer in - the documentation and/or other materials provided with the - distribution. - * Neither the name of the copyright holder nor the names of its - contributors may be used to endorse or promote products derived - from this software without specific prior written permission. - - THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS - "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT - LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR - A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT - OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, - SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT - LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, - DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY - THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE - OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -.. _mrvl_poll_mode_driver: - -MRVL Poll Mode Driver -====================== - -The MRVL PMD (librte_pmd_mrvl) provides poll mode driver support -for the Marvell PPv2 (Packet Processor v2) 1/10 Gbps adapter. - -Detailed information about SoCs that use PPv2 can be obtained here: - -* https://www.marvell.com/embedded-processors/armada-70xx/ -* https://www.marvell.com/embedded-processors/armada-80xx/ - -.. Note:: - - Due to external dependencies, this driver is disabled by default. It must - be enabled manually by setting relevant configuration option manually. - Please refer to `Config File Options`_ section for further details. 
- - -Features --------- - -Features of the MRVL PMD are: - -- Speed capabilities -- Link status -- Queue start/stop -- MTU update -- Jumbo frame -- Promiscuous mode -- Allmulticast mode -- Unicast MAC filter -- Multicast MAC filter -- RSS hash -- VLAN filter -- CRC offload -- L3 checksum offload -- L4 checksum offload -- Packet type parsing -- Basic stats -- Extended stats -- QoS -- RX flow control -- TX queue start/stop - - -Limitations ------------ - -- Number of lcores is limited to 9 by MUSDK internal design. If more lcores - need to be allocated, locking will have to be considered. Number of available - lcores can be changed via ``MRVL_MUSDK_HIFS_RESERVED`` define in - ``mrvl_ethdev.c`` source file. - -- Flushing vlans added for filtering is not possible due to MUSDK missing - functionality. Current workaround is to reset board so that PPv2 has a - chance to start in a sane state. - - -Prerequisites -------------- - -- Custom Linux Kernel sources - - .. code-block:: console - - git clone https://github.com/MarvellEmbeddedProcessors/linux-marvell.git -b linux-4.4.52-armada-17.10 - -- Out of tree `mvpp2x_sysfs` kernel module sources - - .. code-block:: console - - git clone https://github.com/MarvellEmbeddedProcessors/mvpp2x-marvell.git -b mvpp2x-armada-17.10 - -- MUSDK (Marvell User-Space SDK) sources - - .. code-block:: console - - git clone https://github.com/MarvellEmbeddedProcessors/musdk-marvell.git -b musdk-armada-17.10 - - MUSDK is a light-weight library that provides direct access to Marvell's - PPv2 (Packet Processor v2). Alternatively prebuilt MUSDK library can be - requested from `Marvell Extranet `_. Once - approval has been granted, library can be found by typing ``musdk`` in - the search box. - - To get better understanding of the library one can consult documentation - available in the ``doc`` top level directory of the MUSDK sources. - - MUSDK must be configured with the following features: - - .. code-block:: console - - --enable-bpool-dma=64 - -- DPDK environment - - Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup - DPDK environment. - - -Config File Options -------------------- - -The following options can be modified in the ``config`` file. - -- ``CONFIG_RTE_LIBRTE_MRVL_PMD`` (default ``n``) - - Toggle compilation of the librte_pmd_mrvl driver. - - -QoS Configuration ------------------ - -QoS configuration is done through external configuration file. Path to the -file must be given as `cfg` in driver's vdev parameter list. - -Configuration syntax -~~~~~~~~~~~~~~~~~~~~ - -.. code-block:: console - - [port default] - default_tc = - mapping_priority = - policer_enable = - token_unit = - color = - cir = - ebs = - cbs = - - rate_limit_enable = - rate_limit = - burst_size = - - [port tc ] - rxq = - pcp = - dscp = - default_color = - - [port tc ] - rxq = - pcp = - dscp = - - [port txq ] - sched_mode = - wrr_weight = - - rate_limit_enable = - rate_limit = - burst_size = - -Where: - -- ````: DPDK Port number (0..n). - -- ````: Default traffic class (e.g. 0) - -- ````: QoS priority for mapping (`ip`, `vlan`, `ip/vlan` or `vlan/ip`). - -- ````: Traffic Class to be configured. - -- ````: List of DPDK RX queues (e.g. 0 1 3-4) - -- ````: List of PCP values to handle in particular TC (e.g. 0 1 3-4 7). - -- ````: List of DSCP values to handle in particular TC (e.g. 0-12 32-48 63). - -- ````: Enable ingress policer. - -- ````: Policer token unit (`bytes` or `packets`). - -- ````: Policer color mode (`aware` or `blind`). 
- -- ````: Committed information rate in unit of kilo bits per second (data rate) or packets per second. - -- ````: Committed burst size in unit of kilo bytes or number of packets. - -- ````: Excess burst size in unit of kilo bytes or number of packets. - -- ````: Default color for specific tc. - -- ````: Enables per port or per txq rate limiting. - -- ````: Committed information rate, in kilo bits per second. - -- ````: Committed burst size, in kilo bytes. - -- ````: Egress scheduler mode (`wrr` or `sp`). - -- ````: Txq weight. - -Setting PCP/DSCP values for the default TC is not required. All PCP/DSCP -values not assigned explicitly to particular TC will be handled by the -default TC. - -Configuration file example -^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. code-block:: console - - [port 0 default] - default_tc = 0 - mapping_priority = ip - - rate_limit_enable = 1 - rate_limit = 1000 - burst_size = 2000 - - [port 0 tc 0] - rxq = 0 1 - - [port 0 txq 0] - sched_mode = wrr - wrr_weight = 10 - - [port 0 txq 1] - sched_mode = wrr - wrr_weight = 100 - - [port 0 txq 2] - sched_mode = sp - - [port 0 tc 1] - rxq = 2 - pcp = 5 6 7 - dscp = 26-38 - - [port 1 default] - default_tc = 0 - mapping_priority = vlan/ip - - policer_enable = 1 - token_unit = bytes - color = blind - cir = 100000 - ebs = 64 - cbs = 64 - - [port 1 tc 0] - rxq = 0 - dscp = 10 - - [port 1 tc 1] - rxq = 1 - dscp = 11-20 - - [port 1 tc 2] - rxq = 2 - dscp = 30 - - [port 1 txq 0] - rate_limit_enable = 1 - rate_limit = 10000 - burst_size = 2000 - -Usage example -^^^^^^^^^^^^^ - -.. code-block:: console - - ./testpmd --vdev=eth_mrvl,iface=eth0,iface=eth2,cfg=/home/user/mrvl.conf \ - -c 7 -- -i -a --disable-hw-vlan-strip --rxq=3 --txq=3 - - -Building DPDK -------------- - -Driver needs precompiled MUSDK library during compilation. - -.. code-block:: console - - export CROSS_COMPILE=/bin/aarch64-linux-gnu- - ./bootstrap - ./configure --host=aarch64-linux-gnu --enable-bpool-dma=64 - make install - -MUSDK will be installed to `usr/local` under current directory. -For the detailed build instructions please consult ``doc/musdk_get_started.txt``. - -Before the DPDK build process the environmental variable ``LIBMUSDK_PATH`` with -the path to the MUSDK installation directory needs to be exported. - -.. code-block:: console - - export LIBMUSDK_PATH=/usr/local - export CROSS=aarch64-linux-gnu- - make config T=arm64-armv8a-linuxapp-gcc - sed -ri 's,(MRVL_PMD=)n,\1y,' build/.config - make - -Flow API --------- - -PPv2 offers packet classification capabilities via classifier engine which -can be configured via generic flow API offered by DPDK. - -Supported flow actions -~~~~~~~~~~~~~~~~~~~~~~ - -Following flow action items are supported by the driver: - -* DROP -* QUEUE - -Supported flow items -~~~~~~~~~~~~~~~~~~~~ - -Following flow items and their respective fields are supported by the driver: - -* ETH - - * source MAC - * destination MAC - * ethertype - -* VLAN - - * PCP - * VID - -* IPV4 - - * DSCP - * protocol - * source address - * destination address - -* IPV6 - - * flow label - * next header - * source address - * destination address - -* UDP - - * source port - * destination port - -* TCP - - * source port - * destination port - -Classifier match engine -~~~~~~~~~~~~~~~~~~~~~~~ - -Classifier has an internal match engine which can be configured to -operate in either exact or maskable mode. - -Mode is selected upon creation of the first unique flow rule as follows: - -* maskable, if key size is up to 8 bytes. 
-* exact, otherwise, i.e for keys bigger than 8 bytes. - -Where the key size equals the number of bytes of all fields specified -in the flow items. - -.. table:: Examples of key size calculation - - +----------------------------------------------------------------------------+-------------------+-------------+ - | Flow pattern | Key size in bytes | Used engine | - +============================================================================+===================+=============+ - | ETH (destination MAC) / VLAN (VID) | 6 + 2 = 8 | Maskable | - +----------------------------------------------------------------------------+-------------------+-------------+ - | VLAN (VID) / IPV4 (source address) | 2 + 4 = 6 | Maskable | - +----------------------------------------------------------------------------+-------------------+-------------+ - | TCP (source port, destination port) | 2 + 2 = 4 | Maskable | - +----------------------------------------------------------------------------+-------------------+-------------+ - | VLAN (priority) / IPV4 (source address) | 1 + 4 = 5 | Maskable | - +----------------------------------------------------------------------------+-------------------+-------------+ - | IPV4 (destination address) / UDP (source port, destination port) | 6 + 2 + 2 = 10 | Exact | - +----------------------------------------------------------------------------+-------------------+-------------+ - | VLAN (VID) / IPV6 (flow label, destination address) | 2 + 3 + 16 = 21 | Exact | - +----------------------------------------------------------------------------+-------------------+-------------+ - | IPV4 (DSCP, source address, destination address) | 1 + 4 + 4 = 9 | Exact | - +----------------------------------------------------------------------------+-------------------+-------------+ - | IPV6 (flow label, source address, destination address) | 3 + 16 + 16 = 35 | Exact | - +----------------------------------------------------------------------------+-------------------+-------------+ - -From the user perspective maskable mode means that masks specified -via flow rules are respected. In case of exact match mode, masks -which do not provide exact matching (all bits masked) are ignored. - -If the flow matches more than one classifier rule the first -(with the lowest index) matched takes precedence. - -Flow rules usage example -~~~~~~~~~~~~~~~~~~~~~~~~ - -Before proceeding run testpmd user application: - -.. code-block:: console - - ./testpmd --vdev=net_mrvl,iface=eth0,iface=eth2 -c 3 -- -i --p 3 -a --disable-hw-vlan-strip - -Example #1 -^^^^^^^^^^ - -.. code-block:: console - - testpmd> flow create 0 ingress pattern eth src is 10:11:12:13:14:15 / end actions drop / end - -In this case key size is 6 bytes thus maskable type is selected. Testpmd -will set mask to ff:ff:ff:ff:ff:ff i.e traffic explicitly matching -above rule will be dropped. - -Example #2 -^^^^^^^^^^ - -.. code-block:: console - - testpmd> flow create 0 ingress pattern ipv4 src spec 10.10.10.0 src mask 255.255.255.0 / tcp src spec 0x10 src mask 0x10 / end action drop / end - -In this case key size is 8 bytes thus maskable type is selected. -Flows which have IPv4 source addresses ranging from 10.10.10.0 to 10.10.10.255 -and tcp source port set to 16 will be dropped. - -Example #3 -^^^^^^^^^^ - -.. 
code-block:: console - - testpmd> flow create 0 ingress pattern vlan vid spec 0x10 vid mask 0x10 / ipv4 src spec 10.10.1.1 src mask 255.255.0.0 dst spec 11.11.11.1 dst mask 255.255.255.0 / end actions drop / end - -In this case key size is 10 bytes thus exact type is selected. -Even though each item has partial mask set, masks will be ignored. -As a result only flows with VID set to 16 and IPv4 source and destination -addresses set to 10.10.1.1 and 11.11.11.1 respectively will be dropped. - -Limitations -~~~~~~~~~~~ - -Following limitations need to be taken into account while creating flow rules: - -* For IPv4 exact match type the key size must be up to 12 bytes. -* For IPv6 exact match type the key size must be up to 36 bytes. -* Following fields cannot be partially masked (all masks are treated as - if they were exact): - - * ETH: ethertype - * VLAN: PCP, VID - * IPv4: protocol - * IPv6: next header - * TCP/UDP: source port, destination port - -* Only one classifier table can be created thus all rules in the table - have to match table format. Table format is set during creation of - the first unique flow rule. -* Up to 5 fields can be specified per flow rule. -* Up to 20 flow rules can be added. - -For additional information about classifier please consult -``doc/musdk_cls_user_guide.txt``. - -Usage Example -------------- - -MRVL PMD requires extra out of tree kernel modules to function properly. -`musdk_uio` and `mv_pp_uio` sources are part of the MUSDK. Please consult -``doc/musdk_get_started.txt`` for the detailed build instructions. -For `mvpp2x_sysfs` please consult ``Documentation/pp22_sysfs.txt`` for the -detailed build instructions. - -.. code-block:: console - - insmod musdk_uio.ko - insmod mv_pp_uio.ko - insmod mvpp2x_sysfs.ko - -Additionally interfaces used by DPDK application need to be put up: - -.. code-block:: console - - ip link set eth0 up - ip link set eth2 up - -In order to run testpmd example application following command can be used: - -.. code-block:: console - - ./testpmd --vdev=eth_mrvl,iface=eth0,iface=eth2 -c 7 -- \ - --burst=128 --txd=2048 --rxd=1024 --rxq=2 --txq=2 --nb-cores=2 \ - -i -a --rss-udp diff --git a/doc/guides/nics/mvpp2.rst b/doc/guides/nics/mvpp2.rst new file mode 100644 index 0000000000..0408752c41 --- /dev/null +++ b/doc/guides/nics/mvpp2.rst @@ -0,0 +1,520 @@ +.. BSD LICENSE + Copyright(c) 2017 Marvell International Ltd. + Copyright(c) 2017 Semihalf. + All rights reserved. + + Redistribution and use in source and binary forms, with or without + modification, are permitted provided that the following conditions + are met: + + * Redistributions of source code must retain the above copyright + notice, this list of conditions and the following disclaimer. + * Redistributions in binary form must reproduce the above copyright + notice, this list of conditions and the following disclaimer in + the documentation and/or other materials provided with the + distribution. + * Neither the name of the copyright holder nor the names of its + contributors may be used to endorse or promote products derived + from this software without specific prior written permission. + + THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS + "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT + LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR + A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT + OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, + SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT + LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, + DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY + THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT + (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE + OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. + +.. _mvpp2_poll_mode_driver: + +MVPP2 Poll Mode Driver +====================== + +The MVPP2 PMD (librte_pmd_mvpp2) provides poll mode driver support +for the Marvell PPv2 (Packet Processor v2) 1/10 Gbps adapter. + +Detailed information about SoCs that use PPv2 can be obtained here: + +* https://www.marvell.com/embedded-processors/armada-70xx/ +* https://www.marvell.com/embedded-processors/armada-80xx/ + +.. Note:: + + Due to external dependencies, this driver is disabled by default. It must + be enabled manually by setting relevant configuration option manually. + Please refer to `Config File Options`_ section for further details. + + +Features +-------- + +Features of the MVPP2 PMD are: + +- Speed capabilities +- Link status +- Queue start/stop +- MTU update +- Jumbo frame +- Promiscuous mode +- Allmulticast mode +- Unicast MAC filter +- Multicast MAC filter +- RSS hash +- VLAN filter +- CRC offload +- L3 checksum offload +- L4 checksum offload +- Packet type parsing +- Basic stats +- Extended stats +- QoS +- RX flow control +- TX queue start/stop + + +Limitations +----------- + +- Number of lcores is limited to 9 by MUSDK internal design. If more lcores + need to be allocated, locking will have to be considered. Number of available + lcores can be changed via ``MRVL_MUSDK_HIFS_RESERVED`` define in + ``mrvl_ethdev.c`` source file. + +- Flushing vlans added for filtering is not possible due to MUSDK missing + functionality. Current workaround is to reset board so that PPv2 has a + chance to start in a sane state. + + +Prerequisites +------------- + +- Custom Linux Kernel sources + + .. code-block:: console + + git clone https://github.com/MarvellEmbeddedProcessors/linux-marvell.git -b linux-4.4.52-armada-17.10 + +- Out of tree `mvpp2x_sysfs` kernel module sources + + .. code-block:: console + + git clone https://github.com/MarvellEmbeddedProcessors/mvpp2x-marvell.git -b mvpp2x-armada-17.10 + +- MUSDK (Marvell User-Space SDK) sources + + .. code-block:: console + + git clone https://github.com/MarvellEmbeddedProcessors/musdk-marvell.git -b musdk-armada-17.10 + + MUSDK is a light-weight library that provides direct access to Marvell's + PPv2 (Packet Processor v2). Alternatively prebuilt MUSDK library can be + requested from `Marvell Extranet `_. Once + approval has been granted, library can be found by typing ``musdk`` in + the search box. + + To get better understanding of the library one can consult documentation + available in the ``doc`` top level directory of the MUSDK sources. + + MUSDK must be configured with the following features: + + .. code-block:: console + + --enable-bpool-dma=64 + +- DPDK environment + + Follow the DPDK :ref:`Getting Started Guide for Linux ` to setup + DPDK environment. + + +Config File Options +------------------- + +The following options can be modified in the ``config`` file. + +- ``CONFIG_RTE_LIBRTE_MVPP2_PMD`` (default ``n``) + + Toggle compilation of the librte mvpp2 driver. 
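+
+  For example (a minimal sketch, assuming the make-based build flow
+  described in `Building DPDK`_ below), the option can be enabled in the
+  generated build configuration before compiling:
+
+  .. code-block:: console
+
+     sed -ri 's,(CONFIG_RTE_LIBRTE_MVPP2_PMD=)n,\1y,' build/.config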
+ + +QoS Configuration +----------------- + +QoS configuration is done through external configuration file. Path to the +file must be given as `cfg` in driver's vdev parameter list. + +Configuration syntax +~~~~~~~~~~~~~~~~~~~~ + +.. code-block:: console + + [port default] + default_tc = + mapping_priority = + policer_enable = + token_unit = + color = + cir = + ebs = + cbs = + + rate_limit_enable = + rate_limit = + burst_size = + + [port tc ] + rxq = + pcp = + dscp = + default_color = + + [port tc ] + rxq = + pcp = + dscp = + + [port txq ] + sched_mode = + wrr_weight = + + rate_limit_enable = + rate_limit = + burst_size = + +Where: + +- ````: DPDK Port number (0..n). + +- ````: Default traffic class (e.g. 0) + +- ````: QoS priority for mapping (`ip`, `vlan`, `ip/vlan` or `vlan/ip`). + +- ````: Traffic Class to be configured. + +- ````: List of DPDK RX queues (e.g. 0 1 3-4) + +- ````: List of PCP values to handle in particular TC (e.g. 0 1 3-4 7). + +- ````: List of DSCP values to handle in particular TC (e.g. 0-12 32-48 63). + +- ````: Enable ingress policer. + +- ````: Policer token unit (`bytes` or `packets`). + +- ````: Policer color mode (`aware` or `blind`). + +- ````: Committed information rate in unit of kilo bits per second (data rate) or packets per second. + +- ````: Committed burst size in unit of kilo bytes or number of packets. + +- ````: Excess burst size in unit of kilo bytes or number of packets. + +- ````: Default color for specific tc. + +- ````: Enables per port or per txq rate limiting. + +- ````: Committed information rate, in kilo bits per second. + +- ````: Committed burst size, in kilo bytes. + +- ````: Egress scheduler mode (`wrr` or `sp`). + +- ````: Txq weight. + +Setting PCP/DSCP values for the default TC is not required. All PCP/DSCP +values not assigned explicitly to particular TC will be handled by the +default TC. + +Configuration file example +^^^^^^^^^^^^^^^^^^^^^^^^^^ + +.. code-block:: console + + [port 0 default] + default_tc = 0 + mapping_priority = ip + + rate_limit_enable = 1 + rate_limit = 1000 + burst_size = 2000 + + [port 0 tc 0] + rxq = 0 1 + + [port 0 txq 0] + sched_mode = wrr + wrr_weight = 10 + + [port 0 txq 1] + sched_mode = wrr + wrr_weight = 100 + + [port 0 txq 2] + sched_mode = sp + + [port 0 tc 1] + rxq = 2 + pcp = 5 6 7 + dscp = 26-38 + + [port 1 default] + default_tc = 0 + mapping_priority = vlan/ip + + policer_enable = 1 + token_unit = bytes + color = blind + cir = 100000 + ebs = 64 + cbs = 64 + + [port 1 tc 0] + rxq = 0 + dscp = 10 + + [port 1 tc 1] + rxq = 1 + dscp = 11-20 + + [port 1 tc 2] + rxq = 2 + dscp = 30 + + [port 1 txq 0] + rate_limit_enable = 1 + rate_limit = 10000 + burst_size = 2000 + +Usage example +^^^^^^^^^^^^^ + +.. code-block:: console + + ./testpmd --vdev=eth_mvpp2,iface=eth0,iface=eth2,cfg=/home/user/mrvl.conf \ + -c 7 -- -i -a --disable-hw-vlan-strip --rxq=3 --txq=3 + + +Building DPDK +------------- + +Driver needs precompiled MUSDK library during compilation. + +.. code-block:: console + + export CROSS_COMPILE=/bin/aarch64-linux-gnu- + ./bootstrap + ./configure --host=aarch64-linux-gnu --enable-bpool-dma=64 + make install + +MUSDK will be installed to `usr/local` under current directory. +For the detailed build instructions please consult ``doc/musdk_get_started.txt``. + +Before the DPDK build process the environmental variable ``LIBMUSDK_PATH`` with +the path to the MUSDK installation directory needs to be exported. + +.. 
code-block:: console + + export LIBMUSDK_PATH=/usr/local + export CROSS=aarch64-linux-gnu- + make config T=arm64-armv8a-linuxapp-gcc + sed -ri 's,(MVPP2_PMD=)n,\1y,' build/.config + make + +Flow API +-------- + +PPv2 offers packet classification capabilities via classifier engine which +can be configured via generic flow API offered by DPDK. + +Supported flow actions +~~~~~~~~~~~~~~~~~~~~~~ + +Following flow action items are supported by the driver: + +* DROP +* QUEUE + +Supported flow items +~~~~~~~~~~~~~~~~~~~~ + +Following flow items and their respective fields are supported by the driver: + +* ETH + + * source MAC + * destination MAC + * ethertype + +* VLAN + + * PCP + * VID + +* IPV4 + + * DSCP + * protocol + * source address + * destination address + +* IPV6 + + * flow label + * next header + * source address + * destination address + +* UDP + + * source port + * destination port + +* TCP + + * source port + * destination port + +Classifier match engine +~~~~~~~~~~~~~~~~~~~~~~~ + +Classifier has an internal match engine which can be configured to +operate in either exact or maskable mode. + +Mode is selected upon creation of the first unique flow rule as follows: + +* maskable, if key size is up to 8 bytes. +* exact, otherwise, i.e for keys bigger than 8 bytes. + +Where the key size equals the number of bytes of all fields specified +in the flow items. + +.. table:: Examples of key size calculation + + +----------------------------------------------------------------------------+-------------------+-------------+ + | Flow pattern | Key size in bytes | Used engine | + +============================================================================+===================+=============+ + | ETH (destination MAC) / VLAN (VID) | 6 + 2 = 8 | Maskable | + +----------------------------------------------------------------------------+-------------------+-------------+ + | VLAN (VID) / IPV4 (source address) | 2 + 4 = 6 | Maskable | + +----------------------------------------------------------------------------+-------------------+-------------+ + | TCP (source port, destination port) | 2 + 2 = 4 | Maskable | + +----------------------------------------------------------------------------+-------------------+-------------+ + | VLAN (priority) / IPV4 (source address) | 1 + 4 = 5 | Maskable | + +----------------------------------------------------------------------------+-------------------+-------------+ + | IPV4 (destination address) / UDP (source port, destination port) | 6 + 2 + 2 = 10 | Exact | + +----------------------------------------------------------------------------+-------------------+-------------+ + | VLAN (VID) / IPV6 (flow label, destination address) | 2 + 3 + 16 = 21 | Exact | + +----------------------------------------------------------------------------+-------------------+-------------+ + | IPV4 (DSCP, source address, destination address) | 1 + 4 + 4 = 9 | Exact | + +----------------------------------------------------------------------------+-------------------+-------------+ + | IPV6 (flow label, source address, destination address) | 3 + 16 + 16 = 35 | Exact | + +----------------------------------------------------------------------------+-------------------+-------------+ + +From the user perspective maskable mode means that masks specified +via flow rules are respected. In case of exact match mode, masks +which do not provide exact matching (all bits masked) are ignored. 
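+
+Rules like those shown in `Flow rules usage example`_ below can also be
+installed programmatically through the generic flow API. A minimal sketch
+(the helper below is illustrative, not part of the PMD; it assumes an
+already configured and started port and omits error handling) dropping
+traffic from one source MAC:
+
+.. code-block:: c
+
+   #include <rte_flow.h>
+
+   /* Illustrative helper: 6 byte key (source MAC), so the maskable
+    * engine is selected and the all-ones mask is respected.
+    */
+   static struct rte_flow *
+   drop_from_mac(uint16_t port_id)
+   {
+           struct rte_flow_attr attr = { .ingress = 1 };
+           struct rte_flow_item_eth spec = {
+                   .src.addr_bytes = { 0x10, 0x11, 0x12, 0x13, 0x14, 0x15 },
+           };
+           struct rte_flow_item_eth mask = {
+                   .src.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
+           };
+           struct rte_flow_item pattern[] = {
+                   { .type = RTE_FLOW_ITEM_TYPE_ETH,
+                     .spec = &spec, .mask = &mask },
+                   { .type = RTE_FLOW_ITEM_TYPE_END },
+           };
+           struct rte_flow_action actions[] = {
+                   { .type = RTE_FLOW_ACTION_TYPE_DROP },
+                   { .type = RTE_FLOW_ACTION_TYPE_END },
+           };
+           struct rte_flow_error error;
+
+           return rte_flow_create(port_id, &attr, pattern, actions, &error);
+   }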
+ +If the flow matches more than one classifier rule the first +(with the lowest index) matched takes precedence. + +Flow rules usage example +~~~~~~~~~~~~~~~~~~~~~~~~ + +Before proceeding run testpmd user application: + +.. code-block:: console + + ./testpmd --vdev=eth_mvpp2,iface=eth0,iface=eth2 -c 3 -- -i --p 3 -a --disable-hw-vlan-strip + +Example #1 +^^^^^^^^^^ + +.. code-block:: console + + testpmd> flow create 0 ingress pattern eth src is 10:11:12:13:14:15 / end actions drop / end + +In this case key size is 6 bytes thus maskable type is selected. Testpmd +will set mask to ff:ff:ff:ff:ff:ff i.e traffic explicitly matching +above rule will be dropped. + +Example #2 +^^^^^^^^^^ + +.. code-block:: console + + testpmd> flow create 0 ingress pattern ipv4 src spec 10.10.10.0 src mask 255.255.255.0 / tcp src spec 0x10 src mask 0x10 / end action drop / end + +In this case key size is 8 bytes thus maskable type is selected. +Flows which have IPv4 source addresses ranging from 10.10.10.0 to 10.10.10.255 +and tcp source port set to 16 will be dropped. + +Example #3 +^^^^^^^^^^ + +.. code-block:: console + + testpmd> flow create 0 ingress pattern vlan vid spec 0x10 vid mask 0x10 / ipv4 src spec 10.10.1.1 src mask 255.255.0.0 dst spec 11.11.11.1 dst mask 255.255.255.0 / end actions drop / end + +In this case key size is 10 bytes thus exact type is selected. +Even though each item has partial mask set, masks will be ignored. +As a result only flows with VID set to 16 and IPv4 source and destination +addresses set to 10.10.1.1 and 11.11.11.1 respectively will be dropped. + +Limitations +~~~~~~~~~~~ + +Following limitations need to be taken into account while creating flow rules: + +* For IPv4 exact match type the key size must be up to 12 bytes. +* For IPv6 exact match type the key size must be up to 36 bytes. +* Following fields cannot be partially masked (all masks are treated as + if they were exact): + + * ETH: ethertype + * VLAN: PCP, VID + * IPv4: protocol + * IPv6: next header + * TCP/UDP: source port, destination port + +* Only one classifier table can be created thus all rules in the table + have to match table format. Table format is set during creation of + the first unique flow rule. +* Up to 5 fields can be specified per flow rule. +* Up to 20 flow rules can be added. + +For additional information about classifier please consult +``doc/musdk_cls_user_guide.txt``. + +Usage Example +------------- + +MVPP2 PMD requires extra out of tree kernel modules to function properly. +`musdk_uio` and `mv_pp_uio` sources are part of the MUSDK. Please consult +``doc/musdk_get_started.txt`` for the detailed build instructions. +For `mvpp2x_sysfs` please consult ``Documentation/pp22_sysfs.txt`` for the +detailed build instructions. + +.. code-block:: console + + insmod musdk_uio.ko + insmod mv_pp_uio.ko + insmod mvpp2x_sysfs.ko + +Additionally interfaces used by DPDK application need to be put up: + +.. code-block:: console + + ip link set eth0 up + ip link set eth2 up + +In order to run testpmd example application following command can be used: + +.. 
code-block:: console + + ./testpmd --vdev=eth_mvpp2,iface=eth0,iface=eth2 -c 7 -- \ + --burst=128 --txd=2048 --rxd=1024 --rxq=2 --txq=2 --nb-cores=2 \ + -i -a --rss-udp diff --git a/drivers/net/Makefile b/drivers/net/Makefile index 39eb5501ee..37ca19aa7c 100644 --- a/drivers/net/Makefile +++ b/drivers/net/Makefile @@ -31,7 +31,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_IXGBE_PMD) += ixgbe DIRS-$(CONFIG_RTE_LIBRTE_LIO_PMD) += liquidio DIRS-$(CONFIG_RTE_LIBRTE_MLX4_PMD) += mlx4 DIRS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += mlx5 -DIRS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl +DIRS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += mvpp2 DIRS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += nfp DIRS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += bnxt DIRS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += null @@ -59,7 +59,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_VHOST),y) DIRS-$(CONFIG_RTE_LIBRTE_PMD_VHOST) += vhost endif # $(CONFIG_RTE_LIBRTE_VHOST) -ifeq ($(CONFIG_RTE_LIBRTE_MRVL_PMD),y) +ifeq ($(CONFIG_RTE_LIBRTE_MVPP2_PMD),y) ifeq ($(CONFIG_RTE_LIBRTE_CFGFILE),n) $(error "RTE_LIBRTE_CFGFILE must be enabled in configuration!") endif diff --git a/drivers/net/mrvl/Makefile b/drivers/net/mrvl/Makefile deleted file mode 100644 index 31a8fda36e..0000000000 --- a/drivers/net/mrvl/Makefile +++ /dev/null @@ -1,42 +0,0 @@ -# SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2017 Marvell International Ltd. -# Copyright(c) 2017 Semihalf. -# All rights reserved. - -include $(RTE_SDK)/mk/rte.vars.mk - -ifneq ($(MAKECMDGOALS),clean) -ifneq ($(MAKECMDGOALS),config) -ifeq ($(LIBMUSDK_PATH),) -$(error "Please define LIBMUSDK_PATH environment variable") -endif -endif -endif - -# library name -LIB = librte_pmd_mrvl.a - -# library version -LIBABIVER := 1 - -# versioning export map -EXPORT_MAP := rte_pmd_mrvl_version.map - -# external library dependencies -CFLAGS += -I$(LIBMUSDK_PATH)/include -CFLAGS += -DMVCONF_TYPES_PUBLIC -CFLAGS += -DMVCONF_DMA_PHYS_ADDR_T_PUBLIC -CFLAGS += $(WERROR_FLAGS) -CFLAGS += -O3 -LDLIBS += -L$(LIBMUSDK_PATH)/lib -LDLIBS += -lmusdk -LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring -LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_cfgfile -LDLIBS += -lrte_bus_vdev - -# library source files -SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_ethdev.c -SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_qos.c -SRCS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += mrvl_flow.c - -include $(RTE_SDK)/mk/rte.lib.mk diff --git a/drivers/net/mrvl/mrvl_ethdev.c b/drivers/net/mrvl/mrvl_ethdev.c deleted file mode 100644 index c0483b9123..0000000000 --- a/drivers/net/mrvl/mrvl_ethdev.c +++ /dev/null @@ -1,2832 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2017 Marvell International Ltd. - * Copyright(c) 2017 Semihalf. - * All rights reserved. - */ - -#include -#include -#include -#include -#include - -/* Unluckily, container_of is defined by both DPDK and MUSDK, - * we'll declare only one version. - * - * Note that it is not used in this PMD anyway. 
- */ -#ifdef container_of -#undef container_of -#endif - -#include -#include -#include -#include -#include -#include -#include -#include -#include - -#include "mrvl_ethdev.h" -#include "mrvl_qos.h" - -/* bitmask with reserved hifs */ -#define MRVL_MUSDK_HIFS_RESERVED 0x0F -/* bitmask with reserved bpools */ -#define MRVL_MUSDK_BPOOLS_RESERVED 0x07 -/* bitmask with reserved kernel RSS tables */ -#define MRVL_MUSDK_RSS_RESERVED 0x01 -/* maximum number of available hifs */ -#define MRVL_MUSDK_HIFS_MAX 9 - -/* prefetch shift */ -#define MRVL_MUSDK_PREFETCH_SHIFT 2 - -/* TCAM has 25 entries reserved for uc/mc filter entries */ -#define MRVL_MAC_ADDRS_MAX 25 -#define MRVL_MATCH_LEN 16 -#define MRVL_PKT_EFFEC_OFFS (MRVL_PKT_OFFS + MV_MH_SIZE) -/* Maximum allowable packet size */ -#define MRVL_PKT_SIZE_MAX (10240 - MV_MH_SIZE) - -#define MRVL_IFACE_NAME_ARG "iface" -#define MRVL_CFG_ARG "cfg" - -#define MRVL_BURST_SIZE 64 - -#define MRVL_ARP_LENGTH 28 - -#define MRVL_COOKIE_ADDR_INVALID ~0ULL - -#define MRVL_COOKIE_HIGH_ADDR_SHIFT (sizeof(pp2_cookie_t) * 8) -#define MRVL_COOKIE_HIGH_ADDR_MASK (~0ULL << MRVL_COOKIE_HIGH_ADDR_SHIFT) - -/* Memory size (in bytes) for MUSDK dma buffers */ -#define MRVL_MUSDK_DMA_MEMSIZE 41943040 - -/** Port Rx offload capabilities */ -#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \ - DEV_RX_OFFLOAD_JUMBO_FRAME | \ - DEV_RX_OFFLOAD_CRC_STRIP | \ - DEV_RX_OFFLOAD_CHECKSUM) - -/** Port Tx offloads capabilities */ -#define MRVL_TX_OFFLOADS (DEV_TX_OFFLOAD_IPV4_CKSUM | \ - DEV_TX_OFFLOAD_UDP_CKSUM | \ - DEV_TX_OFFLOAD_TCP_CKSUM) - -static const char * const valid_args[] = { - MRVL_IFACE_NAME_ARG, - MRVL_CFG_ARG, - NULL -}; - -static int used_hifs = MRVL_MUSDK_HIFS_RESERVED; -static struct pp2_hif *hifs[RTE_MAX_LCORE]; -static int used_bpools[PP2_NUM_PKT_PROC] = { - MRVL_MUSDK_BPOOLS_RESERVED, - MRVL_MUSDK_BPOOLS_RESERVED -}; - -struct pp2_bpool *mrvl_port_to_bpool_lookup[RTE_MAX_ETHPORTS]; -int mrvl_port_bpool_size[PP2_NUM_PKT_PROC][PP2_BPOOL_NUM_POOLS][RTE_MAX_LCORE]; -uint64_t cookie_addr_high = MRVL_COOKIE_ADDR_INVALID; - -struct mrvl_ifnames { - const char *names[PP2_NUM_ETH_PPIO * PP2_NUM_PKT_PROC]; - int idx; -}; - -/* - * To use buffer harvesting based on loopback port shadow queue structure - * was introduced for buffers information bookkeeping. - * - * Before sending the packet, related buffer information (pp2_buff_inf) is - * stored in shadow queue. After packet is transmitted no longer used - * packet buffer is released back to it's original hardware pool, - * on condition it originated from interface. - * In case it was generated by application itself i.e: mbuf->port field is - * 0xff then its released to software mempool. 
- */ -struct mrvl_shadow_txq { - int head; /* write index - used when sending buffers */ - int tail; /* read index - used when releasing buffers */ - u16 size; /* queue occupied size */ - u16 num_to_release; /* number of buffers sent, that can be released */ - struct buff_release_entry ent[MRVL_PP2_TX_SHADOWQ_SIZE]; /* q entries */ -}; - -struct mrvl_rxq { - struct mrvl_priv *priv; - struct rte_mempool *mp; - int queue_id; - int port_id; - int cksum_enabled; - uint64_t bytes_recv; - uint64_t drop_mac; -}; - -struct mrvl_txq { - struct mrvl_priv *priv; - int queue_id; - int port_id; - uint64_t bytes_sent; - struct mrvl_shadow_txq shadow_txqs[RTE_MAX_LCORE]; - int tx_deferred_start; -}; - -static int mrvl_lcore_first; -static int mrvl_lcore_last; -static int mrvl_dev_num; - -static int mrvl_fill_bpool(struct mrvl_rxq *rxq, int num); -static inline void mrvl_free_sent_buffers(struct pp2_ppio *ppio, - struct pp2_hif *hif, unsigned int core_id, - struct mrvl_shadow_txq *sq, int qid, int force); - -#define MRVL_XSTATS_TBL_ENTRY(name) { \ - #name, offsetof(struct pp2_ppio_statistics, name), \ - sizeof(((struct pp2_ppio_statistics *)0)->name) \ -} - -/* Table with xstats data */ -static struct { - const char *name; - unsigned int offset; - unsigned int size; -} mrvl_xstats_tbl[] = { - MRVL_XSTATS_TBL_ENTRY(rx_bytes), - MRVL_XSTATS_TBL_ENTRY(rx_packets), - MRVL_XSTATS_TBL_ENTRY(rx_unicast_packets), - MRVL_XSTATS_TBL_ENTRY(rx_errors), - MRVL_XSTATS_TBL_ENTRY(rx_fullq_dropped), - MRVL_XSTATS_TBL_ENTRY(rx_bm_dropped), - MRVL_XSTATS_TBL_ENTRY(rx_early_dropped), - MRVL_XSTATS_TBL_ENTRY(rx_fifo_dropped), - MRVL_XSTATS_TBL_ENTRY(rx_cls_dropped), - MRVL_XSTATS_TBL_ENTRY(tx_bytes), - MRVL_XSTATS_TBL_ENTRY(tx_packets), - MRVL_XSTATS_TBL_ENTRY(tx_unicast_packets), - MRVL_XSTATS_TBL_ENTRY(tx_errors) -}; - -static inline int -mrvl_get_bpool_size(int pp2_id, int pool_id) -{ - int i; - int size = 0; - - for (i = mrvl_lcore_first; i <= mrvl_lcore_last; i++) - size += mrvl_port_bpool_size[pp2_id][pool_id][i]; - - return size; -} - -static inline int -mrvl_reserve_bit(int *bitmap, int max) -{ - int n = sizeof(*bitmap) * 8 - __builtin_clz(*bitmap); - - if (n >= max) - return -1; - - *bitmap |= 1 << n; - - return n; -} - -static int -mrvl_init_hif(int core_id) -{ - struct pp2_hif_params params; - char match[MRVL_MATCH_LEN]; - int ret; - - ret = mrvl_reserve_bit(&used_hifs, MRVL_MUSDK_HIFS_MAX); - if (ret < 0) { - RTE_LOG(ERR, PMD, "Failed to allocate hif %d\n", core_id); - return ret; - } - - snprintf(match, sizeof(match), "hif-%d", ret); - memset(¶ms, 0, sizeof(params)); - params.match = match; - params.out_size = MRVL_PP2_AGGR_TXQD_MAX; - ret = pp2_hif_init(¶ms, &hifs[core_id]); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to initialize hif %d\n", core_id); - return ret; - } - - return 0; -} - -static inline struct pp2_hif* -mrvl_get_hif(struct mrvl_priv *priv, int core_id) -{ - int ret; - - if (likely(hifs[core_id] != NULL)) - return hifs[core_id]; - - rte_spinlock_lock(&priv->lock); - - ret = mrvl_init_hif(core_id); - if (ret < 0) { - RTE_LOG(ERR, PMD, "Failed to allocate hif %d\n", core_id); - goto out; - } - - if (core_id < mrvl_lcore_first) - mrvl_lcore_first = core_id; - - if (core_id > mrvl_lcore_last) - mrvl_lcore_last = core_id; -out: - rte_spinlock_unlock(&priv->lock); - - return hifs[core_id]; -} - -/** - * Configure rss based on dpdk rss configuration. - * - * @param priv - * Pointer to private structure. - * @param rss_conf - * Pointer to RSS configuration. 
- * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf) -{ - if (rss_conf->rss_key) - RTE_LOG(WARNING, PMD, "Changing hash key is not supported\n"); - - if (rss_conf->rss_hf == 0) { - priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE; - } else if (rss_conf->rss_hf & ETH_RSS_IPV4) { - priv->ppio_params.inqs_params.hash_type = - PP2_PPIO_HASH_T_2_TUPLE; - } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) { - priv->ppio_params.inqs_params.hash_type = - PP2_PPIO_HASH_T_5_TUPLE; - priv->rss_hf_tcp = 1; - } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) { - priv->ppio_params.inqs_params.hash_type = - PP2_PPIO_HASH_T_5_TUPLE; - priv->rss_hf_tcp = 0; - } else { - return -EINVAL; - } - - return 0; -} - -/** - * Ethernet device configuration. - * - * Prepare the driver for a given number of TX and RX queues and - * configure RSS. - * - * @param dev - * Pointer to Ethernet device structure. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_dev_configure(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE && - dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) { - RTE_LOG(INFO, PMD, "Unsupported rx multi queue mode %d\n", - dev->data->dev_conf.rxmode.mq_mode); - return -EINVAL; - } - - if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CRC_STRIP)) { - RTE_LOG(INFO, PMD, - "L2 CRC stripping is always enabled in hw\n"); - dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CRC_STRIP; - } - - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) { - RTE_LOG(INFO, PMD, "VLAN stripping not supported\n"); - return -EINVAL; - } - - if (dev->data->dev_conf.rxmode.split_hdr_size) { - RTE_LOG(INFO, PMD, "Split headers not supported\n"); - return -EINVAL; - } - - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) { - RTE_LOG(INFO, PMD, "RX Scatter/Gather not supported\n"); - return -EINVAL; - } - - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) { - RTE_LOG(INFO, PMD, "LRO not supported\n"); - return -EINVAL; - } - - if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) - dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len - - ETHER_HDR_LEN - ETHER_CRC_LEN; - - ret = mrvl_configure_rxqs(priv, dev->data->port_id, - dev->data->nb_rx_queues); - if (ret < 0) - return ret; - - ret = mrvl_configure_txqs(priv, dev->data->port_id, - dev->data->nb_tx_queues); - if (ret < 0) - return ret; - - priv->ppio_params.outqs_params.num_outqs = dev->data->nb_tx_queues; - priv->ppio_params.maintain_stats = 1; - priv->nb_rx_queues = dev->data->nb_rx_queues; - - if (dev->data->nb_rx_queues == 1 && - dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) { - RTE_LOG(WARNING, PMD, "Disabling hash for 1 rx queue\n"); - priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE; - - return 0; - } - - return mrvl_configure_rss(priv, - &dev->data->dev_conf.rx_adv_conf.rss_conf); -} - -/** - * DPDK callback to change the MTU. - * - * Setting the MTU affects hardware MRU (packets larger than the MRU - * will be dropped). - * - * @param dev - * Pointer to Ethernet device structure. - * @param mtu - * New MTU. - * - * @return - * 0 on success, negative error value otherwise. 
- */ -static int -mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) -{ - struct mrvl_priv *priv = dev->data->dev_private; - /* extra MV_MH_SIZE bytes are required for Marvell tag */ - uint16_t mru = mtu + MV_MH_SIZE + ETHER_HDR_LEN + ETHER_CRC_LEN; - int ret; - - if (mtu < ETHER_MIN_MTU || mru > MRVL_PKT_SIZE_MAX) - return -EINVAL; - - if (!priv->ppio) - return 0; - - ret = pp2_ppio_set_mru(priv->ppio, mru); - if (ret) - return ret; - - return pp2_ppio_set_mtu(priv->ppio, mtu); -} - -/** - * DPDK callback to bring the link up. - * - * @param dev - * Pointer to Ethernet device structure. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_dev_set_link_up(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (!priv->ppio) - return -EPERM; - - ret = pp2_ppio_enable(priv->ppio); - if (ret) - return ret; - - /* - * mtu/mru can be updated if pp2_ppio_enable() was called at least once - * as pp2_ppio_enable() changes port->t_mode from default 0 to - * PP2_TRAFFIC_INGRESS_EGRESS. - * - * Set mtu to default DPDK value here. - */ - ret = mrvl_mtu_set(dev, dev->data->mtu); - if (ret) - pp2_ppio_disable(priv->ppio); - - return ret; -} - -/** - * DPDK callback to bring the link down. - * - * @param dev - * Pointer to Ethernet device structure. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_dev_set_link_down(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - - if (!priv->ppio) - return -EPERM; - - return pp2_ppio_disable(priv->ppio); -} - -/** - * DPDK callback to start tx queue. - * - * @param dev - * Pointer to Ethernet device structure. - * @param queue_id - * Transmit queue index. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (!priv) - return -EPERM; - - /* passing 1 enables given tx queue */ - ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 1); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to start txq %d\n", queue_id); - return ret; - } - - dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; - - return 0; -} - -/** - * DPDK callback to stop tx queue. - * - * @param dev - * Pointer to Ethernet device structure. - * @param queue_id - * Transmit queue index. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (!priv->ppio) - return -EPERM; - - /* passing 0 disables given tx queue */ - ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 0); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to stop txq %d\n", queue_id); - return ret; - } - - dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; - - return 0; -} - -/** - * DPDK callback to start the device. - * - * @param dev - * Pointer to Ethernet device structure. - * - * @return - * 0 on success, negative errno value on failure. - */ -static int -mrvl_dev_start(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - char match[MRVL_MATCH_LEN]; - int ret = 0, i, def_init_size; - - snprintf(match, sizeof(match), "ppio-%d:%d", - priv->pp_id, priv->ppio_id); - priv->ppio_params.match = match; - - /* - * Calculate the minimum bpool size for refill feature as follows: - * 2 default burst sizes multiply by number of rx queues. 
- * If the bpool size will be below this value, new buffers will - * be added to the pool. - */ - priv->bpool_min_size = priv->nb_rx_queues * MRVL_BURST_SIZE * 2; - - /* In case initial bpool size configured in queues setup is - * smaller than minimum size add more buffers - */ - def_init_size = priv->bpool_min_size + MRVL_BURST_SIZE * 2; - if (priv->bpool_init_size < def_init_size) { - int buffs_to_add = def_init_size - priv->bpool_init_size; - - priv->bpool_init_size += buffs_to_add; - ret = mrvl_fill_bpool(dev->data->rx_queues[0], buffs_to_add); - if (ret) - RTE_LOG(ERR, PMD, "Failed to add buffers to bpool\n"); - } - - /* - * Calculate the maximum bpool size for refill feature as follows: - * maximum number of descriptors in rx queue multiply by number - * of rx queues plus minimum bpool size. - * In case the bpool size will exceed this value, superfluous buffers - * will be removed - */ - priv->bpool_max_size = (priv->nb_rx_queues * MRVL_PP2_RXD_MAX) + - priv->bpool_min_size; - - ret = pp2_ppio_init(&priv->ppio_params, &priv->ppio); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to init ppio\n"); - return ret; - } - - /* - * In case there are some some stale uc/mc mac addresses flush them - * here. It cannot be done during mrvl_dev_close() as port information - * is already gone at that point (due to pp2_ppio_deinit() in - * mrvl_dev_stop()). - */ - if (!priv->uc_mc_flushed) { - ret = pp2_ppio_flush_mac_addrs(priv->ppio, 1, 1); - if (ret) { - RTE_LOG(ERR, PMD, - "Failed to flush uc/mc filter list\n"); - goto out; - } - priv->uc_mc_flushed = 1; - } - - if (!priv->vlan_flushed) { - ret = pp2_ppio_flush_vlan(priv->ppio); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to flush vlan list\n"); - /* - * TODO - * once pp2_ppio_flush_vlan() is supported jump to out - * goto out; - */ - } - priv->vlan_flushed = 1; - } - - /* For default QoS config, don't start classifier. */ - if (mrvl_qos_cfg) { - ret = mrvl_start_qos_mapping(priv); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to setup QoS mapping\n"); - goto out; - } - } - - ret = mrvl_dev_set_link_up(dev); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to set link up\n"); - goto out; - } - - /* start tx queues */ - for (i = 0; i < dev->data->nb_tx_queues; i++) { - struct mrvl_txq *txq = dev->data->tx_queues[i]; - - dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED; - - if (!txq->tx_deferred_start) - continue; - - /* - * All txqs are started by default. Stop them - * so that tx_deferred_start works as expected. - */ - ret = mrvl_tx_queue_stop(dev, i); - if (ret) - goto out; - } - - return 0; -out: - RTE_LOG(ERR, PMD, "Failed to start device\n"); - pp2_ppio_deinit(priv->ppio); - return ret; -} - -/** - * Flush receive queues. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_flush_rx_queues(struct rte_eth_dev *dev) -{ - int i; - - RTE_LOG(INFO, PMD, "Flushing rx queues\n"); - for (i = 0; i < dev->data->nb_rx_queues; i++) { - int ret, num; - - do { - struct mrvl_rxq *q = dev->data->rx_queues[i]; - struct pp2_ppio_desc descs[MRVL_PP2_RXD_MAX]; - - num = MRVL_PP2_RXD_MAX; - ret = pp2_ppio_recv(q->priv->ppio, - q->priv->rxq_map[q->queue_id].tc, - q->priv->rxq_map[q->queue_id].inq, - descs, (uint16_t *)&num); - } while (ret == 0 && num); - } -} - -/** - * Flush transmit shadow queues. - * - * @param dev - * Pointer to Ethernet device structure. 
- */ -static void -mrvl_flush_tx_shadow_queues(struct rte_eth_dev *dev) -{ - int i, j; - struct mrvl_txq *txq; - - RTE_LOG(INFO, PMD, "Flushing tx shadow queues\n"); - for (i = 0; i < dev->data->nb_tx_queues; i++) { - txq = (struct mrvl_txq *)dev->data->tx_queues[i]; - - for (j = 0; j < RTE_MAX_LCORE; j++) { - struct mrvl_shadow_txq *sq; - - if (!hifs[j]) - continue; - - sq = &txq->shadow_txqs[j]; - mrvl_free_sent_buffers(txq->priv->ppio, - hifs[j], j, sq, txq->queue_id, 1); - while (sq->tail != sq->head) { - uint64_t addr = cookie_addr_high | - sq->ent[sq->tail].buff.cookie; - rte_pktmbuf_free( - (struct rte_mbuf *)addr); - sq->tail = (sq->tail + 1) & - MRVL_PP2_TX_SHADOWQ_MASK; - } - memset(sq, 0, sizeof(*sq)); - } - } -} - -/** - * Flush hardware bpool (buffer-pool). - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_flush_bpool(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - struct pp2_hif *hif; - uint32_t num; - int ret; - unsigned int core_id = rte_lcore_id(); - - if (core_id == LCORE_ID_ANY) - core_id = 0; - - hif = mrvl_get_hif(priv, core_id); - - ret = pp2_bpool_get_num_buffs(priv->bpool, &num); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to get bpool buffers number\n"); - return; - } - - while (num--) { - struct pp2_buff_inf inf; - uint64_t addr; - - ret = pp2_bpool_get_buff(hif, priv->bpool, &inf); - if (ret) - break; - - addr = cookie_addr_high | inf.cookie; - rte_pktmbuf_free((struct rte_mbuf *)addr); - } -} - -/** - * DPDK callback to stop the device. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_dev_stop(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - - mrvl_dev_set_link_down(dev); - mrvl_flush_rx_queues(dev); - mrvl_flush_tx_shadow_queues(dev); - if (priv->cls_tbl) { - pp2_cls_tbl_deinit(priv->cls_tbl); - priv->cls_tbl = NULL; - } - if (priv->qos_tbl) { - pp2_cls_qos_tbl_deinit(priv->qos_tbl); - priv->qos_tbl = NULL; - } - if (priv->ppio) - pp2_ppio_deinit(priv->ppio); - priv->ppio = NULL; - - /* policer must be released after ppio deinitialization */ - if (priv->policer) { - pp2_cls_plcr_deinit(priv->policer); - priv->policer = NULL; - } -} - -/** - * DPDK callback to close the device. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_dev_close(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - size_t i; - - for (i = 0; i < priv->ppio_params.inqs_params.num_tcs; ++i) { - struct pp2_ppio_tc_params *tc_params = - &priv->ppio_params.inqs_params.tcs_params[i]; - - if (tc_params->inqs_params) { - rte_free(tc_params->inqs_params); - tc_params->inqs_params = NULL; - } - } - - mrvl_flush_bpool(dev); -} - -/** - * DPDK callback to retrieve physical link information. - * - * @param dev - * Pointer to Ethernet device structure. - * @param wait_to_complete - * Wait for request completion (ignored). - * - * @return - * 0 on success, negative error value otherwise. 
- */ -static int -mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused) -{ - /* - * TODO - * once MUSDK provides necessary API use it here - */ - struct mrvl_priv *priv = dev->data->dev_private; - struct ethtool_cmd edata; - struct ifreq req; - int ret, fd, link_up; - - if (!priv->ppio) - return -EPERM; - - edata.cmd = ETHTOOL_GSET; - - strcpy(req.ifr_name, dev->data->name); - req.ifr_data = (void *)&edata; - - fd = socket(AF_INET, SOCK_DGRAM, 0); - if (fd == -1) - return -EFAULT; - - ret = ioctl(fd, SIOCETHTOOL, &req); - if (ret == -1) { - close(fd); - return -EFAULT; - } - - close(fd); - - switch (ethtool_cmd_speed(&edata)) { - case SPEED_10: - dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M; - break; - case SPEED_100: - dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M; - break; - case SPEED_1000: - dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G; - break; - case SPEED_10000: - dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G; - break; - default: - dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE; - } - - dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX : - ETH_LINK_HALF_DUPLEX; - dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG : - ETH_LINK_FIXED; - pp2_ppio_get_link_state(priv->ppio, &link_up); - dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN; - - return 0; -} - -/** - * DPDK callback to enable promiscuous mode. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_promiscuous_enable(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (!priv->ppio) - return; - - if (priv->isolated) - return; - - ret = pp2_ppio_set_promisc(priv->ppio, 1); - if (ret) - RTE_LOG(ERR, PMD, "Failed to enable promiscuous mode\n"); -} - -/** - * DPDK callback to enable allmulti mode. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_allmulticast_enable(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (!priv->ppio) - return; - - if (priv->isolated) - return; - - ret = pp2_ppio_set_mc_promisc(priv->ppio, 1); - if (ret) - RTE_LOG(ERR, PMD, "Failed enable all-multicast mode\n"); -} - -/** - * DPDK callback to disable promiscuous mode. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_promiscuous_disable(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (!priv->ppio) - return; - - ret = pp2_ppio_set_promisc(priv->ppio, 0); - if (ret) - RTE_LOG(ERR, PMD, "Failed to disable promiscuous mode\n"); -} - -/** - * DPDK callback to disable allmulticast mode. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_allmulticast_disable(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (!priv->ppio) - return; - - ret = pp2_ppio_set_mc_promisc(priv->ppio, 0); - if (ret) - RTE_LOG(ERR, PMD, "Failed to disable all-multicast mode\n"); -} - -/** - * DPDK callback to remove a MAC address. - * - * @param dev - * Pointer to Ethernet device structure. - * @param index - * MAC address index. 
- */ -static void -mrvl_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index) -{ - struct mrvl_priv *priv = dev->data->dev_private; - char buf[ETHER_ADDR_FMT_SIZE]; - int ret; - - if (!priv->ppio) - return; - - if (priv->isolated) - return; - - ret = pp2_ppio_remove_mac_addr(priv->ppio, - dev->data->mac_addrs[index].addr_bytes); - if (ret) { - ether_format_addr(buf, sizeof(buf), - &dev->data->mac_addrs[index]); - RTE_LOG(ERR, PMD, "Failed to remove mac %s\n", buf); - } -} - -/** - * DPDK callback to add a MAC address. - * - * @param dev - * Pointer to Ethernet device structure. - * @param mac_addr - * MAC address to register. - * @param index - * MAC address index. - * @param vmdq - * VMDq pool index to associate address with (unused). - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac_addr, - uint32_t index, uint32_t vmdq __rte_unused) -{ - struct mrvl_priv *priv = dev->data->dev_private; - char buf[ETHER_ADDR_FMT_SIZE]; - int ret; - - if (priv->isolated) - return -ENOTSUP; - - if (index == 0) - /* For setting index 0, mrvl_mac_addr_set() should be used.*/ - return -1; - - if (!priv->ppio) - return 0; - - /* - * Maximum number of uc addresses can be tuned via kernel module mvpp2x - * parameter uc_filter_max. Maximum number of mc addresses is then - * MRVL_MAC_ADDRS_MAX - uc_filter_max. Currently it defaults to 4 and - * 21 respectively. - * - * If more than uc_filter_max uc addresses were added to filter list - * then NIC will switch to promiscuous mode automatically. - * - * If more than MRVL_MAC_ADDRS_MAX - uc_filter_max number mc addresses - * were added to filter list then NIC will switch to all-multicast mode - * automatically. - */ - ret = pp2_ppio_add_mac_addr(priv->ppio, mac_addr->addr_bytes); - if (ret) { - ether_format_addr(buf, sizeof(buf), mac_addr); - RTE_LOG(ERR, PMD, "Failed to add mac %s\n", buf); - return -1; - } - - return 0; -} - -/** - * DPDK callback to set the primary MAC address. - * - * @param dev - * Pointer to Ethernet device structure. - * @param mac_addr - * MAC address to register. - */ -static void -mrvl_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int ret; - - if (!priv->ppio) - return; - - if (priv->isolated) - return; - - ret = pp2_ppio_set_mac_addr(priv->ppio, mac_addr->addr_bytes); - if (ret) { - char buf[ETHER_ADDR_FMT_SIZE]; - ether_format_addr(buf, sizeof(buf), mac_addr); - RTE_LOG(ERR, PMD, "Failed to set mac to %s\n", buf); - } -} - -/** - * DPDK callback to get device statistics. - * - * @param dev - * Pointer to Ethernet device structure. - * @param stats - * Stats structure output buffer. - * - * @return - * 0 on success, negative error value otherwise. 
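/*
 * Sketch of the unicast/multicast filter budget described in the comment
 * inside mrvl_mac_addr_add() above. The split is illustrative only:
 * uc_filter_max is a kernel module parameter and the totals passed in are
 * assumptions (e.g. 4 + 21 with the defaults quoted above), not values
 * read from the hardware.
 */
#include <stdio.h>

static void
sketch_mac_filter_budget(unsigned int total_filters,
			 unsigned int uc_filter_max,
			 unsigned int uc_used, unsigned int mc_used)
{
	unsigned int mc_filter_max = total_filters - uc_filter_max;

	if (uc_used > uc_filter_max)
		printf("NIC falls back to promiscuous mode\n");
	if (mc_used > mc_filter_max)
		printf("NIC falls back to all-multicast mode\n");
}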
- */ -static int -mrvl_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) -{ - struct mrvl_priv *priv = dev->data->dev_private; - struct pp2_ppio_statistics ppio_stats; - uint64_t drop_mac = 0; - unsigned int i, idx, ret; - - if (!priv->ppio) - return -EPERM; - - for (i = 0; i < dev->data->nb_rx_queues; i++) { - struct mrvl_rxq *rxq = dev->data->rx_queues[i]; - struct pp2_ppio_inq_statistics rx_stats; - - if (!rxq) - continue; - - idx = rxq->queue_id; - if (unlikely(idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)) { - RTE_LOG(ERR, PMD, - "rx queue %d stats out of range (0 - %d)\n", - idx, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1); - continue; - } - - ret = pp2_ppio_inq_get_statistics(priv->ppio, - priv->rxq_map[idx].tc, - priv->rxq_map[idx].inq, - &rx_stats, 0); - if (unlikely(ret)) { - RTE_LOG(ERR, PMD, - "Failed to update rx queue %d stats\n", idx); - break; - } - - stats->q_ibytes[idx] = rxq->bytes_recv; - stats->q_ipackets[idx] = rx_stats.enq_desc - rxq->drop_mac; - stats->q_errors[idx] = rx_stats.drop_early + - rx_stats.drop_fullq + - rx_stats.drop_bm + - rxq->drop_mac; - stats->ibytes += rxq->bytes_recv; - drop_mac += rxq->drop_mac; - } - - for (i = 0; i < dev->data->nb_tx_queues; i++) { - struct mrvl_txq *txq = dev->data->tx_queues[i]; - struct pp2_ppio_outq_statistics tx_stats; - - if (!txq) - continue; - - idx = txq->queue_id; - if (unlikely(idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)) { - RTE_LOG(ERR, PMD, - "tx queue %d stats out of range (0 - %d)\n", - idx, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1); - } - - ret = pp2_ppio_outq_get_statistics(priv->ppio, idx, - &tx_stats, 0); - if (unlikely(ret)) { - RTE_LOG(ERR, PMD, - "Failed to update tx queue %d stats\n", idx); - break; - } - - stats->q_opackets[idx] = tx_stats.deq_desc; - stats->q_obytes[idx] = txq->bytes_sent; - stats->obytes += txq->bytes_sent; - } - - ret = pp2_ppio_get_statistics(priv->ppio, &ppio_stats, 0); - if (unlikely(ret)) { - RTE_LOG(ERR, PMD, "Failed to update port statistics\n"); - return ret; - } - - stats->ipackets += ppio_stats.rx_packets - drop_mac; - stats->opackets += ppio_stats.tx_packets; - stats->imissed += ppio_stats.rx_fullq_dropped + - ppio_stats.rx_bm_dropped + - ppio_stats.rx_early_dropped + - ppio_stats.rx_fifo_dropped + - ppio_stats.rx_cls_dropped; - stats->ierrors = drop_mac; - - return 0; -} - -/** - * DPDK callback to clear device statistics. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_stats_reset(struct rte_eth_dev *dev) -{ - struct mrvl_priv *priv = dev->data->dev_private; - int i; - - if (!priv->ppio) - return; - - for (i = 0; i < dev->data->nb_rx_queues; i++) { - struct mrvl_rxq *rxq = dev->data->rx_queues[i]; - - pp2_ppio_inq_get_statistics(priv->ppio, priv->rxq_map[i].tc, - priv->rxq_map[i].inq, NULL, 1); - rxq->bytes_recv = 0; - rxq->drop_mac = 0; - } - - for (i = 0; i < dev->data->nb_tx_queues; i++) { - struct mrvl_txq *txq = dev->data->tx_queues[i]; - - pp2_ppio_outq_get_statistics(priv->ppio, i, NULL, 1); - txq->bytes_sent = 0; - } - - pp2_ppio_get_statistics(priv->ppio, NULL, 1); -} - -/** - * DPDK callback to get extended statistics. - * - * @param dev - * Pointer to Ethernet device structure. - * @param stats - * Pointer to xstats table. - * @param n - * Number of entries in xstats table. - * @return - * Negative value on error, number of read xstats otherwise. 
- */ -static int -mrvl_xstats_get(struct rte_eth_dev *dev, - struct rte_eth_xstat *stats, unsigned int n) -{ - struct mrvl_priv *priv = dev->data->dev_private; - struct pp2_ppio_statistics ppio_stats; - unsigned int i; - - if (!stats) - return 0; - - pp2_ppio_get_statistics(priv->ppio, &ppio_stats, 0); - for (i = 0; i < n && i < RTE_DIM(mrvl_xstats_tbl); i++) { - uint64_t val; - - if (mrvl_xstats_tbl[i].size == sizeof(uint32_t)) - val = *(uint32_t *)((uint8_t *)&ppio_stats + - mrvl_xstats_tbl[i].offset); - else if (mrvl_xstats_tbl[i].size == sizeof(uint64_t)) - val = *(uint64_t *)((uint8_t *)&ppio_stats + - mrvl_xstats_tbl[i].offset); - else - return -EINVAL; - - stats[i].id = i; - stats[i].value = val; - } - - return n; -} - -/** - * DPDK callback to reset extended statistics. - * - * @param dev - * Pointer to Ethernet device structure. - */ -static void -mrvl_xstats_reset(struct rte_eth_dev *dev) -{ - mrvl_stats_reset(dev); -} - -/** - * DPDK callback to get extended statistics names. - * - * @param dev (unused) - * Pointer to Ethernet device structure. - * @param xstats_names - * Pointer to xstats names table. - * @param size - * Size of the xstats names table. - * @return - * Number of read names. - */ -static int -mrvl_xstats_get_names(struct rte_eth_dev *dev __rte_unused, - struct rte_eth_xstat_name *xstats_names, - unsigned int size) -{ - unsigned int i; - - if (!xstats_names) - return RTE_DIM(mrvl_xstats_tbl); - - for (i = 0; i < size && i < RTE_DIM(mrvl_xstats_tbl); i++) - snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE, "%s", - mrvl_xstats_tbl[i].name); - - return size; -} - -/** - * DPDK callback to get information about the device. - * - * @param dev - * Pointer to Ethernet device structure (unused). - * @param info - * Info structure output buffer. - */ -static void -mrvl_dev_infos_get(struct rte_eth_dev *dev __rte_unused, - struct rte_eth_dev_info *info) -{ - info->speed_capa = ETH_LINK_SPEED_10M | - ETH_LINK_SPEED_100M | - ETH_LINK_SPEED_1G | - ETH_LINK_SPEED_10G; - - info->max_rx_queues = MRVL_PP2_RXQ_MAX; - info->max_tx_queues = MRVL_PP2_TXQ_MAX; - info->max_mac_addrs = MRVL_MAC_ADDRS_MAX; - - info->rx_desc_lim.nb_max = MRVL_PP2_RXD_MAX; - info->rx_desc_lim.nb_min = MRVL_PP2_RXD_MIN; - info->rx_desc_lim.nb_align = MRVL_PP2_RXD_ALIGN; - - info->tx_desc_lim.nb_max = MRVL_PP2_TXD_MAX; - info->tx_desc_lim.nb_min = MRVL_PP2_TXD_MIN; - info->tx_desc_lim.nb_align = MRVL_PP2_TXD_ALIGN; - - info->rx_offload_capa = MRVL_RX_OFFLOADS; - info->rx_queue_offload_capa = MRVL_RX_OFFLOADS; - - info->tx_offload_capa = MRVL_TX_OFFLOADS; - info->tx_queue_offload_capa = MRVL_TX_OFFLOADS; - - info->flow_type_rss_offloads = ETH_RSS_IPV4 | - ETH_RSS_NONFRAG_IPV4_TCP | - ETH_RSS_NONFRAG_IPV4_UDP; - - /* By default packets are dropped if no descriptors are available */ - info->default_rxconf.rx_drop_en = 1; - info->default_rxconf.offloads = DEV_RX_OFFLOAD_CRC_STRIP; - - info->max_rx_pktlen = MRVL_PKT_SIZE_MAX; -} - -/** - * Return supported packet types. - * - * @param dev - * Pointer to Ethernet device structure (unused). - * - * @return - * Const pointer to the table with supported packet types. 
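/*
 * Sketch of the offset/size table technique used by mrvl_xstats_get()
 * above: each row records the byte offset of a counter inside a stats
 * structure, so one generic loop can read fields of different widths.
 * The structure and table contents below are invented for the sketch.
 */
#include <stddef.h>
#include <stdint.h>

struct sketch_hw_stats {
	uint64_t rx_packets;
	uint32_t rx_errors;
};

struct sketch_xstat_row {
	const char *name;
	size_t offset;
	size_t size;
};

static const struct sketch_xstat_row sketch_xstats_tbl[] = {
	{ "rx_packets", offsetof(struct sketch_hw_stats, rx_packets),
	  sizeof(uint64_t) },
	{ "rx_errors", offsetof(struct sketch_hw_stats, rx_errors),
	  sizeof(uint32_t) },
};

static uint64_t
sketch_read_counter(const struct sketch_hw_stats *s,
		    const struct sketch_xstat_row *row)
{
	const uint8_t *base = (const uint8_t *)s;

	if (row->size == sizeof(uint32_t))
		return *(const uint32_t *)(base + row->offset);
	return *(const uint64_t *)(base + row->offset);
}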
- */
-static const uint32_t *
-mrvl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused)
-{
-	static const uint32_t ptypes[] = {
-		RTE_PTYPE_L2_ETHER,
-		RTE_PTYPE_L3_IPV4,
-		RTE_PTYPE_L3_IPV4_EXT,
-		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
-		RTE_PTYPE_L3_IPV6,
-		RTE_PTYPE_L3_IPV6_EXT,
-		RTE_PTYPE_L2_ETHER_ARP,
-		RTE_PTYPE_L4_TCP,
-		RTE_PTYPE_L4_UDP
-	};
-
-	return ptypes;
-}
-
-/**
- * DPDK callback to get information about specific receive queue.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param rx_queue_id
- *   Receive queue index.
- * @param qinfo
- *   Receive queue information structure.
- */
-static void mrvl_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id,
-			      struct rte_eth_rxq_info *qinfo)
-{
-	struct mrvl_rxq *q = dev->data->rx_queues[rx_queue_id];
-	struct mrvl_priv *priv = dev->data->dev_private;
-	int inq = priv->rxq_map[rx_queue_id].inq;
-	int tc = priv->rxq_map[rx_queue_id].tc;
-	struct pp2_ppio_tc_params *tc_params =
-		&priv->ppio_params.inqs_params.tcs_params[tc];
-
-	qinfo->mp = q->mp;
-	qinfo->nb_desc = tc_params->inqs_params[inq].size;
-}
-
-/**
- * DPDK callback to get information about specific transmit queue.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param tx_queue_id
- *   Transmit queue index.
- * @param qinfo
- *   Transmit queue information structure.
- */
-static void mrvl_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id,
-			      struct rte_eth_txq_info *qinfo)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-	struct mrvl_txq *txq = dev->data->tx_queues[tx_queue_id];
-
-	qinfo->nb_desc =
-		priv->ppio_params.outqs_params.outqs_params[tx_queue_id].size;
-	qinfo->conf.tx_deferred_start = txq->tx_deferred_start;
-}
-
-/**
- * DPDK callback to configure a VLAN filter.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param vlan_id
- *   VLAN ID to filter.
- * @param on
- *   Toggle filter.
- *
- * @return
- *   0 on success, negative error value otherwise.
- */
-static int
-mrvl_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-
-	if (!priv->ppio)
-		return -EPERM;
-
-	if (priv->isolated)
-		return -ENOTSUP;
-
-	return on ? pp2_ppio_add_vlan(priv->ppio, vlan_id) :
-		    pp2_ppio_remove_vlan(priv->ppio, vlan_id);
-}
-
-/**
- * Release buffers to hardware bpool (buffer-pool).
- *
- * @param rxq
- *   Receive queue pointer.
- * @param num
- *   Number of buffers to release to bpool.
- *
- * @return
- *   0 on success, negative error value otherwise.
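/*
 * Standalone sketch of the buffer-cookie scheme used by the bpool code in
 * this file: only the low 32 bits of each mbuf's virtual address go into
 * the hardware cookie, the (shared) high 32 bits are remembered once in
 * cookie_addr_high, and the pointer is rebuilt with a single OR on the
 * hot path. The sketch assumes, as the range check in the fill routine
 * enforces, that all buffers share the same high bits; names are local.
 */
#include <stdint.h>

#define SKETCH_HIGH_MASK (~0ULL << 32)

static uint64_t sketch_addr_high;	/* captured from the first buffer */

static uint32_t
sketch_make_cookie(void *buf)
{
	uint64_t addr = (uint64_t)(uintptr_t)buf;

	if (!sketch_addr_high)
		sketch_addr_high = addr & SKETCH_HIGH_MASK;
	/* Caller must verify (addr & SKETCH_HIGH_MASK) == sketch_addr_high. */
	return (uint32_t)addr;
}

static void *
sketch_cookie_to_buf(uint32_t cookie)
{
	return (void *)(uintptr_t)(sketch_addr_high | cookie);
}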
- */ -static int -mrvl_fill_bpool(struct mrvl_rxq *rxq, int num) -{ - struct buff_release_entry entries[MRVL_PP2_RXD_MAX]; - struct rte_mbuf *mbufs[MRVL_PP2_RXD_MAX]; - int i, ret; - unsigned int core_id; - struct pp2_hif *hif; - struct pp2_bpool *bpool; - - core_id = rte_lcore_id(); - if (core_id == LCORE_ID_ANY) - core_id = 0; - - hif = mrvl_get_hif(rxq->priv, core_id); - if (!hif) - return -1; - - bpool = rxq->priv->bpool; - - ret = rte_pktmbuf_alloc_bulk(rxq->mp, mbufs, num); - if (ret) - return ret; - - if (cookie_addr_high == MRVL_COOKIE_ADDR_INVALID) - cookie_addr_high = - (uint64_t)mbufs[0] & MRVL_COOKIE_HIGH_ADDR_MASK; - - for (i = 0; i < num; i++) { - if (((uint64_t)mbufs[i] & MRVL_COOKIE_HIGH_ADDR_MASK) - != cookie_addr_high) { - RTE_LOG(ERR, PMD, - "mbuf virtual addr high 0x%lx out of range\n", - (uint64_t)mbufs[i] >> 32); - goto out; - } - - entries[i].buff.addr = - rte_mbuf_data_iova_default(mbufs[i]); - entries[i].buff.cookie = (pp2_cookie_t)(uint64_t)mbufs[i]; - entries[i].bpool = bpool; - } - - pp2_bpool_put_buffs(hif, entries, (uint16_t *)&i); - mrvl_port_bpool_size[bpool->pp2_id][bpool->id][core_id] += i; - - if (i != num) - goto out; - - return 0; -out: - for (; i < num; i++) - rte_pktmbuf_free(mbufs[i]); - - return -1; -} - -/** - * Check whether requested rx queue offloads match port offloads. - * - * @param - * dev Pointer to the device. - * @param - * requested Bitmap of the requested offloads. - * - * @return - * 1 if requested offloads are okay, 0 otherwise. - */ -static int -mrvl_rx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested) -{ - uint64_t mandatory = dev->data->dev_conf.rxmode.offloads; - uint64_t supported = MRVL_RX_OFFLOADS; - uint64_t unsupported = requested & ~supported; - uint64_t missing = mandatory & ~requested; - - if (unsupported) { - RTE_LOG(ERR, PMD, "Some Rx offloads are not supported. " - "Requested 0x%" PRIx64 " supported 0x%" PRIx64 ".\n", - requested, supported); - return 0; - } - - if (missing) { - RTE_LOG(ERR, PMD, "Some Rx offloads are missing. " - "Requested 0x%" PRIx64 " missing 0x%" PRIx64 ".\n", - requested, missing); - return 0; - } - - return 1; -} - -/** - * DPDK callback to configure the receive queue. - * - * @param dev - * Pointer to Ethernet device structure. - * @param idx - * RX queue index. - * @param desc - * Number of descriptors to configure in queue. - * @param socket - * NUMA socket on which memory must be allocated. - * @param conf - * Thresholds parameters. - * @param mp - * Memory pool for buffer allocations. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, - unsigned int socket, - const struct rte_eth_rxconf *conf, - struct rte_mempool *mp) -{ - struct mrvl_priv *priv = dev->data->dev_private; - struct mrvl_rxq *rxq; - uint32_t min_size, - max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; - int ret, tc, inq; - - if (!mrvl_rx_queue_offloads_okay(dev, conf->offloads)) - return -ENOTSUP; - - if (priv->rxq_map[idx].tc == MRVL_UNKNOWN_TC) { - /* - * Unknown TC mapping, mapping will not have a correct queue. 
-		 */
-		RTE_LOG(ERR, PMD, "Unknown TC mapping for queue %hu eth%hhu\n",
-			idx, priv->ppio_id);
-		return -EFAULT;
-	}
-
-	min_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM -
-		   MRVL_PKT_EFFEC_OFFS;
-	if (min_size < max_rx_pkt_len) {
-		RTE_LOG(ERR, PMD,
-			"Mbuf size must be increased to %u bytes to hold up to %u bytes of data.\n",
-			max_rx_pkt_len + RTE_PKTMBUF_HEADROOM +
-			MRVL_PKT_EFFEC_OFFS,
-			max_rx_pkt_len);
-		return -EINVAL;
-	}
-
-	if (dev->data->rx_queues[idx]) {
-		rte_free(dev->data->rx_queues[idx]);
-		dev->data->rx_queues[idx] = NULL;
-	}
-
-	rxq = rte_zmalloc_socket("rxq", sizeof(*rxq), 0, socket);
-	if (!rxq)
-		return -ENOMEM;
-
-	rxq->priv = priv;
-	rxq->mp = mp;
-	rxq->cksum_enabled =
-		dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
-	rxq->queue_id = idx;
-	rxq->port_id = dev->data->port_id;
-	mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
-
-	tc = priv->rxq_map[rxq->queue_id].tc;
-	inq = priv->rxq_map[rxq->queue_id].inq;
-	priv->ppio_params.inqs_params.tcs_params[tc].inqs_params[inq].size =
-		desc;
-
-	ret = mrvl_fill_bpool(rxq, desc);
-	if (ret) {
-		rte_free(rxq);
-		return ret;
-	}
-
-	priv->bpool_init_size += desc;
-
-	dev->data->rx_queues[idx] = rxq;
-
-	return 0;
-}
-
-/**
- * DPDK callback to release the receive queue.
- *
- * @param rxq
- *   Generic receive queue pointer.
- */
-static void
-mrvl_rx_queue_release(void *rxq)
-{
-	struct mrvl_rxq *q = rxq;
-	struct pp2_ppio_tc_params *tc_params;
-	int i, num, tc, inq;
-	struct pp2_hif *hif;
-	unsigned int core_id = rte_lcore_id();
-
-	if (!q)
-		return;
-
-	if (core_id == LCORE_ID_ANY)
-		core_id = 0;
-
-	hif = mrvl_get_hif(q->priv, core_id);
-	if (!hif)
-		return;
-
-	tc = q->priv->rxq_map[q->queue_id].tc;
-	inq = q->priv->rxq_map[q->queue_id].inq;
-	tc_params = &q->priv->ppio_params.inqs_params.tcs_params[tc];
-	num = tc_params->inqs_params[inq].size;
-	for (i = 0; i < num; i++) {
-		struct pp2_buff_inf inf;
-		uint64_t addr;
-
-		pp2_bpool_get_buff(hif, q->priv->bpool, &inf);
-		addr = cookie_addr_high | inf.cookie;
-		rte_pktmbuf_free((struct rte_mbuf *)addr);
-	}
-
-	rte_free(q);
-}
-
-/**
- * Check whether requested tx queue offloads match port offloads.
- *
- * @param dev
- *   Pointer to the device.
- * @param requested
- *   Bitmap of the requested offloads.
- *
- * @return
- *   1 if requested offloads are okay, 0 otherwise.
- */
-static int
-mrvl_tx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested)
-{
-	uint64_t mandatory = dev->data->dev_conf.txmode.offloads;
-	uint64_t supported = MRVL_TX_OFFLOADS;
-	uint64_t unsupported = requested & ~supported;
-	uint64_t missing = mandatory & ~requested;
-
-	if (unsupported) {
-		RTE_LOG(ERR, PMD, "Some Tx offloads are not supported. "
-			"Requested 0x%" PRIx64 " supported 0x%" PRIx64 ".\n",
-			requested, supported);
-		return 0;
-	}
-
-	if (missing) {
-		RTE_LOG(ERR, PMD, "Some Tx offloads are missing. "
-			"Requested 0x%" PRIx64 " missing 0x%" PRIx64 ".\n",
-			requested, missing);
-		return 0;
-	}
-
-	return 1;
-}
-
-/**
- * DPDK callback to configure the transmit queue.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param idx
- *   Transmit queue index.
- * @param desc
- *   Number of descriptors to configure in the queue.
- * @param socket
- *   NUMA socket on which memory must be allocated.
- * @param conf
- *   Tx queue configuration parameters.
- *
- * @return
- *   0 on success, negative error value otherwise.
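/*
 * Sketch of the two bitmask tests used by the *_offloads_okay() helpers
 * above: "requested & ~supported" finds offloads the port cannot do,
 * while "mandatory & ~requested" finds port-level offloads the queue
 * failed to request. Plain C, flag values supplied by the caller.
 */
#include <stdint.h>

static int
sketch_offloads_okay(uint64_t requested, uint64_t supported,
		     uint64_t mandatory)
{
	uint64_t unsupported = requested & ~supported;	/* cannot be done */
	uint64_t missing = mandatory & ~requested;	/* must be requested */

	return unsupported == 0 && missing == 0;
}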
- */
-static int
-mrvl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc,
-		    unsigned int socket,
-		    const struct rte_eth_txconf *conf)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-	struct mrvl_txq *txq;
-
-	if (!mrvl_tx_queue_offloads_okay(dev, conf->offloads))
-		return -ENOTSUP;
-
-	if (dev->data->tx_queues[idx]) {
-		rte_free(dev->data->tx_queues[idx]);
-		dev->data->tx_queues[idx] = NULL;
-	}
-
-	txq = rte_zmalloc_socket("txq", sizeof(*txq), 0, socket);
-	if (!txq)
-		return -ENOMEM;
-
-	txq->priv = priv;
-	txq->queue_id = idx;
-	txq->port_id = dev->data->port_id;
-	txq->tx_deferred_start = conf->tx_deferred_start;
-	dev->data->tx_queues[idx] = txq;
-
-	priv->ppio_params.outqs_params.outqs_params[idx].size = desc;
-
-	return 0;
-}
-
-/**
- * DPDK callback to release the transmit queue.
- *
- * @param txq
- *   Generic transmit queue pointer.
- */
-static void
-mrvl_tx_queue_release(void *txq)
-{
-	struct mrvl_txq *q = txq;
-
-	if (!q)
-		return;
-
-	rte_free(q);
-}
-
-/**
- * DPDK callback to get flow control configuration.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param fc_conf
- *   Pointer to the flow control configuration.
- *
- * @return
- *   0 on success, negative error value otherwise.
- */
-static int
-mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-	int ret, en;
-
-	if (!priv)
-		return -EPERM;
-
-	ret = pp2_ppio_get_rx_pause(priv->ppio, &en);
-	if (ret) {
-		RTE_LOG(ERR, PMD, "Failed to read rx pause state\n");
-		return ret;
-	}
-
-	fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE;
-
-	return 0;
-}
-
-/**
- * DPDK callback to set flow control configuration.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param fc_conf
- *   Pointer to the flow control configuration.
- *
- * @return
- *   0 on success, negative error value otherwise.
- */
-static int
-mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-
-	if (!priv)
-		return -EPERM;
-
-	if (fc_conf->high_water ||
-	    fc_conf->low_water ||
-	    fc_conf->pause_time ||
-	    fc_conf->mac_ctrl_frame_fwd ||
-	    fc_conf->autoneg) {
-		RTE_LOG(ERR, PMD, "Flowctrl parameter is not supported\n");
-
-		return -EINVAL;
-	}
-
-	if (fc_conf->mode == RTE_FC_NONE ||
-	    fc_conf->mode == RTE_FC_RX_PAUSE) {
-		int ret, en;
-
-		en = fc_conf->mode == RTE_FC_NONE ? 0 : 1;
-		ret = pp2_ppio_set_rx_pause(priv->ppio, en);
-		if (ret)
-			RTE_LOG(ERR, PMD,
-				"Failed to change flowctrl on RX side\n");
-
-		return ret;
-	}
-
-	return 0;
-}
-
-/**
- * Update RSS hash configuration.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param rss_conf
- *   Pointer to RSS configuration.
- *
- * @return
- *   0 on success, negative error value otherwise.
- */
-static int
-mrvl_rss_hash_update(struct rte_eth_dev *dev,
-		     struct rte_eth_rss_conf *rss_conf)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-
-	if (priv->isolated)
-		return -ENOTSUP;
-
-	return mrvl_configure_rss(priv, rss_conf);
-}
-
-/**
- * DPDK callback to get RSS hash configuration.
- *
- * @param dev
- *   Pointer to Ethernet device structure.
- * @param rss_conf
- *   Pointer to RSS configuration.
- *
- * @return
- *   Always 0.
- */
-static int
-mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
-		       struct rte_eth_rss_conf *rss_conf)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-	enum pp2_ppio_hash_type hash_type =
-		priv->ppio_params.inqs_params.hash_type;
-
-	rss_conf->rss_key = NULL;
-
-	if (hash_type == PP2_PPIO_HASH_T_NONE)
-		rss_conf->rss_hf = 0;
-	else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
-		rss_conf->rss_hf = ETH_RSS_IPV4;
-	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
-	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
-		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
-
-	return 0;
-}
-
-/**
- * DPDK callback to get rte_flow callbacks.
- *
- * @param dev
- *   Pointer to the device structure.
- * @param filter_type
- *   Flow filter type.
- * @param filter_op
- *   Flow filter operation.
- * @param arg
- *   Pointer to pass the flow ops.
- *
- * @return
- *   0 on success, negative error value otherwise.
- */
-static int
-mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
-		     enum rte_filter_type filter_type,
-		     enum rte_filter_op filter_op, void *arg)
-{
-	switch (filter_type) {
-	case RTE_ETH_FILTER_GENERIC:
-		if (filter_op != RTE_ETH_FILTER_GET)
-			return -EINVAL;
-		*(const void **)arg = &mrvl_flow_ops;
-		return 0;
-	default:
-		RTE_LOG(WARNING, PMD, "Filter type (%d) not supported\n",
-			filter_type);
-		return -EINVAL;
-	}
-}
-
-static const struct eth_dev_ops mrvl_ops = {
-	.dev_configure = mrvl_dev_configure,
-	.dev_start = mrvl_dev_start,
-	.dev_stop = mrvl_dev_stop,
-	.dev_set_link_up = mrvl_dev_set_link_up,
-	.dev_set_link_down = mrvl_dev_set_link_down,
-	.dev_close = mrvl_dev_close,
-	.link_update = mrvl_link_update,
-	.promiscuous_enable = mrvl_promiscuous_enable,
-	.allmulticast_enable = mrvl_allmulticast_enable,
-	.promiscuous_disable = mrvl_promiscuous_disable,
-	.allmulticast_disable = mrvl_allmulticast_disable,
-	.mac_addr_remove = mrvl_mac_addr_remove,
-	.mac_addr_add = mrvl_mac_addr_add,
-	.mac_addr_set = mrvl_mac_addr_set,
-	.mtu_set = mrvl_mtu_set,
-	.stats_get = mrvl_stats_get,
-	.stats_reset = mrvl_stats_reset,
-	.xstats_get = mrvl_xstats_get,
-	.xstats_reset = mrvl_xstats_reset,
-	.xstats_get_names = mrvl_xstats_get_names,
-	.dev_infos_get = mrvl_dev_infos_get,
-	.dev_supported_ptypes_get = mrvl_dev_supported_ptypes_get,
-	.rxq_info_get = mrvl_rxq_info_get,
-	.txq_info_get = mrvl_txq_info_get,
-	.vlan_filter_set = mrvl_vlan_filter_set,
-	.tx_queue_start = mrvl_tx_queue_start,
-	.tx_queue_stop = mrvl_tx_queue_stop,
-	.rx_queue_setup = mrvl_rx_queue_setup,
-	.rx_queue_release = mrvl_rx_queue_release,
-	.tx_queue_setup = mrvl_tx_queue_setup,
-	.tx_queue_release = mrvl_tx_queue_release,
-	.flow_ctrl_get = mrvl_flow_ctrl_get,
-	.flow_ctrl_set = mrvl_flow_ctrl_set,
-	.rss_hash_update = mrvl_rss_hash_update,
-	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
-	.filter_ctrl = mrvl_eth_filter_ctrl,
-};
-
-/**
- * Return packet type information and l3/l4 offsets.
- *
- * @param desc
- *   Pointer to the received packet descriptor.
- * @param l3_offset
- *   l3 packet offset.
- * @param l4_offset
- *   l4 packet offset.
- *
- * @return
- *   Packet type information.
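/*
 * Sketch of how the rx path converts the descriptor's l3/l4 byte offsets
 * into mbuf header lengths: l2_len is simply the l3 offset, and l3_len is
 * the distance between the two offsets. For ARP, which carries no l4
 * header, the parser below substitutes a synthetic l4 offset (l3 offset
 * plus the ARP length) so the same subtraction stays valid.
 */
#include <stdint.h>

static void
sketch_offsets_to_hdr_lens(uint8_t l3_offset, uint8_t l4_offset,
			   uint16_t *l2_len, uint16_t *l3_len)
{
	*l2_len = l3_offset;			/* bytes before the l3 header */
	*l3_len = l4_offset - l3_offset;	/* l3 header length */
}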
- */ -static inline uint64_t -mrvl_desc_to_packet_type_and_offset(struct pp2_ppio_desc *desc, - uint8_t *l3_offset, uint8_t *l4_offset) -{ - enum pp2_inq_l3_type l3_type; - enum pp2_inq_l4_type l4_type; - uint64_t packet_type; - - pp2_ppio_inq_desc_get_l3_info(desc, &l3_type, l3_offset); - pp2_ppio_inq_desc_get_l4_info(desc, &l4_type, l4_offset); - - packet_type = RTE_PTYPE_L2_ETHER; - - switch (l3_type) { - case PP2_INQ_L3_TYPE_IPV4_NO_OPTS: - packet_type |= RTE_PTYPE_L3_IPV4; - break; - case PP2_INQ_L3_TYPE_IPV4_OK: - packet_type |= RTE_PTYPE_L3_IPV4_EXT; - break; - case PP2_INQ_L3_TYPE_IPV4_TTL_ZERO: - packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN; - break; - case PP2_INQ_L3_TYPE_IPV6_NO_EXT: - packet_type |= RTE_PTYPE_L3_IPV6; - break; - case PP2_INQ_L3_TYPE_IPV6_EXT: - packet_type |= RTE_PTYPE_L3_IPV6_EXT; - break; - case PP2_INQ_L3_TYPE_ARP: - packet_type |= RTE_PTYPE_L2_ETHER_ARP; - /* - * In case of ARP l4_offset is set to wrong value. - * Set it to proper one so that later on mbuf->l3_len can be - * calculated subtracting l4_offset and l3_offset. - */ - *l4_offset = *l3_offset + MRVL_ARP_LENGTH; - break; - default: - RTE_LOG(DEBUG, PMD, "Failed to recognise l3 packet type\n"); - break; - } - - switch (l4_type) { - case PP2_INQ_L4_TYPE_TCP: - packet_type |= RTE_PTYPE_L4_TCP; - break; - case PP2_INQ_L4_TYPE_UDP: - packet_type |= RTE_PTYPE_L4_UDP; - break; - default: - RTE_LOG(DEBUG, PMD, "Failed to recognise l4 packet type\n"); - break; - } - - return packet_type; -} - -/** - * Get offload information from the received packet descriptor. - * - * @param desc - * Pointer to the received packet descriptor. - * - * @return - * Mbuf offload flags. - */ -static inline uint64_t -mrvl_desc_to_ol_flags(struct pp2_ppio_desc *desc) -{ - uint64_t flags; - enum pp2_inq_desc_status status; - - status = pp2_ppio_inq_desc_get_l3_pkt_error(desc); - if (unlikely(status != PP2_DESC_ERR_OK)) - flags = PKT_RX_IP_CKSUM_BAD; - else - flags = PKT_RX_IP_CKSUM_GOOD; - - status = pp2_ppio_inq_desc_get_l4_pkt_error(desc); - if (unlikely(status != PP2_DESC_ERR_OK)) - flags |= PKT_RX_L4_CKSUM_BAD; - else - flags |= PKT_RX_L4_CKSUM_GOOD; - - return flags; -} - -/** - * DPDK callback for receive. - * - * @param rxq - * Generic pointer to the receive queue. - * @param rx_pkts - * Array to store received packets. - * @param nb_pkts - * Maximum number of packets in array. - * - * @return - * Number of packets successfully received. 
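/*
 * The bpool refill/shrink policy used in the receive burst below, reduced
 * to its decision logic: grow by a burst when the pool drops to the
 * minimum (or an empty burst suggests starvation below the configured
 * size), shrink back to the configured size when it grows past the
 * maximum. The thresholds are parameters of the sketch, not driver
 * constants.
 */
static int
sketch_bpool_adjust(unsigned int num, unsigned int min_size,
		    unsigned int init_size, unsigned int max_size,
		    int rx_done)
{
	if (num <= min_size || (!rx_done && num < init_size))
		return 1;			/* grow: put a burst back */
	if (num > max_size)
		return -(int)(num - init_size);	/* shrink to init_size */
	return 0;				/* leave the pool alone */
}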
- */ -static uint16_t -mrvl_rx_pkt_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) -{ - struct mrvl_rxq *q = rxq; - struct pp2_ppio_desc descs[nb_pkts]; - struct pp2_bpool *bpool; - int i, ret, rx_done = 0; - int num; - struct pp2_hif *hif; - unsigned int core_id = rte_lcore_id(); - - hif = mrvl_get_hif(q->priv, core_id); - - if (unlikely(!q->priv->ppio || !hif)) - return 0; - - bpool = q->priv->bpool; - - ret = pp2_ppio_recv(q->priv->ppio, q->priv->rxq_map[q->queue_id].tc, - q->priv->rxq_map[q->queue_id].inq, descs, &nb_pkts); - if (unlikely(ret < 0)) { - RTE_LOG(ERR, PMD, "Failed to receive packets\n"); - return 0; - } - mrvl_port_bpool_size[bpool->pp2_id][bpool->id][core_id] -= nb_pkts; - - for (i = 0; i < nb_pkts; i++) { - struct rte_mbuf *mbuf; - uint8_t l3_offset, l4_offset; - enum pp2_inq_desc_status status; - uint64_t addr; - - if (likely(nb_pkts - i > MRVL_MUSDK_PREFETCH_SHIFT)) { - struct pp2_ppio_desc *pref_desc; - u64 pref_addr; - - pref_desc = &descs[i + MRVL_MUSDK_PREFETCH_SHIFT]; - pref_addr = cookie_addr_high | - pp2_ppio_inq_desc_get_cookie(pref_desc); - rte_mbuf_prefetch_part1((struct rte_mbuf *)(pref_addr)); - rte_mbuf_prefetch_part2((struct rte_mbuf *)(pref_addr)); - } - - addr = cookie_addr_high | - pp2_ppio_inq_desc_get_cookie(&descs[i]); - mbuf = (struct rte_mbuf *)addr; - rte_pktmbuf_reset(mbuf); - - /* drop packet in case of mac, overrun or resource error */ - status = pp2_ppio_inq_desc_get_l2_pkt_error(&descs[i]); - if (unlikely(status != PP2_DESC_ERR_OK)) { - struct pp2_buff_inf binf = { - .addr = rte_mbuf_data_iova_default(mbuf), - .cookie = (pp2_cookie_t)(uint64_t)mbuf, - }; - - pp2_bpool_put_buff(hif, bpool, &binf); - mrvl_port_bpool_size - [bpool->pp2_id][bpool->id][core_id]++; - q->drop_mac++; - continue; - } - - mbuf->data_off += MRVL_PKT_EFFEC_OFFS; - mbuf->pkt_len = pp2_ppio_inq_desc_get_pkt_len(&descs[i]); - mbuf->data_len = mbuf->pkt_len; - mbuf->port = q->port_id; - mbuf->packet_type = - mrvl_desc_to_packet_type_and_offset(&descs[i], - &l3_offset, - &l4_offset); - mbuf->l2_len = l3_offset; - mbuf->l3_len = l4_offset - l3_offset; - - if (likely(q->cksum_enabled)) - mbuf->ol_flags = mrvl_desc_to_ol_flags(&descs[i]); - - rx_pkts[rx_done++] = mbuf; - q->bytes_recv += mbuf->pkt_len; - } - - if (rte_spinlock_trylock(&q->priv->lock) == 1) { - num = mrvl_get_bpool_size(bpool->pp2_id, bpool->id); - - if (unlikely(num <= q->priv->bpool_min_size || - (!rx_done && num < q->priv->bpool_init_size))) { - ret = mrvl_fill_bpool(q, MRVL_BURST_SIZE); - if (ret) - RTE_LOG(ERR, PMD, "Failed to fill bpool\n"); - } else if (unlikely(num > q->priv->bpool_max_size)) { - int i; - int pkt_to_remove = num - q->priv->bpool_init_size; - struct rte_mbuf *mbuf; - struct pp2_buff_inf buff; - - RTE_LOG(DEBUG, PMD, - "\nport-%d:%d: bpool %d oversize - remove %d buffers (pool size: %d -> %d)\n", - bpool->pp2_id, q->priv->ppio->port_id, - bpool->id, pkt_to_remove, num, - q->priv->bpool_init_size); - - for (i = 0; i < pkt_to_remove; i++) { - ret = pp2_bpool_get_buff(hif, bpool, &buff); - if (ret) - break; - mbuf = (struct rte_mbuf *) - (cookie_addr_high | buff.cookie); - rte_pktmbuf_free(mbuf); - } - mrvl_port_bpool_size - [bpool->pp2_id][bpool->id][core_id] -= i; - } - rte_spinlock_unlock(&q->priv->lock); - } - - return rx_done; -} - -/** - * Prepare offload information. - * - * @param ol_flags - * Offload flags. - * @param packet_type - * Packet type bitfield. - * @param l3_type - * Pointer to the pp2_ouq_l3_type structure. 
- * @param l4_type
- *   Pointer to the pp2_outq_l4_type structure.
- * @param gen_l3_cksum
- *   Will be set to 1 in case l3 checksum is computed.
- * @param gen_l4_cksum
- *   Will be set to 1 in case l4 checksum is computed.
- *
- * @return
- *   0 on success, negative error value otherwise.
- */
-static inline int
-mrvl_prepare_proto_info(uint64_t ol_flags, uint32_t packet_type,
-			enum pp2_outq_l3_type *l3_type,
-			enum pp2_outq_l4_type *l4_type,
-			int *gen_l3_cksum,
-			int *gen_l4_cksum)
-{
-	/*
-	 * Based on ol_flags prepare information
-	 * for pp2_ppio_outq_desc_set_proto_info() which sets up the
-	 * descriptor for offloading.
-	 */
-	if (ol_flags & PKT_TX_IPV4) {
-		*l3_type = PP2_OUTQ_L3_TYPE_IPV4;
-		*gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0;
-	} else if (ol_flags & PKT_TX_IPV6) {
-		*l3_type = PP2_OUTQ_L3_TYPE_IPV6;
-		/* no checksum for ipv6 header */
-		*gen_l3_cksum = 0;
-	} else {
-		/* anything else is not supported; stop processing */
-		return -1;
-	}
-
-	ol_flags &= PKT_TX_L4_MASK;
-	if ((packet_type & RTE_PTYPE_L4_TCP) &&
-	    ol_flags == PKT_TX_TCP_CKSUM) {
-		*l4_type = PP2_OUTQ_L4_TYPE_TCP;
-		*gen_l4_cksum = 1;
-	} else if ((packet_type & RTE_PTYPE_L4_UDP) &&
-		   ol_flags == PKT_TX_UDP_CKSUM) {
-		*l4_type = PP2_OUTQ_L4_TYPE_UDP;
-		*gen_l4_cksum = 1;
-	} else {
-		*l4_type = PP2_OUTQ_L4_TYPE_OTHER;
-		/* no checksum for other type */
-		*gen_l4_cksum = 0;
-	}
-
-	return 0;
-}
-
-/**
- * Release already sent buffers to bpool (buffer-pool).
- *
- * @param ppio
- *   Pointer to the port structure.
- * @param hif
- *   Pointer to the MUSDK hardware interface.
- * @param core_id
- *   Lcore id performing the release.
- * @param sq
- *   Pointer to the shadow queue.
- * @param qid
- *   Queue id number.
- * @param force
- *   Force releasing packets.
- */
-static inline void
-mrvl_free_sent_buffers(struct pp2_ppio *ppio, struct pp2_hif *hif,
-		       unsigned int core_id, struct mrvl_shadow_txq *sq,
-		       int qid, int force)
-{
-	struct buff_release_entry *entry;
-	uint16_t nb_done = 0, num = 0, skip_bufs = 0;
-	int i;
-
-	pp2_ppio_get_num_outq_done(ppio, hif, qid, &nb_done);
-
-	sq->num_to_release += nb_done;
-
-	if (likely(!force &&
-		   sq->num_to_release < MRVL_PP2_BUF_RELEASE_BURST_SIZE))
-		return;
-
-	nb_done = sq->num_to_release;
-	sq->num_to_release = 0;
-
-	for (i = 0; i < nb_done; i++) {
-		entry = &sq->ent[sq->tail + num];
-		if (unlikely(!entry->buff.addr)) {
-			RTE_LOG(ERR, PMD,
-				"Shadow memory @%d: cookie(%lx), pa(%lx)!\n",
-				sq->tail, (u64)entry->buff.cookie,
-				(u64)entry->buff.addr);
-			skip_bufs = 1;
-			goto skip;
-		}
-
-		if (unlikely(!entry->bpool)) {
-			struct rte_mbuf *mbuf;
-
-			mbuf = (struct rte_mbuf *)
-			       (cookie_addr_high | entry->buff.cookie);
-			rte_pktmbuf_free(mbuf);
-			skip_bufs = 1;
-			goto skip;
-		}
-
-		mrvl_port_bpool_size
-			[entry->bpool->pp2_id][entry->bpool->id][core_id]++;
-		num++;
-		if (unlikely(sq->tail + num == MRVL_PP2_TX_SHADOWQ_SIZE))
-			goto skip;
-		continue;
-skip:
-		if (likely(num))
-			pp2_bpool_put_buffs(hif, &sq->ent[sq->tail], &num);
-		num += skip_bufs;
-		sq->tail = (sq->tail + num) & MRVL_PP2_TX_SHADOWQ_MASK;
-		sq->size -= num;
-		num = 0;
-		skip_bufs = 0;
-	}
-
-	if (likely(num)) {
-		pp2_bpool_put_buffs(hif, &sq->ent[sq->tail], &num);
-		sq->tail = (sq->tail + num) & MRVL_PP2_TX_SHADOWQ_MASK;
-		sq->size -= num;
-	}
-}
-
-/**
- * DPDK callback for transmit.
- *
- * @param txq
- *   Generic pointer to the transmit queue.
- * @param tx_pkts
- *   Packets to transmit.
- * @param nb_pkts
- *   Number of packets in array.
- *
- * @return
- *   Number of packets successfully transmitted.
- */ -static uint16_t -mrvl_tx_pkt_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) -{ - struct mrvl_txq *q = txq; - struct mrvl_shadow_txq *sq; - struct pp2_hif *hif; - struct pp2_ppio_desc descs[nb_pkts]; - unsigned int core_id = rte_lcore_id(); - int i, ret, bytes_sent = 0; - uint16_t num, sq_free_size; - uint64_t addr; - - hif = mrvl_get_hif(q->priv, core_id); - sq = &q->shadow_txqs[core_id]; - - if (unlikely(!q->priv->ppio || !hif)) - return 0; - - if (sq->size) - mrvl_free_sent_buffers(q->priv->ppio, hif, core_id, - sq, q->queue_id, 0); - - sq_free_size = MRVL_PP2_TX_SHADOWQ_SIZE - sq->size - 1; - if (unlikely(nb_pkts > sq_free_size)) { - RTE_LOG(DEBUG, PMD, - "No room in shadow queue for %d packets! %d packets will be sent.\n", - nb_pkts, sq_free_size); - nb_pkts = sq_free_size; - } - - for (i = 0; i < nb_pkts; i++) { - struct rte_mbuf *mbuf = tx_pkts[i]; - int gen_l3_cksum, gen_l4_cksum; - enum pp2_outq_l3_type l3_type; - enum pp2_outq_l4_type l4_type; - - if (likely(nb_pkts - i > MRVL_MUSDK_PREFETCH_SHIFT)) { - struct rte_mbuf *pref_pkt_hdr; - - pref_pkt_hdr = tx_pkts[i + MRVL_MUSDK_PREFETCH_SHIFT]; - rte_mbuf_prefetch_part1(pref_pkt_hdr); - rte_mbuf_prefetch_part2(pref_pkt_hdr); - } - - sq->ent[sq->head].buff.cookie = (pp2_cookie_t)(uint64_t)mbuf; - sq->ent[sq->head].buff.addr = - rte_mbuf_data_iova_default(mbuf); - sq->ent[sq->head].bpool = - (unlikely(mbuf->port >= RTE_MAX_ETHPORTS || - mbuf->refcnt > 1)) ? NULL : - mrvl_port_to_bpool_lookup[mbuf->port]; - sq->head = (sq->head + 1) & MRVL_PP2_TX_SHADOWQ_MASK; - sq->size++; - - pp2_ppio_outq_desc_reset(&descs[i]); - pp2_ppio_outq_desc_set_phys_addr(&descs[i], - rte_pktmbuf_iova(mbuf)); - pp2_ppio_outq_desc_set_pkt_offset(&descs[i], 0); - pp2_ppio_outq_desc_set_pkt_len(&descs[i], - rte_pktmbuf_pkt_len(mbuf)); - - bytes_sent += rte_pktmbuf_pkt_len(mbuf); - /* - * in case unsupported ol_flags were passed - * do not update descriptor offload information - */ - ret = mrvl_prepare_proto_info(mbuf->ol_flags, mbuf->packet_type, - &l3_type, &l4_type, &gen_l3_cksum, - &gen_l4_cksum); - if (unlikely(ret)) - continue; - - pp2_ppio_outq_desc_set_proto_info(&descs[i], l3_type, l4_type, - mbuf->l2_len, - mbuf->l2_len + mbuf->l3_len, - gen_l3_cksum, gen_l4_cksum); - } - - num = nb_pkts; - pp2_ppio_send(q->priv->ppio, hif, q->queue_id, descs, &nb_pkts); - /* number of packets that were not sent */ - if (unlikely(num > nb_pkts)) { - for (i = nb_pkts; i < num; i++) { - sq->head = (MRVL_PP2_TX_SHADOWQ_SIZE + sq->head - 1) & - MRVL_PP2_TX_SHADOWQ_MASK; - addr = cookie_addr_high | sq->ent[sq->head].buff.cookie; - bytes_sent -= - rte_pktmbuf_pkt_len((struct rte_mbuf *)addr); - } - sq->size -= num - nb_pkts; - } - - q->bytes_sent += bytes_sent; - - return nb_pkts; -} - -/** - * Initialize packet processor. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_init_pp2(void) -{ - struct pp2_init_params init_params; - - memset(&init_params, 0, sizeof(init_params)); - init_params.hif_reserved_map = MRVL_MUSDK_HIFS_RESERVED; - init_params.bm_pool_reserved_map = MRVL_MUSDK_BPOOLS_RESERVED; - init_params.rss_tbl_reserved_map = MRVL_MUSDK_RSS_RESERVED; - - return pp2_init(&init_params); -} - -/** - * Deinitialize packet processor. - * - * @return - * 0 on success, negative error value otherwise. - */ -static void -mrvl_deinit_pp2(void) -{ - pp2_deinit(); -} - -/** - * Create private device structure. - * - * @param dev_name - * Pointer to the port name passed in the initialization parameters. 
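/*
 * Sketch of the shadow-queue rollback performed in the transmit burst
 * above when the hardware accepts fewer descriptors than were staged: the
 * head index walks backwards, modulo the power-of-two size, over entries
 * that were published but not sent. Names and sizes are local to the
 * sketch.
 */
#include <stdint.h>

#define SKETCH_SQ_SIZE 2048			/* power of two */
#define SKETCH_SQ_MASK (SKETCH_SQ_SIZE - 1)

static uint16_t
sketch_rollback_head(uint16_t head, uint16_t staged, uint16_t sent)
{
	uint16_t i;

	for (i = sent; i < staged; i++)
		head = (SKETCH_SQ_SIZE + head - 1) & SKETCH_SQ_MASK;

	return head;
}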
- * - * @return - * Pointer to the newly allocated private device structure. - */ -static struct mrvl_priv * -mrvl_priv_create(const char *dev_name) -{ - struct pp2_bpool_params bpool_params; - char match[MRVL_MATCH_LEN]; - struct mrvl_priv *priv; - int ret, bpool_bit; - - priv = rte_zmalloc_socket(dev_name, sizeof(*priv), 0, rte_socket_id()); - if (!priv) - return NULL; - - ret = pp2_netdev_get_ppio_info((char *)(uintptr_t)dev_name, - &priv->pp_id, &priv->ppio_id); - if (ret) - goto out_free_priv; - - bpool_bit = mrvl_reserve_bit(&used_bpools[priv->pp_id], - PP2_BPOOL_NUM_POOLS); - if (bpool_bit < 0) - goto out_free_priv; - priv->bpool_bit = bpool_bit; - - snprintf(match, sizeof(match), "pool-%d:%d", priv->pp_id, - priv->bpool_bit); - memset(&bpool_params, 0, sizeof(bpool_params)); - bpool_params.match = match; - bpool_params.buff_len = MRVL_PKT_SIZE_MAX + MRVL_PKT_EFFEC_OFFS; - ret = pp2_bpool_init(&bpool_params, &priv->bpool); - if (ret) - goto out_clear_bpool_bit; - - priv->ppio_params.type = PP2_PPIO_T_NIC; - rte_spinlock_init(&priv->lock); - - return priv; -out_clear_bpool_bit: - used_bpools[priv->pp_id] &= ~(1 << priv->bpool_bit); -out_free_priv: - rte_free(priv); - return NULL; -} - -/** - * Create device representing Ethernet port. - * - * @param name - * Pointer to the port's name. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name) -{ - int ret, fd = socket(AF_INET, SOCK_DGRAM, 0); - struct rte_eth_dev *eth_dev; - struct mrvl_priv *priv; - struct ifreq req; - - eth_dev = rte_eth_dev_allocate(name); - if (!eth_dev) - return -ENOMEM; - - priv = mrvl_priv_create(name); - if (!priv) { - ret = -ENOMEM; - goto out_free_dev; - } - - eth_dev->data->mac_addrs = - rte_zmalloc("mac_addrs", - ETHER_ADDR_LEN * MRVL_MAC_ADDRS_MAX, 0); - if (!eth_dev->data->mac_addrs) { - RTE_LOG(ERR, PMD, "Failed to allocate space for eth addrs\n"); - ret = -ENOMEM; - goto out_free_priv; - } - - memset(&req, 0, sizeof(req)); - strcpy(req.ifr_name, name); - ret = ioctl(fd, SIOCGIFHWADDR, &req); - if (ret) - goto out_free_mac; - - memcpy(eth_dev->data->mac_addrs[0].addr_bytes, - req.ifr_addr.sa_data, ETHER_ADDR_LEN); - - eth_dev->rx_pkt_burst = mrvl_rx_pkt_burst; - eth_dev->tx_pkt_burst = mrvl_tx_pkt_burst; - eth_dev->data->kdrv = RTE_KDRV_NONE; - eth_dev->data->dev_private = priv; - eth_dev->device = &vdev->device; - eth_dev->dev_ops = &mrvl_ops; - - return 0; -out_free_mac: - rte_free(eth_dev->data->mac_addrs); -out_free_dev: - rte_eth_dev_release_port(eth_dev); -out_free_priv: - rte_free(priv); - - return ret; -} - -/** - * Cleanup previously created device representing Ethernet port. - * - * @param name - * Pointer to the port name. - */ -static void -mrvl_eth_dev_destroy(const char *name) -{ - struct rte_eth_dev *eth_dev; - struct mrvl_priv *priv; - - eth_dev = rte_eth_dev_allocated(name); - if (!eth_dev) - return; - - priv = eth_dev->data->dev_private; - pp2_bpool_deinit(priv->bpool); - used_bpools[priv->pp_id] &= ~(1 << priv->bpool_bit); - rte_free(priv); - rte_free(eth_dev->data->mac_addrs); - rte_eth_dev_release_port(eth_dev); -} - -/** - * Callback used by rte_kvargs_process() during argument parsing. - * - * @param key - * Pointer to the parsed key (unused). - * @param value - * Pointer to the parsed value. - * @param extra_args - * Pointer to the extra arguments which contains address of the - * table of pointers to parsed interface names. - * - * @return - * Always 0. 
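/*
 * mrvl_priv_create() above reserves a bpool index with mrvl_reserve_bit()
 * and releases it later with "used_bpools[...] &= ~(1 << bit)". The
 * helper itself does not appear in this excerpt; a plausible
 * first-free-bit implementation, assumed rather than quoted, looks like
 * this:
 */
static int
sketch_reserve_bit(int *bitmap, int max)
{
	int i;

	for (i = 0; i < max; i++) {
		if (!(*bitmap & (1 << i))) {
			*bitmap |= 1 << i;	/* mark the slot taken */
			return i;
		}
	}

	return -1;				/* all entries taken */
}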
- */ -static int -mrvl_get_ifnames(const char *key __rte_unused, const char *value, - void *extra_args) -{ - struct mrvl_ifnames *ifnames = extra_args; - - ifnames->names[ifnames->idx++] = value; - - return 0; -} - -/** - * Deinitialize per-lcore MUSDK hardware interfaces (hifs). - */ -static void -mrvl_deinit_hifs(void) -{ - int i; - - for (i = mrvl_lcore_first; i <= mrvl_lcore_last; i++) { - if (hifs[i]) - pp2_hif_deinit(hifs[i]); - } - used_hifs = MRVL_MUSDK_HIFS_RESERVED; - memset(hifs, 0, sizeof(hifs)); -} - -/** - * DPDK callback to register the virtual device. - * - * @param vdev - * Pointer to the virtual device. - * - * @return - * 0 on success, negative error value otherwise. - */ -static int -rte_pmd_mrvl_probe(struct rte_vdev_device *vdev) -{ - struct rte_kvargs *kvlist; - struct mrvl_ifnames ifnames; - int ret = -EINVAL; - uint32_t i, ifnum, cfgnum; - const char *params; - - params = rte_vdev_device_args(vdev); - if (!params) - return -EINVAL; - - kvlist = rte_kvargs_parse(params, valid_args); - if (!kvlist) - return -EINVAL; - - ifnum = rte_kvargs_count(kvlist, MRVL_IFACE_NAME_ARG); - if (ifnum > RTE_DIM(ifnames.names)) - goto out_free_kvlist; - - ifnames.idx = 0; - rte_kvargs_process(kvlist, MRVL_IFACE_NAME_ARG, - mrvl_get_ifnames, &ifnames); - - - /* - * The below system initialization should be done only once, - * on the first provided configuration file - */ - if (!mrvl_qos_cfg) { - cfgnum = rte_kvargs_count(kvlist, MRVL_CFG_ARG); - RTE_LOG(INFO, PMD, "Parsing config file!\n"); - if (cfgnum > 1) { - RTE_LOG(ERR, PMD, "Cannot handle more than one config file!\n"); - goto out_free_kvlist; - } else if (cfgnum == 1) { - rte_kvargs_process(kvlist, MRVL_CFG_ARG, - mrvl_get_qoscfg, &mrvl_qos_cfg); - } - } - - if (mrvl_dev_num) - goto init_devices; - - RTE_LOG(INFO, PMD, "Perform MUSDK initializations\n"); - /* - * ret == -EEXIST is correct, it means DMA - * has been already initialized (by another PMD). - */ - ret = mv_sys_dma_mem_init(MRVL_MUSDK_DMA_MEMSIZE); - if (ret < 0) { - if (ret != -EEXIST) - goto out_free_kvlist; - else - RTE_LOG(INFO, PMD, - "DMA memory has been already initialized by a different driver.\n"); - } - - ret = mrvl_init_pp2(); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to init PP!\n"); - goto out_deinit_dma; - } - - memset(mrvl_port_bpool_size, 0, sizeof(mrvl_port_bpool_size)); - memset(mrvl_port_to_bpool_lookup, 0, sizeof(mrvl_port_to_bpool_lookup)); - - mrvl_lcore_first = RTE_MAX_LCORE; - mrvl_lcore_last = 0; - -init_devices: - for (i = 0; i < ifnum; i++) { - RTE_LOG(INFO, PMD, "Creating %s\n", ifnames.names[i]); - ret = mrvl_eth_dev_create(vdev, ifnames.names[i]); - if (ret) - goto out_cleanup; - } - mrvl_dev_num += ifnum; - - rte_kvargs_free(kvlist); - - return 0; -out_cleanup: - for (; i > 0; i--) - mrvl_eth_dev_destroy(ifnames.names[i]); - - if (mrvl_dev_num == 0) - mrvl_deinit_pp2(); -out_deinit_dma: - if (mrvl_dev_num == 0) - mv_sys_dma_mem_destroy(); -out_free_kvlist: - rte_kvargs_free(kvlist); - - return ret; -} - -/** - * DPDK callback to remove virtual device. - * - * @param vdev - * Pointer to the removed virtual device. - * - * @return - * 0 on success, negative error value otherwise. 
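/*
 * Minimal sketch of the rte_kvargs flow used by the probe function above:
 * parse the vdev argument string, count one key, then clean up. The
 * argument string and key name below are examples, e.g.
 * "iface=eth0,iface=eth2"; per-occurrence callbacks would be run with
 * rte_kvargs_process() as in the probe code.
 */
#include <rte_kvargs.h>

static int
sketch_count_ifaces(const char *args)
{
	static const char * const keys[] = { "iface", NULL };
	struct rte_kvargs *kvlist = rte_kvargs_parse(args, keys);
	int n;

	if (kvlist == NULL)
		return -1;

	n = rte_kvargs_count(kvlist, "iface");
	rte_kvargs_free(kvlist);

	return n;
}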
- */ -static int -rte_pmd_mrvl_remove(struct rte_vdev_device *vdev) -{ - int i; - const char *name; - - name = rte_vdev_device_name(vdev); - if (!name) - return -EINVAL; - - RTE_LOG(INFO, PMD, "Removing %s\n", name); - - for (i = 0; i < rte_eth_dev_count(); i++) { - char ifname[RTE_ETH_NAME_MAX_LEN]; - - rte_eth_dev_get_name_by_port(i, ifname); - mrvl_eth_dev_destroy(ifname); - mrvl_dev_num--; - } - - if (mrvl_dev_num == 0) { - RTE_LOG(INFO, PMD, "Perform MUSDK deinit\n"); - mrvl_deinit_hifs(); - mrvl_deinit_pp2(); - mv_sys_dma_mem_destroy(); - } - - return 0; -} - -static struct rte_vdev_driver pmd_mrvl_drv = { - .probe = rte_pmd_mrvl_probe, - .remove = rte_pmd_mrvl_remove, -}; - -RTE_PMD_REGISTER_VDEV(net_mrvl, pmd_mrvl_drv); -RTE_PMD_REGISTER_ALIAS(net_mrvl, eth_mrvl); diff --git a/drivers/net/mrvl/mrvl_ethdev.h b/drivers/net/mrvl/mrvl_ethdev.h deleted file mode 100644 index 3a428092df..0000000000 --- a/drivers/net/mrvl/mrvl_ethdev.h +++ /dev/null @@ -1,101 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2017 Marvell International Ltd. - * Copyright(c) 2017 Semihalf. - * All rights reserved. - */ - -#ifndef _MRVL_ETHDEV_H_ -#define _MRVL_ETHDEV_H_ - -#include -#include - -#include -#include -#include -#include -#include -#include - -/** Maximum number of rx queues per port */ -#define MRVL_PP2_RXQ_MAX 32 - -/** Maximum number of tx queues per port */ -#define MRVL_PP2_TXQ_MAX 8 - -/** Minimum number of descriptors in tx queue */ -#define MRVL_PP2_TXD_MIN 16 - -/** Maximum number of descriptors in tx queue */ -#define MRVL_PP2_TXD_MAX 2048 - -/** Tx queue descriptors alignment */ -#define MRVL_PP2_TXD_ALIGN 16 - -/** Minimum number of descriptors in rx queue */ -#define MRVL_PP2_RXD_MIN 16 - -/** Maximum number of descriptors in rx queue */ -#define MRVL_PP2_RXD_MAX 2048 - -/** Rx queue descriptors alignment */ -#define MRVL_PP2_RXD_ALIGN 16 - -/** Maximum number of descriptors in tx aggregated queue */ -#define MRVL_PP2_AGGR_TXQD_MAX 2048 - -/** Maximum number of Traffic Classes. */ -#define MRVL_PP2_TC_MAX 8 - -/** Packet offset inside RX buffer. */ -#define MRVL_PKT_OFFS 64 - -/** Maximum number of descriptors in shadow queue. Must be power of 2 */ -#define MRVL_PP2_TX_SHADOWQ_SIZE MRVL_PP2_TXD_MAX - -/** Shadow queue size mask (since shadow queue size is power of 2) */ -#define MRVL_PP2_TX_SHADOWQ_MASK (MRVL_PP2_TX_SHADOWQ_SIZE - 1) - -/** Minimum number of sent buffers to release from shadow queue to BM */ -#define MRVL_PP2_BUF_RELEASE_BURST_SIZE 64 - -struct mrvl_priv { - /* Hot fields, used in fast path. */ - struct pp2_bpool *bpool; /**< BPool pointer */ - struct pp2_ppio *ppio; /**< Port handler pointer */ - rte_spinlock_t lock; /**< Spinlock for checking bpool status */ - uint16_t bpool_max_size; /**< BPool maximum size */ - uint16_t bpool_min_size; /**< BPool minimum size */ - uint16_t bpool_init_size; /**< Configured BPool size */ - - /** Mapping for DPDK rx queue->(TC, MRVL relative inq) */ - struct { - uint8_t tc; /**< Traffic Class */ - uint8_t inq; /**< Relative in-queue number */ - } rxq_map[MRVL_PP2_RXQ_MAX] __rte_cache_aligned; - - /* Configuration data, used sporadically. 
*/ - uint8_t pp_id; - uint8_t ppio_id; - uint8_t bpool_bit; - uint8_t rss_hf_tcp; - uint8_t uc_mc_flushed; - uint8_t vlan_flushed; - uint8_t isolated; - - struct pp2_ppio_params ppio_params; - struct pp2_cls_qos_tbl_params qos_tbl_params; - struct pp2_cls_tbl *qos_tbl; - uint16_t nb_rx_queues; - - struct pp2_cls_tbl_params cls_tbl_params; - struct pp2_cls_tbl *cls_tbl; - uint32_t cls_tbl_pattern; - LIST_HEAD(mrvl_flows, rte_flow) flows; - - struct pp2_cls_plcr *policer; -}; - -/** Flow operations forward declaration. */ -extern const struct rte_flow_ops mrvl_flow_ops; -#endif /* _MRVL_ETHDEV_H_ */ diff --git a/drivers/net/mrvl/mrvl_flow.c b/drivers/net/mrvl/mrvl_flow.c deleted file mode 100644 index 8fd4dbfb19..0000000000 --- a/drivers/net/mrvl/mrvl_flow.c +++ /dev/null @@ -1,2759 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2018 Marvell International Ltd. - * Copyright(c) 2018 Semihalf. - * All rights reserved. - */ - -#include -#include -#include -#include - -#include - -#ifdef container_of -#undef container_of -#endif - -#include "mrvl_ethdev.h" -#include "mrvl_qos.h" -#include "env/mv_common.h" /* for BIT() */ - -/** Number of rules in the classifier table. */ -#define MRVL_CLS_MAX_NUM_RULES 20 - -/** Size of the classifier key and mask strings. */ -#define MRVL_CLS_STR_SIZE_MAX 40 - -/** Parsed fields in processed rte_flow_item. */ -enum mrvl_parsed_fields { - /* eth flags */ - F_DMAC = BIT(0), - F_SMAC = BIT(1), - F_TYPE = BIT(2), - /* vlan flags */ - F_VLAN_ID = BIT(3), - F_VLAN_PRI = BIT(4), - F_VLAN_TCI = BIT(5), /* not supported by MUSDK yet */ - /* ip4 flags */ - F_IP4_TOS = BIT(6), - F_IP4_SIP = BIT(7), - F_IP4_DIP = BIT(8), - F_IP4_PROTO = BIT(9), - /* ip6 flags */ - F_IP6_TC = BIT(10), /* not supported by MUSDK yet */ - F_IP6_SIP = BIT(11), - F_IP6_DIP = BIT(12), - F_IP6_FLOW = BIT(13), - F_IP6_NEXT_HDR = BIT(14), - /* tcp flags */ - F_TCP_SPORT = BIT(15), - F_TCP_DPORT = BIT(16), - /* udp flags */ - F_UDP_SPORT = BIT(17), - F_UDP_DPORT = BIT(18), -}; - -/** PMD-specific definition of a flow rule handle. 
*/ -struct rte_flow { - LIST_ENTRY(rte_flow) next; - - enum mrvl_parsed_fields pattern; - - struct pp2_cls_tbl_rule rule; - struct pp2_cls_cos_desc cos; - struct pp2_cls_tbl_action action; -}; - -static const enum rte_flow_item_type pattern_eth[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_vlan[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_vlan_ip[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_vlan_ip6[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_ip4[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_ip4_tcp[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_ip4_udp[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_UDP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_ip6[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_ip6_tcp[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_eth_ip6_udp[] = { - RTE_FLOW_ITEM_TYPE_ETH, - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_UDP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_vlan[] = { - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_vlan_ip[] = { - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_vlan_ip_tcp[] = { - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_vlan_ip_udp[] = { - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_UDP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_vlan_ip6[] = { - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_vlan_ip6_tcp[] = { - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_vlan_ip6_udp[] = { - RTE_FLOW_ITEM_TYPE_VLAN, - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_UDP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_ip[] = { - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_ip6[] = { - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_ip_tcp[] = { - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_ip6_tcp[] = { - RTE_FLOW_ITEM_TYPE_IPV6, - RTE_FLOW_ITEM_TYPE_TCP, - RTE_FLOW_ITEM_TYPE_END -}; - -static const enum rte_flow_item_type pattern_ip_udp[] = { - RTE_FLOW_ITEM_TYPE_IPV4, - RTE_FLOW_ITEM_TYPE_UDP, - RTE_FLOW_ITEM_TYPE_END -}; - 
-static const enum rte_flow_item_type pattern_ip6_udp[] = {
-	RTE_FLOW_ITEM_TYPE_IPV6,
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_END
-};
-
-static const enum rte_flow_item_type pattern_tcp[] = {
-	RTE_FLOW_ITEM_TYPE_TCP,
-	RTE_FLOW_ITEM_TYPE_END
-};
-
-static const enum rte_flow_item_type pattern_udp[] = {
-	RTE_FLOW_ITEM_TYPE_UDP,
-	RTE_FLOW_ITEM_TYPE_END
-};
-
-#define MRVL_VLAN_ID_MASK 0x0fff
-#define MRVL_VLAN_PRI_MASK 0x7000
-#define MRVL_IPV4_DSCP_MASK 0xfc
-#define MRVL_IPV4_ADDR_MASK 0xffffffff
-#define MRVL_IPV6_FLOW_MASK 0x0fffff
-
-/**
- * Given a flow item, return the next non-void one.
- *
- * @param items Pointer to the item in the table.
- * @returns Next non-void item, NULL otherwise.
- */
-static const struct rte_flow_item *
-mrvl_next_item(const struct rte_flow_item *items)
-{
-	const struct rte_flow_item *item = items;
-
-	for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) {
-		if (item->type != RTE_FLOW_ITEM_TYPE_VOID)
-			return item;
-	}
-
-	return NULL;
-}
-
-/**
- * Allocate memory for classifier rule key and mask fields.
- *
- * @param field Pointer to the classifier rule.
- * @returns 0 in case of success, negative value otherwise.
- */
-static int
-mrvl_alloc_key_mask(struct pp2_cls_rule_key_field *field)
-{
-	unsigned int id = rte_socket_id();
-
-	field->key = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id);
-	if (!field->key)
-		goto out;
-
-	field->mask = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id);
-	if (!field->mask)
-		goto out_mask;
-
-	return 0;
-out_mask:
-	rte_free(field->key);
-out:
-	field->key = NULL;
-	field->mask = NULL;
-	return -1;
-}
-
-/**
- * Free memory allocated for classifier rule key and mask fields.
- *
- * @param field Pointer to the classifier rule.
- */
-static void
-mrvl_free_key_mask(struct pp2_cls_rule_key_field *field)
-{
-	rte_free(field->key);
-	rte_free(field->mask);
-	field->key = NULL;
-	field->mask = NULL;
-}
-
-/**
- * Free memory allocated for all classifier rule key and mask fields.
- *
- * @param rule Pointer to the classifier table rule.
- */
-static void
-mrvl_free_all_key_mask(struct pp2_cls_tbl_rule *rule)
-{
-	int i;
-
-	for (i = 0; i < rule->num_fields; i++)
-		mrvl_free_key_mask(&rule->fields[i]);
-	rule->num_fields = 0;
-}
-
-/**
- * Initialize rte flow item parsing.
- *
- * @param item Pointer to the flow item.
- * @param spec_ptr Pointer to the specific item pointer.
- * @param mask_ptr Pointer to the specific item's mask pointer.
- * @param def_mask Pointer to the default mask.
- * @param size Size of the flow item.
- * @param error Pointer to the rte flow error.
- * @returns 0 in case of success, negative value otherwise.
- */
-static int
-mrvl_parse_init(const struct rte_flow_item *item,
-		const void **spec_ptr,
-		const void **mask_ptr,
-		const void *def_mask,
-		unsigned int size,
-		struct rte_flow_error *error)
-{
-	const uint8_t *spec;
-	const uint8_t *mask;
-	const uint8_t *last;
-	uint8_t zeros[size];
-
-	memset(zeros, 0, size);
-
-	if (item == NULL) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				   "NULL item\n");
-		return -rte_errno;
-	}
-
-	if ((item->last != NULL || item->mask != NULL) && item->spec == NULL) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_ITEM, item,
-				   "Mask or last is set without spec\n");
-		return -rte_errno;
-	}
-
-	/*
-	 * If "mask" is not set, the default mask is used,
-	 * but if the default mask is NULL, "mask" must be set.
-	 */
-	if (item->mask == NULL) {
-		if (def_mask == NULL) {
-			rte_flow_error_set(error, EINVAL,
-					   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-					   "Mask should be specified\n");
-			return -rte_errno;
-		}
-
-		mask = (const uint8_t *)def_mask;
-	} else {
-		mask = (const uint8_t *)item->mask;
-	}
-
-	spec = (const uint8_t *)item->spec;
-	last = (const uint8_t *)item->last;
-
-	if (spec == NULL) {
-		rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
-				   NULL, "Spec should be specified\n");
-		return -rte_errno;
-	}
-
-	/*
-	 * Field values in "last" that are either 0 or equal to the
-	 * corresponding values in "spec" are ignored; any other "last"
-	 * describes a range, which is not supported.
-	 */
-	if (last != NULL &&
-	    memcmp(last, zeros, size) != 0 &&
-	    memcmp(last, spec, size) != 0) {
-		rte_flow_error_set(error, ENOTSUP,
-				   RTE_FLOW_ERROR_TYPE_ITEM, NULL,
-				   "Ranging is not supported\n");
-		return -rte_errno;
-	}
-
-	*spec_ptr = spec;
-	*mask_ptr = mask;
-
-	return 0;
-}
-
-/**
- * Parse the eth flow item.
- *
- * This will create classifier rule that matches either destination or source
- * mac.
- *
- * @param spec Pointer to the specific flow item.
- * @param mask Pointer to the specific flow item's mask.
- * @param parse_dst Parse either destination (1) or source (0) mac address.
- * @param flow Pointer to the flow.
- * @return 0 in case of success, negative error value otherwise.
- */
-static int
-mrvl_parse_mac(const struct rte_flow_item_eth *spec,
-	       const struct rte_flow_item_eth *mask,
-	       int parse_dst, struct rte_flow *flow)
-{
-	struct pp2_cls_rule_key_field *key_field;
-	const uint8_t *k, *m;
-
-	if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
-		return -ENOSPC;
-
-	if (parse_dst) {
-		k = spec->dst.addr_bytes;
-		m = mask->dst.addr_bytes;
-
-		flow->pattern |= F_DMAC;
-	} else {
-		k = spec->src.addr_bytes;
-		m = mask->src.addr_bytes;
-
-		flow->pattern |= F_SMAC;
-	}
-
-	key_field = &flow->rule.fields[flow->rule.num_fields];
-	mrvl_alloc_key_mask(key_field);
-	key_field->size = 6;
-
-	snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX,
-		 "%02x:%02x:%02x:%02x:%02x:%02x",
-		 k[0], k[1], k[2], k[3], k[4], k[5]);
-
-	snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX,
-		 "%02x:%02x:%02x:%02x:%02x:%02x",
-		 m[0], m[1], m[2], m[3], m[4], m[5]);
-
-	flow->rule.num_fields += 1;
-
-	return 0;
-}
-
-/**
- * Helper for parsing the eth flow item destination mac address.
- *
- * @param spec Pointer to the specific flow item.
- * @param mask Pointer to the specific flow item's mask.
- * @param flow Pointer to the flow.
- * @return 0 in case of success, negative error value otherwise.
- */
-static inline int
-mrvl_parse_dmac(const struct rte_flow_item_eth *spec,
-		const struct rte_flow_item_eth *mask,
-		struct rte_flow *flow)
-{
-	return mrvl_parse_mac(spec, mask, 1, flow);
-}
-
-/**
- * Helper for parsing the eth flow item source mac address.
- *
- * @param spec Pointer to the specific flow item.
- * @param mask Pointer to the specific flow item's mask.
- * @param flow Pointer to the flow.
- * @return 0 in case of success, negative error value otherwise.
- */
-static inline int
-mrvl_parse_smac(const struct rte_flow_item_eth *spec,
-		const struct rte_flow_item_eth *mask,
-		struct rte_flow *flow)
-{
-	return mrvl_parse_mac(spec, mask, 0, flow);
-}
-
-/**
- * Parse the ether type field of the eth flow item.
- *
- * @param spec Pointer to the specific flow item.
- * @param mask Pointer to the specific flow item's mask.
- * @param flow Pointer to the flow.
- * @return 0 in case of success, negative error value otherwise.
- */ -static int -mrvl_parse_type(const struct rte_flow_item_eth *spec, - const struct rte_flow_item_eth *mask __rte_unused, - struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - uint16_t k; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 2; - - k = rte_be_to_cpu_16(spec->type); - snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); - - flow->pattern |= F_TYPE; - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Parse the vid field of the vlan rte flow item. - * - * This will create classifier rule that matches vid. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static int -mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec, - const struct rte_flow_item_vlan *mask __rte_unused, - struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - uint16_t k; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 2; - - k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK; - snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); - - flow->pattern |= F_VLAN_ID; - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Parse the pri field of the vlan rte flow item. - * - * This will create classifier rule that matches pri. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static int -mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec, - const struct rte_flow_item_vlan *mask __rte_unused, - struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - uint16_t k; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 1; - - k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13; - snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); - - flow->pattern |= F_VLAN_PRI; - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Parse the dscp field of the ipv4 rte flow item. - * - * This will create classifier rule that matches dscp field. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. 
- */ -static int -mrvl_parse_ip4_dscp(const struct rte_flow_item_ipv4 *spec, - const struct rte_flow_item_ipv4 *mask, - struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - uint8_t k, m; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 1; - - k = (spec->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2; - m = (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2; - snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); - snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m); - - flow->pattern |= F_IP4_TOS; - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Parse either source or destination ip addresses of the ipv4 flow item. - * - * This will create classifier rule that matches either destination - * or source ip field. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static int -mrvl_parse_ip4_addr(const struct rte_flow_item_ipv4 *spec, - const struct rte_flow_item_ipv4 *mask, - int parse_dst, struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - struct in_addr k; - uint32_t m; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - memset(&k, 0, sizeof(k)); - if (parse_dst) { - k.s_addr = spec->hdr.dst_addr; - m = rte_be_to_cpu_32(mask->hdr.dst_addr); - - flow->pattern |= F_IP4_DIP; - } else { - k.s_addr = spec->hdr.src_addr; - m = rte_be_to_cpu_32(mask->hdr.src_addr); - - flow->pattern |= F_IP4_SIP; - } - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 4; - - inet_ntop(AF_INET, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX); - snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "0x%x", m); - - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Helper for parsing destination ip of the ipv4 flow item. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static inline int -mrvl_parse_ip4_dip(const struct rte_flow_item_ipv4 *spec, - const struct rte_flow_item_ipv4 *mask, - struct rte_flow *flow) -{ - return mrvl_parse_ip4_addr(spec, mask, 1, flow); -} - -/** - * Helper for parsing source ip of the ipv4 flow item. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static inline int -mrvl_parse_ip4_sip(const struct rte_flow_item_ipv4 *spec, - const struct rte_flow_item_ipv4 *mask, - struct rte_flow *flow) -{ - return mrvl_parse_ip4_addr(spec, mask, 0, flow); -} - -/** - * Parse the proto field of the ipv4 rte flow item. - * - * This will create classifier rule that matches proto field. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. 
- */ -static int -mrvl_parse_ip4_proto(const struct rte_flow_item_ipv4 *spec, - const struct rte_flow_item_ipv4 *mask __rte_unused, - struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - uint8_t k = spec->hdr.next_proto_id; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 1; - - snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); - - flow->pattern |= F_IP4_PROTO; - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Parse either source or destination ip addresses of the ipv6 rte flow item. - * - * This will create classifier rule that matches either destination - * or source ip field. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static int -mrvl_parse_ip6_addr(const struct rte_flow_item_ipv6 *spec, - const struct rte_flow_item_ipv6 *mask, - int parse_dst, struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - int size = sizeof(spec->hdr.dst_addr); - struct in6_addr k, m; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - memset(&k, 0, sizeof(k)); - if (parse_dst) { - memcpy(k.s6_addr, spec->hdr.dst_addr, size); - memcpy(m.s6_addr, mask->hdr.dst_addr, size); - - flow->pattern |= F_IP6_DIP; - } else { - memcpy(k.s6_addr, spec->hdr.src_addr, size); - memcpy(m.s6_addr, mask->hdr.src_addr, size); - - flow->pattern |= F_IP6_SIP; - } - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 16; - - inet_ntop(AF_INET6, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX); - inet_ntop(AF_INET6, &m, (char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX); - - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Helper for parsing destination ip of the ipv6 flow item. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static inline int -mrvl_parse_ip6_dip(const struct rte_flow_item_ipv6 *spec, - const struct rte_flow_item_ipv6 *mask, - struct rte_flow *flow) -{ - return mrvl_parse_ip6_addr(spec, mask, 1, flow); -} - -/** - * Helper for parsing source ip of the ipv6 flow item. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static inline int -mrvl_parse_ip6_sip(const struct rte_flow_item_ipv6 *spec, - const struct rte_flow_item_ipv6 *mask, - struct rte_flow *flow) -{ - return mrvl_parse_ip6_addr(spec, mask, 0, flow); -} - -/** - * Parse the flow label of the ipv6 flow item. - * - * This will create classifier rule that matches flow field. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. 
- */ -static int -mrvl_parse_ip6_flow(const struct rte_flow_item_ipv6 *spec, - const struct rte_flow_item_ipv6 *mask, - struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - uint32_t k = rte_be_to_cpu_32(spec->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK, - m = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 3; - - snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); - snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m); - - flow->pattern |= F_IP6_FLOW; - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Parse the next header of the ipv6 flow item. - * - * This will create classifier rule that matches next header field. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static int -mrvl_parse_ip6_next_hdr(const struct rte_flow_item_ipv6 *spec, - const struct rte_flow_item_ipv6 *mask __rte_unused, - struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - uint8_t k = spec->hdr.proto; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 1; - - snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); - - flow->pattern |= F_IP6_NEXT_HDR; - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Parse destination or source port of the tcp flow item. - * - * This will create classifier rule that matches either destination or - * source tcp port. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static int -mrvl_parse_tcp_port(const struct rte_flow_item_tcp *spec, - const struct rte_flow_item_tcp *mask __rte_unused, - int parse_dst, struct rte_flow *flow) -{ - struct pp2_cls_rule_key_field *key_field; - uint16_t k; - - if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) - return -ENOSPC; - - key_field = &flow->rule.fields[flow->rule.num_fields]; - mrvl_alloc_key_mask(key_field); - key_field->size = 2; - - if (parse_dst) { - k = rte_be_to_cpu_16(spec->hdr.dst_port); - - flow->pattern |= F_TCP_DPORT; - } else { - k = rte_be_to_cpu_16(spec->hdr.src_port); - - flow->pattern |= F_TCP_SPORT; - } - - snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); - - flow->rule.num_fields += 1; - - return 0; -} - -/** - * Helper for parsing the tcp source port of the tcp flow item. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. - * @return 0 in case of success, negative error value otherwise. - */ -static inline int -mrvl_parse_tcp_sport(const struct rte_flow_item_tcp *spec, - const struct rte_flow_item_tcp *mask, - struct rte_flow *flow) -{ - return mrvl_parse_tcp_port(spec, mask, 0, flow); -} - -/** - * Helper for parsing the tcp destination port of the tcp flow item. - * - * @param spec Pointer to the specific flow item. - * @param mask Pointer to the specific flow item's mask. - * @param flow Pointer to the flow. 
- * @return 0 in case of success, negative error value otherwise.
- */
-static inline int
-mrvl_parse_tcp_dport(const struct rte_flow_item_tcp *spec,
- const struct rte_flow_item_tcp *mask,
- struct rte_flow *flow)
-{
- return mrvl_parse_tcp_port(spec, mask, 1, flow);
-}
-
-/**
- * Parse destination or source port of the udp flow item.
- *
- * This will create classifier rule that matches either destination or
- * source udp port.
- *
- * @param spec Pointer to the specific flow item.
- * @param mask Pointer to the specific flow item's mask.
- * @param flow Pointer to the flow.
- * @return 0 in case of success, negative error value otherwise.
- */
-static int
-mrvl_parse_udp_port(const struct rte_flow_item_udp *spec,
- const struct rte_flow_item_udp *mask __rte_unused,
- int parse_dst, struct rte_flow *flow)
-{
- struct pp2_cls_rule_key_field *key_field;
- uint16_t k;
-
- if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
- return -ENOSPC;
-
- key_field = &flow->rule.fields[flow->rule.num_fields];
- mrvl_alloc_key_mask(key_field);
- key_field->size = 2;
-
- if (parse_dst) {
- k = rte_be_to_cpu_16(spec->hdr.dst_port);
-
- flow->pattern |= F_UDP_DPORT;
- } else {
- k = rte_be_to_cpu_16(spec->hdr.src_port);
-
- flow->pattern |= F_UDP_SPORT;
- }
-
- snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
-
- flow->rule.num_fields += 1;
-
- return 0;
-}
-
-/**
- * Helper for parsing the udp source port of the udp flow item.
- *
- * @param spec Pointer to the specific flow item.
- * @param mask Pointer to the specific flow item's mask.
- * @param flow Pointer to the flow.
- * @return 0 in case of success, negative error value otherwise.
- */
-static inline int
-mrvl_parse_udp_sport(const struct rte_flow_item_udp *spec,
- const struct rte_flow_item_udp *mask,
- struct rte_flow *flow)
-{
- return mrvl_parse_udp_port(spec, mask, 0, flow);
-}
-
-/**
- * Helper for parsing the udp destination port of the udp flow item.
- *
- * @param spec Pointer to the specific flow item.
- * @param mask Pointer to the specific flow item's mask.
- * @param flow Pointer to the flow.
- * @return 0 in case of success, negative error value otherwise.
- */
-static inline int
-mrvl_parse_udp_dport(const struct rte_flow_item_udp *spec,
- const struct rte_flow_item_udp *mask,
- struct rte_flow *flow)
-{
- return mrvl_parse_udp_port(spec, mask, 1, flow);
-}
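
To make the spec/mask plumbing above concrete, this is the kind of item the item-level parsers that follow receive from an application. With this input, mrvl_parse_eth() emits a single destination-mac key field whose key/mask strings are "00:11:22:33:44:55" / "ff:ff:ff:ff:ff:ff" (the addresses here are made-up example values):

    struct rte_flow_item_eth spec = {
            .dst.addr_bytes = { 0x00, 0x11, 0x22, 0x33, 0x44, 0x55 },
    };
    struct rte_flow_item_eth mask = {
            .dst.addr_bytes = { 0xff, 0xff, 0xff, 0xff, 0xff, 0xff },
    };
    const struct rte_flow_item item = {
            .type = RTE_FLOW_ITEM_TYPE_ETH,
            .spec = &spec,
            .mask = &mask,  /* NULL would fall back to rte_flow_item_eth_mask */
    };
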
-/**
- * Parse eth flow item.
- *
- * @param item Pointer to the flow item.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 on success, negative value otherwise.
- */
-static int
-mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item_eth *spec = NULL, *mask = NULL;
- struct ether_addr zero;
- int ret;
-
- ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
- &rte_flow_item_eth_mask,
- sizeof(struct rte_flow_item_eth), error);
- if (ret)
- return ret;
-
- memset(&zero, 0, sizeof(zero));
-
- if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
- ret = mrvl_parse_dmac(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
- ret = mrvl_parse_smac(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (mask->type) {
- RTE_LOG(WARNING, PMD, "eth type mask is ignored\n");
- ret = mrvl_parse_type(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- return 0;
-out:
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Reached maximum number of fields in cls tbl key\n");
- return -rte_errno;
-}
-
-/**
- * Parse vlan flow item.
- *
- * @param item Pointer to the flow item.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 on success, negative value otherwise.
- */
-static int
-mrvl_parse_vlan(const struct rte_flow_item *item,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item_vlan *spec = NULL, *mask = NULL;
- uint16_t m;
- int ret;
-
- ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
- &rte_flow_item_vlan_mask,
- sizeof(struct rte_flow_item_vlan), error);
- if (ret)
- return ret;
-
- if (mask->tpid) {
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "Not supported by classifier\n");
- return -rte_errno;
- }
-
- m = rte_be_to_cpu_16(mask->tci);
- if (m & MRVL_VLAN_ID_MASK) {
- RTE_LOG(WARNING, PMD, "vlan id mask is ignored\n");
- ret = mrvl_parse_vlan_id(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (m & MRVL_VLAN_PRI_MASK) {
- RTE_LOG(WARNING, PMD, "vlan pri mask is ignored\n");
- ret = mrvl_parse_vlan_pri(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- return 0;
-out:
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Reached maximum number of fields in cls tbl key\n");
- return -rte_errno;
-}
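
The TCI masks used by mrvl_parse_vlan() above split the 16-bit field as follows; note that MRVL_VLAN_PRI_MASK covers bits 12-14 of the TCI, so after the shift by 13 only two priority bits survive. A worked example of the extraction done by mrvl_parse_vlan_id() and mrvl_parse_vlan_pri() (the TCI value is illustrative):

    uint16_t tci = rte_be_to_cpu_16(spec->tci);       /* e.g. 0x6064        */
    uint16_t vid = tci & MRVL_VLAN_ID_MASK;           /* 0x064 -> vid 100   */
    uint16_t pri = (tci & MRVL_VLAN_PRI_MASK) >> 13;  /* 0x6000 >> 13 -> 3  */
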
-/**
- * Parse ipv4 flow item.
- *
- * @param item Pointer to the flow item.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 on success, negative value otherwise.
- */
-static int
-mrvl_parse_ip4(const struct rte_flow_item *item,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item_ipv4 *spec = NULL, *mask = NULL;
- int ret;
-
- ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
- &rte_flow_item_ipv4_mask,
- sizeof(struct rte_flow_item_ipv4), error);
- if (ret)
- return ret;
-
- if (mask->hdr.version_ihl ||
- mask->hdr.total_length ||
- mask->hdr.packet_id ||
- mask->hdr.fragment_offset ||
- mask->hdr.time_to_live ||
- mask->hdr.hdr_checksum) {
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "Not supported by classifier\n");
- return -rte_errno;
- }
-
- if (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) {
- ret = mrvl_parse_ip4_dscp(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (mask->hdr.src_addr) {
- ret = mrvl_parse_ip4_sip(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (mask->hdr.dst_addr) {
- ret = mrvl_parse_ip4_dip(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (mask->hdr.next_proto_id) {
- RTE_LOG(WARNING, PMD, "next proto id mask is ignored\n");
- ret = mrvl_parse_ip4_proto(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- return 0;
-out:
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Reached maximum number of fields in cls tbl key\n");
- return -rte_errno;
-}
-
-/**
- * Parse ipv6 flow item.
- *
- * @param item Pointer to the flow item.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 on success, negative value otherwise.
- */
-static int
-mrvl_parse_ip6(const struct rte_flow_item *item,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item_ipv6 *spec = NULL, *mask = NULL;
- struct ipv6_hdr zero;
- uint32_t flow_mask;
- int ret;
-
- ret = mrvl_parse_init(item, (const void **)&spec,
- (const void **)&mask,
- &rte_flow_item_ipv6_mask,
- sizeof(struct rte_flow_item_ipv6),
- error);
- if (ret)
- return ret;
-
- memset(&zero, 0, sizeof(zero));
-
- if (mask->hdr.payload_len ||
- mask->hdr.hop_limits) {
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "Not supported by classifier\n");
- return -rte_errno;
- }
-
- if (memcmp(mask->hdr.src_addr,
- zero.src_addr, sizeof(mask->hdr.src_addr))) {
- ret = mrvl_parse_ip6_sip(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (memcmp(mask->hdr.dst_addr,
- zero.dst_addr, sizeof(mask->hdr.dst_addr))) {
- ret = mrvl_parse_ip6_dip(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- flow_mask = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK;
- if (flow_mask) {
- ret = mrvl_parse_ip6_flow(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (mask->hdr.proto) {
- RTE_LOG(WARNING, PMD, "next header mask is ignored\n");
- ret = mrvl_parse_ip6_next_hdr(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- return 0;
-out:
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Reached maximum number of fields in cls tbl key\n");
- return -rte_errno;
-}
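
Similarly, the flow-label handling in mrvl_parse_ip6() above operates on the combined version/traffic-class/flow-label word of the IPv6 header; MRVL_IPV6_FLOW_MASK keeps only the low 20 bits. A worked example (the header value is illustrative):

    uint32_t vtc_flow = rte_be_to_cpu_32(spec->hdr.vtc_flow);
    /* vtc_flow = 0x600afbcd: version 6, traffic class 0, flow label 0xafbcd */
    uint32_t label = vtc_flow & MRVL_IPV6_FLOW_MASK;      /* 0x0afbcd */
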
-/**
- * Parse tcp flow item.
- *
- * @param item Pointer to the flow item.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 on success, negative value otherwise.
- */
-static int
-mrvl_parse_tcp(const struct rte_flow_item *item,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item_tcp *spec = NULL, *mask = NULL;
- int ret;
-
- ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
- &rte_flow_item_tcp_mask,
- sizeof(struct rte_flow_item_tcp), error);
- if (ret)
- return ret;
-
- if (mask->hdr.sent_seq ||
- mask->hdr.recv_ack ||
- mask->hdr.data_off ||
- mask->hdr.tcp_flags ||
- mask->hdr.rx_win ||
- mask->hdr.cksum ||
- mask->hdr.tcp_urp) {
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "Not supported by classifier\n");
- return -rte_errno;
- }
-
- if (mask->hdr.src_port) {
- RTE_LOG(WARNING, PMD, "tcp sport mask is ignored\n");
- ret = mrvl_parse_tcp_sport(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (mask->hdr.dst_port) {
- RTE_LOG(WARNING, PMD, "tcp dport mask is ignored\n");
- ret = mrvl_parse_tcp_dport(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- return 0;
-out:
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Reached maximum number of fields in cls tbl key\n");
- return -rte_errno;
-}
-
-/**
- * Parse udp flow item.
- *
- * @param item Pointer to the flow item.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 on success, negative value otherwise.
- */
-static int
-mrvl_parse_udp(const struct rte_flow_item *item,
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- const struct rte_flow_item_udp *spec = NULL, *mask = NULL;
- int ret;
-
- ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
- &rte_flow_item_udp_mask,
- sizeof(struct rte_flow_item_udp), error);
- if (ret)
- return ret;
-
- if (mask->hdr.dgram_len ||
- mask->hdr.dgram_cksum) {
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
- NULL, "Not supported by classifier\n");
- return -rte_errno;
- }
-
- if (mask->hdr.src_port) {
- RTE_LOG(WARNING, PMD, "udp sport mask is ignored\n");
- ret = mrvl_parse_udp_sport(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- if (mask->hdr.dst_port) {
- RTE_LOG(WARNING, PMD, "udp dport mask is ignored\n");
- ret = mrvl_parse_udp_dport(spec, mask, flow);
- if (ret)
- goto out;
- }
-
- return 0;
-out:
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Reached maximum number of fields in cls tbl key\n");
- return -rte_errno;
-}
-
-/**
- * Parse flow pattern composed of the eth item.
- *
- * @param pattern Pointer to the flow pattern table.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 in case of success, negative value otherwise.
- */
-static int
-mrvl_parse_pattern_eth(const struct rte_flow_item pattern[],
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- return mrvl_parse_eth(pattern, flow, error);
-}
-
-/**
- * Parse flow pattern composed of the eth and vlan items.
- *
- * @param pattern Pointer to the flow pattern table.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 in case of success, negative value otherwise.
- */ -static int -mrvl_parse_pattern_eth_vlan(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = mrvl_parse_eth(item, flow, error); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - - return mrvl_parse_vlan(item, flow, error); -} - -/** - * Parse flow pattern composed of the eth, vlan and ip4/ip6 items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_eth_vlan_ip4_ip6(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int ip6) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = mrvl_parse_eth(item, flow, error); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - ret = mrvl_parse_vlan(item, flow, error); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - - return ip6 ? mrvl_parse_ip6(item, flow, error) : - mrvl_parse_ip4(item, flow, error); -} - -/** - * Parse flow pattern composed of the eth, vlan and ipv4 items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_eth_vlan_ip4(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the eth, vlan and ipv6 items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_eth_vlan_ip6(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the eth and ip4/ip6 items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_eth_ip4_ip6(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int ip6) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = mrvl_parse_eth(item, flow, error); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - - return ip6 ? mrvl_parse_ip6(item, flow, error) : - mrvl_parse_ip4(item, flow, error); -} - -/** - * Parse flow pattern composed of the eth and ipv4 items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_eth_ip4(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the eth and ipv6 items. 
- * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_eth_ip6(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the eth, ip4 and tcp/udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @param tcp 1 to parse tcp item, 0 to parse udp item. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_eth_ip4_tcp_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int tcp) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - item = mrvl_next_item(item + 1); - - if (tcp) - return mrvl_parse_tcp(item, flow, error); - - return mrvl_parse_udp(item, flow, error); -} - -/** - * Parse flow pattern composed of the eth, ipv4 and tcp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_eth_ip4_tcp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the eth, ipv4 and udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_eth_ip4_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the eth, ipv6 and tcp/udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @param tcp 1 to parse tcp item, 0 to parse udp item. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_eth_ip6_tcp_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int tcp) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - item = mrvl_next_item(item + 1); - - if (tcp) - return mrvl_parse_tcp(item, flow, error); - - return mrvl_parse_udp(item, flow, error); -} - -/** - * Parse flow pattern composed of the eth, ipv6 and tcp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. 
- */ -static inline int -mrvl_parse_pattern_eth_ip6_tcp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the eth, ipv6 and udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_eth_ip6_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the vlan item. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_vlan(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - - return mrvl_parse_vlan(item, flow, error); -} - -/** - * Parse flow pattern composed of the vlan and ip4/ip6 items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_vlan_ip4_ip6(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int ip6) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = mrvl_parse_vlan(item, flow, error); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - - return ip6 ? mrvl_parse_ip6(item, flow, error) : - mrvl_parse_ip4(item, flow, error); -} - -/** - * Parse flow pattern composed of the vlan and ipv4 items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_vlan_ip4(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the vlan, ipv4 and tcp/udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_vlan_ip_tcp_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int tcp) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - item = mrvl_next_item(item + 1); - - if (tcp) - return mrvl_parse_tcp(item, flow, error); - - return mrvl_parse_udp(item, flow, error); -} - -/** - * Parse flow pattern composed of the vlan, ipv4 and tcp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. 
- */ -static inline int -mrvl_parse_pattern_vlan_ip_tcp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the vlan, ipv4 and udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_vlan_ip_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the vlan and ipv6 items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_vlan_ip6(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the vlan, ipv6 and tcp/udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_vlan_ip6_tcp_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int tcp) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - item = mrvl_next_item(item + 1); - - if (tcp) - return mrvl_parse_tcp(item, flow, error); - - return mrvl_parse_udp(item, flow, error); -} - -/** - * Parse flow pattern composed of the vlan, ipv6 and tcp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_vlan_ip6_tcp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the vlan, ipv6 and udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_vlan_ip6_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the ip4/ip6 item. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_ip4_ip6(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int ip6) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - - return ip6 ? 
mrvl_parse_ip6(item, flow, error) : - mrvl_parse_ip4(item, flow, error); -} - -/** - * Parse flow pattern composed of the ipv4 item. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_ip4(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the ipv6 item. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_ip6(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the ip4/ip6 and tcp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_ip4_ip6_tcp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int ip6) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = ip6 ? mrvl_parse_ip6(item, flow, error) : - mrvl_parse_ip4(item, flow, error); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - - return mrvl_parse_tcp(item, flow, error); -} - -/** - * Parse flow pattern composed of the ipv4 and tcp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_ip4_tcp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the ipv6 and tcp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_ip6_tcp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the ipv4/ipv6 and udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_ip4_ip6_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error, int ip6) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - int ret; - - ret = ip6 ? mrvl_parse_ip6(item, flow, error) : - mrvl_parse_ip4(item, flow, error); - if (ret) - return ret; - - item = mrvl_next_item(item + 1); - - return mrvl_parse_udp(item, flow, error); -} - -/** - * Parse flow pattern composed of the ipv4 and udp items. 
- * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_ip4_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 0); -} - -/** - * Parse flow pattern composed of the ipv6 and udp items. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static inline int -mrvl_parse_pattern_ip6_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 1); -} - -/** - * Parse flow pattern composed of the tcp item. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_tcp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - - return mrvl_parse_tcp(item, flow, error); -} - -/** - * Parse flow pattern composed of the udp item. - * - * @param pattern Pointer to the flow pattern table. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. - */ -static int -mrvl_parse_pattern_udp(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - const struct rte_flow_item *item = mrvl_next_item(pattern); - - return mrvl_parse_udp(item, flow, error); -} - -/** - * Structure used to map specific flow pattern to the pattern parse callback - * which will iterate over each pattern item and extract relevant data. 
- */ -static const struct { - const enum rte_flow_item_type *pattern; - int (*parse)(const struct rte_flow_item pattern[], - struct rte_flow *flow, - struct rte_flow_error *error); -} mrvl_patterns[] = { - { pattern_eth, mrvl_parse_pattern_eth }, - { pattern_eth_vlan, mrvl_parse_pattern_eth_vlan }, - { pattern_eth_vlan_ip, mrvl_parse_pattern_eth_vlan_ip4 }, - { pattern_eth_vlan_ip6, mrvl_parse_pattern_eth_vlan_ip6 }, - { pattern_eth_ip4, mrvl_parse_pattern_eth_ip4 }, - { pattern_eth_ip4_tcp, mrvl_parse_pattern_eth_ip4_tcp }, - { pattern_eth_ip4_udp, mrvl_parse_pattern_eth_ip4_udp }, - { pattern_eth_ip6, mrvl_parse_pattern_eth_ip6 }, - { pattern_eth_ip6_tcp, mrvl_parse_pattern_eth_ip6_tcp }, - { pattern_eth_ip6_udp, mrvl_parse_pattern_eth_ip6_udp }, - { pattern_vlan, mrvl_parse_pattern_vlan }, - { pattern_vlan_ip, mrvl_parse_pattern_vlan_ip4 }, - { pattern_vlan_ip_tcp, mrvl_parse_pattern_vlan_ip_tcp }, - { pattern_vlan_ip_udp, mrvl_parse_pattern_vlan_ip_udp }, - { pattern_vlan_ip6, mrvl_parse_pattern_vlan_ip6 }, - { pattern_vlan_ip6_tcp, mrvl_parse_pattern_vlan_ip6_tcp }, - { pattern_vlan_ip6_udp, mrvl_parse_pattern_vlan_ip6_udp }, - { pattern_ip, mrvl_parse_pattern_ip4 }, - { pattern_ip_tcp, mrvl_parse_pattern_ip4_tcp }, - { pattern_ip_udp, mrvl_parse_pattern_ip4_udp }, - { pattern_ip6, mrvl_parse_pattern_ip6 }, - { pattern_ip6_tcp, mrvl_parse_pattern_ip6_tcp }, - { pattern_ip6_udp, mrvl_parse_pattern_ip6_udp }, - { pattern_tcp, mrvl_parse_pattern_tcp }, - { pattern_udp, mrvl_parse_pattern_udp } -}; - -/** - * Check whether provided pattern matches any of the supported ones. - * - * @param type_pattern Pointer to the pattern type. - * @param item_pattern Pointer to the flow pattern. - * @returns 1 in case of success, 0 value otherwise. - */ -static int -mrvl_patterns_match(const enum rte_flow_item_type *type_pattern, - const struct rte_flow_item *item_pattern) -{ - const enum rte_flow_item_type *type = type_pattern; - const struct rte_flow_item *item = item_pattern; - - for (;;) { - if (item->type == RTE_FLOW_ITEM_TYPE_VOID) { - item++; - continue; - } - - if (*type == RTE_FLOW_ITEM_TYPE_END || - item->type == RTE_FLOW_ITEM_TYPE_END) - break; - - if (*type != item->type) - break; - - item++; - type++; - } - - return *type == item->type; -} - -/** - * Parse flow attribute. - * - * This will check whether the provided attribute's flags are supported. - * - * @param priv Unused - * @param attr Pointer to the flow attribute. - * @param flow Unused - * @param error Pointer to the flow error. - * @returns 0 in case of success, negative value otherwise. 
- */
-static int
-mrvl_flow_parse_attr(struct mrvl_priv *priv __rte_unused,
- const struct rte_flow_attr *attr,
- struct rte_flow *flow __rte_unused,
- struct rte_flow_error *error)
-{
- if (!attr) {
- rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR,
- NULL, "NULL attribute");
- return -rte_errno;
- }
-
- if (attr->group) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_GROUP, NULL,
- "Groups are not supported");
- return -rte_errno;
- }
- if (attr->priority) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL,
- "Priorities are not supported");
- return -rte_errno;
- }
- if (!attr->ingress) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, NULL,
- "Only ingress is supported");
- return -rte_errno;
- }
- if (attr->egress) {
- rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
- "Egress is not supported");
- return -rte_errno;
- }
-
- return 0;
-}
-
-/**
- * Parse flow pattern.
- *
- * Specific classifier rule will be created as well.
- *
- * @param priv Unused
- * @param pattern Pointer to the flow pattern.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 in case of success, negative value otherwise.
- */
-static int
-mrvl_flow_parse_pattern(struct mrvl_priv *priv __rte_unused,
- const struct rte_flow_item pattern[],
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- unsigned int i;
- int ret;
-
- for (i = 0; i < RTE_DIM(mrvl_patterns); i++) {
- if (!mrvl_patterns_match(mrvl_patterns[i].pattern, pattern))
- continue;
-
- ret = mrvl_patterns[i].parse(pattern, flow, error);
- if (ret)
- mrvl_free_all_key_mask(&flow->rule);
-
- return ret;
- }
-
- rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
- "Unsupported pattern");
-
- return -rte_errno;
-}
-
-/**
- * Parse flow actions.
- *
- * @param priv Pointer to the port's private data.
- * @param actions Pointer to the action table.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 in case of success, negative value otherwise.
- */
-static int
-mrvl_flow_parse_actions(struct mrvl_priv *priv,
- const struct rte_flow_action actions[],
- struct rte_flow *flow,
- struct rte_flow_error *error)
-{
- const struct rte_flow_action *action = actions;
- int specified = 0;
-
- for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
- if (action->type == RTE_FLOW_ACTION_TYPE_VOID)
- continue;
-
- if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
- flow->cos.ppio = priv->ppio;
- flow->cos.tc = 0;
- flow->action.type = PP2_CLS_TBL_ACT_DROP;
- flow->action.cos = &flow->cos;
- specified++;
- } else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
- const struct rte_flow_action_queue *q =
- (const struct rte_flow_action_queue *)
- action->conf;
-
- if (q->index >= priv->nb_rx_queues) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION,
- NULL,
- "Queue index out of range");
- return -rte_errno;
- }
-
- if (priv->rxq_map[q->index].tc == MRVL_UNKNOWN_TC) {
- /*
- * Unknown TC mapping, mapping will not have
- * a correct queue.
- */ - RTE_LOG(ERR, PMD, - "Unknown TC mapping for queue %hu eth%hhu\n", - q->index, priv->ppio_id); - - rte_flow_error_set(error, EFAULT, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, NULL); - return -rte_errno; - } - - RTE_LOG(DEBUG, PMD, - "Action: Assign packets to queue %d, tc:%d, q:%d\n", - q->index, priv->rxq_map[q->index].tc, - priv->rxq_map[q->index].inq); - - flow->cos.ppio = priv->ppio; - flow->cos.tc = priv->rxq_map[q->index].tc; - flow->action.type = PP2_CLS_TBL_ACT_DONE; - flow->action.cos = &flow->cos; - specified++; - } else { - rte_flow_error_set(error, ENOTSUP, - RTE_FLOW_ERROR_TYPE_ACTION, NULL, - "Action not supported"); - return -rte_errno; - } - - } - - if (!specified) { - rte_flow_error_set(error, EINVAL, - RTE_FLOW_ERROR_TYPE_UNSPECIFIED, - NULL, "Action not specified"); - return -rte_errno; - } - - return 0; -} - -/** - * Parse flow attribute, pattern and actions. - * - * @param priv Pointer to the port's private data. - * @param attr Pointer to the flow attribute. - * @param pattern Pointer to the flow pattern. - * @param actions Pointer to the flow actions. - * @param flow Pointer to the flow. - * @param error Pointer to the flow error. - * @returns 0 on success, negative value otherwise. - */ -static int -mrvl_flow_parse(struct mrvl_priv *priv, const struct rte_flow_attr *attr, - const struct rte_flow_item pattern[], - const struct rte_flow_action actions[], - struct rte_flow *flow, - struct rte_flow_error *error) -{ - int ret; - - ret = mrvl_flow_parse_attr(priv, attr, flow, error); - if (ret) - return ret; - - ret = mrvl_flow_parse_pattern(priv, pattern, flow, error); - if (ret) - return ret; - - return mrvl_flow_parse_actions(priv, actions, flow, error); -} - -static inline enum pp2_cls_tbl_type -mrvl_engine_type(const struct rte_flow *flow) -{ - int i, size = 0; - - for (i = 0; i < flow->rule.num_fields; i++) - size += flow->rule.fields[i].size; - - /* - * For maskable engine type the key size must be up to 8 bytes. - * For keys with size bigger than 8 bytes, engine type must - * be set to exact match. - */ - if (size > 8) - return PP2_CLS_TBL_EXACT_MATCH; - - return PP2_CLS_TBL_MASKABLE; -} - -static int -mrvl_create_cls_table(struct rte_eth_dev *dev, struct rte_flow *first_flow) -{ - struct mrvl_priv *priv = dev->data->dev_private; - struct pp2_cls_tbl_key *key = &priv->cls_tbl_params.key; - int ret; - - if (priv->cls_tbl) { - pp2_cls_tbl_deinit(priv->cls_tbl); - priv->cls_tbl = NULL; - } - - memset(&priv->cls_tbl_params, 0, sizeof(priv->cls_tbl_params)); - - priv->cls_tbl_params.type = mrvl_engine_type(first_flow); - RTE_LOG(INFO, PMD, "Setting cls search engine type to %s\n", - priv->cls_tbl_params.type == PP2_CLS_TBL_EXACT_MATCH ? 
- "exact" : "maskable"); - priv->cls_tbl_params.max_num_rules = MRVL_CLS_MAX_NUM_RULES; - priv->cls_tbl_params.default_act.type = PP2_CLS_TBL_ACT_DONE; - priv->cls_tbl_params.default_act.cos = &first_flow->cos; - - if (first_flow->pattern & F_DMAC) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH; - key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_DA; - key->key_size += 6; - key->num_fields += 1; - } - - if (first_flow->pattern & F_SMAC) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH; - key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_SA; - key->key_size += 6; - key->num_fields += 1; - } - - if (first_flow->pattern & F_TYPE) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH; - key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_TYPE; - key->key_size += 2; - key->num_fields += 1; - } - - if (first_flow->pattern & F_VLAN_ID) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN; - key->proto_field[key->num_fields].field.vlan = MV_NET_VLAN_F_ID; - key->key_size += 2; - key->num_fields += 1; - } - - if (first_flow->pattern & F_VLAN_PRI) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN; - key->proto_field[key->num_fields].field.vlan = - MV_NET_VLAN_F_PRI; - key->key_size += 1; - key->num_fields += 1; - } - - if (first_flow->pattern & F_IP4_TOS) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4; - key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_TOS; - key->key_size += 1; - key->num_fields += 1; - } - - if (first_flow->pattern & F_IP4_SIP) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4; - key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_SA; - key->key_size += 4; - key->num_fields += 1; - } - - if (first_flow->pattern & F_IP4_DIP) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4; - key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_DA; - key->key_size += 4; - key->num_fields += 1; - } - - if (first_flow->pattern & F_IP4_PROTO) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4; - key->proto_field[key->num_fields].field.ipv4 = - MV_NET_IP4_F_PROTO; - key->key_size += 1; - key->num_fields += 1; - } - - if (first_flow->pattern & F_IP6_SIP) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6; - key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_SA; - key->key_size += 16; - key->num_fields += 1; - } - - if (first_flow->pattern & F_IP6_DIP) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6; - key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_DA; - key->key_size += 16; - key->num_fields += 1; - } - - if (first_flow->pattern & F_IP6_FLOW) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6; - key->proto_field[key->num_fields].field.ipv6 = - MV_NET_IP6_F_FLOW; - key->key_size += 3; - key->num_fields += 1; - } - - if (first_flow->pattern & F_IP6_NEXT_HDR) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6; - key->proto_field[key->num_fields].field.ipv6 = - MV_NET_IP6_F_NEXT_HDR; - key->key_size += 1; - key->num_fields += 1; - } - - if (first_flow->pattern & F_TCP_SPORT) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP; - key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_SP; - key->key_size += 2; - key->num_fields += 1; - } - - if (first_flow->pattern & F_TCP_DPORT) { - key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP; - key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_DP; - key->key_size += 2; - key->num_fields += 1; - } 
-
- if (first_flow->pattern & F_UDP_SPORT) {
- key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
- key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_SP;
- key->key_size += 2;
- key->num_fields += 1;
- }
-
- if (first_flow->pattern & F_UDP_DPORT) {
- key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
- key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_DP;
- key->key_size += 2;
- key->num_fields += 1;
- }
-
- ret = pp2_cls_tbl_init(&priv->cls_tbl_params, &priv->cls_tbl);
- if (!ret)
- priv->cls_tbl_pattern = first_flow->pattern;
-
- return ret;
-}
-
-/**
- * Check whether new flow can be added to the table.
- *
- * @param priv Pointer to the port's private data.
- * @param flow Pointer to the new flow.
- * @return 1 in case flow can be added, 0 otherwise.
- */
-static inline int
-mrvl_flow_can_be_added(struct mrvl_priv *priv, const struct rte_flow *flow)
-{
- return flow->pattern == priv->cls_tbl_pattern &&
- mrvl_engine_type(flow) == priv->cls_tbl_params.type;
-}
-
-/**
- * DPDK flow create callback called when flow is to be created.
- *
- * @param dev Pointer to the device.
- * @param attr Pointer to the flow attribute.
- * @param pattern Pointer to the flow pattern.
- * @param actions Pointer to the flow actions.
- * @param error Pointer to the flow error.
- * @returns Pointer to the created flow in case of success, NULL otherwise.
- */
-static struct rte_flow *
-mrvl_flow_create(struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- const struct rte_flow_item pattern[],
- const struct rte_flow_action actions[],
- struct rte_flow_error *error)
-{
- struct mrvl_priv *priv = dev->data->dev_private;
- struct rte_flow *flow, *first;
- int ret;
-
- if (!dev->data->dev_started) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Port must be started first\n");
- return NULL;
- }
-
- flow = rte_zmalloc_socket(NULL, sizeof(*flow), 0, rte_socket_id());
- if (!flow)
- return NULL;
-
- ret = mrvl_flow_parse(priv, attr, pattern, actions, flow, error);
- if (ret)
- goto out;
-
- /*
- * Four cases here:
- *
- * 1. In case table does not exist - create one.
- * 2. In case table exists, is empty and new flow cannot be added
- * recreate table.
- * 3. In case table is not empty and new flow matches table format
- * add it.
- * 4. Otherwise flow cannot be added.
- */
- first = LIST_FIRST(&priv->flows);
- if (!priv->cls_tbl) {
- ret = mrvl_create_cls_table(dev, flow);
- } else if (!first && !mrvl_flow_can_be_added(priv, flow)) {
- ret = mrvl_create_cls_table(dev, flow);
- } else if (mrvl_flow_can_be_added(priv, flow)) {
- ret = 0;
- } else {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Pattern does not match cls table format\n");
- goto out;
- }
-
- if (ret) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Failed to create cls table\n");
- goto out;
- }
-
- ret = pp2_cls_tbl_add_rule(priv->cls_tbl, &flow->rule, &flow->action);
- if (ret) {
- rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "Failed to add rule\n");
- goto out;
- }
-
- LIST_INSERT_HEAD(&priv->flows, flow, next);
-
- return flow;
-out:
- rte_free(flow);
- return NULL;
-}
-
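For reference, an application call that ends up in mrvl_flow_create() above looks roughly like the sketch below; the port id, EtherType and queue index are made-up example values, and the port must already be started because the callback requires it:

    uint16_t port_id = 0;    /* made-up port number */
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_eth spec = { .type = rte_cpu_to_be_16(0x0800) };
    struct rte_flow_item_eth mask = { .type = rte_cpu_to_be_16(0xffff) };
    struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_ETH, .spec = &spec, .mask = &mask },
            { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
            { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;
    struct rte_flow *f;

    f = rte_flow_create(port_id, &attr, pattern, actions, &error);
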
- */
-static int
-mrvl_flow_remove(struct mrvl_priv *priv, struct rte_flow *flow,
-		 struct rte_flow_error *error)
-{
-	int ret;
-
-	if (!priv->cls_tbl) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Classifier table not initialized");
-		return -rte_errno;
-	}
-
-	ret = pp2_cls_tbl_remove_rule(priv->cls_tbl, &flow->rule);
-	if (ret) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Failed to remove rule");
-		return -rte_errno;
-	}
-
-	mrvl_free_all_key_mask(&flow->rule);
-
-	return 0;
-}
-
-/**
- * DPDK flow destroy callback called when a flow is to be removed.
- *
- * @param dev Pointer to the device.
- * @param flow Pointer to the flow.
- * @param error Pointer to the flow error.
- * @returns 0 in case of success, negative value otherwise.
- */
-static int
-mrvl_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
-		  struct rte_flow_error *error)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-	struct rte_flow *f;
-	int ret;
-
-	LIST_FOREACH(f, &priv->flows, next) {
-		if (f == flow)
-			break;
-	}
-
-	if (!f) {
-		rte_flow_error_set(error, EINVAL,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
-				   "Rule was not found");
-		return -rte_errno;
-	}
-
-	LIST_REMOVE(f, next);
-
-	ret = mrvl_flow_remove(priv, flow, error);
-	if (ret)
-		return ret;
-
-	rte_free(flow);
-
-	return 0;
-}
-
-/**
- * DPDK flow callback called to verify given attribute, pattern and actions.
- *
- * @param dev Pointer to the device.
- * @param attr Pointer to the flow attribute.
- * @param pattern Pointer to the flow pattern.
- * @param actions Pointer to the flow actions.
- * @param error Pointer to the flow error.
- * @returns 0 on success, negative value otherwise.
- */
-static int
-mrvl_flow_validate(struct rte_eth_dev *dev,
-		   const struct rte_flow_attr *attr,
-		   const struct rte_flow_item pattern[],
-		   const struct rte_flow_action actions[],
-		   struct rte_flow_error *error)
-{
-	struct rte_flow *flow;
-
-	flow = mrvl_flow_create(dev, attr, pattern, actions, error);
-	if (!flow)
-		return -rte_errno;
-
-	mrvl_flow_destroy(dev, flow, error);
-
-	return 0;
-}
-
-/**
- * DPDK flow flush callback called when flows are to be flushed.
- *
- * @param dev Pointer to the device.
- * @param error Pointer to the flow error.
- * @returns 0 in case of success, negative value otherwise.
- */
-static int
-mrvl_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-
-	while (!LIST_EMPTY(&priv->flows)) {
-		struct rte_flow *flow = LIST_FIRST(&priv->flows);
-		int ret = mrvl_flow_remove(priv, flow, error);
-		if (ret)
-			return ret;
-
-		LIST_REMOVE(flow, next);
-		rte_free(flow);
-	}
-
-	return 0;
-}
-
-/**
- * DPDK flow isolate callback called to isolate a port.
- *
- * @param dev Pointer to the device.
- * @param enable Pass 0/1 to disable/enable port isolation.
- * @param error Pointer to the flow error.
- * @returns 0 in case of success, negative value otherwise.
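 *
 * Illustrative caller-side effect, not part of this patch: once a port is
 * isolated, global configuration callbacks bail out early, e.g.:
 *
 *   rte_flow_isolate(port_id, 1, &error);
 *   rte_eth_promiscuous_enable(port_id);   becomes a no-op in this PMD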
- */
-static int
-mrvl_flow_isolate(struct rte_eth_dev *dev, int enable,
-		  struct rte_flow_error *error)
-{
-	struct mrvl_priv *priv = dev->data->dev_private;
-
-	if (dev->data->dev_started) {
-		rte_flow_error_set(error, EBUSY,
-				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
-				   NULL, "Port must be stopped first\n");
-		return -rte_errno;
-	}
-
-	priv->isolated = enable;
-
-	return 0;
-}
-
-const struct rte_flow_ops mrvl_flow_ops = {
-	.validate = mrvl_flow_validate,
-	.create = mrvl_flow_create,
-	.destroy = mrvl_flow_destroy,
-	.flush = mrvl_flow_flush,
-	.isolate = mrvl_flow_isolate
-};
diff --git a/drivers/net/mrvl/mrvl_qos.c b/drivers/net/mrvl/mrvl_qos.c
deleted file mode 100644
index 741d3da7a3..0000000000
--- a/drivers/net/mrvl/mrvl_qos.c
+++ /dev/null
@@ -1,894 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2017 Marvell International Ltd.
- * Copyright(c) 2017 Semihalf.
- * All rights reserved.
- */
-
-#include <stdint.h>
-#include <stdio.h>
-#include <string.h>
-
-#include <rte_common.h>
-#include <rte_cfgfile.h>
-#include <rte_log.h>
-#include <rte_lcore.h>
-#include <rte_malloc.h>
-#include <rte_string_fns.h>
-
-/* Unluckily, container_of is defined by both DPDK and MUSDK,
- * we'll declare only one version.
- *
- * Note that it is not used in this PMD anyway.
- */
-#ifdef container_of
-#undef container_of
-#endif
-
-#include "mrvl_qos.h"
-
-/* Parsing tokens. Defined conveniently, so that any correction is easy. */
-#define MRVL_TOK_DEFAULT "default"
-#define MRVL_TOK_DEFAULT_TC "default_tc"
-#define MRVL_TOK_DSCP "dscp"
-#define MRVL_TOK_MAPPING_PRIORITY "mapping_priority"
-#define MRVL_TOK_IP "ip"
-#define MRVL_TOK_IP_VLAN "ip/vlan"
-#define MRVL_TOK_PCP "pcp"
-#define MRVL_TOK_PORT "port"
-#define MRVL_TOK_RXQ "rxq"
-#define MRVL_TOK_TC "tc"
-#define MRVL_TOK_TXQ "txq"
-#define MRVL_TOK_VLAN "vlan"
-#define MRVL_TOK_VLAN_IP "vlan/ip"
-
-/* egress specific configuration tokens */
-#define MRVL_TOK_BURST_SIZE "burst_size"
-#define MRVL_TOK_RATE_LIMIT "rate_limit"
-#define MRVL_TOK_RATE_LIMIT_ENABLE "rate_limit_enable"
-#define MRVL_TOK_SCHED_MODE "sched_mode"
-#define MRVL_TOK_SCHED_MODE_SP "sp"
-#define MRVL_TOK_SCHED_MODE_WRR "wrr"
-#define MRVL_TOK_WRR_WEIGHT "wrr_weight"
-
-/* policer specific configuration tokens */
-#define MRVL_TOK_PLCR_ENABLE "policer_enable"
-#define MRVL_TOK_PLCR_UNIT "token_unit"
-#define MRVL_TOK_PLCR_UNIT_BYTES "bytes"
-#define MRVL_TOK_PLCR_UNIT_PACKETS "packets"
-#define MRVL_TOK_PLCR_COLOR "color_mode"
-#define MRVL_TOK_PLCR_COLOR_BLIND "blind"
-#define MRVL_TOK_PLCR_COLOR_AWARE "aware"
-#define MRVL_TOK_PLCR_CIR "cir"
-#define MRVL_TOK_PLCR_CBS "cbs"
-#define MRVL_TOK_PLCR_EBS "ebs"
-#define MRVL_TOK_PLCR_DEFAULT_COLOR "default_color"
-#define MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN "green"
-#define MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW "yellow"
-#define MRVL_TOK_PLCR_DEFAULT_COLOR_RED "red"
-
-/** Number of tokens in range a-b = 2. */
-#define MAX_RNG_TOKENS 2
-
-/** Maximum possible value of PCP. */
-#define MAX_PCP 7
-
-/** Maximum possible value of DSCP. */
-#define MAX_DSCP 63
-
-/** Global QoS configuration. */
-struct mrvl_qos_cfg *mrvl_qos_cfg;
-
-/**
- * Convert string to uint32_t with extra checks for result correctness.
- *
- * @param string String to convert.
- * @param val Conversion result.
- * @returns 0 in case of success, negative value otherwise.
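 *
 * Illustrative behaviour, assuming a uint32_t v (base 0 is passed to
 * strtoul(), so decimal, hex and octal notations are all accepted):
 *
 *   get_val_securely("16", &v)   returns 0, v == 16
 *   get_val_securely("0x10", &v) returns 0, v == 16
 *   get_val_securely("16k", &v)  returns -2 (trailing characters)
 *   get_val_securely("", &v)     returns -1 (empty string)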
- */
-static int
-get_val_securely(const char *string, uint32_t *val)
-{
-	char *endptr;
-	size_t len = strlen(string);
-
-	if (len == 0)
-		return -1;
-
-	errno = 0;
-	*val = strtoul(string, &endptr, 0);
-	if (errno != 0 || RTE_PTR_DIFF(endptr, string) != len)
-		return -2;
-
-	return 0;
-}
-
-/**
- * Read out-queue configuration from file.
- *
- * @param file Config file handle.
- * @param port Port number.
- * @param outq Out queue number.
- * @param cfg Pointer to the Marvell QoS configuration structure.
- * @returns 0 in case of success, negative value otherwise.
- */
-static int
-get_outq_cfg(struct rte_cfgfile *file, int port, int outq,
-		struct mrvl_qos_cfg *cfg)
-{
-	char sec_name[32];
-	const char *entry;
-	uint32_t val;
-
-	snprintf(sec_name, sizeof(sec_name), "%s %d %s %d",
-		MRVL_TOK_PORT, port, MRVL_TOK_TXQ, outq);
-
-	/* Skip non-existing */
-	if (rte_cfgfile_num_sections(file, sec_name, strlen(sec_name)) <= 0)
-		return 0;
-
-	/* Read scheduling mode */
-	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_SCHED_MODE);
-	if (entry) {
-		if (!strncmp(entry, MRVL_TOK_SCHED_MODE_SP,
-					strlen(MRVL_TOK_SCHED_MODE_SP))) {
-			cfg->port[port].outq[outq].sched_mode =
-				PP2_PPIO_SCHED_M_SP;
-		} else if (!strncmp(entry, MRVL_TOK_SCHED_MODE_WRR,
-					strlen(MRVL_TOK_SCHED_MODE_WRR))) {
-			cfg->port[port].outq[outq].sched_mode =
-				PP2_PPIO_SCHED_M_WRR;
-		} else {
-			RTE_LOG(ERR, PMD, "Unknown token: %s\n", entry);
-			return -1;
-		}
-	}
-
-	/* Read wrr weight */
-	if (cfg->port[port].outq[outq].sched_mode == PP2_PPIO_SCHED_M_WRR) {
-		entry = rte_cfgfile_get_entry(file, sec_name,
-				MRVL_TOK_WRR_WEIGHT);
-		if (entry) {
-			if (get_val_securely(entry, &val) < 0)
-				return -1;
-			cfg->port[port].outq[outq].weight = val;
-		}
-	}
-
-	/*
-	 * There's no point in setting rate limiting for a specific outq as
-	 * global port rate limiting has priority.
-	 */
-	if (cfg->port[port].rate_limit_enable) {
-		RTE_LOG(WARNING, PMD, "Port %d rate limiting already enabled\n",
-			port);
-		return 0;
-	}
-
-	entry = rte_cfgfile_get_entry(file, sec_name,
-			MRVL_TOK_RATE_LIMIT_ENABLE);
-	if (entry) {
-		if (get_val_securely(entry, &val) < 0)
-			return -1;
-		cfg->port[port].outq[outq].rate_limit_enable = val;
-	}
-
-	if (!cfg->port[port].outq[outq].rate_limit_enable)
-		return 0;
-
-	/* Read CBS (in kB) */
-	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_BURST_SIZE);
-	if (entry) {
-		if (get_val_securely(entry, &val) < 0)
-			return -1;
-		cfg->port[port].outq[outq].rate_limit_params.cbs = val;
-	}
-
-	/* Read CIR (in kbps) */
-	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_RATE_LIMIT);
-	if (entry) {
-		if (get_val_securely(entry, &val) < 0)
-			return -1;
-		cfg->port[port].outq[outq].rate_limit_params.cir = val;
-	}
-
-	return 0;
-}
-
-/**
- * Gets multiple-entry values and places them in a table.
- *
- * An entry can be anything, e.g. "1 2-3 5 6 7-9". This needs to be converted
- * to table entries, respectively: {1, 2, 3, 5, 6, 7, 8, 9}.
- * As all result table elements are always 1 byte long, we won't
- * overcomplicate the function; we keep the API generic, verify that the
- * element size hasn't changed, and keep it simple to extend to other sizes.
- *
- * This function is a pure utility: it does not print any errors, it only
- * returns distinct error codes.
- *
- * @param entry[in] Values string to parse.
- * @param tab[out] Results table.
- * @param elem_sz[in] Element size (in bytes).
- * @param max_elems[in] Number of results table elements available.
- * @param max_val[in] Maximum value allowed.
- * @returns Number of correctly parsed elements in case of success.
- * @retval -1 Wrong element size.
- * @retval -2 More tokens than result table allows.
- * @retval -3 Wrong range syntax.
- * @retval -4 Wrong range values.
- * @retval -5 Maximum value exceeded.
- */
-static int
-get_entry_values(const char *entry, uint8_t *tab,
-	size_t elem_sz, uint8_t max_elems, uint8_t max_val)
-{
-	/* There should not be more tokens than max elements.
-	 * Add 1 for error trap.
-	 */
-	char *tokens[max_elems + 1];
-
-	/* Begin, End + error trap = 3. */
-	char *rng_tokens[MAX_RNG_TOKENS + 1];
-	long beg, end;
-	uint32_t token_val;
-	int nb_tokens, nb_rng_tokens;
-	int i;
-	int values = 0;
-	char val;
-	char entry_cpy[CFG_VALUE_LEN];
-
-	if (elem_sz != 1)
-		return -1;
-
-	/* Copy the entry to safely use rte_strsplit(). */
-	snprintf(entry_cpy, RTE_DIM(entry_cpy), "%s", entry);
-
-	/*
-	 * If there are more tokens than the array size, rte_strsplit will
-	 * not return an error, just the array size.
-	 */
-	nb_tokens = rte_strsplit(entry_cpy, strlen(entry_cpy),
-		tokens, max_elems + 1, ' ');
-
-	/* Quick check, will be refined later. */
-	if (nb_tokens > max_elems)
-		return -2;
-
-	for (i = 0; i < nb_tokens; ++i) {
-		if (strchr(tokens[i], '-') != NULL) {
-			/*
-			 * Split to begin and end tokens.
-			 * We want to catch error cases too, thus we leave
-			 * option for number of tokens to be more than 2.
-			 */
-			nb_rng_tokens = rte_strsplit(tokens[i],
-					strlen(tokens[i]), rng_tokens,
-					RTE_DIM(rng_tokens), '-');
-			if (nb_rng_tokens != 2)
-				return -3;
-
-			/* Range and sanity checks. */
-			if (get_val_securely(rng_tokens[0], &token_val) < 0)
-				return -4;
-			beg = (char)token_val;
-			if (get_val_securely(rng_tokens[1], &token_val) < 0)
-				return -4;
-			end = (char)token_val;
-			if (beg < 0 || beg > UCHAR_MAX ||
-				end < 0 || end > UCHAR_MAX || end < beg)
-				return -4;
-
-			for (val = beg; val <= end; ++val) {
-				if (val > max_val)
-					return -5;
-
-				*tab = val;
-				tab = RTE_PTR_ADD(tab, elem_sz);
-				++values;
-				if (values >= max_elems)
-					return -2;
-			}
-		} else {
-			/* Single values. */
-			if (get_val_securely(tokens[i], &token_val) < 0)
-				return -5;
-			val = (char)token_val;
-			if (val > max_val)
-				return -5;
-
-			*tab = val;
-			tab = RTE_PTR_ADD(tab, elem_sz);
-			++values;
-			if (values >= max_elems)
-				return -2;
-		}
-	}
-
-	return values;
-}
-
-/**
- * Parse Traffic Class mapping configuration.
- *
- * @param file Config file handle.
- * @param port Which port to look for.
- * @param tc Which Traffic Class to look for.
- * @param cfg[out] Parsing results.
- * @returns 0 in case of success, negative value otherwise.
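 *
 * A minimal section this parser accepts, for illustration (values are
 * made-up examples; key names come from the MRVL_TOK_* tokens above):
 *
 *   [port 0 tc 0]
 *   rxq = 0 1
 *   pcp = 0 1 2-3
 *   dscp = 0-15
 *   default_color = green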
- */ -static int -parse_tc_cfg(struct rte_cfgfile *file, int port, int tc, - struct mrvl_qos_cfg *cfg) -{ - char sec_name[32]; - const char *entry; - int n; - - snprintf(sec_name, sizeof(sec_name), "%s %d %s %d", - MRVL_TOK_PORT, port, MRVL_TOK_TC, tc); - - /* Skip non-existing */ - if (rte_cfgfile_num_sections(file, sec_name, strlen(sec_name)) <= 0) - return 0; - - entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_RXQ); - if (entry) { - n = get_entry_values(entry, - cfg->port[port].tc[tc].inq, - sizeof(cfg->port[port].tc[tc].inq[0]), - RTE_DIM(cfg->port[port].tc[tc].inq), - MRVL_PP2_RXQ_MAX); - if (n < 0) { - RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", - n, entry); - return n; - } - cfg->port[port].tc[tc].inqs = n; - } - - entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_PCP); - if (entry) { - n = get_entry_values(entry, - cfg->port[port].tc[tc].pcp, - sizeof(cfg->port[port].tc[tc].pcp[0]), - RTE_DIM(cfg->port[port].tc[tc].pcp), - MAX_PCP); - if (n < 0) { - RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", - n, entry); - return n; - } - cfg->port[port].tc[tc].pcps = n; - } - - entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_DSCP); - if (entry) { - n = get_entry_values(entry, - cfg->port[port].tc[tc].dscp, - sizeof(cfg->port[port].tc[tc].dscp[0]), - RTE_DIM(cfg->port[port].tc[tc].dscp), - MAX_DSCP); - if (n < 0) { - RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", - n, entry); - return n; - } - cfg->port[port].tc[tc].dscps = n; - } - - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_PLCR_DEFAULT_COLOR); - if (entry) { - if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN, - sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN))) { - cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_GREEN; - } else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW, - sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW))) { - cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_YELLOW; - } else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_RED, - sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_RED))) { - cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_RED; - } else { - RTE_LOG(ERR, PMD, "Error while parsing: %s\n", entry); - return -1; - } - } - - return 0; -} - -/** - * Parse QoS configuration - rte_kvargs_process handler. - * - * Opens configuration file and parses its content. - * - * @param key Unused. - * @param path Path to config file. - * @param extra_args Pointer to configuration structure. - * @returns 0 in case of success, exits otherwise. - */ -int -mrvl_get_qoscfg(const char *key __rte_unused, const char *path, - void *extra_args) -{ - struct mrvl_qos_cfg **cfg = extra_args; - struct rte_cfgfile *file = rte_cfgfile_load(path, 0); - uint32_t val; - int n, i, ret; - const char *entry; - char sec_name[32]; - - if (file == NULL) - rte_exit(EXIT_FAILURE, "Cannot load configuration %s\n", path); - - /* Create configuration. This is never accessed on the fast path, - * so we can ignore socket. - */ - *cfg = rte_zmalloc("mrvl_qos_cfg", sizeof(struct mrvl_qos_cfg), 0); - if (*cfg == NULL) - rte_exit(EXIT_FAILURE, "Cannot allocate configuration %s\n", - path); - - n = rte_cfgfile_num_sections(file, MRVL_TOK_PORT, - sizeof(MRVL_TOK_PORT) - 1); - - if (n == 0) { - /* This is weird, but not bad. */ - RTE_LOG(WARNING, PMD, "Empty configuration file?\n"); - return 0; - } - - /* Use the number of ports given as vdev parameters. 
*/ - for (n = 0; n < (PP2_NUM_ETH_PPIO * PP2_NUM_PKT_PROC); ++n) { - snprintf(sec_name, sizeof(sec_name), "%s %d %s", - MRVL_TOK_PORT, n, MRVL_TOK_DEFAULT); - - /* Skip ports non-existing in configuration. */ - if (rte_cfgfile_num_sections(file, sec_name, - strlen(sec_name)) <= 0) { - (*cfg)->port[n].use_global_defaults = 1; - (*cfg)->port[n].mapping_priority = - PP2_CLS_QOS_TBL_VLAN_IP_PRI; - continue; - } - - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_DEFAULT_TC); - if (entry) { - if (get_val_securely(entry, &val) < 0 || - val > USHRT_MAX) - return -1; - (*cfg)->port[n].default_tc = (uint8_t)val; - } else { - RTE_LOG(ERR, PMD, - "Default Traffic Class required in custom configuration!\n"); - return -1; - } - - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_PLCR_ENABLE); - if (entry) { - if (get_val_securely(entry, &val) < 0) - return -1; - (*cfg)->port[n].policer_enable = val; - } - - if ((*cfg)->port[n].policer_enable) { - enum pp2_cls_plcr_token_unit unit; - - /* Read policer token unit */ - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_PLCR_UNIT); - if (entry) { - if (!strncmp(entry, MRVL_TOK_PLCR_UNIT_BYTES, - sizeof(MRVL_TOK_PLCR_UNIT_BYTES))) { - unit = PP2_CLS_PLCR_BYTES_TOKEN_UNIT; - } else if (!strncmp(entry, - MRVL_TOK_PLCR_UNIT_PACKETS, - sizeof(MRVL_TOK_PLCR_UNIT_PACKETS))) { - unit = PP2_CLS_PLCR_PACKETS_TOKEN_UNIT; - } else { - RTE_LOG(ERR, PMD, "Unknown token: %s\n", - entry); - return -1; - } - (*cfg)->port[n].policer_params.token_unit = - unit; - } - - /* Read policer color mode */ - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_PLCR_COLOR); - if (entry) { - enum pp2_cls_plcr_color_mode mode; - - if (!strncmp(entry, MRVL_TOK_PLCR_COLOR_BLIND, - sizeof(MRVL_TOK_PLCR_COLOR_BLIND))) { - mode = PP2_CLS_PLCR_COLOR_BLIND_MODE; - } else if (!strncmp(entry, - MRVL_TOK_PLCR_COLOR_AWARE, - sizeof(MRVL_TOK_PLCR_COLOR_AWARE))) { - mode = PP2_CLS_PLCR_COLOR_AWARE_MODE; - } else { - RTE_LOG(ERR, PMD, - "Error in parsing: %s\n", - entry); - return -1; - } - (*cfg)->port[n].policer_params.color_mode = - mode; - } - - /* Read policer cir */ - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_PLCR_CIR); - if (entry) { - if (get_val_securely(entry, &val) < 0) - return -1; - (*cfg)->port[n].policer_params.cir = val; - } - - /* Read policer cbs */ - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_PLCR_CBS); - if (entry) { - if (get_val_securely(entry, &val) < 0) - return -1; - (*cfg)->port[n].policer_params.cbs = val; - } - - /* Read policer ebs */ - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_PLCR_EBS); - if (entry) { - if (get_val_securely(entry, &val) < 0) - return -1; - (*cfg)->port[n].policer_params.ebs = val; - } - } - - /* - * Read per-port rate limiting. Setting that will - * disable per-queue rate limiting. 
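		 *
		 * For illustration, a "port <n> default" section fragment
		 * that turns it on (made-up values; burst_size is in kB,
		 * rate_limit in kbps, per get_outq_cfg() above):
		 *
		 *   rate_limit_enable = 1
		 *   burst_size = 64
		 *   rate_limit = 1000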
- */ - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_RATE_LIMIT_ENABLE); - if (entry) { - if (get_val_securely(entry, &val) < 0) - return -1; - (*cfg)->port[n].rate_limit_enable = val; - } - - if ((*cfg)->port[n].rate_limit_enable) { - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_BURST_SIZE); - if (entry) { - if (get_val_securely(entry, &val) < 0) - return -1; - (*cfg)->port[n].rate_limit_params.cbs = val; - } - - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_RATE_LIMIT); - if (entry) { - if (get_val_securely(entry, &val) < 0) - return -1; - (*cfg)->port[n].rate_limit_params.cir = val; - } - } - - entry = rte_cfgfile_get_entry(file, sec_name, - MRVL_TOK_MAPPING_PRIORITY); - if (entry) { - if (!strncmp(entry, MRVL_TOK_VLAN_IP, - sizeof(MRVL_TOK_VLAN_IP))) - (*cfg)->port[n].mapping_priority = - PP2_CLS_QOS_TBL_VLAN_IP_PRI; - else if (!strncmp(entry, MRVL_TOK_IP_VLAN, - sizeof(MRVL_TOK_IP_VLAN))) - (*cfg)->port[n].mapping_priority = - PP2_CLS_QOS_TBL_IP_VLAN_PRI; - else if (!strncmp(entry, MRVL_TOK_IP, - sizeof(MRVL_TOK_IP))) - (*cfg)->port[n].mapping_priority = - PP2_CLS_QOS_TBL_IP_PRI; - else if (!strncmp(entry, MRVL_TOK_VLAN, - sizeof(MRVL_TOK_VLAN))) - (*cfg)->port[n].mapping_priority = - PP2_CLS_QOS_TBL_VLAN_PRI; - else - rte_exit(EXIT_FAILURE, - "Error in parsing %s value (%s)!\n", - MRVL_TOK_MAPPING_PRIORITY, entry); - } else { - (*cfg)->port[n].mapping_priority = - PP2_CLS_QOS_TBL_VLAN_IP_PRI; - } - - for (i = 0; i < MRVL_PP2_RXQ_MAX; ++i) { - ret = get_outq_cfg(file, n, i, *cfg); - if (ret < 0) - rte_exit(EXIT_FAILURE, - "Error %d parsing port %d outq %d!\n", - ret, n, i); - } - - for (i = 0; i < MRVL_PP2_TC_MAX; ++i) { - ret = parse_tc_cfg(file, n, i, *cfg); - if (ret < 0) - rte_exit(EXIT_FAILURE, - "Error %d parsing port %d tc %d!\n", - ret, n, i); - } - } - - return 0; -} - -/** - * Setup Traffic Class. - * - * Fill in TC parameters in single MUSDK TC config entry. - * @param param TC parameters entry. - * @param inqs Number of MUSDK in-queues in this TC. - * @param bpool Bpool for this TC. - * @param color Default color for this TC. - * @returns 0 in case of success, exits otherwise. - */ -static int -setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs, - struct pp2_bpool *bpool, enum pp2_ppio_color color) -{ - struct pp2_ppio_inq_params *inq_params; - - param->pkt_offset = MRVL_PKT_OFFS; - param->pools[0] = bpool; - param->default_color = color; - - inq_params = rte_zmalloc_socket("inq_params", - inqs * sizeof(*inq_params), - 0, rte_socket_id()); - if (!inq_params) - return -ENOMEM; - - param->num_in_qs = inqs; - - /* Release old config if necessary. */ - if (param->inqs_params) - rte_free(param->inqs_params); - - param->inqs_params = inq_params; - - return 0; -} - -/** - * Setup ingress policer. - * - * @param priv Port's private data. - * @param params Pointer to the policer's configuration. - * @returns 0 in case of success, negative values otherwise. - */ -static int -setup_policer(struct mrvl_priv *priv, struct pp2_cls_plcr_params *params) -{ - char match[16]; - int ret; - - snprintf(match, sizeof(match), "policer-%d:%d\n", - priv->pp_id, priv->ppio_id); - params->match = match; - - ret = pp2_cls_plcr_init(params, &priv->policer); - if (ret) { - RTE_LOG(ERR, PMD, "Failed to setup %s\n", match); - return -1; - } - - priv->ppio_params.inqs_params.plcr = priv->policer; - - return 0; -} - -/** - * Configure RX Queues in a given port. - * - * Sets up RX queues, their Traffic Classes and DPDK rxq->(TC,inq) mapping. 
- *
- * @param priv Port's private data
- * @param portid DPDK port ID
- * @param max_queues Maximum number of queues to configure.
- * @returns 0 in case of success, negative value otherwise.
- */
-int
-mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
-		uint16_t max_queues)
-{
-	size_t i, tc;
-
-	if (mrvl_qos_cfg == NULL ||
-		mrvl_qos_cfg->port[portid].use_global_defaults) {
-		/*
-		 * No port configuration, use default: 1 TC, no QoS,
-		 * TC color set to green.
-		 */
-		priv->ppio_params.inqs_params.num_tcs = 1;
-		setup_tc(&priv->ppio_params.inqs_params.tcs_params[0],
-			max_queues, priv->bpool, PP2_PPIO_COLOR_GREEN);
-
-		/* Direct mapping of queues i.e. 0->0, 1->1 etc. */
-		for (i = 0; i < max_queues; ++i) {
-			priv->rxq_map[i].tc = 0;
-			priv->rxq_map[i].inq = i;
-		}
-		return 0;
-	}
-
-	/* We need only a subset of configuration. */
-	struct port_cfg *port_cfg = &mrvl_qos_cfg->port[portid];
-
-	priv->qos_tbl_params.type = port_cfg->mapping_priority;
-
-	/*
-	 * We need to reverse the mapping, from tc->pcp (better from the
-	 * usability point of view) to pcp->tc (configurable in MUSDK).
-	 * First, set all map elements to "default".
-	 */
-	for (i = 0; i < RTE_DIM(priv->qos_tbl_params.pcp_cos_map); ++i)
-		priv->qos_tbl_params.pcp_cos_map[i].tc = port_cfg->default_tc;
-
-	/* Then, fill in all known values. */
-	for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) {
-		if (port_cfg->tc[tc].pcps > RTE_DIM(port_cfg->tc[0].pcp)) {
-			/* Better safe than sorry. */
-			RTE_LOG(ERR, PMD,
-				"Too many PCPs configured in TC %zu!\n", tc);
-			return -1;
-		}
-		for (i = 0; i < port_cfg->tc[tc].pcps; ++i) {
-			priv->qos_tbl_params.pcp_cos_map[
-				port_cfg->tc[tc].pcp[i]].tc = tc;
-		}
-	}
-
-	/*
-	 * The same logic goes with DSCP.
-	 * First, set all map elements to "default".
-	 */
-	for (i = 0; i < RTE_DIM(priv->qos_tbl_params.dscp_cos_map); ++i)
-		priv->qos_tbl_params.dscp_cos_map[i].tc =
-			port_cfg->default_tc;
-
-	/* Fill in all known values. */
-	for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) {
-		if (port_cfg->tc[tc].dscps > RTE_DIM(port_cfg->tc[0].dscp)) {
-			/* Better safe than sorry. */
-			RTE_LOG(ERR, PMD,
-				"Too many DSCPs configured in TC %zu!\n", tc);
-			return -1;
-		}
-		for (i = 0; i < port_cfg->tc[tc].dscps; ++i) {
-			priv->qos_tbl_params.dscp_cos_map[
-				port_cfg->tc[tc].dscp[i]].tc = tc;
-		}
-	}
-
-	/*
-	 * Surprisingly, similar logic goes with queue mapping.
-	 * We need only to store the qid->tc mapping,
-	 * to know the TC when a queue is read.
-	 */
-	for (i = 0; i < RTE_DIM(priv->rxq_map); ++i)
-		priv->rxq_map[i].tc = MRVL_UNKNOWN_TC;
-
-	/* Set up DPDKq->(TC,inq) mapping. */
-	for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) {
-		if (port_cfg->tc[tc].inqs > RTE_DIM(port_cfg->tc[0].inq)) {
-			/* Overflow. */
-			RTE_LOG(ERR, PMD,
-				"Too many RX queues configured per TC %zu!\n",
-				tc);
-			return -1;
-		}
-		for (i = 0; i < port_cfg->tc[tc].inqs; ++i) {
-			uint8_t idx = port_cfg->tc[tc].inq[i];
-
-			if (idx >= RTE_DIM(priv->rxq_map)) {
-				RTE_LOG(ERR, PMD, "Bad queue index %d!\n", idx);
-				return -1;
-			}
-
-			priv->rxq_map[idx].tc = tc;
-			priv->rxq_map[idx].inq = i;
-		}
-	}
-
-	/*
-	 * Set up TC configuration. TCs need to be sequenced: 0, 1, 2
-	 * with no gaps. Empty TC means end of processing.
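	 *
	 * For illustration, if the parsed config left inqs per TC as
	 * {2, 2, 0, 1}, only TCs 0 and 1 are programmed below; the TC
	 * after the gap is silently ignored:
	 *
	 *   tc[0].inqs = 2; tc[1].inqs = 2; tc[2].inqs = 0; tc[3].inqs = 1;
	 *   loop breaks at i == 2, so num_tcs ends up as 2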
- */ - for (i = 0; i < MRVL_PP2_TC_MAX; ++i) { - if (port_cfg->tc[i].inqs == 0) - break; - setup_tc(&priv->ppio_params.inqs_params.tcs_params[i], - port_cfg->tc[i].inqs, - priv->bpool, port_cfg->tc[i].color); - } - - priv->ppio_params.inqs_params.num_tcs = i; - - if (port_cfg->policer_enable) - return setup_policer(priv, &port_cfg->policer_params); - - return 0; -} - -/** - * Configure TX Queues in a given port. - * - * Sets up TX queues egress scheduler and limiter. - * - * @param priv Port's private data - * @param portid DPDK port ID - * @param max_queues Maximum number of queues to configure. - * @returns 0 in case of success, negative value otherwise. - */ -int -mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid, - uint16_t max_queues) -{ - /* We need only a subset of configuration. */ - struct port_cfg *port_cfg = &mrvl_qos_cfg->port[portid]; - int i; - - if (mrvl_qos_cfg == NULL) - return 0; - - priv->ppio_params.rate_limit_enable = port_cfg->rate_limit_enable; - if (port_cfg->rate_limit_enable) - priv->ppio_params.rate_limit_params = - port_cfg->rate_limit_params; - - for (i = 0; i < max_queues; i++) { - struct pp2_ppio_outq_params *params = - &priv->ppio_params.outqs_params.outqs_params[i]; - - params->sched_mode = port_cfg->outq[i].sched_mode; - params->weight = port_cfg->outq[i].weight; - params->rate_limit_enable = port_cfg->outq[i].rate_limit_enable; - params->rate_limit_params = port_cfg->outq[i].rate_limit_params; - } - - return 0; -} - -/** - * Start QoS mapping. - * - * Finalize QoS table configuration and initialize it in SDK. It can be done - * only after port is started, so we have a valid ppio reference. - * - * @param priv Port's private (configuration) data. - * @returns 0 in case of success, exits otherwise. - */ -int -mrvl_start_qos_mapping(struct mrvl_priv *priv) -{ - size_t i; - - if (priv->ppio == NULL) { - RTE_LOG(ERR, PMD, "ppio must not be NULL here!\n"); - return -1; - } - - for (i = 0; i < RTE_DIM(priv->qos_tbl_params.pcp_cos_map); ++i) - priv->qos_tbl_params.pcp_cos_map[i].ppio = priv->ppio; - - for (i = 0; i < RTE_DIM(priv->qos_tbl_params.dscp_cos_map); ++i) - priv->qos_tbl_params.dscp_cos_map[i].ppio = priv->ppio; - - /* Initialize Classifier QoS table. */ - - return pp2_cls_qos_tbl_init(&priv->qos_tbl_params, &priv->qos_tbl); -} diff --git a/drivers/net/mrvl/mrvl_qos.h b/drivers/net/mrvl/mrvl_qos.h deleted file mode 100644 index fa9ddecb86..0000000000 --- a/drivers/net/mrvl/mrvl_qos.h +++ /dev/null @@ -1,107 +0,0 @@ -/* SPDX-License-Identifier: BSD-3-Clause - * Copyright(c) 2017 Marvell International Ltd. - * Copyright(c) 2017 Semihalf. - * All rights reserved. - */ - -#ifndef _MRVL_QOS_H_ -#define _MRVL_QOS_H_ - -#include - -#include "mrvl_ethdev.h" - -/** Code Points per Traffic Class. Equals max(DSCP, PCP). */ -#define MRVL_CP_PER_TC (64) - -/** Value used as "unknown". */ -#define MRVL_UNKNOWN_TC (0xFF) - -/* QoS config. 
*/ -struct mrvl_qos_cfg { - struct port_cfg { - int rate_limit_enable; - struct pp2_ppio_rate_limit_params rate_limit_params; - struct { - uint8_t inq[MRVL_PP2_RXQ_MAX]; - uint8_t dscp[MRVL_CP_PER_TC]; - uint8_t pcp[MRVL_CP_PER_TC]; - uint8_t inqs; - uint8_t dscps; - uint8_t pcps; - enum pp2_ppio_color color; - } tc[MRVL_PP2_TC_MAX]; - struct { - enum pp2_ppio_outq_sched_mode sched_mode; - uint8_t weight; - int rate_limit_enable; - struct pp2_ppio_rate_limit_params rate_limit_params; - } outq[MRVL_PP2_RXQ_MAX]; - enum pp2_cls_qos_tbl_type mapping_priority; - uint16_t inqs; - uint16_t outqs; - uint8_t default_tc; - uint8_t use_global_defaults; - struct pp2_cls_plcr_params policer_params; - uint8_t policer_enable; - } port[RTE_MAX_ETHPORTS]; -}; - -/** Global QoS configuration. */ -extern struct mrvl_qos_cfg *mrvl_qos_cfg; - -/** - * Parse QoS configuration - rte_kvargs_process handler. - * - * Opens configuration file and parses its content. - * - * @param key Unused. - * @param path Path to config file. - * @param extra_args Pointer to configuration structure. - * @returns 0 in case of success, exits otherwise. - */ -int -mrvl_get_qoscfg(const char *key __rte_unused, const char *path, - void *extra_args); - -/** - * Configure RX Queues in a given port. - * - * Sets up RX queues, their Traffic Classes and DPDK rxq->(TC,inq) mapping. - * - * @param priv Port's private data - * @param portid DPDK port ID - * @param max_queues Maximum number of queues to configure. - * @returns 0 in case of success, negative value otherwise. - */ -int -mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid, - uint16_t max_queues); - -/** - * Configure TX Queues in a given port. - * - * Sets up TX queues egress scheduler and limiter. - * - * @param priv Port's private data - * @param portid DPDK port ID - * @param max_queues Maximum number of queues to configure. - * @returns 0 in case of success, negative value otherwise. - */ -int -mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid, - uint16_t max_queues); - -/** - * Start QoS mapping. - * - * Finalize QoS table configuration and initialize it in SDK. It can be done - * only after port is started, so we have a valid ppio reference. - * - * @param priv Port's private (configuration) data. - * @returns 0 in case of success, exits otherwise. - */ -int -mrvl_start_qos_mapping(struct mrvl_priv *priv); - -#endif /* _MRVL_QOS_H_ */ diff --git a/drivers/net/mrvl/rte_pmd_mrvl_version.map b/drivers/net/mrvl/rte_pmd_mrvl_version.map deleted file mode 100644 index a753031720..0000000000 --- a/drivers/net/mrvl/rte_pmd_mrvl_version.map +++ /dev/null @@ -1,3 +0,0 @@ -DPDK_17.11 { - local: *; -}; diff --git a/drivers/net/mvpp2/Makefile b/drivers/net/mvpp2/Makefile new file mode 100644 index 0000000000..2383ec18cc --- /dev/null +++ b/drivers/net/mvpp2/Makefile @@ -0,0 +1,42 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2017 Marvell International Ltd. +# Copyright(c) 2017 Semihalf. +# All rights reserved. 
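#
# Build note, illustrative only (paths and target are examples): MUSDK must
# be present and the PMD enabled before this library is built, e.g.:
#
#   export LIBMUSDK_PATH=/usr/local/musdk
#   set CONFIG_RTE_LIBRTE_MVPP2_PMD=y in config/common_base, then:
#   make config T=arm64-armv8a-linuxapp-gcc && make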
+
+include $(RTE_SDK)/mk/rte.vars.mk
+
+ifneq ($(MAKECMDGOALS),clean)
+ifneq ($(MAKECMDGOALS),config)
+ifeq ($(LIBMUSDK_PATH),)
+$(error "Please define LIBMUSDK_PATH environment variable")
+endif
+endif
+endif
+
+# library name
+LIB = librte_pmd_mvpp2.a
+
+# library version
+LIBABIVER := 1
+
+# versioning export map
+EXPORT_MAP := rte_pmd_mvpp2_version.map
+
+# external library dependencies
+CFLAGS += -I$(LIBMUSDK_PATH)/include
+CFLAGS += -DMVCONF_TYPES_PUBLIC
+CFLAGS += -DMVCONF_DMA_PHYS_ADDR_T_PUBLIC
+CFLAGS += $(WERROR_FLAGS)
+CFLAGS += -O3
+LDLIBS += -L$(LIBMUSDK_PATH)/lib
+LDLIBS += -lmusdk
+LDLIBS += -lrte_eal -lrte_mbuf -lrte_mempool -lrte_ring
+LDLIBS += -lrte_ethdev -lrte_net -lrte_kvargs -lrte_cfgfile
+LDLIBS += -lrte_bus_vdev
+
+# library source files
+SRCS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += mrvl_ethdev.c
+SRCS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += mrvl_qos.c
+SRCS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += mrvl_flow.c
+
+include $(RTE_SDK)/mk/rte.lib.mk
diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
new file mode 100644
index 0000000000..6ab515ca9e
--- /dev/null
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -0,0 +1,2832 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Marvell International Ltd.
+ * Copyright(c) 2017 Semihalf.
+ * All rights reserved.
+ */
+
+#include <rte_ethdev_driver.h>
+#include <rte_kvargs.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_bus_vdev.h>
+
+/* Unluckily, container_of is defined by both DPDK and MUSDK,
+ * we'll declare only one version.
+ *
+ * Note that it is not used in this PMD anyway.
+ */
+#ifdef container_of
+#undef container_of
+#endif
+
+#include <fcntl.h>
+#include <linux/ethtool.h>
+#include <linux/sockios.h>
+#include <net/if.h>
+#include <net/if_arp.h>
+#include <sys/ioctl.h>
+#include <sys/socket.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+
+#include "mrvl_ethdev.h"
+#include "mrvl_qos.h"
+
+/* bitmask with reserved hifs */
+#define MRVL_MUSDK_HIFS_RESERVED 0x0F
+/* bitmask with reserved bpools */
+#define MRVL_MUSDK_BPOOLS_RESERVED 0x07
+/* bitmask with reserved kernel RSS tables */
+#define MRVL_MUSDK_RSS_RESERVED 0x01
+/* maximum number of available hifs */
+#define MRVL_MUSDK_HIFS_MAX 9
+
+/* prefetch shift */
+#define MRVL_MUSDK_PREFETCH_SHIFT 2
+
+/* TCAM has 25 entries reserved for uc/mc filter entries */
+#define MRVL_MAC_ADDRS_MAX 25
+#define MRVL_MATCH_LEN 16
+#define MRVL_PKT_EFFEC_OFFS (MRVL_PKT_OFFS + MV_MH_SIZE)
+/* Maximum allowable packet size */
+#define MRVL_PKT_SIZE_MAX (10240 - MV_MH_SIZE)
+
+#define MRVL_IFACE_NAME_ARG "iface"
+#define MRVL_CFG_ARG "cfg"
+
+#define MRVL_BURST_SIZE 64
+
+#define MRVL_ARP_LENGTH 28
+
+#define MRVL_COOKIE_ADDR_INVALID ~0ULL
+
+#define MRVL_COOKIE_HIGH_ADDR_SHIFT	(sizeof(pp2_cookie_t) * 8)
+#define MRVL_COOKIE_HIGH_ADDR_MASK	(~0ULL << MRVL_COOKIE_HIGH_ADDR_SHIFT)
+
+/* Memory size (in bytes) for MUSDK dma buffers */
+#define MRVL_MUSDK_DMA_MEMSIZE 41943040
+
+/** Port Rx offload capabilities */
+#define MRVL_RX_OFFLOADS (DEV_RX_OFFLOAD_VLAN_FILTER | \
+			  DEV_RX_OFFLOAD_JUMBO_FRAME | \
+			  DEV_RX_OFFLOAD_CRC_STRIP | \
+			  DEV_RX_OFFLOAD_CHECKSUM)
+
+/** Port Tx offloads capabilities */
+#define MRVL_TX_OFFLOADS (DEV_TX_OFFLOAD_IPV4_CKSUM | \
+			  DEV_TX_OFFLOAD_UDP_CKSUM | \
+			  DEV_TX_OFFLOAD_TCP_CKSUM)
+
+static const char * const valid_args[] = {
+	MRVL_IFACE_NAME_ARG,
+	MRVL_CFG_ARG,
+	NULL
+};
+
+static int used_hifs = MRVL_MUSDK_HIFS_RESERVED;
+static struct pp2_hif *hifs[RTE_MAX_LCORE];
+static int used_bpools[PP2_NUM_PKT_PROC] = {
+	MRVL_MUSDK_BPOOLS_RESERVED,
+	MRVL_MUSDK_BPOOLS_RESERVED
+};
+
+struct pp2_bpool *mrvl_port_to_bpool_lookup[RTE_MAX_ETHPORTS];
+int mrvl_port_bpool_size[PP2_NUM_PKT_PROC][PP2_BPOOL_NUM_POOLS][RTE_MAX_LCORE];
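/*
 * Illustrative note on the cookie scheme used throughout this file: a pp2
 * cookie holds only the low sizeof(pp2_cookie_t) * 8 bits of an mbuf's
 * virtual address, so the common high bits are remembered once in
 * cookie_addr_high (below) and full pointers are rebuilt as in this sketch:
 *
 *   uint64_t addr = cookie_addr_high | inf.cookie;
 *   struct rte_mbuf *mbuf = (struct rte_mbuf *)addr;
 */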
+uint64_t cookie_addr_high = MRVL_COOKIE_ADDR_INVALID;
+
+struct mrvl_ifnames {
+	const char *names[PP2_NUM_ETH_PPIO * PP2_NUM_PKT_PROC];
+	int idx;
+};
+
+/*
+ * To support buffer harvesting based on the loopback port, a shadow queue
+ * structure was introduced for buffer information bookkeeping.
+ *
+ * Before a packet is sent, the related buffer information (pp2_buff_inf)
+ * is stored in a shadow queue. After the packet is transmitted, the no
+ * longer used buffer is released back to its original hardware pool,
+ * provided it originated from an interface. If it was generated by the
+ * application itself, i.e. the mbuf->port field is 0xff, it is released
+ * to the software mempool instead.
+ */
+struct mrvl_shadow_txq {
+	int head;           /* write index - used when sending buffers */
+	int tail;           /* read index - used when releasing buffers */
+	u16 size;           /* queue occupied size */
+	u16 num_to_release; /* number of buffers sent, that can be released */
+	struct buff_release_entry ent[MRVL_PP2_TX_SHADOWQ_SIZE]; /* q entries */
+};
+
+struct mrvl_rxq {
+	struct mrvl_priv *priv;
+	struct rte_mempool *mp;
+	int queue_id;
+	int port_id;
+	int cksum_enabled;
+	uint64_t bytes_recv;
+	uint64_t drop_mac;
+};
+
+struct mrvl_txq {
+	struct mrvl_priv *priv;
+	int queue_id;
+	int port_id;
+	uint64_t bytes_sent;
+	struct mrvl_shadow_txq shadow_txqs[RTE_MAX_LCORE];
+	int tx_deferred_start;
+};
+
+static int mrvl_lcore_first;
+static int mrvl_lcore_last;
+static int mrvl_dev_num;
+
+static int mrvl_fill_bpool(struct mrvl_rxq *rxq, int num);
+static inline void mrvl_free_sent_buffers(struct pp2_ppio *ppio,
+			struct pp2_hif *hif, unsigned int core_id,
+			struct mrvl_shadow_txq *sq, int qid, int force);
+
+#define MRVL_XSTATS_TBL_ENTRY(name) { \
+	#name, offsetof(struct pp2_ppio_statistics, name), \
+	sizeof(((struct pp2_ppio_statistics *)0)->name) \
+}
+
+/* Table with xstats data */
+static struct {
+	const char *name;
+	unsigned int offset;
+	unsigned int size;
+} mrvl_xstats_tbl[] = {
+	MRVL_XSTATS_TBL_ENTRY(rx_bytes),
+	MRVL_XSTATS_TBL_ENTRY(rx_packets),
+	MRVL_XSTATS_TBL_ENTRY(rx_unicast_packets),
+	MRVL_XSTATS_TBL_ENTRY(rx_errors),
+	MRVL_XSTATS_TBL_ENTRY(rx_fullq_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_bm_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_early_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_fifo_dropped),
+	MRVL_XSTATS_TBL_ENTRY(rx_cls_dropped),
+	MRVL_XSTATS_TBL_ENTRY(tx_bytes),
+	MRVL_XSTATS_TBL_ENTRY(tx_packets),
+	MRVL_XSTATS_TBL_ENTRY(tx_unicast_packets),
+	MRVL_XSTATS_TBL_ENTRY(tx_errors)
+};
+
+static inline int
+mrvl_get_bpool_size(int pp2_id, int pool_id)
+{
+	int i;
+	int size = 0;
+
+	for (i = mrvl_lcore_first; i <= mrvl_lcore_last; i++)
+		size += mrvl_port_bpool_size[pp2_id][pool_id][i];
+
+	return size;
+}
+
+static inline int
+mrvl_reserve_bit(int *bitmap, int max)
+{
+	int n = sizeof(*bitmap) * 8 - __builtin_clz(*bitmap);
+
+	if (n >= max)
+		return -1;
+
+	*bitmap |= 1 << n;
+
+	return n;
+}
+
+static int
+mrvl_init_hif(int core_id)
+{
+	struct pp2_hif_params params;
+	char match[MRVL_MATCH_LEN];
+	int ret;
+
+	ret = mrvl_reserve_bit(&used_hifs, MRVL_MUSDK_HIFS_MAX);
+	if (ret < 0) {
+		RTE_LOG(ERR, PMD, "Failed to allocate hif %d\n", core_id);
+		return ret;
+	}
+
+	snprintf(match, sizeof(match), "hif-%d", ret);
+	memset(&params, 0, sizeof(params));
+	params.match = match;
+	params.out_size = MRVL_PP2_AGGR_TXQD_MAX;
+	ret = pp2_hif_init(&params, &hifs[core_id]);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to initialize hif %d\n", core_id);
+		return ret;
+	}
+
+	return 0;
+}
+
+static inline struct pp2_hif*
+mrvl_get_hif(struct mrvl_priv
*priv, int core_id) +{ + int ret; + + if (likely(hifs[core_id] != NULL)) + return hifs[core_id]; + + rte_spinlock_lock(&priv->lock); + + ret = mrvl_init_hif(core_id); + if (ret < 0) { + RTE_LOG(ERR, PMD, "Failed to allocate hif %d\n", core_id); + goto out; + } + + if (core_id < mrvl_lcore_first) + mrvl_lcore_first = core_id; + + if (core_id > mrvl_lcore_last) + mrvl_lcore_last = core_id; +out: + rte_spinlock_unlock(&priv->lock); + + return hifs[core_id]; +} + +/** + * Configure rss based on dpdk rss configuration. + * + * @param priv + * Pointer to private structure. + * @param rss_conf + * Pointer to RSS configuration. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_configure_rss(struct mrvl_priv *priv, struct rte_eth_rss_conf *rss_conf) +{ + if (rss_conf->rss_key) + RTE_LOG(WARNING, PMD, "Changing hash key is not supported\n"); + + if (rss_conf->rss_hf == 0) { + priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE; + } else if (rss_conf->rss_hf & ETH_RSS_IPV4) { + priv->ppio_params.inqs_params.hash_type = + PP2_PPIO_HASH_T_2_TUPLE; + } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) { + priv->ppio_params.inqs_params.hash_type = + PP2_PPIO_HASH_T_5_TUPLE; + priv->rss_hf_tcp = 1; + } else if (rss_conf->rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) { + priv->ppio_params.inqs_params.hash_type = + PP2_PPIO_HASH_T_5_TUPLE; + priv->rss_hf_tcp = 0; + } else { + return -EINVAL; + } + + return 0; +} + +/** + * Ethernet device configuration. + * + * Prepare the driver for a given number of TX and RX queues and + * configure RSS. + * + * @param dev + * Pointer to Ethernet device structure. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_dev_configure(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_NONE && + dev->data->dev_conf.rxmode.mq_mode != ETH_MQ_RX_RSS) { + RTE_LOG(INFO, PMD, "Unsupported rx multi queue mode %d\n", + dev->data->dev_conf.rxmode.mq_mode); + return -EINVAL; + } + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_CRC_STRIP)) { + RTE_LOG(INFO, PMD, + "L2 CRC stripping is always enabled in hw\n"); + dev->data->dev_conf.rxmode.offloads |= DEV_RX_OFFLOAD_CRC_STRIP; + } + + if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_VLAN_STRIP) { + RTE_LOG(INFO, PMD, "VLAN stripping not supported\n"); + return -EINVAL; + } + + if (dev->data->dev_conf.rxmode.split_hdr_size) { + RTE_LOG(INFO, PMD, "Split headers not supported\n"); + return -EINVAL; + } + + if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_SCATTER) { + RTE_LOG(INFO, PMD, "RX Scatter/Gather not supported\n"); + return -EINVAL; + } + + if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) { + RTE_LOG(INFO, PMD, "LRO not supported\n"); + return -EINVAL; + } + + if (dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) + dev->data->mtu = dev->data->dev_conf.rxmode.max_rx_pkt_len - + ETHER_HDR_LEN - ETHER_CRC_LEN; + + ret = mrvl_configure_rxqs(priv, dev->data->port_id, + dev->data->nb_rx_queues); + if (ret < 0) + return ret; + + ret = mrvl_configure_txqs(priv, dev->data->port_id, + dev->data->nb_tx_queues); + if (ret < 0) + return ret; + + priv->ppio_params.outqs_params.num_outqs = dev->data->nb_tx_queues; + priv->ppio_params.maintain_stats = 1; + priv->nb_rx_queues = dev->data->nb_rx_queues; + + if (dev->data->nb_rx_queues == 1 && + dev->data->dev_conf.rxmode.mq_mode == ETH_MQ_RX_RSS) { + RTE_LOG(WARNING, 
PMD, "Disabling hash for 1 rx queue\n"); + priv->ppio_params.inqs_params.hash_type = PP2_PPIO_HASH_T_NONE; + + return 0; + } + + return mrvl_configure_rss(priv, + &dev->data->dev_conf.rx_adv_conf.rss_conf); +} + +/** + * DPDK callback to change the MTU. + * + * Setting the MTU affects hardware MRU (packets larger than the MRU + * will be dropped). + * + * @param dev + * Pointer to Ethernet device structure. + * @param mtu + * New MTU. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) +{ + struct mrvl_priv *priv = dev->data->dev_private; + /* extra MV_MH_SIZE bytes are required for Marvell tag */ + uint16_t mru = mtu + MV_MH_SIZE + ETHER_HDR_LEN + ETHER_CRC_LEN; + int ret; + + if (mtu < ETHER_MIN_MTU || mru > MRVL_PKT_SIZE_MAX) + return -EINVAL; + + if (!priv->ppio) + return 0; + + ret = pp2_ppio_set_mru(priv->ppio, mru); + if (ret) + return ret; + + return pp2_ppio_set_mtu(priv->ppio, mtu); +} + +/** + * DPDK callback to bring the link up. + * + * @param dev + * Pointer to Ethernet device structure. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_dev_set_link_up(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (!priv->ppio) + return -EPERM; + + ret = pp2_ppio_enable(priv->ppio); + if (ret) + return ret; + + /* + * mtu/mru can be updated if pp2_ppio_enable() was called at least once + * as pp2_ppio_enable() changes port->t_mode from default 0 to + * PP2_TRAFFIC_INGRESS_EGRESS. + * + * Set mtu to default DPDK value here. + */ + ret = mrvl_mtu_set(dev, dev->data->mtu); + if (ret) + pp2_ppio_disable(priv->ppio); + + return ret; +} + +/** + * DPDK callback to bring the link down. + * + * @param dev + * Pointer to Ethernet device structure. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_dev_set_link_down(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + + if (!priv->ppio) + return -EPERM; + + return pp2_ppio_disable(priv->ppio); +} + +/** + * DPDK callback to start tx queue. + * + * @param dev + * Pointer to Ethernet device structure. + * @param queue_id + * Transmit queue index. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_tx_queue_start(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (!priv) + return -EPERM; + + /* passing 1 enables given tx queue */ + ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 1); + if (ret) { + RTE_LOG(ERR, PMD, "Failed to start txq %d\n", queue_id); + return ret; + } + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STARTED; + + return 0; +} + +/** + * DPDK callback to stop tx queue. + * + * @param dev + * Pointer to Ethernet device structure. + * @param queue_id + * Transmit queue index. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_tx_queue_stop(struct rte_eth_dev *dev, uint16_t queue_id) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (!priv->ppio) + return -EPERM; + + /* passing 0 disables given tx queue */ + ret = pp2_ppio_set_outq_state(priv->ppio, queue_id, 0); + if (ret) { + RTE_LOG(ERR, PMD, "Failed to stop txq %d\n", queue_id); + return ret; + } + + dev->data->tx_queue_state[queue_id] = RTE_ETH_QUEUE_STATE_STOPPED; + + return 0; +} + +/** + * DPDK callback to start the device. 
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ *
+ * @return
+ *   0 on success, negative errno value on failure.
+ */
+static int
+mrvl_dev_start(struct rte_eth_dev *dev)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	char match[MRVL_MATCH_LEN];
+	int ret = 0, i, def_init_size;
+
+	snprintf(match, sizeof(match), "ppio-%d:%d",
+		 priv->pp_id, priv->ppio_id);
+	priv->ppio_params.match = match;
+
+	/*
+	 * Calculate the minimum bpool size for the refill feature as
+	 * follows: 2 default burst sizes multiplied by the number of rx
+	 * queues. If the bpool size drops below this value, new buffers
+	 * will be added to the pool.
+	 */
+	priv->bpool_min_size = priv->nb_rx_queues * MRVL_BURST_SIZE * 2;
+
+	/* In case the initial bpool size configured in queues setup is
+	 * smaller than the minimum size, add more buffers.
+	 */
+	def_init_size = priv->bpool_min_size + MRVL_BURST_SIZE * 2;
+	if (priv->bpool_init_size < def_init_size) {
+		int buffs_to_add = def_init_size - priv->bpool_init_size;
+
+		priv->bpool_init_size += buffs_to_add;
+		ret = mrvl_fill_bpool(dev->data->rx_queues[0], buffs_to_add);
+		if (ret)
+			RTE_LOG(ERR, PMD, "Failed to add buffers to bpool\n");
+	}
+
+	/*
+	 * Calculate the maximum bpool size for the refill feature as
+	 * follows: the maximum number of descriptors in the rx queue
+	 * multiplied by the number of rx queues, plus the minimum bpool
+	 * size. In case the bpool size exceeds this value, superfluous
+	 * buffers will be removed.
+	 */
+	priv->bpool_max_size = (priv->nb_rx_queues * MRVL_PP2_RXD_MAX) +
+				priv->bpool_min_size;
+
+	ret = pp2_ppio_init(&priv->ppio_params, &priv->ppio);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to init ppio\n");
+		return ret;
+	}
+
+	/*
+	 * In case there are some stale uc/mc mac addresses flush them
+	 * here. It cannot be done during mrvl_dev_close() as port information
+	 * is already gone at that point (due to pp2_ppio_deinit() in
+	 * mrvl_dev_stop()).
+	 */
+	if (!priv->uc_mc_flushed) {
+		ret = pp2_ppio_flush_mac_addrs(priv->ppio, 1, 1);
+		if (ret) {
+			RTE_LOG(ERR, PMD,
+				"Failed to flush uc/mc filter list\n");
+			goto out;
+		}
+		priv->uc_mc_flushed = 1;
+	}
+
+	if (!priv->vlan_flushed) {
+		ret = pp2_ppio_flush_vlan(priv->ppio);
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failed to flush vlan list\n");
+			/*
+			 * TODO
+			 * once pp2_ppio_flush_vlan() is supported jump to out
+			 * goto out;
+			 */
+		}
+		priv->vlan_flushed = 1;
+	}
+
+	/* For default QoS config, don't start classifier. */
+	if (mrvl_qos_cfg) {
+		ret = mrvl_start_qos_mapping(priv);
+		if (ret) {
+			RTE_LOG(ERR, PMD, "Failed to setup QoS mapping\n");
+			goto out;
+		}
+	}
+
+	ret = mrvl_dev_set_link_up(dev);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to set link up\n");
+		goto out;
+	}
+
+	/* start tx queues */
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct mrvl_txq *txq = dev->data->tx_queues[i];
+
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STARTED;
+
+		if (!txq->tx_deferred_start)
+			continue;
+
+		/*
+		 * All txqs are started by default. Stop them
+		 * so that tx_deferred_start works as expected.
+		 */
+		ret = mrvl_tx_queue_stop(dev, i);
+		if (ret)
+			goto out;
+	}
+
+	return 0;
+out:
+	RTE_LOG(ERR, PMD, "Failed to start device\n");
+	pp2_ppio_deinit(priv->ppio);
+	return ret;
+}
+
+/**
+ * Flush receive queues.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ */ +static void +mrvl_flush_rx_queues(struct rte_eth_dev *dev) +{ + int i; + + RTE_LOG(INFO, PMD, "Flushing rx queues\n"); + for (i = 0; i < dev->data->nb_rx_queues; i++) { + int ret, num; + + do { + struct mrvl_rxq *q = dev->data->rx_queues[i]; + struct pp2_ppio_desc descs[MRVL_PP2_RXD_MAX]; + + num = MRVL_PP2_RXD_MAX; + ret = pp2_ppio_recv(q->priv->ppio, + q->priv->rxq_map[q->queue_id].tc, + q->priv->rxq_map[q->queue_id].inq, + descs, (uint16_t *)&num); + } while (ret == 0 && num); + } +} + +/** + * Flush transmit shadow queues. + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_flush_tx_shadow_queues(struct rte_eth_dev *dev) +{ + int i, j; + struct mrvl_txq *txq; + + RTE_LOG(INFO, PMD, "Flushing tx shadow queues\n"); + for (i = 0; i < dev->data->nb_tx_queues; i++) { + txq = (struct mrvl_txq *)dev->data->tx_queues[i]; + + for (j = 0; j < RTE_MAX_LCORE; j++) { + struct mrvl_shadow_txq *sq; + + if (!hifs[j]) + continue; + + sq = &txq->shadow_txqs[j]; + mrvl_free_sent_buffers(txq->priv->ppio, + hifs[j], j, sq, txq->queue_id, 1); + while (sq->tail != sq->head) { + uint64_t addr = cookie_addr_high | + sq->ent[sq->tail].buff.cookie; + rte_pktmbuf_free( + (struct rte_mbuf *)addr); + sq->tail = (sq->tail + 1) & + MRVL_PP2_TX_SHADOWQ_MASK; + } + memset(sq, 0, sizeof(*sq)); + } + } +} + +/** + * Flush hardware bpool (buffer-pool). + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_flush_bpool(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + struct pp2_hif *hif; + uint32_t num; + int ret; + unsigned int core_id = rte_lcore_id(); + + if (core_id == LCORE_ID_ANY) + core_id = 0; + + hif = mrvl_get_hif(priv, core_id); + + ret = pp2_bpool_get_num_buffs(priv->bpool, &num); + if (ret) { + RTE_LOG(ERR, PMD, "Failed to get bpool buffers number\n"); + return; + } + + while (num--) { + struct pp2_buff_inf inf; + uint64_t addr; + + ret = pp2_bpool_get_buff(hif, priv->bpool, &inf); + if (ret) + break; + + addr = cookie_addr_high | inf.cookie; + rte_pktmbuf_free((struct rte_mbuf *)addr); + } +} + +/** + * DPDK callback to stop the device. + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_dev_stop(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + + mrvl_dev_set_link_down(dev); + mrvl_flush_rx_queues(dev); + mrvl_flush_tx_shadow_queues(dev); + if (priv->cls_tbl) { + pp2_cls_tbl_deinit(priv->cls_tbl); + priv->cls_tbl = NULL; + } + if (priv->qos_tbl) { + pp2_cls_qos_tbl_deinit(priv->qos_tbl); + priv->qos_tbl = NULL; + } + if (priv->ppio) + pp2_ppio_deinit(priv->ppio); + priv->ppio = NULL; + + /* policer must be released after ppio deinitialization */ + if (priv->policer) { + pp2_cls_plcr_deinit(priv->policer); + priv->policer = NULL; + } +} + +/** + * DPDK callback to close the device. + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_dev_close(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + size_t i; + + for (i = 0; i < priv->ppio_params.inqs_params.num_tcs; ++i) { + struct pp2_ppio_tc_params *tc_params = + &priv->ppio_params.inqs_params.tcs_params[i]; + + if (tc_params->inqs_params) { + rte_free(tc_params->inqs_params); + tc_params->inqs_params = NULL; + } + } + + mrvl_flush_bpool(dev); +} + +/** + * DPDK callback to retrieve physical link information. + * + * @param dev + * Pointer to Ethernet device structure. + * @param wait_to_complete + * Wait for request completion (ignored). 
+ * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_link_update(struct rte_eth_dev *dev, int wait_to_complete __rte_unused) +{ + /* + * TODO + * once MUSDK provides necessary API use it here + */ + struct mrvl_priv *priv = dev->data->dev_private; + struct ethtool_cmd edata; + struct ifreq req; + int ret, fd, link_up; + + if (!priv->ppio) + return -EPERM; + + edata.cmd = ETHTOOL_GSET; + + strcpy(req.ifr_name, dev->data->name); + req.ifr_data = (void *)&edata; + + fd = socket(AF_INET, SOCK_DGRAM, 0); + if (fd == -1) + return -EFAULT; + + ret = ioctl(fd, SIOCETHTOOL, &req); + if (ret == -1) { + close(fd); + return -EFAULT; + } + + close(fd); + + switch (ethtool_cmd_speed(&edata)) { + case SPEED_10: + dev->data->dev_link.link_speed = ETH_SPEED_NUM_10M; + break; + case SPEED_100: + dev->data->dev_link.link_speed = ETH_SPEED_NUM_100M; + break; + case SPEED_1000: + dev->data->dev_link.link_speed = ETH_SPEED_NUM_1G; + break; + case SPEED_10000: + dev->data->dev_link.link_speed = ETH_SPEED_NUM_10G; + break; + default: + dev->data->dev_link.link_speed = ETH_SPEED_NUM_NONE; + } + + dev->data->dev_link.link_duplex = edata.duplex ? ETH_LINK_FULL_DUPLEX : + ETH_LINK_HALF_DUPLEX; + dev->data->dev_link.link_autoneg = edata.autoneg ? ETH_LINK_AUTONEG : + ETH_LINK_FIXED; + pp2_ppio_get_link_state(priv->ppio, &link_up); + dev->data->dev_link.link_status = link_up ? ETH_LINK_UP : ETH_LINK_DOWN; + + return 0; +} + +/** + * DPDK callback to enable promiscuous mode. + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_promiscuous_enable(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (!priv->ppio) + return; + + if (priv->isolated) + return; + + ret = pp2_ppio_set_promisc(priv->ppio, 1); + if (ret) + RTE_LOG(ERR, PMD, "Failed to enable promiscuous mode\n"); +} + +/** + * DPDK callback to enable allmulti mode. + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_allmulticast_enable(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (!priv->ppio) + return; + + if (priv->isolated) + return; + + ret = pp2_ppio_set_mc_promisc(priv->ppio, 1); + if (ret) + RTE_LOG(ERR, PMD, "Failed enable all-multicast mode\n"); +} + +/** + * DPDK callback to disable promiscuous mode. + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_promiscuous_disable(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (!priv->ppio) + return; + + ret = pp2_ppio_set_promisc(priv->ppio, 0); + if (ret) + RTE_LOG(ERR, PMD, "Failed to disable promiscuous mode\n"); +} + +/** + * DPDK callback to disable allmulticast mode. + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_allmulticast_disable(struct rte_eth_dev *dev) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (!priv->ppio) + return; + + ret = pp2_ppio_set_mc_promisc(priv->ppio, 0); + if (ret) + RTE_LOG(ERR, PMD, "Failed to disable all-multicast mode\n"); +} + +/** + * DPDK callback to remove a MAC address. + * + * @param dev + * Pointer to Ethernet device structure. + * @param index + * MAC address index. 
+ */ +static void +mrvl_mac_addr_remove(struct rte_eth_dev *dev, uint32_t index) +{ + struct mrvl_priv *priv = dev->data->dev_private; + char buf[ETHER_ADDR_FMT_SIZE]; + int ret; + + if (!priv->ppio) + return; + + if (priv->isolated) + return; + + ret = pp2_ppio_remove_mac_addr(priv->ppio, + dev->data->mac_addrs[index].addr_bytes); + if (ret) { + ether_format_addr(buf, sizeof(buf), + &dev->data->mac_addrs[index]); + RTE_LOG(ERR, PMD, "Failed to remove mac %s\n", buf); + } +} + +/** + * DPDK callback to add a MAC address. + * + * @param dev + * Pointer to Ethernet device structure. + * @param mac_addr + * MAC address to register. + * @param index + * MAC address index. + * @param vmdq + * VMDq pool index to associate address with (unused). + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_mac_addr_add(struct rte_eth_dev *dev, struct ether_addr *mac_addr, + uint32_t index, uint32_t vmdq __rte_unused) +{ + struct mrvl_priv *priv = dev->data->dev_private; + char buf[ETHER_ADDR_FMT_SIZE]; + int ret; + + if (priv->isolated) + return -ENOTSUP; + + if (index == 0) + /* For setting index 0, mrvl_mac_addr_set() should be used.*/ + return -1; + + if (!priv->ppio) + return 0; + + /* + * Maximum number of uc addresses can be tuned via kernel module mvpp2x + * parameter uc_filter_max. Maximum number of mc addresses is then + * MRVL_MAC_ADDRS_MAX - uc_filter_max. Currently it defaults to 4 and + * 21 respectively. + * + * If more than uc_filter_max uc addresses were added to filter list + * then NIC will switch to promiscuous mode automatically. + * + * If more than MRVL_MAC_ADDRS_MAX - uc_filter_max number mc addresses + * were added to filter list then NIC will switch to all-multicast mode + * automatically. + */ + ret = pp2_ppio_add_mac_addr(priv->ppio, mac_addr->addr_bytes); + if (ret) { + ether_format_addr(buf, sizeof(buf), mac_addr); + RTE_LOG(ERR, PMD, "Failed to add mac %s\n", buf); + return -1; + } + + return 0; +} + +/** + * DPDK callback to set the primary MAC address. + * + * @param dev + * Pointer to Ethernet device structure. + * @param mac_addr + * MAC address to register. + */ +static void +mrvl_mac_addr_set(struct rte_eth_dev *dev, struct ether_addr *mac_addr) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret; + + if (!priv->ppio) + return; + + if (priv->isolated) + return; + + ret = pp2_ppio_set_mac_addr(priv->ppio, mac_addr->addr_bytes); + if (ret) { + char buf[ETHER_ADDR_FMT_SIZE]; + ether_format_addr(buf, sizeof(buf), mac_addr); + RTE_LOG(ERR, PMD, "Failed to set mac to %s\n", buf); + } +} + +/** + * DPDK callback to get device statistics. + * + * @param dev + * Pointer to Ethernet device structure. + * @param stats + * Stats structure output buffer. + * + * @return + * 0 on success, negative error value otherwise. 
+ */
+static int
+mrvl_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct pp2_ppio_statistics ppio_stats;
+	uint64_t drop_mac = 0;
+	unsigned int i, idx, ret;
+
+	if (!priv->ppio)
+		return -EPERM;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct mrvl_rxq *rxq = dev->data->rx_queues[i];
+		struct pp2_ppio_inq_statistics rx_stats;
+
+		if (!rxq)
+			continue;
+
+		idx = rxq->queue_id;
+		if (unlikely(idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)) {
+			RTE_LOG(ERR, PMD,
+				"rx queue %d stats out of range (0 - %d)\n",
+				idx, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1);
+			continue;
+		}
+
+		ret = pp2_ppio_inq_get_statistics(priv->ppio,
+						  priv->rxq_map[idx].tc,
+						  priv->rxq_map[idx].inq,
+						  &rx_stats, 0);
+		if (unlikely(ret)) {
+			RTE_LOG(ERR, PMD,
+				"Failed to update rx queue %d stats\n", idx);
+			break;
+		}
+
+		stats->q_ibytes[idx] = rxq->bytes_recv;
+		stats->q_ipackets[idx] = rx_stats.enq_desc - rxq->drop_mac;
+		stats->q_errors[idx] = rx_stats.drop_early +
+				       rx_stats.drop_fullq +
+				       rx_stats.drop_bm +
+				       rxq->drop_mac;
+		stats->ibytes += rxq->bytes_recv;
+		drop_mac += rxq->drop_mac;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct mrvl_txq *txq = dev->data->tx_queues[i];
+		struct pp2_ppio_outq_statistics tx_stats;
+
+		if (!txq)
+			continue;
+
+		idx = txq->queue_id;
+		if (unlikely(idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS)) {
+			RTE_LOG(ERR, PMD,
+				"tx queue %d stats out of range (0 - %d)\n",
+				idx, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1);
+			continue;
+		}
+
+		ret = pp2_ppio_outq_get_statistics(priv->ppio, idx,
+						   &tx_stats, 0);
+		if (unlikely(ret)) {
+			RTE_LOG(ERR, PMD,
+				"Failed to update tx queue %d stats\n", idx);
+			break;
+		}
+
+		stats->q_opackets[idx] = tx_stats.deq_desc;
+		stats->q_obytes[idx] = txq->bytes_sent;
+		stats->obytes += txq->bytes_sent;
+	}
+
+	ret = pp2_ppio_get_statistics(priv->ppio, &ppio_stats, 0);
+	if (unlikely(ret)) {
+		RTE_LOG(ERR, PMD, "Failed to update port statistics\n");
+		return ret;
+	}
+
+	stats->ipackets += ppio_stats.rx_packets - drop_mac;
+	stats->opackets += ppio_stats.tx_packets;
+	stats->imissed += ppio_stats.rx_fullq_dropped +
+			  ppio_stats.rx_bm_dropped +
+			  ppio_stats.rx_early_dropped +
+			  ppio_stats.rx_fifo_dropped +
+			  ppio_stats.rx_cls_dropped;
+	stats->ierrors = drop_mac;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to clear device statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ */
+static void
+mrvl_stats_reset(struct rte_eth_dev *dev)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	int i;
+
+	if (!priv->ppio)
+		return;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct mrvl_rxq *rxq = dev->data->rx_queues[i];
+
+		if (!rxq)
+			continue;
+
+		pp2_ppio_inq_get_statistics(priv->ppio, priv->rxq_map[i].tc,
+					    priv->rxq_map[i].inq, NULL, 1);
+		rxq->bytes_recv = 0;
+		rxq->drop_mac = 0;
+	}
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		struct mrvl_txq *txq = dev->data->tx_queues[i];
+
+		if (!txq)
+			continue;
+
+		pp2_ppio_outq_get_statistics(priv->ppio, i, NULL, 1);
+		txq->bytes_sent = 0;
+	}
+
+	pp2_ppio_get_statistics(priv->ppio, NULL, 1);
+}
+
+/**
+ * DPDK callback to get extended statistics.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param stats
+ *   Pointer to xstats table.
+ * @param n
+ *   Number of entries in xstats table.
+ * @return
+ *   Negative value on error, number of read xstats otherwise.
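+ *
+ * Typical two-call pattern at the application level (port_id and sizing are
+ * illustrative):
+ * @code
+ * int n = rte_eth_xstats_get(port_id, NULL, 0);
+ * struct rte_eth_xstat *xs = calloc(n, sizeof(*xs));
+ *
+ * rte_eth_xstats_get(port_id, xs, n);
+ * @endcode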
+ */ +static int +mrvl_xstats_get(struct rte_eth_dev *dev, + struct rte_eth_xstat *stats, unsigned int n) +{ + struct mrvl_priv *priv = dev->data->dev_private; + struct pp2_ppio_statistics ppio_stats; + unsigned int i; + + if (!stats) + return 0; + + pp2_ppio_get_statistics(priv->ppio, &ppio_stats, 0); + for (i = 0; i < n && i < RTE_DIM(mrvl_xstats_tbl); i++) { + uint64_t val; + + if (mrvl_xstats_tbl[i].size == sizeof(uint32_t)) + val = *(uint32_t *)((uint8_t *)&ppio_stats + + mrvl_xstats_tbl[i].offset); + else if (mrvl_xstats_tbl[i].size == sizeof(uint64_t)) + val = *(uint64_t *)((uint8_t *)&ppio_stats + + mrvl_xstats_tbl[i].offset); + else + return -EINVAL; + + stats[i].id = i; + stats[i].value = val; + } + + return n; +} + +/** + * DPDK callback to reset extended statistics. + * + * @param dev + * Pointer to Ethernet device structure. + */ +static void +mrvl_xstats_reset(struct rte_eth_dev *dev) +{ + mrvl_stats_reset(dev); +} + +/** + * DPDK callback to get extended statistics names. + * + * @param dev (unused) + * Pointer to Ethernet device structure. + * @param xstats_names + * Pointer to xstats names table. + * @param size + * Size of the xstats names table. + * @return + * Number of read names. + */ +static int +mrvl_xstats_get_names(struct rte_eth_dev *dev __rte_unused, + struct rte_eth_xstat_name *xstats_names, + unsigned int size) +{ + unsigned int i; + + if (!xstats_names) + return RTE_DIM(mrvl_xstats_tbl); + + for (i = 0; i < size && i < RTE_DIM(mrvl_xstats_tbl); i++) + snprintf(xstats_names[i].name, RTE_ETH_XSTATS_NAME_SIZE, "%s", + mrvl_xstats_tbl[i].name); + + return size; +} + +/** + * DPDK callback to get information about the device. + * + * @param dev + * Pointer to Ethernet device structure (unused). + * @param info + * Info structure output buffer. + */ +static void +mrvl_dev_infos_get(struct rte_eth_dev *dev __rte_unused, + struct rte_eth_dev_info *info) +{ + info->speed_capa = ETH_LINK_SPEED_10M | + ETH_LINK_SPEED_100M | + ETH_LINK_SPEED_1G | + ETH_LINK_SPEED_10G; + + info->max_rx_queues = MRVL_PP2_RXQ_MAX; + info->max_tx_queues = MRVL_PP2_TXQ_MAX; + info->max_mac_addrs = MRVL_MAC_ADDRS_MAX; + + info->rx_desc_lim.nb_max = MRVL_PP2_RXD_MAX; + info->rx_desc_lim.nb_min = MRVL_PP2_RXD_MIN; + info->rx_desc_lim.nb_align = MRVL_PP2_RXD_ALIGN; + + info->tx_desc_lim.nb_max = MRVL_PP2_TXD_MAX; + info->tx_desc_lim.nb_min = MRVL_PP2_TXD_MIN; + info->tx_desc_lim.nb_align = MRVL_PP2_TXD_ALIGN; + + info->rx_offload_capa = MRVL_RX_OFFLOADS; + info->rx_queue_offload_capa = MRVL_RX_OFFLOADS; + + info->tx_offload_capa = MRVL_TX_OFFLOADS; + info->tx_queue_offload_capa = MRVL_TX_OFFLOADS; + + info->flow_type_rss_offloads = ETH_RSS_IPV4 | + ETH_RSS_NONFRAG_IPV4_TCP | + ETH_RSS_NONFRAG_IPV4_UDP; + + /* By default packets are dropped if no descriptors are available */ + info->default_rxconf.rx_drop_en = 1; + info->default_rxconf.offloads = DEV_RX_OFFLOAD_CRC_STRIP; + + info->max_rx_pktlen = MRVL_PKT_SIZE_MAX; +} + +/** + * Return supported packet types. + * + * @param dev + * Pointer to Ethernet device structure (unused). + * + * @return + * Const pointer to the table with supported packet types. 
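+ *
+ * Applications can query this table through the generic API, e.g.
+ * (illustrative):
+ * @code
+ * uint32_t ptypes[16];
+ * int num = rte_eth_dev_get_supported_ptypes(port_id, RTE_PTYPE_ALL_MASK,
+ *					      ptypes, RTE_DIM(ptypes));
+ * @endcode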
+ */ +static const uint32_t * +mrvl_dev_supported_ptypes_get(struct rte_eth_dev *dev __rte_unused) +{ + static const uint32_t ptypes[] = { + RTE_PTYPE_L2_ETHER, + RTE_PTYPE_L3_IPV4, + RTE_PTYPE_L3_IPV4_EXT, + RTE_PTYPE_L3_IPV4_EXT_UNKNOWN, + RTE_PTYPE_L3_IPV6, + RTE_PTYPE_L3_IPV6_EXT, + RTE_PTYPE_L2_ETHER_ARP, + RTE_PTYPE_L4_TCP, + RTE_PTYPE_L4_UDP + }; + + return ptypes; +} + +/** + * DPDK callback to get information about specific receive queue. + * + * @param dev + * Pointer to Ethernet device structure. + * @param rx_queue_id + * Receive queue index. + * @param qinfo + * Receive queue information structure. + */ +static void mrvl_rxq_info_get(struct rte_eth_dev *dev, uint16_t rx_queue_id, + struct rte_eth_rxq_info *qinfo) +{ + struct mrvl_rxq *q = dev->data->rx_queues[rx_queue_id]; + struct mrvl_priv *priv = dev->data->dev_private; + int inq = priv->rxq_map[rx_queue_id].inq; + int tc = priv->rxq_map[rx_queue_id].tc; + struct pp2_ppio_tc_params *tc_params = + &priv->ppio_params.inqs_params.tcs_params[tc]; + + qinfo->mp = q->mp; + qinfo->nb_desc = tc_params->inqs_params[inq].size; +} + +/** + * DPDK callback to get information about specific transmit queue. + * + * @param dev + * Pointer to Ethernet device structure. + * @param tx_queue_id + * Transmit queue index. + * @param qinfo + * Transmit queue information structure. + */ +static void mrvl_txq_info_get(struct rte_eth_dev *dev, uint16_t tx_queue_id, + struct rte_eth_txq_info *qinfo) +{ + struct mrvl_priv *priv = dev->data->dev_private; + struct mrvl_txq *txq = dev->data->tx_queues[tx_queue_id]; + + qinfo->nb_desc = + priv->ppio_params.outqs_params.outqs_params[tx_queue_id].size; + qinfo->conf.tx_deferred_start = txq->tx_deferred_start; +} + +/** + * DPDK callback to Configure a VLAN filter. + * + * @param dev + * Pointer to Ethernet device structure. + * @param vlan_id + * VLAN ID to filter. + * @param on + * Toggle filter. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_vlan_filter_set(struct rte_eth_dev *dev, uint16_t vlan_id, int on) +{ + struct mrvl_priv *priv = dev->data->dev_private; + + if (!priv->ppio) + return -EPERM; + + if (priv->isolated) + return -ENOTSUP; + + return on ? pp2_ppio_add_vlan(priv->ppio, vlan_id) : + pp2_ppio_remove_vlan(priv->ppio, vlan_id); +} + +/** + * Release buffers to hardware bpool (buffer-pool) + * + * @param rxq + * Receive queue pointer. + * @param num + * Number of buffers to release to bpool. + * + * @return + * 0 on success, negative error value otherwise. 
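+ *
+ * Called internally, e.g. from mrvl_rx_queue_setup() below:
+ * @code
+ * ret = mrvl_fill_bpool(rxq, desc);
+ * @endcode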
+ */ +static int +mrvl_fill_bpool(struct mrvl_rxq *rxq, int num) +{ + struct buff_release_entry entries[MRVL_PP2_RXD_MAX]; + struct rte_mbuf *mbufs[MRVL_PP2_RXD_MAX]; + int i, ret; + unsigned int core_id; + struct pp2_hif *hif; + struct pp2_bpool *bpool; + + core_id = rte_lcore_id(); + if (core_id == LCORE_ID_ANY) + core_id = 0; + + hif = mrvl_get_hif(rxq->priv, core_id); + if (!hif) + return -1; + + bpool = rxq->priv->bpool; + + ret = rte_pktmbuf_alloc_bulk(rxq->mp, mbufs, num); + if (ret) + return ret; + + if (cookie_addr_high == MRVL_COOKIE_ADDR_INVALID) + cookie_addr_high = + (uint64_t)mbufs[0] & MRVL_COOKIE_HIGH_ADDR_MASK; + + for (i = 0; i < num; i++) { + if (((uint64_t)mbufs[i] & MRVL_COOKIE_HIGH_ADDR_MASK) + != cookie_addr_high) { + RTE_LOG(ERR, PMD, + "mbuf virtual addr high 0x%lx out of range\n", + (uint64_t)mbufs[i] >> 32); + goto out; + } + + entries[i].buff.addr = + rte_mbuf_data_iova_default(mbufs[i]); + entries[i].buff.cookie = (pp2_cookie_t)(uint64_t)mbufs[i]; + entries[i].bpool = bpool; + } + + pp2_bpool_put_buffs(hif, entries, (uint16_t *)&i); + mrvl_port_bpool_size[bpool->pp2_id][bpool->id][core_id] += i; + + if (i != num) + goto out; + + return 0; +out: + for (; i < num; i++) + rte_pktmbuf_free(mbufs[i]); + + return -1; +} + +/** + * Check whether requested rx queue offloads match port offloads. + * + * @param + * dev Pointer to the device. + * @param + * requested Bitmap of the requested offloads. + * + * @return + * 1 if requested offloads are okay, 0 otherwise. + */ +static int +mrvl_rx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested) +{ + uint64_t mandatory = dev->data->dev_conf.rxmode.offloads; + uint64_t supported = MRVL_RX_OFFLOADS; + uint64_t unsupported = requested & ~supported; + uint64_t missing = mandatory & ~requested; + + if (unsupported) { + RTE_LOG(ERR, PMD, "Some Rx offloads are not supported. " + "Requested 0x%" PRIx64 " supported 0x%" PRIx64 ".\n", + requested, supported); + return 0; + } + + if (missing) { + RTE_LOG(ERR, PMD, "Some Rx offloads are missing. " + "Requested 0x%" PRIx64 " missing 0x%" PRIx64 ".\n", + requested, missing); + return 0; + } + + return 1; +} + +/** + * DPDK callback to configure the receive queue. + * + * @param dev + * Pointer to Ethernet device structure. + * @param idx + * RX queue index. + * @param desc + * Number of descriptors to configure in queue. + * @param socket + * NUMA socket on which memory must be allocated. + * @param conf + * Thresholds parameters. + * @param mp + * Memory pool for buffer allocations. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_rx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, + unsigned int socket, + const struct rte_eth_rxconf *conf, + struct rte_mempool *mp) +{ + struct mrvl_priv *priv = dev->data->dev_private; + struct mrvl_rxq *rxq; + uint32_t min_size, + max_rx_pkt_len = dev->data->dev_conf.rxmode.max_rx_pkt_len; + int ret, tc, inq; + + if (!mrvl_rx_queue_offloads_okay(dev, conf->offloads)) + return -ENOTSUP; + + if (priv->rxq_map[idx].tc == MRVL_UNKNOWN_TC) { + /* + * Unknown TC mapping, mapping will not have a correct queue. 
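+		 * The rxq->(TC, inq) mapping is normally filled in from the
+		 * QoS configuration file handed over via the MRVL_CFG_ARG
+		 * devarg at probe time; an entry left at MRVL_UNKNOWN_TC
+		 * means no usable mapping was provided for this queue.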
+		 */
+		RTE_LOG(ERR, PMD, "Unknown TC mapping for queue %hu eth%hhu\n",
+			idx, priv->ppio_id);
+		return -EFAULT;
+	}
+
+	min_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM -
+		   MRVL_PKT_EFFEC_OFFS;
+	if (min_size < max_rx_pkt_len) {
+		RTE_LOG(ERR, PMD,
+			"Mbuf size must be increased to %u bytes to hold up to %u bytes of data.\n",
+			max_rx_pkt_len + RTE_PKTMBUF_HEADROOM +
+			MRVL_PKT_EFFEC_OFFS,
+			max_rx_pkt_len);
+		return -EINVAL;
+	}
+
+	if (dev->data->rx_queues[idx]) {
+		rte_free(dev->data->rx_queues[idx]);
+		dev->data->rx_queues[idx] = NULL;
+	}
+
+	rxq = rte_zmalloc_socket("rxq", sizeof(*rxq), 0, socket);
+	if (!rxq)
+		return -ENOMEM;
+
+	rxq->priv = priv;
+	rxq->mp = mp;
+	rxq->cksum_enabled =
+		dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_IPV4_CKSUM;
+	rxq->queue_id = idx;
+	rxq->port_id = dev->data->port_id;
+	mrvl_port_to_bpool_lookup[rxq->port_id] = priv->bpool;
+
+	tc = priv->rxq_map[rxq->queue_id].tc;
+	inq = priv->rxq_map[rxq->queue_id].inq;
+	priv->ppio_params.inqs_params.tcs_params[tc].inqs_params[inq].size =
+		desc;
+
+	ret = mrvl_fill_bpool(rxq, desc);
+	if (ret) {
+		rte_free(rxq);
+		return ret;
+	}
+
+	priv->bpool_init_size += desc;
+
+	dev->data->rx_queues[idx] = rxq;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to release the receive queue.
+ *
+ * @param rxq
+ *   Generic receive queue pointer.
+ */
+static void
+mrvl_rx_queue_release(void *rxq)
+{
+	struct mrvl_rxq *q = rxq;
+	struct pp2_ppio_tc_params *tc_params;
+	int i, num, tc, inq;
+	struct pp2_hif *hif;
+	unsigned int core_id = rte_lcore_id();
+
+	if (core_id == LCORE_ID_ANY)
+		core_id = 0;
+
+	if (!q)
+		return;
+
+	hif = mrvl_get_hif(q->priv, core_id);
+	if (!hif)
+		return;
+
+	tc = q->priv->rxq_map[q->queue_id].tc;
+	inq = q->priv->rxq_map[q->queue_id].inq;
+	tc_params = &q->priv->ppio_params.inqs_params.tcs_params[tc];
+	num = tc_params->inqs_params[inq].size;
+	for (i = 0; i < num; i++) {
+		struct pp2_buff_inf inf;
+		uint64_t addr;
+
+		pp2_bpool_get_buff(hif, q->priv->bpool, &inf);
+		addr = cookie_addr_high | inf.cookie;
+		rte_pktmbuf_free((struct rte_mbuf *)addr);
+	}
+
+	rte_free(q);
+}
+
+/**
+ * Check whether requested tx queue offloads match port offloads.
+ *
+ * @param
+ *   dev Pointer to the device.
+ * @param
+ *   requested Bitmap of the requested offloads.
+ *
+ * @return
+ *   1 if requested offloads are okay, 0 otherwise.
+ */
+static int
+mrvl_tx_queue_offloads_okay(struct rte_eth_dev *dev, uint64_t requested)
+{
+	uint64_t mandatory = dev->data->dev_conf.txmode.offloads;
+	uint64_t supported = MRVL_TX_OFFLOADS;
+	uint64_t unsupported = requested & ~supported;
+	uint64_t missing = mandatory & ~requested;
+
+	if (unsupported) {
+		RTE_LOG(ERR, PMD, "Some Tx offloads are not supported. "
+			"Requested 0x%" PRIx64 " supported 0x%" PRIx64 ".\n",
+			requested, supported);
+		return 0;
+	}
+
+	if (missing) {
+		RTE_LOG(ERR, PMD, "Some Tx offloads are missing. "
+			"Requested 0x%" PRIx64 " missing 0x%" PRIx64 ".\n",
+			requested, missing);
+		return 0;
+	}
+
+	return 1;
+}
+
+/**
+ * DPDK callback to configure the transmit queue.
+ *
+ * @param dev
+ *   Pointer to Ethernet device structure.
+ * @param idx
+ *   Transmit queue index.
+ * @param desc
+ *   Number of descriptors to configure in the queue.
+ * @param socket
+ *   NUMA socket on which memory must be allocated.
+ * @param conf
+ *   Tx queue configuration parameters.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
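+ *
+ * Illustrative application-level call (values are placeholders; the ethdev
+ * layer substitutes a default configuration when conf is NULL):
+ * @code
+ * ret = rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
+ * @endcode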
+ */ +static int +mrvl_tx_queue_setup(struct rte_eth_dev *dev, uint16_t idx, uint16_t desc, + unsigned int socket, + const struct rte_eth_txconf *conf) +{ + struct mrvl_priv *priv = dev->data->dev_private; + struct mrvl_txq *txq; + + if (!mrvl_tx_queue_offloads_okay(dev, conf->offloads)) + return -ENOTSUP; + + if (dev->data->tx_queues[idx]) { + rte_free(dev->data->tx_queues[idx]); + dev->data->tx_queues[idx] = NULL; + } + + txq = rte_zmalloc_socket("txq", sizeof(*txq), 0, socket); + if (!txq) + return -ENOMEM; + + txq->priv = priv; + txq->queue_id = idx; + txq->port_id = dev->data->port_id; + txq->tx_deferred_start = conf->tx_deferred_start; + dev->data->tx_queues[idx] = txq; + + priv->ppio_params.outqs_params.outqs_params[idx].size = desc; + + return 0; +} + +/** + * DPDK callback to release the transmit queue. + * + * @param txq + * Generic transmit queue pointer. + */ +static void +mrvl_tx_queue_release(void *txq) +{ + struct mrvl_txq *q = txq; + + if (!q) + return; + + rte_free(q); +} + +/** + * DPDK callback to get flow control configuration. + * + * @param dev + * Pointer to Ethernet device structure. + * @param fc_conf + * Pointer to the flow control configuration. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_flow_ctrl_get(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) +{ + struct mrvl_priv *priv = dev->data->dev_private; + int ret, en; + + if (!priv) + return -EPERM; + + ret = pp2_ppio_get_rx_pause(priv->ppio, &en); + if (ret) { + RTE_LOG(ERR, PMD, "Failed to read rx pause state\n"); + return ret; + } + + fc_conf->mode = en ? RTE_FC_RX_PAUSE : RTE_FC_NONE; + + return 0; +} + +/** + * DPDK callback to set flow control configuration. + * + * @param dev + * Pointer to Ethernet device structure. + * @param fc_conf + * Pointer to the flow control configuration. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_flow_ctrl_set(struct rte_eth_dev *dev, struct rte_eth_fc_conf *fc_conf) +{ + struct mrvl_priv *priv = dev->data->dev_private; + + if (!priv) + return -EPERM; + + if (fc_conf->high_water || + fc_conf->low_water || + fc_conf->pause_time || + fc_conf->mac_ctrl_frame_fwd || + fc_conf->autoneg) { + RTE_LOG(ERR, PMD, "Flowctrl parameter is not supported\n"); + + return -EINVAL; + } + + if (fc_conf->mode == RTE_FC_NONE || + fc_conf->mode == RTE_FC_RX_PAUSE) { + int ret, en; + + en = fc_conf->mode == RTE_FC_NONE ? 0 : 1; + ret = pp2_ppio_set_rx_pause(priv->ppio, en); + if (ret) + RTE_LOG(ERR, PMD, + "Failed to change flowctrl on RX side\n"); + + return ret; + } + + return 0; +} + +/** + * Update RSS hash configuration + * + * @param dev + * Pointer to Ethernet device structure. + * @param rss_conf + * Pointer to RSS configuration. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct mrvl_priv *priv = dev->data->dev_private; + + if (priv->isolated) + return -ENOTSUP; + + return mrvl_configure_rss(priv, rss_conf); +} + +/** + * DPDK callback to get RSS hash configuration. + * + * @param dev + * Pointer to Ethernet device structure. + * @rss_conf + * Pointer to RSS configuration. + * + * @return + * Always 0. 
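+ *
+ * Illustrative query through the generic API (port_id is a placeholder):
+ * @code
+ * struct rte_eth_rss_conf conf = { .rss_key = NULL };
+ *
+ * rte_eth_dev_rss_hash_conf_get(port_id, &conf);
+ * @endcode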
+ */
+static int
+mrvl_rss_hash_conf_get(struct rte_eth_dev *dev,
+		       struct rte_eth_rss_conf *rss_conf)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	enum pp2_ppio_hash_type hash_type =
+		priv->ppio_params.inqs_params.hash_type;
+
+	rss_conf->rss_key = NULL;
+
+	if (hash_type == PP2_PPIO_HASH_T_NONE)
+		rss_conf->rss_hf = 0;
+	else if (hash_type == PP2_PPIO_HASH_T_2_TUPLE)
+		rss_conf->rss_hf = ETH_RSS_IPV4;
+	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && priv->rss_hf_tcp)
+		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_TCP;
+	else if (hash_type == PP2_PPIO_HASH_T_5_TUPLE && !priv->rss_hf_tcp)
+		rss_conf->rss_hf = ETH_RSS_NONFRAG_IPV4_UDP;
+
+	return 0;
+}
+
+/**
+ * DPDK callback to get rte_flow callbacks.
+ *
+ * @param dev
+ *   Pointer to the device structure.
+ * @param filter_type
+ *   Flow filter type.
+ * @param filter_op
+ *   Flow filter operation.
+ * @param arg
+ *   Pointer to pass the flow ops.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+mrvl_eth_filter_ctrl(struct rte_eth_dev *dev __rte_unused,
+		     enum rte_filter_type filter_type,
+		     enum rte_filter_op filter_op, void *arg)
+{
+	switch (filter_type) {
+	case RTE_ETH_FILTER_GENERIC:
+		if (filter_op != RTE_ETH_FILTER_GET)
+			return -EINVAL;
+		*(const void **)arg = &mrvl_flow_ops;
+		return 0;
+	default:
+		RTE_LOG(WARNING, PMD, "Filter type (%d) not supported\n",
+			filter_type);
+		return -EINVAL;
+	}
+}
+
+static const struct eth_dev_ops mrvl_ops = {
+	.dev_configure = mrvl_dev_configure,
+	.dev_start = mrvl_dev_start,
+	.dev_stop = mrvl_dev_stop,
+	.dev_set_link_up = mrvl_dev_set_link_up,
+	.dev_set_link_down = mrvl_dev_set_link_down,
+	.dev_close = mrvl_dev_close,
+	.link_update = mrvl_link_update,
+	.promiscuous_enable = mrvl_promiscuous_enable,
+	.allmulticast_enable = mrvl_allmulticast_enable,
+	.promiscuous_disable = mrvl_promiscuous_disable,
+	.allmulticast_disable = mrvl_allmulticast_disable,
+	.mac_addr_remove = mrvl_mac_addr_remove,
+	.mac_addr_add = mrvl_mac_addr_add,
+	.mac_addr_set = mrvl_mac_addr_set,
+	.mtu_set = mrvl_mtu_set,
+	.stats_get = mrvl_stats_get,
+	.stats_reset = mrvl_stats_reset,
+	.xstats_get = mrvl_xstats_get,
+	.xstats_reset = mrvl_xstats_reset,
+	.xstats_get_names = mrvl_xstats_get_names,
+	.dev_infos_get = mrvl_dev_infos_get,
+	.dev_supported_ptypes_get = mrvl_dev_supported_ptypes_get,
+	.rxq_info_get = mrvl_rxq_info_get,
+	.txq_info_get = mrvl_txq_info_get,
+	.vlan_filter_set = mrvl_vlan_filter_set,
+	.tx_queue_start = mrvl_tx_queue_start,
+	.tx_queue_stop = mrvl_tx_queue_stop,
+	.rx_queue_setup = mrvl_rx_queue_setup,
+	.rx_queue_release = mrvl_rx_queue_release,
+	.tx_queue_setup = mrvl_tx_queue_setup,
+	.tx_queue_release = mrvl_tx_queue_release,
+	.flow_ctrl_get = mrvl_flow_ctrl_get,
+	.flow_ctrl_set = mrvl_flow_ctrl_set,
+	.rss_hash_update = mrvl_rss_hash_update,
+	.rss_hash_conf_get = mrvl_rss_hash_conf_get,
+	.filter_ctrl = mrvl_eth_filter_ctrl,
+};
+
+/**
+ * Return packet type information and l3/l4 offsets.
+ *
+ * @param desc
+ *   Pointer to the received packet descriptor.
+ * @param l3_offset
+ *   l3 packet offset.
+ * @param l4_offset
+ *   l4 packet offset.
+ *
+ * @return
+ *   Packet type information.
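+ *
+ * The value ends up in mbuf->packet_type, where callers can test it, e.g.
+ * (process_ipv4() is a hypothetical application handler):
+ * @code
+ * if (RTE_ETH_IS_IPV4_HDR(mbuf->packet_type))
+ *	process_ipv4(mbuf);
+ * @endcode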
+ */ +static inline uint64_t +mrvl_desc_to_packet_type_and_offset(struct pp2_ppio_desc *desc, + uint8_t *l3_offset, uint8_t *l4_offset) +{ + enum pp2_inq_l3_type l3_type; + enum pp2_inq_l4_type l4_type; + uint64_t packet_type; + + pp2_ppio_inq_desc_get_l3_info(desc, &l3_type, l3_offset); + pp2_ppio_inq_desc_get_l4_info(desc, &l4_type, l4_offset); + + packet_type = RTE_PTYPE_L2_ETHER; + + switch (l3_type) { + case PP2_INQ_L3_TYPE_IPV4_NO_OPTS: + packet_type |= RTE_PTYPE_L3_IPV4; + break; + case PP2_INQ_L3_TYPE_IPV4_OK: + packet_type |= RTE_PTYPE_L3_IPV4_EXT; + break; + case PP2_INQ_L3_TYPE_IPV4_TTL_ZERO: + packet_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN; + break; + case PP2_INQ_L3_TYPE_IPV6_NO_EXT: + packet_type |= RTE_PTYPE_L3_IPV6; + break; + case PP2_INQ_L3_TYPE_IPV6_EXT: + packet_type |= RTE_PTYPE_L3_IPV6_EXT; + break; + case PP2_INQ_L3_TYPE_ARP: + packet_type |= RTE_PTYPE_L2_ETHER_ARP; + /* + * In case of ARP l4_offset is set to wrong value. + * Set it to proper one so that later on mbuf->l3_len can be + * calculated subtracting l4_offset and l3_offset. + */ + *l4_offset = *l3_offset + MRVL_ARP_LENGTH; + break; + default: + RTE_LOG(DEBUG, PMD, "Failed to recognise l3 packet type\n"); + break; + } + + switch (l4_type) { + case PP2_INQ_L4_TYPE_TCP: + packet_type |= RTE_PTYPE_L4_TCP; + break; + case PP2_INQ_L4_TYPE_UDP: + packet_type |= RTE_PTYPE_L4_UDP; + break; + default: + RTE_LOG(DEBUG, PMD, "Failed to recognise l4 packet type\n"); + break; + } + + return packet_type; +} + +/** + * Get offload information from the received packet descriptor. + * + * @param desc + * Pointer to the received packet descriptor. + * + * @return + * Mbuf offload flags. + */ +static inline uint64_t +mrvl_desc_to_ol_flags(struct pp2_ppio_desc *desc) +{ + uint64_t flags; + enum pp2_inq_desc_status status; + + status = pp2_ppio_inq_desc_get_l3_pkt_error(desc); + if (unlikely(status != PP2_DESC_ERR_OK)) + flags = PKT_RX_IP_CKSUM_BAD; + else + flags = PKT_RX_IP_CKSUM_GOOD; + + status = pp2_ppio_inq_desc_get_l4_pkt_error(desc); + if (unlikely(status != PP2_DESC_ERR_OK)) + flags |= PKT_RX_L4_CKSUM_BAD; + else + flags |= PKT_RX_L4_CKSUM_GOOD; + + return flags; +} + +/** + * DPDK callback for receive. + * + * @param rxq + * Generic pointer to the receive queue. + * @param rx_pkts + * Array to store received packets. + * @param nb_pkts + * Maximum number of packets in array. + * + * @return + * Number of packets successfully received. 
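+ *
+ * Reached through the generic receive API, e.g. (illustrative):
+ * @code
+ * struct rte_mbuf *pkts[32];
+ * uint16_t nb = rte_eth_rx_burst(port_id, 0, pkts, 32);
+ * @endcode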
+ */ +static uint16_t +mrvl_rx_pkt_burst(void *rxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +{ + struct mrvl_rxq *q = rxq; + struct pp2_ppio_desc descs[nb_pkts]; + struct pp2_bpool *bpool; + int i, ret, rx_done = 0; + int num; + struct pp2_hif *hif; + unsigned int core_id = rte_lcore_id(); + + hif = mrvl_get_hif(q->priv, core_id); + + if (unlikely(!q->priv->ppio || !hif)) + return 0; + + bpool = q->priv->bpool; + + ret = pp2_ppio_recv(q->priv->ppio, q->priv->rxq_map[q->queue_id].tc, + q->priv->rxq_map[q->queue_id].inq, descs, &nb_pkts); + if (unlikely(ret < 0)) { + RTE_LOG(ERR, PMD, "Failed to receive packets\n"); + return 0; + } + mrvl_port_bpool_size[bpool->pp2_id][bpool->id][core_id] -= nb_pkts; + + for (i = 0; i < nb_pkts; i++) { + struct rte_mbuf *mbuf; + uint8_t l3_offset, l4_offset; + enum pp2_inq_desc_status status; + uint64_t addr; + + if (likely(nb_pkts - i > MRVL_MUSDK_PREFETCH_SHIFT)) { + struct pp2_ppio_desc *pref_desc; + u64 pref_addr; + + pref_desc = &descs[i + MRVL_MUSDK_PREFETCH_SHIFT]; + pref_addr = cookie_addr_high | + pp2_ppio_inq_desc_get_cookie(pref_desc); + rte_mbuf_prefetch_part1((struct rte_mbuf *)(pref_addr)); + rte_mbuf_prefetch_part2((struct rte_mbuf *)(pref_addr)); + } + + addr = cookie_addr_high | + pp2_ppio_inq_desc_get_cookie(&descs[i]); + mbuf = (struct rte_mbuf *)addr; + rte_pktmbuf_reset(mbuf); + + /* drop packet in case of mac, overrun or resource error */ + status = pp2_ppio_inq_desc_get_l2_pkt_error(&descs[i]); + if (unlikely(status != PP2_DESC_ERR_OK)) { + struct pp2_buff_inf binf = { + .addr = rte_mbuf_data_iova_default(mbuf), + .cookie = (pp2_cookie_t)(uint64_t)mbuf, + }; + + pp2_bpool_put_buff(hif, bpool, &binf); + mrvl_port_bpool_size + [bpool->pp2_id][bpool->id][core_id]++; + q->drop_mac++; + continue; + } + + mbuf->data_off += MRVL_PKT_EFFEC_OFFS; + mbuf->pkt_len = pp2_ppio_inq_desc_get_pkt_len(&descs[i]); + mbuf->data_len = mbuf->pkt_len; + mbuf->port = q->port_id; + mbuf->packet_type = + mrvl_desc_to_packet_type_and_offset(&descs[i], + &l3_offset, + &l4_offset); + mbuf->l2_len = l3_offset; + mbuf->l3_len = l4_offset - l3_offset; + + if (likely(q->cksum_enabled)) + mbuf->ol_flags = mrvl_desc_to_ol_flags(&descs[i]); + + rx_pkts[rx_done++] = mbuf; + q->bytes_recv += mbuf->pkt_len; + } + + if (rte_spinlock_trylock(&q->priv->lock) == 1) { + num = mrvl_get_bpool_size(bpool->pp2_id, bpool->id); + + if (unlikely(num <= q->priv->bpool_min_size || + (!rx_done && num < q->priv->bpool_init_size))) { + ret = mrvl_fill_bpool(q, MRVL_BURST_SIZE); + if (ret) + RTE_LOG(ERR, PMD, "Failed to fill bpool\n"); + } else if (unlikely(num > q->priv->bpool_max_size)) { + int i; + int pkt_to_remove = num - q->priv->bpool_init_size; + struct rte_mbuf *mbuf; + struct pp2_buff_inf buff; + + RTE_LOG(DEBUG, PMD, + "\nport-%d:%d: bpool %d oversize - remove %d buffers (pool size: %d -> %d)\n", + bpool->pp2_id, q->priv->ppio->port_id, + bpool->id, pkt_to_remove, num, + q->priv->bpool_init_size); + + for (i = 0; i < pkt_to_remove; i++) { + ret = pp2_bpool_get_buff(hif, bpool, &buff); + if (ret) + break; + mbuf = (struct rte_mbuf *) + (cookie_addr_high | buff.cookie); + rte_pktmbuf_free(mbuf); + } + mrvl_port_bpool_size + [bpool->pp2_id][bpool->id][core_id] -= i; + } + rte_spinlock_unlock(&q->priv->lock); + } + + return rx_done; +} + +/** + * Prepare offload information. + * + * @param ol_flags + * Offload flags. + * @param packet_type + * Packet type bitfield. + * @param l3_type + * Pointer to the pp2_ouq_l3_type structure. 
+ * @param l4_type + * Pointer to the pp2_outq_l4_type structure. + * @param gen_l3_cksum + * Will be set to 1 in case l3 checksum is computed. + * @param l4_cksum + * Will be set to 1 in case l4 checksum is computed. + * + * @return + * 0 on success, negative error value otherwise. + */ +static inline int +mrvl_prepare_proto_info(uint64_t ol_flags, uint32_t packet_type, + enum pp2_outq_l3_type *l3_type, + enum pp2_outq_l4_type *l4_type, + int *gen_l3_cksum, + int *gen_l4_cksum) +{ + /* + * Based on ol_flags prepare information + * for pp2_ppio_outq_desc_set_proto_info() which setups descriptor + * for offloading. + */ + if (ol_flags & PKT_TX_IPV4) { + *l3_type = PP2_OUTQ_L3_TYPE_IPV4; + *gen_l3_cksum = ol_flags & PKT_TX_IP_CKSUM ? 1 : 0; + } else if (ol_flags & PKT_TX_IPV6) { + *l3_type = PP2_OUTQ_L3_TYPE_IPV6; + /* no checksum for ipv6 header */ + *gen_l3_cksum = 0; + } else { + /* if something different then stop processing */ + return -1; + } + + ol_flags &= PKT_TX_L4_MASK; + if ((packet_type & RTE_PTYPE_L4_TCP) && + ol_flags == PKT_TX_TCP_CKSUM) { + *l4_type = PP2_OUTQ_L4_TYPE_TCP; + *gen_l4_cksum = 1; + } else if ((packet_type & RTE_PTYPE_L4_UDP) && + ol_flags == PKT_TX_UDP_CKSUM) { + *l4_type = PP2_OUTQ_L4_TYPE_UDP; + *gen_l4_cksum = 1; + } else { + *l4_type = PP2_OUTQ_L4_TYPE_OTHER; + /* no checksum for other type */ + *gen_l4_cksum = 0; + } + + return 0; +} + +/** + * Release already sent buffers to bpool (buffer-pool). + * + * @param ppio + * Pointer to the port structure. + * @param hif + * Pointer to the MUSDK hardware interface. + * @param sq + * Pointer to the shadow queue. + * @param qid + * Queue id number. + * @param force + * Force releasing packets. + */ +static inline void +mrvl_free_sent_buffers(struct pp2_ppio *ppio, struct pp2_hif *hif, + unsigned int core_id, struct mrvl_shadow_txq *sq, + int qid, int force) +{ + struct buff_release_entry *entry; + uint16_t nb_done = 0, num = 0, skip_bufs = 0; + int i; + + pp2_ppio_get_num_outq_done(ppio, hif, qid, &nb_done); + + sq->num_to_release += nb_done; + + if (likely(!force && + sq->num_to_release < MRVL_PP2_BUF_RELEASE_BURST_SIZE)) + return; + + nb_done = sq->num_to_release; + sq->num_to_release = 0; + + for (i = 0; i < nb_done; i++) { + entry = &sq->ent[sq->tail + num]; + if (unlikely(!entry->buff.addr)) { + RTE_LOG(ERR, PMD, + "Shadow memory @%d: cookie(%lx), pa(%lx)!\n", + sq->tail, (u64)entry->buff.cookie, + (u64)entry->buff.addr); + skip_bufs = 1; + goto skip; + } + + if (unlikely(!entry->bpool)) { + struct rte_mbuf *mbuf; + + mbuf = (struct rte_mbuf *) + (cookie_addr_high | entry->buff.cookie); + rte_pktmbuf_free(mbuf); + skip_bufs = 1; + goto skip; + } + + mrvl_port_bpool_size + [entry->bpool->pp2_id][entry->bpool->id][core_id]++; + num++; + if (unlikely(sq->tail + num == MRVL_PP2_TX_SHADOWQ_SIZE)) + goto skip; + continue; +skip: + if (likely(num)) + pp2_bpool_put_buffs(hif, &sq->ent[sq->tail], &num); + num += skip_bufs; + sq->tail = (sq->tail + num) & MRVL_PP2_TX_SHADOWQ_MASK; + sq->size -= num; + num = 0; + skip_bufs = 0; + } + + if (likely(num)) { + pp2_bpool_put_buffs(hif, &sq->ent[sq->tail], &num); + sq->tail = (sq->tail + num) & MRVL_PP2_TX_SHADOWQ_MASK; + sq->size -= num; + } +} + +/** + * DPDK callback for transmit. + * + * @param txq + * Generic pointer transmit queue. + * @param tx_pkts + * Packets to transmit. + * @param nb_pkts + * Number of packets in array. + * + * @return + * Number of packets successfully transmitted. 
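+ *
+ * Reached through the generic transmit API; unsent packets remain owned by
+ * the caller, e.g. (illustrative):
+ * @code
+ * uint16_t sent = rte_eth_tx_burst(port_id, 0, pkts, nb_pkts);
+ *
+ * while (sent < nb_pkts)
+ *	rte_pktmbuf_free(pkts[sent++]);
+ * @endcode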
+ */ +static uint16_t +mrvl_tx_pkt_burst(void *txq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) +{ + struct mrvl_txq *q = txq; + struct mrvl_shadow_txq *sq; + struct pp2_hif *hif; + struct pp2_ppio_desc descs[nb_pkts]; + unsigned int core_id = rte_lcore_id(); + int i, ret, bytes_sent = 0; + uint16_t num, sq_free_size; + uint64_t addr; + + hif = mrvl_get_hif(q->priv, core_id); + sq = &q->shadow_txqs[core_id]; + + if (unlikely(!q->priv->ppio || !hif)) + return 0; + + if (sq->size) + mrvl_free_sent_buffers(q->priv->ppio, hif, core_id, + sq, q->queue_id, 0); + + sq_free_size = MRVL_PP2_TX_SHADOWQ_SIZE - sq->size - 1; + if (unlikely(nb_pkts > sq_free_size)) { + RTE_LOG(DEBUG, PMD, + "No room in shadow queue for %d packets! %d packets will be sent.\n", + nb_pkts, sq_free_size); + nb_pkts = sq_free_size; + } + + for (i = 0; i < nb_pkts; i++) { + struct rte_mbuf *mbuf = tx_pkts[i]; + int gen_l3_cksum, gen_l4_cksum; + enum pp2_outq_l3_type l3_type; + enum pp2_outq_l4_type l4_type; + + if (likely(nb_pkts - i > MRVL_MUSDK_PREFETCH_SHIFT)) { + struct rte_mbuf *pref_pkt_hdr; + + pref_pkt_hdr = tx_pkts[i + MRVL_MUSDK_PREFETCH_SHIFT]; + rte_mbuf_prefetch_part1(pref_pkt_hdr); + rte_mbuf_prefetch_part2(pref_pkt_hdr); + } + + sq->ent[sq->head].buff.cookie = (pp2_cookie_t)(uint64_t)mbuf; + sq->ent[sq->head].buff.addr = + rte_mbuf_data_iova_default(mbuf); + sq->ent[sq->head].bpool = + (unlikely(mbuf->port >= RTE_MAX_ETHPORTS || + mbuf->refcnt > 1)) ? NULL : + mrvl_port_to_bpool_lookup[mbuf->port]; + sq->head = (sq->head + 1) & MRVL_PP2_TX_SHADOWQ_MASK; + sq->size++; + + pp2_ppio_outq_desc_reset(&descs[i]); + pp2_ppio_outq_desc_set_phys_addr(&descs[i], + rte_pktmbuf_iova(mbuf)); + pp2_ppio_outq_desc_set_pkt_offset(&descs[i], 0); + pp2_ppio_outq_desc_set_pkt_len(&descs[i], + rte_pktmbuf_pkt_len(mbuf)); + + bytes_sent += rte_pktmbuf_pkt_len(mbuf); + /* + * in case unsupported ol_flags were passed + * do not update descriptor offload information + */ + ret = mrvl_prepare_proto_info(mbuf->ol_flags, mbuf->packet_type, + &l3_type, &l4_type, &gen_l3_cksum, + &gen_l4_cksum); + if (unlikely(ret)) + continue; + + pp2_ppio_outq_desc_set_proto_info(&descs[i], l3_type, l4_type, + mbuf->l2_len, + mbuf->l2_len + mbuf->l3_len, + gen_l3_cksum, gen_l4_cksum); + } + + num = nb_pkts; + pp2_ppio_send(q->priv->ppio, hif, q->queue_id, descs, &nb_pkts); + /* number of packets that were not sent */ + if (unlikely(num > nb_pkts)) { + for (i = nb_pkts; i < num; i++) { + sq->head = (MRVL_PP2_TX_SHADOWQ_SIZE + sq->head - 1) & + MRVL_PP2_TX_SHADOWQ_MASK; + addr = cookie_addr_high | sq->ent[sq->head].buff.cookie; + bytes_sent -= + rte_pktmbuf_pkt_len((struct rte_mbuf *)addr); + } + sq->size -= num - nb_pkts; + } + + q->bytes_sent += bytes_sent; + + return nb_pkts; +} + +/** + * Initialize packet processor. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_init_pp2(void) +{ + struct pp2_init_params init_params; + + memset(&init_params, 0, sizeof(init_params)); + init_params.hif_reserved_map = MRVL_MUSDK_HIFS_RESERVED; + init_params.bm_pool_reserved_map = MRVL_MUSDK_BPOOLS_RESERVED; + init_params.rss_tbl_reserved_map = MRVL_MUSDK_RSS_RESERVED; + + return pp2_init(&init_params); +} + +/** + * Deinitialize packet processor. + * + * @return + * 0 on success, negative error value otherwise. + */ +static void +mrvl_deinit_pp2(void) +{ + pp2_deinit(); +} + +/** + * Create private device structure. + * + * @param dev_name + * Pointer to the port name passed in the initialization parameters. 
+ * + * @return + * Pointer to the newly allocated private device structure. + */ +static struct mrvl_priv * +mrvl_priv_create(const char *dev_name) +{ + struct pp2_bpool_params bpool_params; + char match[MRVL_MATCH_LEN]; + struct mrvl_priv *priv; + int ret, bpool_bit; + + priv = rte_zmalloc_socket(dev_name, sizeof(*priv), 0, rte_socket_id()); + if (!priv) + return NULL; + + ret = pp2_netdev_get_ppio_info((char *)(uintptr_t)dev_name, + &priv->pp_id, &priv->ppio_id); + if (ret) + goto out_free_priv; + + bpool_bit = mrvl_reserve_bit(&used_bpools[priv->pp_id], + PP2_BPOOL_NUM_POOLS); + if (bpool_bit < 0) + goto out_free_priv; + priv->bpool_bit = bpool_bit; + + snprintf(match, sizeof(match), "pool-%d:%d", priv->pp_id, + priv->bpool_bit); + memset(&bpool_params, 0, sizeof(bpool_params)); + bpool_params.match = match; + bpool_params.buff_len = MRVL_PKT_SIZE_MAX + MRVL_PKT_EFFEC_OFFS; + ret = pp2_bpool_init(&bpool_params, &priv->bpool); + if (ret) + goto out_clear_bpool_bit; + + priv->ppio_params.type = PP2_PPIO_T_NIC; + rte_spinlock_init(&priv->lock); + + return priv; +out_clear_bpool_bit: + used_bpools[priv->pp_id] &= ~(1 << priv->bpool_bit); +out_free_priv: + rte_free(priv); + return NULL; +} + +/** + * Create device representing Ethernet port. + * + * @param name + * Pointer to the port's name. + * + * @return + * 0 on success, negative error value otherwise. + */ +static int +mrvl_eth_dev_create(struct rte_vdev_device *vdev, const char *name) +{ + int ret, fd = socket(AF_INET, SOCK_DGRAM, 0); + struct rte_eth_dev *eth_dev; + struct mrvl_priv *priv; + struct ifreq req; + + eth_dev = rte_eth_dev_allocate(name); + if (!eth_dev) + return -ENOMEM; + + priv = mrvl_priv_create(name); + if (!priv) { + ret = -ENOMEM; + goto out_free_dev; + } + + eth_dev->data->mac_addrs = + rte_zmalloc("mac_addrs", + ETHER_ADDR_LEN * MRVL_MAC_ADDRS_MAX, 0); + if (!eth_dev->data->mac_addrs) { + RTE_LOG(ERR, PMD, "Failed to allocate space for eth addrs\n"); + ret = -ENOMEM; + goto out_free_priv; + } + + memset(&req, 0, sizeof(req)); + strcpy(req.ifr_name, name); + ret = ioctl(fd, SIOCGIFHWADDR, &req); + if (ret) + goto out_free_mac; + + memcpy(eth_dev->data->mac_addrs[0].addr_bytes, + req.ifr_addr.sa_data, ETHER_ADDR_LEN); + + eth_dev->rx_pkt_burst = mrvl_rx_pkt_burst; + eth_dev->tx_pkt_burst = mrvl_tx_pkt_burst; + eth_dev->data->kdrv = RTE_KDRV_NONE; + eth_dev->data->dev_private = priv; + eth_dev->device = &vdev->device; + eth_dev->dev_ops = &mrvl_ops; + + return 0; +out_free_mac: + rte_free(eth_dev->data->mac_addrs); +out_free_dev: + rte_eth_dev_release_port(eth_dev); +out_free_priv: + rte_free(priv); + + return ret; +} + +/** + * Cleanup previously created device representing Ethernet port. + * + * @param name + * Pointer to the port name. + */ +static void +mrvl_eth_dev_destroy(const char *name) +{ + struct rte_eth_dev *eth_dev; + struct mrvl_priv *priv; + + eth_dev = rte_eth_dev_allocated(name); + if (!eth_dev) + return; + + priv = eth_dev->data->dev_private; + pp2_bpool_deinit(priv->bpool); + used_bpools[priv->pp_id] &= ~(1 << priv->bpool_bit); + rte_free(priv); + rte_free(eth_dev->data->mac_addrs); + rte_eth_dev_release_port(eth_dev); +} + +/** + * Callback used by rte_kvargs_process() during argument parsing. + * + * @param key + * Pointer to the parsed key (unused). + * @param value + * Pointer to the parsed value. + * @param extra_args + * Pointer to the extra arguments which contains address of the + * table of pointers to parsed interface names. + * + * @return + * Always 0. 
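+ *
+ * Each occurrence of the interface key in the vdev argument string adds one
+ * name, so e.g. "iface=eth0,iface=eth2" yields two entries (interface names
+ * are illustrative).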
+ */
+static int
+mrvl_get_ifnames(const char *key __rte_unused, const char *value,
+		 void *extra_args)
+{
+	struct mrvl_ifnames *ifnames = extra_args;
+
+	ifnames->names[ifnames->idx++] = value;
+
+	return 0;
+}
+
+/**
+ * Deinitialize per-lcore MUSDK hardware interfaces (hifs).
+ */
+static void
+mrvl_deinit_hifs(void)
+{
+	int i;
+
+	for (i = mrvl_lcore_first; i <= mrvl_lcore_last; i++) {
+		if (hifs[i])
+			pp2_hif_deinit(hifs[i]);
+	}
+	used_hifs = MRVL_MUSDK_HIFS_RESERVED;
+	memset(hifs, 0, sizeof(hifs));
+}
+
+/**
+ * DPDK callback to register the virtual device.
+ *
+ * @param vdev
+ *   Pointer to the virtual device.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */
+static int
+rte_pmd_mrvl_probe(struct rte_vdev_device *vdev)
+{
+	struct rte_kvargs *kvlist;
+	struct mrvl_ifnames ifnames;
+	int ret = -EINVAL;
+	uint32_t i, ifnum, cfgnum;
+	const char *params;
+
+	params = rte_vdev_device_args(vdev);
+	if (!params)
+		return -EINVAL;
+
+	kvlist = rte_kvargs_parse(params, valid_args);
+	if (!kvlist)
+		return -EINVAL;
+
+	ifnum = rte_kvargs_count(kvlist, MRVL_IFACE_NAME_ARG);
+	if (ifnum > RTE_DIM(ifnames.names))
+		goto out_free_kvlist;
+
+	ifnames.idx = 0;
+	rte_kvargs_process(kvlist, MRVL_IFACE_NAME_ARG,
+			   mrvl_get_ifnames, &ifnames);
+
+	/*
+	 * The below system initialization should be done only once,
+	 * on the first provided configuration file
+	 */
+	if (!mrvl_qos_cfg) {
+		cfgnum = rte_kvargs_count(kvlist, MRVL_CFG_ARG);
+		RTE_LOG(INFO, PMD, "Parsing config file!\n");
+		if (cfgnum > 1) {
+			RTE_LOG(ERR, PMD, "Cannot handle more than one config file!\n");
+			goto out_free_kvlist;
+		} else if (cfgnum == 1) {
+			rte_kvargs_process(kvlist, MRVL_CFG_ARG,
+					   mrvl_get_qoscfg, &mrvl_qos_cfg);
+		}
+	}
+
+	if (mrvl_dev_num)
+		goto init_devices;
+
+	RTE_LOG(INFO, PMD, "Perform MUSDK initializations\n");
+	/*
+	 * ret == -EEXIST is correct, it means DMA
+	 * has been already initialized (by another PMD).
+	 */
+	ret = mv_sys_dma_mem_init(MRVL_MUSDK_DMA_MEMSIZE);
+	if (ret < 0) {
+		if (ret != -EEXIST)
+			goto out_free_kvlist;
+		else
+			RTE_LOG(INFO, PMD,
+				"DMA memory has been already initialized by a different driver.\n");
+	}
+
+	ret = mrvl_init_pp2();
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to init PP!\n");
+		goto out_deinit_dma;
+	}
+
+	memset(mrvl_port_bpool_size, 0, sizeof(mrvl_port_bpool_size));
+	memset(mrvl_port_to_bpool_lookup, 0, sizeof(mrvl_port_to_bpool_lookup));
+
+	mrvl_lcore_first = RTE_MAX_LCORE;
+	mrvl_lcore_last = 0;
+
+init_devices:
+	for (i = 0; i < ifnum; i++) {
+		RTE_LOG(INFO, PMD, "Creating %s\n", ifnames.names[i]);
+		ret = mrvl_eth_dev_create(vdev, ifnames.names[i]);
+		if (ret)
+			goto out_cleanup;
+	}
+	mrvl_dev_num += ifnum;
+
+	rte_kvargs_free(kvlist);
+
+	return 0;
+out_cleanup:
+	for (; i > 0; i--)
+		mrvl_eth_dev_destroy(ifnames.names[i - 1]);
+
+	if (mrvl_dev_num == 0)
+		mrvl_deinit_pp2();
+out_deinit_dma:
+	if (mrvl_dev_num == 0)
+		mv_sys_dma_mem_destroy();
+out_free_kvlist:
+	rte_kvargs_free(kvlist);
+
+	return ret;
+}
+
+/**
+ * DPDK callback to remove virtual device.
+ *
+ * @param vdev
+ *   Pointer to the removed virtual device.
+ *
+ * @return
+ *   0 on success, negative error value otherwise.
+ */ +static int +rte_pmd_mrvl_remove(struct rte_vdev_device *vdev) +{ + int i; + const char *name; + + name = rte_vdev_device_name(vdev); + if (!name) + return -EINVAL; + + RTE_LOG(INFO, PMD, "Removing %s\n", name); + + for (i = 0; i < rte_eth_dev_count(); i++) { + char ifname[RTE_ETH_NAME_MAX_LEN]; + + rte_eth_dev_get_name_by_port(i, ifname); + mrvl_eth_dev_destroy(ifname); + mrvl_dev_num--; + } + + if (mrvl_dev_num == 0) { + RTE_LOG(INFO, PMD, "Perform MUSDK deinit\n"); + mrvl_deinit_hifs(); + mrvl_deinit_pp2(); + mv_sys_dma_mem_destroy(); + } + + return 0; +} + +static struct rte_vdev_driver pmd_mrvl_drv = { + .probe = rte_pmd_mrvl_probe, + .remove = rte_pmd_mrvl_remove, +}; + +RTE_PMD_REGISTER_VDEV(net_mvpp2, pmd_mrvl_drv); +RTE_PMD_REGISTER_ALIAS(net_mvpp2, eth_mvpp2); diff --git a/drivers/net/mvpp2/mrvl_ethdev.h b/drivers/net/mvpp2/mrvl_ethdev.h new file mode 100644 index 0000000000..3a428092df --- /dev/null +++ b/drivers/net/mvpp2/mrvl_ethdev.h @@ -0,0 +1,101 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2017 Marvell International Ltd. + * Copyright(c) 2017 Semihalf. + * All rights reserved. + */ + +#ifndef _MRVL_ETHDEV_H_ +#define _MRVL_ETHDEV_H_ + +#include +#include + +#include +#include +#include +#include +#include +#include + +/** Maximum number of rx queues per port */ +#define MRVL_PP2_RXQ_MAX 32 + +/** Maximum number of tx queues per port */ +#define MRVL_PP2_TXQ_MAX 8 + +/** Minimum number of descriptors in tx queue */ +#define MRVL_PP2_TXD_MIN 16 + +/** Maximum number of descriptors in tx queue */ +#define MRVL_PP2_TXD_MAX 2048 + +/** Tx queue descriptors alignment */ +#define MRVL_PP2_TXD_ALIGN 16 + +/** Minimum number of descriptors in rx queue */ +#define MRVL_PP2_RXD_MIN 16 + +/** Maximum number of descriptors in rx queue */ +#define MRVL_PP2_RXD_MAX 2048 + +/** Rx queue descriptors alignment */ +#define MRVL_PP2_RXD_ALIGN 16 + +/** Maximum number of descriptors in tx aggregated queue */ +#define MRVL_PP2_AGGR_TXQD_MAX 2048 + +/** Maximum number of Traffic Classes. */ +#define MRVL_PP2_TC_MAX 8 + +/** Packet offset inside RX buffer. */ +#define MRVL_PKT_OFFS 64 + +/** Maximum number of descriptors in shadow queue. Must be power of 2 */ +#define MRVL_PP2_TX_SHADOWQ_SIZE MRVL_PP2_TXD_MAX + +/** Shadow queue size mask (since shadow queue size is power of 2) */ +#define MRVL_PP2_TX_SHADOWQ_MASK (MRVL_PP2_TX_SHADOWQ_SIZE - 1) + +/** Minimum number of sent buffers to release from shadow queue to BM */ +#define MRVL_PP2_BUF_RELEASE_BURST_SIZE 64 + +struct mrvl_priv { + /* Hot fields, used in fast path. */ + struct pp2_bpool *bpool; /**< BPool pointer */ + struct pp2_ppio *ppio; /**< Port handler pointer */ + rte_spinlock_t lock; /**< Spinlock for checking bpool status */ + uint16_t bpool_max_size; /**< BPool maximum size */ + uint16_t bpool_min_size; /**< BPool minimum size */ + uint16_t bpool_init_size; /**< Configured BPool size */ + + /** Mapping for DPDK rx queue->(TC, MRVL relative inq) */ + struct { + uint8_t tc; /**< Traffic Class */ + uint8_t inq; /**< Relative in-queue number */ + } rxq_map[MRVL_PP2_RXQ_MAX] __rte_cache_aligned; + + /* Configuration data, used sporadically. 
*/ + uint8_t pp_id; + uint8_t ppio_id; + uint8_t bpool_bit; + uint8_t rss_hf_tcp; + uint8_t uc_mc_flushed; + uint8_t vlan_flushed; + uint8_t isolated; + + struct pp2_ppio_params ppio_params; + struct pp2_cls_qos_tbl_params qos_tbl_params; + struct pp2_cls_tbl *qos_tbl; + uint16_t nb_rx_queues; + + struct pp2_cls_tbl_params cls_tbl_params; + struct pp2_cls_tbl *cls_tbl; + uint32_t cls_tbl_pattern; + LIST_HEAD(mrvl_flows, rte_flow) flows; + + struct pp2_cls_plcr *policer; +}; + +/** Flow operations forward declaration. */ +extern const struct rte_flow_ops mrvl_flow_ops; +#endif /* _MRVL_ETHDEV_H_ */ diff --git a/drivers/net/mvpp2/mrvl_flow.c b/drivers/net/mvpp2/mrvl_flow.c new file mode 100644 index 0000000000..8fd4dbfb19 --- /dev/null +++ b/drivers/net/mvpp2/mrvl_flow.c @@ -0,0 +1,2759 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2018 Marvell International Ltd. + * Copyright(c) 2018 Semihalf. + * All rights reserved. + */ + +#include +#include +#include +#include + +#include + +#ifdef container_of +#undef container_of +#endif + +#include "mrvl_ethdev.h" +#include "mrvl_qos.h" +#include "env/mv_common.h" /* for BIT() */ + +/** Number of rules in the classifier table. */ +#define MRVL_CLS_MAX_NUM_RULES 20 + +/** Size of the classifier key and mask strings. */ +#define MRVL_CLS_STR_SIZE_MAX 40 + +/** Parsed fields in processed rte_flow_item. */ +enum mrvl_parsed_fields { + /* eth flags */ + F_DMAC = BIT(0), + F_SMAC = BIT(1), + F_TYPE = BIT(2), + /* vlan flags */ + F_VLAN_ID = BIT(3), + F_VLAN_PRI = BIT(4), + F_VLAN_TCI = BIT(5), /* not supported by MUSDK yet */ + /* ip4 flags */ + F_IP4_TOS = BIT(6), + F_IP4_SIP = BIT(7), + F_IP4_DIP = BIT(8), + F_IP4_PROTO = BIT(9), + /* ip6 flags */ + F_IP6_TC = BIT(10), /* not supported by MUSDK yet */ + F_IP6_SIP = BIT(11), + F_IP6_DIP = BIT(12), + F_IP6_FLOW = BIT(13), + F_IP6_NEXT_HDR = BIT(14), + /* tcp flags */ + F_TCP_SPORT = BIT(15), + F_TCP_DPORT = BIT(16), + /* udp flags */ + F_UDP_SPORT = BIT(17), + F_UDP_DPORT = BIT(18), +}; + +/** PMD-specific definition of a flow rule handle. 
*/ +struct rte_flow { + LIST_ENTRY(rte_flow) next; + + enum mrvl_parsed_fields pattern; + + struct pp2_cls_tbl_rule rule; + struct pp2_cls_cos_desc cos; + struct pp2_cls_tbl_action action; +}; + +static const enum rte_flow_item_type pattern_eth[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_vlan[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_vlan_ip[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_vlan_ip6[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_ip4[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_ip4_tcp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_ip4_udp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_ip6[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_ip6_tcp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_eth_ip6_udp[] = { + RTE_FLOW_ITEM_TYPE_ETH, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_vlan[] = { + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_vlan_ip[] = { + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_vlan_ip_tcp[] = { + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_vlan_ip_udp[] = { + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_vlan_ip6[] = { + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_vlan_ip6_tcp[] = { + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_vlan_ip6_udp[] = { + RTE_FLOW_ITEM_TYPE_VLAN, + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_ip[] = { + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_ip6[] = { + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_ip_tcp[] = { + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_ip6_tcp[] = { + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_ip_udp[] = { + RTE_FLOW_ITEM_TYPE_IPV4, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END +}; + 
+static const enum rte_flow_item_type pattern_ip6_udp[] = { + RTE_FLOW_ITEM_TYPE_IPV6, + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_tcp[] = { + RTE_FLOW_ITEM_TYPE_TCP, + RTE_FLOW_ITEM_TYPE_END +}; + +static const enum rte_flow_item_type pattern_udp[] = { + RTE_FLOW_ITEM_TYPE_UDP, + RTE_FLOW_ITEM_TYPE_END +}; + +#define MRVL_VLAN_ID_MASK 0x0fff +#define MRVL_VLAN_PRI_MASK 0x7000 +#define MRVL_IPV4_DSCP_MASK 0xfc +#define MRVL_IPV4_ADDR_MASK 0xffffffff +#define MRVL_IPV6_FLOW_MASK 0x0fffff + +/** + * Given a flow item, return the next non-void one. + * + * @param items Pointer to the item in the table. + * @returns Next not-void item, NULL otherwise. + */ +static const struct rte_flow_item * +mrvl_next_item(const struct rte_flow_item *items) +{ + const struct rte_flow_item *item = items; + + for (; item->type != RTE_FLOW_ITEM_TYPE_END; item++) { + if (item->type != RTE_FLOW_ITEM_TYPE_VOID) + return item; + } + + return NULL; +} + +/** + * Allocate memory for classifier rule key and mask fields. + * + * @param field Pointer to the classifier rule. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_alloc_key_mask(struct pp2_cls_rule_key_field *field) +{ + unsigned int id = rte_socket_id(); + + field->key = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id); + if (!field->key) + goto out; + + field->mask = rte_zmalloc_socket(NULL, MRVL_CLS_STR_SIZE_MAX, 0, id); + if (!field->mask) + goto out_mask; + + return 0; +out_mask: + rte_free(field->key); +out: + field->key = NULL; + field->mask = NULL; + return -1; +} + +/** + * Free memory allocated for classifier rule key and mask fields. + * + * @param field Pointer to the classifier rule. + */ +static void +mrvl_free_key_mask(struct pp2_cls_rule_key_field *field) +{ + rte_free(field->key); + rte_free(field->mask); + field->key = NULL; + field->mask = NULL; +} + +/** + * Free memory allocated for all classifier rule key and mask fields. + * + * @param rule Pointer to the classifier table rule. + */ +static void +mrvl_free_all_key_mask(struct pp2_cls_tbl_rule *rule) +{ + int i; + + for (i = 0; i < rule->num_fields; i++) + mrvl_free_key_mask(&rule->fields[i]); + rule->num_fields = 0; +} + +/* + * Initialize rte flow item parsing. + * + * @param item Pointer to the flow item. + * @param spec_ptr Pointer to the specific item pointer. + * @param mask_ptr Pointer to the specific item's mask pointer. + * @def_mask Pointer to the default mask. + * @size Size of the flow item. + * @error Pointer to the rte flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_init(const struct rte_flow_item *item, + const void **spec_ptr, + const void **mask_ptr, + const void *def_mask, + unsigned int size, + struct rte_flow_error *error) +{ + const uint8_t *spec; + const uint8_t *mask; + const uint8_t *last; + uint8_t zeros[size]; + + memset(zeros, 0, size); + + if (item == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "NULL item\n"); + return -rte_errno; + } + + if ((item->last != NULL || item->mask != NULL) && item->spec == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, item, + "Mask or last is set without spec\n"); + return -rte_errno; + } + + /* + * If "mask" is not set, default mask is used, + * but if default mask is NULL, "mask" should be set. 
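+	 * Callers typically pass one of the rte_flow default masks here,
+	 * e.g. &rte_flow_item_eth_mask for an ETH item.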
+ */ + if (item->mask == NULL) { + if (def_mask == NULL) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "Mask should be specified\n"); + return -rte_errno; + } + + mask = (const uint8_t *)def_mask; + } else { + mask = (const uint8_t *)item->mask; + } + + spec = (const uint8_t *)item->spec; + last = (const uint8_t *)item->last; + + if (spec == NULL) { + rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM, + NULL, "Spec should be specified\n"); + return -rte_errno; + } + + /* + * If field values in "last" are either 0 or equal to the corresponding + * values in "spec" then they are ignored. + */ + if (last != NULL && + !memcmp(last, zeros, size) && + memcmp(last, spec, size) != 0) { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ITEM, NULL, + "Ranging is not supported\n"); + return -rte_errno; + } + + *spec_ptr = spec; + *mask_ptr = mask; + + return 0; +} + +/** + * Parse the eth flow item. + * + * This will create classifier rule that matches either destination or source + * mac. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param mask Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static int +mrvl_parse_mac(const struct rte_flow_item_eth *spec, + const struct rte_flow_item_eth *mask, + int parse_dst, struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + const uint8_t *k, *m; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + if (parse_dst) { + k = spec->dst.addr_bytes; + m = mask->dst.addr_bytes; + + flow->pattern |= F_DMAC; + } else { + k = spec->src.addr_bytes; + m = mask->src.addr_bytes; + + flow->pattern |= F_SMAC; + } + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 6; + + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, + "%02x:%02x:%02x:%02x:%02x:%02x", + k[0], k[1], k[2], k[3], k[4], k[5]); + + snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, + "%02x:%02x:%02x:%02x:%02x:%02x", + m[0], m[1], m[2], m[3], m[4], m[5]); + + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Helper for parsing the eth flow item destination mac address. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static inline int +mrvl_parse_dmac(const struct rte_flow_item_eth *spec, + const struct rte_flow_item_eth *mask, + struct rte_flow *flow) +{ + return mrvl_parse_mac(spec, mask, 1, flow); +} + +/** + * Helper for parsing the eth flow item source mac address. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static inline int +mrvl_parse_smac(const struct rte_flow_item_eth *spec, + const struct rte_flow_item_eth *mask, + struct rte_flow *flow) +{ + return mrvl_parse_mac(spec, mask, 0, flow); +} + +/** + * Parse the ether type field of the eth flow item. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. 
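+ *
+ * For illustration, a spec matching the IPv4 ether type could be built as:
+ * @code
+ * struct rte_flow_item_eth spec = {
+ *	.type = rte_cpu_to_be_16(ETHER_TYPE_IPv4),
+ * };
+ * @endcode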
+ */ +static int +mrvl_parse_type(const struct rte_flow_item_eth *spec, + const struct rte_flow_item_eth *mask __rte_unused, + struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + uint16_t k; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 2; + + k = rte_be_to_cpu_16(spec->type); + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); + + flow->pattern |= F_TYPE; + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Parse the vid field of the vlan rte flow item. + * + * This will create classifier rule that matches vid. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static int +mrvl_parse_vlan_id(const struct rte_flow_item_vlan *spec, + const struct rte_flow_item_vlan *mask __rte_unused, + struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + uint16_t k; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 2; + + k = rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_ID_MASK; + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); + + flow->pattern |= F_VLAN_ID; + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Parse the pri field of the vlan rte flow item. + * + * This will create classifier rule that matches pri. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static int +mrvl_parse_vlan_pri(const struct rte_flow_item_vlan *spec, + const struct rte_flow_item_vlan *mask __rte_unused, + struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + uint16_t k; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 1; + + k = (rte_be_to_cpu_16(spec->tci) & MRVL_VLAN_PRI_MASK) >> 13; + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); + + flow->pattern |= F_VLAN_PRI; + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Parse the dscp field of the ipv4 rte flow item. + * + * This will create classifier rule that matches dscp field. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. 
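+ *
+ * Worked example (illustrative): for Expedited Forwarding traffic the ToS
+ * byte is 0xb8, so (0xb8 & MRVL_IPV4_DSCP_MASK) >> 2 == 46, i.e. DSCP 46.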
+ */ +static int +mrvl_parse_ip4_dscp(const struct rte_flow_item_ipv4 *spec, + const struct rte_flow_item_ipv4 *mask, + struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + uint8_t k, m; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 1; + + k = (spec->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2; + m = (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) >> 2; + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); + snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m); + + flow->pattern |= F_IP4_TOS; + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Parse either source or destination ip addresses of the ipv4 flow item. + * + * This will create classifier rule that matches either destination + * or source ip field. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static int +mrvl_parse_ip4_addr(const struct rte_flow_item_ipv4 *spec, + const struct rte_flow_item_ipv4 *mask, + int parse_dst, struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + struct in_addr k; + uint32_t m; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + memset(&k, 0, sizeof(k)); + if (parse_dst) { + k.s_addr = spec->hdr.dst_addr; + m = rte_be_to_cpu_32(mask->hdr.dst_addr); + + flow->pattern |= F_IP4_DIP; + } else { + k.s_addr = spec->hdr.src_addr; + m = rte_be_to_cpu_32(mask->hdr.src_addr); + + flow->pattern |= F_IP4_SIP; + } + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 4; + + inet_ntop(AF_INET, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX); + snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "0x%x", m); + + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Helper for parsing destination ip of the ipv4 flow item. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static inline int +mrvl_parse_ip4_dip(const struct rte_flow_item_ipv4 *spec, + const struct rte_flow_item_ipv4 *mask, + struct rte_flow *flow) +{ + return mrvl_parse_ip4_addr(spec, mask, 1, flow); +} + +/** + * Helper for parsing source ip of the ipv4 flow item. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static inline int +mrvl_parse_ip4_sip(const struct rte_flow_item_ipv4 *spec, + const struct rte_flow_item_ipv4 *mask, + struct rte_flow *flow) +{ + return mrvl_parse_ip4_addr(spec, mask, 0, flow); +} + +/** + * Parse the proto field of the ipv4 rte flow item. + * + * This will create classifier rule that matches proto field. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. 
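+ *
+ * An illustrative spec selecting UDP (IPPROTO_UDP == 17; example only, not
+ * part of this patch):
+ *
+ *   struct rte_flow_item_ipv4 spec = {
+ *           .hdr = { .next_proto_id = IPPROTO_UDP },
+ *   };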
+ */ +static int +mrvl_parse_ip4_proto(const struct rte_flow_item_ipv4 *spec, + const struct rte_flow_item_ipv4 *mask __rte_unused, + struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + uint8_t k = spec->hdr.next_proto_id; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 1; + + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); + + flow->pattern |= F_IP4_PROTO; + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Parse either source or destination ip addresses of the ipv6 rte flow item. + * + * This will create classifier rule that matches either destination + * or source ip field. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static int +mrvl_parse_ip6_addr(const struct rte_flow_item_ipv6 *spec, + const struct rte_flow_item_ipv6 *mask, + int parse_dst, struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + int size = sizeof(spec->hdr.dst_addr); + struct in6_addr k, m; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + memset(&k, 0, sizeof(k)); + if (parse_dst) { + memcpy(k.s6_addr, spec->hdr.dst_addr, size); + memcpy(m.s6_addr, mask->hdr.dst_addr, size); + + flow->pattern |= F_IP6_DIP; + } else { + memcpy(k.s6_addr, spec->hdr.src_addr, size); + memcpy(m.s6_addr, mask->hdr.src_addr, size); + + flow->pattern |= F_IP6_SIP; + } + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 16; + + inet_ntop(AF_INET6, &k, (char *)key_field->key, MRVL_CLS_STR_SIZE_MAX); + inet_ntop(AF_INET6, &m, (char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX); + + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Helper for parsing destination ip of the ipv6 flow item. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static inline int +mrvl_parse_ip6_dip(const struct rte_flow_item_ipv6 *spec, + const struct rte_flow_item_ipv6 *mask, + struct rte_flow *flow) +{ + return mrvl_parse_ip6_addr(spec, mask, 1, flow); +} + +/** + * Helper for parsing source ip of the ipv6 flow item. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static inline int +mrvl_parse_ip6_sip(const struct rte_flow_item_ipv6 *spec, + const struct rte_flow_item_ipv6 *mask, + struct rte_flow *flow) +{ + return mrvl_parse_ip6_addr(spec, mask, 0, flow); +} + +/** + * Parse the flow label of the ipv6 flow item. + * + * This will create classifier rule that matches flow field. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. 
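+ *
+ * The flow label is the low 20 bits of vtc_flow. For example (illustrative),
+ * a vtc_flow of 0x600abcde gives
+ * rte_be_to_cpu_32(vtc_flow) & MRVL_IPV6_FLOW_MASK == 0xabcde.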
+ */ +static int +mrvl_parse_ip6_flow(const struct rte_flow_item_ipv6 *spec, + const struct rte_flow_item_ipv6 *mask, + struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + uint32_t k = rte_be_to_cpu_32(spec->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK, + m = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 3; + + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); + snprintf((char *)key_field->mask, MRVL_CLS_STR_SIZE_MAX, "%u", m); + + flow->pattern |= F_IP6_FLOW; + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Parse the next header of the ipv6 flow item. + * + * This will create classifier rule that matches next header field. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static int +mrvl_parse_ip6_next_hdr(const struct rte_flow_item_ipv6 *spec, + const struct rte_flow_item_ipv6 *mask __rte_unused, + struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + uint8_t k = spec->hdr.proto; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 1; + + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); + + flow->pattern |= F_IP6_NEXT_HDR; + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Parse destination or source port of the tcp flow item. + * + * This will create classifier rule that matches either destination or + * source tcp port. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static int +mrvl_parse_tcp_port(const struct rte_flow_item_tcp *spec, + const struct rte_flow_item_tcp *mask __rte_unused, + int parse_dst, struct rte_flow *flow) +{ + struct pp2_cls_rule_key_field *key_field; + uint16_t k; + + if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS) + return -ENOSPC; + + key_field = &flow->rule.fields[flow->rule.num_fields]; + mrvl_alloc_key_mask(key_field); + key_field->size = 2; + + if (parse_dst) { + k = rte_be_to_cpu_16(spec->hdr.dst_port); + + flow->pattern |= F_TCP_DPORT; + } else { + k = rte_be_to_cpu_16(spec->hdr.src_port); + + flow->pattern |= F_TCP_SPORT; + } + + snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k); + + flow->rule.num_fields += 1; + + return 0; +} + +/** + * Helper for parsing the tcp source port of the tcp flow item. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. + * @return 0 in case of success, negative error value otherwise. + */ +static inline int +mrvl_parse_tcp_sport(const struct rte_flow_item_tcp *spec, + const struct rte_flow_item_tcp *mask, + struct rte_flow *flow) +{ + return mrvl_parse_tcp_port(spec, mask, 0, flow); +} + +/** + * Helper for parsing the tcp destination port of the tcp flow item. + * + * @param spec Pointer to the specific flow item. + * @param mask Pointer to the specific flow item's mask. + * @param flow Pointer to the flow. 
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_tcp_dport(const struct rte_flow_item_tcp *spec,
+ const struct rte_flow_item_tcp *mask,
+ struct rte_flow *flow)
+{
+ return mrvl_parse_tcp_port(spec, mask, 1, flow);
+}
+
+/**
+ * Parse destination or source port of the udp flow item.
+ *
+ * This will create classifier rule that matches either destination or
+ * source udp port.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param parse_dst 1 to parse the destination port, 0 to parse the source port.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static int
+mrvl_parse_udp_port(const struct rte_flow_item_udp *spec,
+ const struct rte_flow_item_udp *mask __rte_unused,
+ int parse_dst, struct rte_flow *flow)
+{
+ struct pp2_cls_rule_key_field *key_field;
+ uint16_t k;
+
+ if (flow->rule.num_fields >= PP2_CLS_TBL_MAX_NUM_FIELDS)
+ return -ENOSPC;
+
+ key_field = &flow->rule.fields[flow->rule.num_fields];
+ mrvl_alloc_key_mask(key_field);
+ key_field->size = 2;
+
+ if (parse_dst) {
+ k = rte_be_to_cpu_16(spec->hdr.dst_port);
+
+ flow->pattern |= F_UDP_DPORT;
+ } else {
+ k = rte_be_to_cpu_16(spec->hdr.src_port);
+
+ flow->pattern |= F_UDP_SPORT;
+ }
+
+ snprintf((char *)key_field->key, MRVL_CLS_STR_SIZE_MAX, "%u", k);
+
+ flow->rule.num_fields += 1;
+
+ return 0;
+}
+
+/**
+ * Helper for parsing the udp source port of the udp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_udp_sport(const struct rte_flow_item_udp *spec,
+ const struct rte_flow_item_udp *mask,
+ struct rte_flow *flow)
+{
+ return mrvl_parse_udp_port(spec, mask, 0, flow);
+}
+
+/**
+ * Helper for parsing the udp destination port of the udp flow item.
+ *
+ * @param spec Pointer to the specific flow item.
+ * @param mask Pointer to the specific flow item's mask.
+ * @param flow Pointer to the flow.
+ * @return 0 in case of success, negative error value otherwise.
+ */
+static inline int
+mrvl_parse_udp_dport(const struct rte_flow_item_udp *spec,
+ const struct rte_flow_item_udp *mask,
+ struct rte_flow *flow)
+{
+ return mrvl_parse_udp_port(spec, mask, 1, flow);
+}
+
+/**
+ * Parse eth flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
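+ *
+ * For instance (illustrative), a mask with dst = ff:ff:ff:ff:ff:ff and all
+ * other fields zeroed makes this function invoke only mrvl_parse_dmac()
+ * below.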
+ */
+static int
+mrvl_parse_eth(const struct rte_flow_item *item, struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_eth *spec = NULL, *mask = NULL;
+ struct ether_addr zero;
+ int ret;
+
+ ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+ &rte_flow_item_eth_mask,
+ sizeof(struct rte_flow_item_eth), error);
+ if (ret)
+ return ret;
+
+ memset(&zero, 0, sizeof(zero));
+
+ if (memcmp(&mask->dst, &zero, sizeof(mask->dst))) {
+ ret = mrvl_parse_dmac(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (memcmp(&mask->src, &zero, sizeof(mask->src))) {
+ ret = mrvl_parse_smac(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (mask->type) {
+ RTE_LOG(WARNING, PMD, "eth type mask is ignored\n");
+ ret = mrvl_parse_type(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ return 0;
+out:
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Reached maximum number of fields in cls tbl key\n");
+ return -rte_errno;
+}
+
+/**
+ * Parse vlan flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_vlan(const struct rte_flow_item *item,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_vlan *spec = NULL, *mask = NULL;
+ uint16_t m;
+ int ret;
+
+ ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+ &rte_flow_item_vlan_mask,
+ sizeof(struct rte_flow_item_vlan), error);
+ if (ret)
+ return ret;
+
+ if (mask->tpid) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "Not supported by classifier\n");
+ return -rte_errno;
+ }
+
+ m = rte_be_to_cpu_16(mask->tci);
+ if (m & MRVL_VLAN_ID_MASK) {
+ RTE_LOG(WARNING, PMD, "vlan id mask is ignored\n");
+ ret = mrvl_parse_vlan_id(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (m & MRVL_VLAN_PRI_MASK) {
+ RTE_LOG(WARNING, PMD, "vlan pri mask is ignored\n");
+ ret = mrvl_parse_vlan_pri(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ return 0;
+out:
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Reached maximum number of fields in cls tbl key\n");
+ return -rte_errno;
+}
+
+/**
+ * Parse ipv4 flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
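+ *
+ * Illustrative example (not from this patch): a spec with
+ * hdr.dst_addr = 10.0.0.1 and a mask with hdr.dst_addr = 0xffffffff produces
+ * a single F_IP4_DIP key field via mrvl_parse_ip4_dip().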
+ */
+static int
+mrvl_parse_ip4(const struct rte_flow_item *item,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv4 *spec = NULL, *mask = NULL;
+ int ret;
+
+ ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+ &rte_flow_item_ipv4_mask,
+ sizeof(struct rte_flow_item_ipv4), error);
+ if (ret)
+ return ret;
+
+ if (mask->hdr.version_ihl ||
+ mask->hdr.total_length ||
+ mask->hdr.packet_id ||
+ mask->hdr.fragment_offset ||
+ mask->hdr.time_to_live ||
+ mask->hdr.hdr_checksum) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "Not supported by classifier\n");
+ return -rte_errno;
+ }
+
+ if (mask->hdr.type_of_service & MRVL_IPV4_DSCP_MASK) {
+ ret = mrvl_parse_ip4_dscp(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (mask->hdr.src_addr) {
+ ret = mrvl_parse_ip4_sip(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (mask->hdr.dst_addr) {
+ ret = mrvl_parse_ip4_dip(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (mask->hdr.next_proto_id) {
+ RTE_LOG(WARNING, PMD, "next proto id mask is ignored\n");
+ ret = mrvl_parse_ip4_proto(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ return 0;
+out:
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Reached maximum number of fields in cls tbl key\n");
+ return -rte_errno;
+}
+
+/**
+ * Parse ipv6 flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_ip6(const struct rte_flow_item *item,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ipv6 *spec = NULL, *mask = NULL;
+ struct ipv6_hdr zero;
+ uint32_t flow_mask;
+ int ret;
+
+ ret = mrvl_parse_init(item, (const void **)&spec,
+ (const void **)&mask,
+ &rte_flow_item_ipv6_mask,
+ sizeof(struct rte_flow_item_ipv6),
+ error);
+ if (ret)
+ return ret;
+
+ memset(&zero, 0, sizeof(zero));
+
+ if (mask->hdr.payload_len ||
+ mask->hdr.hop_limits) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "Not supported by classifier\n");
+ return -rte_errno;
+ }
+
+ if (memcmp(mask->hdr.src_addr,
+ zero.src_addr, sizeof(mask->hdr.src_addr))) {
+ ret = mrvl_parse_ip6_sip(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (memcmp(mask->hdr.dst_addr,
+ zero.dst_addr, sizeof(mask->hdr.dst_addr))) {
+ ret = mrvl_parse_ip6_dip(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ flow_mask = rte_be_to_cpu_32(mask->hdr.vtc_flow) & MRVL_IPV6_FLOW_MASK;
+ if (flow_mask) {
+ ret = mrvl_parse_ip6_flow(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (mask->hdr.proto) {
+ RTE_LOG(WARNING, PMD, "next header mask is ignored\n");
+ ret = mrvl_parse_ip6_next_hdr(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ return 0;
+out:
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Reached maximum number of fields in cls tbl key\n");
+ return -rte_errno;
+}
+
+/**
+ * Parse tcp flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
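+ *
+ * E.g. (illustrative) spec.hdr.dst_port = RTE_BE16(80) with a non-zero
+ * dst_port mask adds one two-byte F_TCP_DPORT field to the rule.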
+ */
+static int
+mrvl_parse_tcp(const struct rte_flow_item *item,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_tcp *spec = NULL, *mask = NULL;
+ int ret;
+
+ ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+ &rte_flow_item_tcp_mask,
+ sizeof(struct rte_flow_item_tcp), error);
+ if (ret)
+ return ret;
+
+ if (mask->hdr.sent_seq ||
+ mask->hdr.recv_ack ||
+ mask->hdr.data_off ||
+ mask->hdr.tcp_flags ||
+ mask->hdr.rx_win ||
+ mask->hdr.cksum ||
+ mask->hdr.tcp_urp) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "Not supported by classifier\n");
+ return -rte_errno;
+ }
+
+ if (mask->hdr.src_port) {
+ RTE_LOG(WARNING, PMD, "tcp sport mask is ignored\n");
+ ret = mrvl_parse_tcp_sport(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (mask->hdr.dst_port) {
+ RTE_LOG(WARNING, PMD, "tcp dport mask is ignored\n");
+ ret = mrvl_parse_tcp_dport(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ return 0;
+out:
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Reached maximum number of fields in cls tbl key\n");
+ return -rte_errno;
+}
+
+/**
+ * Parse udp flow item.
+ *
+ * @param item Pointer to the flow item.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_parse_udp(const struct rte_flow_item *item,
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_udp *spec = NULL, *mask = NULL;
+ int ret;
+
+ ret = mrvl_parse_init(item, (const void **)&spec, (const void **)&mask,
+ &rte_flow_item_udp_mask,
+ sizeof(struct rte_flow_item_udp), error);
+ if (ret)
+ return ret;
+
+ if (mask->hdr.dgram_len ||
+ mask->hdr.dgram_cksum) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL, "Not supported by classifier\n");
+ return -rte_errno;
+ }
+
+ if (mask->hdr.src_port) {
+ RTE_LOG(WARNING, PMD, "udp sport mask is ignored\n");
+ ret = mrvl_parse_udp_sport(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ if (mask->hdr.dst_port) {
+ RTE_LOG(WARNING, PMD, "udp dport mask is ignored\n");
+ ret = mrvl_parse_udp_dport(spec, mask, flow);
+ if (ret)
+ goto out;
+ }
+
+ return 0;
+out:
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Reached maximum number of fields in cls tbl key\n");
+ return -rte_errno;
+}
+
+/**
+ * Parse flow pattern composed of the eth item.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_parse_pattern_eth(const struct rte_flow_item pattern[],
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ return mrvl_parse_eth(pattern, flow, error);
+}
+
+/**
+ * Parse flow pattern composed of the eth and vlan items.
+ *
+ * @param pattern Pointer to the flow pattern table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
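+ *
+ * An illustrative pattern accepted here (VOID items are skipped by
+ * mrvl_next_item()):
+ *
+ *   struct rte_flow_item pattern[] = {
+ *           { .type = RTE_FLOW_ITEM_TYPE_ETH },
+ *           { .type = RTE_FLOW_ITEM_TYPE_VOID },
+ *           { .type = RTE_FLOW_ITEM_TYPE_VLAN },
+ *           { .type = RTE_FLOW_ITEM_TYPE_END },
+ *   };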
+ */ +static int +mrvl_parse_pattern_eth_vlan(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = mrvl_parse_eth(item, flow, error); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + + return mrvl_parse_vlan(item, flow, error); +} + +/** + * Parse flow pattern composed of the eth, vlan and ip4/ip6 items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_eth_vlan_ip4_ip6(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int ip6) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = mrvl_parse_eth(item, flow, error); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + ret = mrvl_parse_vlan(item, flow, error); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + + return ip6 ? mrvl_parse_ip6(item, flow, error) : + mrvl_parse_ip4(item, flow, error); +} + +/** + * Parse flow pattern composed of the eth, vlan and ipv4 items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_eth_vlan_ip4(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the eth, vlan and ipv6 items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_eth_vlan_ip6(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_eth_vlan_ip4_ip6(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the eth and ip4/ip6 items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_eth_ip4_ip6(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int ip6) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = mrvl_parse_eth(item, flow, error); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + + return ip6 ? mrvl_parse_ip6(item, flow, error) : + mrvl_parse_ip4(item, flow, error); +} + +/** + * Parse flow pattern composed of the eth and ipv4 items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_eth_ip4(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the eth and ipv6 items. 
+ * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_eth_ip6(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the eth, ip4 and tcp/udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @param tcp 1 to parse tcp item, 0 to parse udp item. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_eth_ip4_tcp_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int tcp) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 0); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + item = mrvl_next_item(item + 1); + + if (tcp) + return mrvl_parse_tcp(item, flow, error); + + return mrvl_parse_udp(item, flow, error); +} + +/** + * Parse flow pattern composed of the eth, ipv4 and tcp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_eth_ip4_tcp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the eth, ipv4 and udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_eth_ip4_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_eth_ip4_tcp_udp(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the eth, ipv6 and tcp/udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @param tcp 1 to parse tcp item, 0 to parse udp item. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_eth_ip6_tcp_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int tcp) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = mrvl_parse_pattern_eth_ip4_ip6(pattern, flow, error, 1); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + item = mrvl_next_item(item + 1); + + if (tcp) + return mrvl_parse_tcp(item, flow, error); + + return mrvl_parse_udp(item, flow, error); +} + +/** + * Parse flow pattern composed of the eth, ipv6 and tcp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. 
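+ *
+ * E.g. the item sequence ETH / IPV6 / TCP / END lands here and is forwarded
+ * to mrvl_parse_pattern_eth_ip6_tcp_udp() with tcp == 1.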
+ */ +static inline int +mrvl_parse_pattern_eth_ip6_tcp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the eth, ipv6 and udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_eth_ip6_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_eth_ip6_tcp_udp(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the vlan item. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_vlan(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + + return mrvl_parse_vlan(item, flow, error); +} + +/** + * Parse flow pattern composed of the vlan and ip4/ip6 items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_vlan_ip4_ip6(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int ip6) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = mrvl_parse_vlan(item, flow, error); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + + return ip6 ? mrvl_parse_ip6(item, flow, error) : + mrvl_parse_ip4(item, flow, error); +} + +/** + * Parse flow pattern composed of the vlan and ipv4 items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_vlan_ip4(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the vlan, ipv4 and tcp/udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_vlan_ip_tcp_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int tcp) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 0); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + item = mrvl_next_item(item + 1); + + if (tcp) + return mrvl_parse_tcp(item, flow, error); + + return mrvl_parse_udp(item, flow, error); +} + +/** + * Parse flow pattern composed of the vlan, ipv4 and tcp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. 
+ */ +static inline int +mrvl_parse_pattern_vlan_ip_tcp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the vlan, ipv4 and udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_vlan_ip_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_vlan_ip_tcp_udp(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the vlan and ipv6 items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_vlan_ip6(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the vlan, ipv6 and tcp/udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_vlan_ip6_tcp_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int tcp) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = mrvl_parse_pattern_vlan_ip4_ip6(pattern, flow, error, 1); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + item = mrvl_next_item(item + 1); + + if (tcp) + return mrvl_parse_tcp(item, flow, error); + + return mrvl_parse_udp(item, flow, error); +} + +/** + * Parse flow pattern composed of the vlan, ipv6 and tcp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_vlan_ip6_tcp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the vlan, ipv6 and udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_vlan_ip6_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_vlan_ip6_tcp_udp(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the ip4/ip6 item. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_ip4_ip6(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int ip6) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + + return ip6 ? 
mrvl_parse_ip6(item, flow, error) : + mrvl_parse_ip4(item, flow, error); +} + +/** + * Parse flow pattern composed of the ipv4 item. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_ip4(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the ipv6 item. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_ip6(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_ip4_ip6(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the ip4/ip6 and tcp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @param ip6 1 to parse ip6 item, 0 to parse ip4 item. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_ip4_ip6_tcp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int ip6) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = ip6 ? mrvl_parse_ip6(item, flow, error) : + mrvl_parse_ip4(item, flow, error); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + + return mrvl_parse_tcp(item, flow, error); +} + +/** + * Parse flow pattern composed of the ipv4 and tcp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_ip4_tcp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the ipv6 and tcp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_ip6_tcp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_ip4_ip6_tcp(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the ipv4/ipv6 and udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_ip4_ip6_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error, int ip6) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + int ret; + + ret = ip6 ? mrvl_parse_ip6(item, flow, error) : + mrvl_parse_ip4(item, flow, error); + if (ret) + return ret; + + item = mrvl_next_item(item + 1); + + return mrvl_parse_udp(item, flow, error); +} + +/** + * Parse flow pattern composed of the ipv4 and udp items. 
+ * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_ip4_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 0); +} + +/** + * Parse flow pattern composed of the ipv6 and udp items. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static inline int +mrvl_parse_pattern_ip6_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + return mrvl_parse_pattern_ip4_ip6_udp(pattern, flow, error, 1); +} + +/** + * Parse flow pattern composed of the tcp item. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_tcp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + + return mrvl_parse_tcp(item, flow, error); +} + +/** + * Parse flow pattern composed of the udp item. + * + * @param pattern Pointer to the flow pattern table. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 in case of success, negative value otherwise. + */ +static int +mrvl_parse_pattern_udp(const struct rte_flow_item pattern[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + const struct rte_flow_item *item = mrvl_next_item(pattern); + + return mrvl_parse_udp(item, flow, error); +} + +/** + * Structure used to map specific flow pattern to the pattern parse callback + * which will iterate over each pattern item and extract relevant data. 
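+ *
+ * For example (illustrative testpmd syntax, assuming port 0 is an mvpp2
+ * port):
+ *
+ *   flow create 0 ingress pattern eth / ipv4 / tcp / end
+ *           actions queue index 2 / end
+ *
+ * matches pattern_eth_ip4_tcp below and is handled by
+ * mrvl_parse_pattern_eth_ip4_tcp().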
+ */
+static const struct {
+ const enum rte_flow_item_type *pattern;
+ int (*parse)(const struct rte_flow_item pattern[],
+ struct rte_flow *flow,
+ struct rte_flow_error *error);
+} mrvl_patterns[] = {
+ { pattern_eth, mrvl_parse_pattern_eth },
+ { pattern_eth_vlan, mrvl_parse_pattern_eth_vlan },
+ { pattern_eth_vlan_ip, mrvl_parse_pattern_eth_vlan_ip4 },
+ { pattern_eth_vlan_ip6, mrvl_parse_pattern_eth_vlan_ip6 },
+ { pattern_eth_ip4, mrvl_parse_pattern_eth_ip4 },
+ { pattern_eth_ip4_tcp, mrvl_parse_pattern_eth_ip4_tcp },
+ { pattern_eth_ip4_udp, mrvl_parse_pattern_eth_ip4_udp },
+ { pattern_eth_ip6, mrvl_parse_pattern_eth_ip6 },
+ { pattern_eth_ip6_tcp, mrvl_parse_pattern_eth_ip6_tcp },
+ { pattern_eth_ip6_udp, mrvl_parse_pattern_eth_ip6_udp },
+ { pattern_vlan, mrvl_parse_pattern_vlan },
+ { pattern_vlan_ip, mrvl_parse_pattern_vlan_ip4 },
+ { pattern_vlan_ip_tcp, mrvl_parse_pattern_vlan_ip_tcp },
+ { pattern_vlan_ip_udp, mrvl_parse_pattern_vlan_ip_udp },
+ { pattern_vlan_ip6, mrvl_parse_pattern_vlan_ip6 },
+ { pattern_vlan_ip6_tcp, mrvl_parse_pattern_vlan_ip6_tcp },
+ { pattern_vlan_ip6_udp, mrvl_parse_pattern_vlan_ip6_udp },
+ { pattern_ip, mrvl_parse_pattern_ip4 },
+ { pattern_ip_tcp, mrvl_parse_pattern_ip4_tcp },
+ { pattern_ip_udp, mrvl_parse_pattern_ip4_udp },
+ { pattern_ip6, mrvl_parse_pattern_ip6 },
+ { pattern_ip6_tcp, mrvl_parse_pattern_ip6_tcp },
+ { pattern_ip6_udp, mrvl_parse_pattern_ip6_udp },
+ { pattern_tcp, mrvl_parse_pattern_tcp },
+ { pattern_udp, mrvl_parse_pattern_udp }
+};
+
+/**
+ * Check whether provided pattern matches any of the supported ones.
+ *
+ * @param type_pattern Pointer to the pattern type.
+ * @param item_pattern Pointer to the flow pattern.
+ * @returns 1 if the patterns match, 0 otherwise.
+ */
+static int
+mrvl_patterns_match(const enum rte_flow_item_type *type_pattern,
+ const struct rte_flow_item *item_pattern)
+{
+ const enum rte_flow_item_type *type = type_pattern;
+ const struct rte_flow_item *item = item_pattern;
+
+ for (;;) {
+ if (item->type == RTE_FLOW_ITEM_TYPE_VOID) {
+ item++;
+ continue;
+ }
+
+ if (*type == RTE_FLOW_ITEM_TYPE_END ||
+ item->type == RTE_FLOW_ITEM_TYPE_END)
+ break;
+
+ if (*type != item->type)
+ break;
+
+ item++;
+ type++;
+ }
+
+ return *type == item->type;
+}
+
+/**
+ * Parse flow attribute.
+ *
+ * This will check whether the provided attribute's flags are supported.
+ *
+ * @param priv Unused
+ * @param attr Pointer to the flow attribute.
+ * @param flow Unused
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
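+ *
+ * The only accepted attribute combination is, illustratively:
+ *
+ *   struct rte_flow_attr attr = { .ingress = 1 };
+ *
+ * with group, priority and egress all left at 0.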
+ */
+static int
+mrvl_flow_parse_attr(struct mrvl_priv *priv __rte_unused,
+ const struct rte_flow_attr *attr,
+ struct rte_flow *flow __rte_unused,
+ struct rte_flow_error *error)
+{
+ if (!attr) {
+ rte_flow_error_set(error, EINVAL, RTE_FLOW_ERROR_TYPE_ATTR,
+ NULL, "NULL attribute");
+ return -rte_errno;
+ }
+
+ if (attr->group) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_GROUP, NULL,
+ "Groups are not supported");
+ return -rte_errno;
+ }
+ if (attr->priority) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_PRIORITY, NULL,
+ "Priorities are not supported");
+ return -rte_errno;
+ }
+ if (!attr->ingress) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_INGRESS, NULL,
+ "Only ingress is supported");
+ return -rte_errno;
+ }
+ if (attr->egress) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+ "Egress is not supported");
+ return -rte_errno;
+ }
+
+ return 0;
+}
+
+/**
+ * Parse flow pattern.
+ *
+ * Specific classifier rule will be created as well.
+ *
+ * @param priv Unused
+ * @param pattern Pointer to the flow pattern.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_pattern(struct mrvl_priv *priv __rte_unused,
+ const struct rte_flow_item pattern[],
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ unsigned int i;
+ int ret;
+
+ for (i = 0; i < RTE_DIM(mrvl_patterns); i++) {
+ if (!mrvl_patterns_match(mrvl_patterns[i].pattern, pattern))
+ continue;
+
+ ret = mrvl_patterns[i].parse(pattern, flow, error);
+ if (ret)
+ mrvl_free_all_key_mask(&flow->rule);
+
+ return ret;
+ }
+
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ITEM, NULL,
+ "Unsupported pattern");
+
+ return -rte_errno;
+}
+
+/**
+ * Parse flow actions.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param actions Pointer to the action table.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_parse_actions(struct mrvl_priv *priv,
+ const struct rte_flow_action actions[],
+ struct rte_flow *flow,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action *action = actions;
+ int specified = 0;
+
+ for (; action->type != RTE_FLOW_ACTION_TYPE_END; action++) {
+ if (action->type == RTE_FLOW_ACTION_TYPE_VOID)
+ continue;
+
+ if (action->type == RTE_FLOW_ACTION_TYPE_DROP) {
+ flow->cos.ppio = priv->ppio;
+ flow->cos.tc = 0;
+ flow->action.type = PP2_CLS_TBL_ACT_DROP;
+ flow->action.cos = &flow->cos;
+ specified++;
+ } else if (action->type == RTE_FLOW_ACTION_TYPE_QUEUE) {
+ const struct rte_flow_action_queue *q =
+ (const struct rte_flow_action_queue *)
+ action->conf;
+
+ if (q->index >= priv->nb_rx_queues) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "Queue index out of range");
+ return -rte_errno;
+ }
+
+ if (priv->rxq_map[q->index].tc == MRVL_UNKNOWN_TC) {
+ /*
+ * Unknown TC mapping, so the flow cannot be
+ * mapped to a correct queue.
+ */ + RTE_LOG(ERR, PMD, + "Unknown TC mapping for queue %hu eth%hhu\n", + q->index, priv->ppio_id); + + rte_flow_error_set(error, EFAULT, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, NULL); + return -rte_errno; + } + + RTE_LOG(DEBUG, PMD, + "Action: Assign packets to queue %d, tc:%d, q:%d\n", + q->index, priv->rxq_map[q->index].tc, + priv->rxq_map[q->index].inq); + + flow->cos.ppio = priv->ppio; + flow->cos.tc = priv->rxq_map[q->index].tc; + flow->action.type = PP2_CLS_TBL_ACT_DONE; + flow->action.cos = &flow->cos; + specified++; + } else { + rte_flow_error_set(error, ENOTSUP, + RTE_FLOW_ERROR_TYPE_ACTION, NULL, + "Action not supported"); + return -rte_errno; + } + + } + + if (!specified) { + rte_flow_error_set(error, EINVAL, + RTE_FLOW_ERROR_TYPE_UNSPECIFIED, + NULL, "Action not specified"); + return -rte_errno; + } + + return 0; +} + +/** + * Parse flow attribute, pattern and actions. + * + * @param priv Pointer to the port's private data. + * @param attr Pointer to the flow attribute. + * @param pattern Pointer to the flow pattern. + * @param actions Pointer to the flow actions. + * @param flow Pointer to the flow. + * @param error Pointer to the flow error. + * @returns 0 on success, negative value otherwise. + */ +static int +mrvl_flow_parse(struct mrvl_priv *priv, const struct rte_flow_attr *attr, + const struct rte_flow_item pattern[], + const struct rte_flow_action actions[], + struct rte_flow *flow, + struct rte_flow_error *error) +{ + int ret; + + ret = mrvl_flow_parse_attr(priv, attr, flow, error); + if (ret) + return ret; + + ret = mrvl_flow_parse_pattern(priv, pattern, flow, error); + if (ret) + return ret; + + return mrvl_flow_parse_actions(priv, actions, flow, error); +} + +static inline enum pp2_cls_tbl_type +mrvl_engine_type(const struct rte_flow *flow) +{ + int i, size = 0; + + for (i = 0; i < flow->rule.num_fields; i++) + size += flow->rule.fields[i].size; + + /* + * For maskable engine type the key size must be up to 8 bytes. + * For keys with size bigger than 8 bytes, engine type must + * be set to exact match. + */ + if (size > 8) + return PP2_CLS_TBL_EXACT_MATCH; + + return PP2_CLS_TBL_MASKABLE; +} + +static int +mrvl_create_cls_table(struct rte_eth_dev *dev, struct rte_flow *first_flow) +{ + struct mrvl_priv *priv = dev->data->dev_private; + struct pp2_cls_tbl_key *key = &priv->cls_tbl_params.key; + int ret; + + if (priv->cls_tbl) { + pp2_cls_tbl_deinit(priv->cls_tbl); + priv->cls_tbl = NULL; + } + + memset(&priv->cls_tbl_params, 0, sizeof(priv->cls_tbl_params)); + + priv->cls_tbl_params.type = mrvl_engine_type(first_flow); + RTE_LOG(INFO, PMD, "Setting cls search engine type to %s\n", + priv->cls_tbl_params.type == PP2_CLS_TBL_EXACT_MATCH ? 
+ "exact" : "maskable"); + priv->cls_tbl_params.max_num_rules = MRVL_CLS_MAX_NUM_RULES; + priv->cls_tbl_params.default_act.type = PP2_CLS_TBL_ACT_DONE; + priv->cls_tbl_params.default_act.cos = &first_flow->cos; + + if (first_flow->pattern & F_DMAC) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH; + key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_DA; + key->key_size += 6; + key->num_fields += 1; + } + + if (first_flow->pattern & F_SMAC) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH; + key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_SA; + key->key_size += 6; + key->num_fields += 1; + } + + if (first_flow->pattern & F_TYPE) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_ETH; + key->proto_field[key->num_fields].field.eth = MV_NET_ETH_F_TYPE; + key->key_size += 2; + key->num_fields += 1; + } + + if (first_flow->pattern & F_VLAN_ID) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN; + key->proto_field[key->num_fields].field.vlan = MV_NET_VLAN_F_ID; + key->key_size += 2; + key->num_fields += 1; + } + + if (first_flow->pattern & F_VLAN_PRI) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_VLAN; + key->proto_field[key->num_fields].field.vlan = + MV_NET_VLAN_F_PRI; + key->key_size += 1; + key->num_fields += 1; + } + + if (first_flow->pattern & F_IP4_TOS) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4; + key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_TOS; + key->key_size += 1; + key->num_fields += 1; + } + + if (first_flow->pattern & F_IP4_SIP) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4; + key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_SA; + key->key_size += 4; + key->num_fields += 1; + } + + if (first_flow->pattern & F_IP4_DIP) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4; + key->proto_field[key->num_fields].field.ipv4 = MV_NET_IP4_F_DA; + key->key_size += 4; + key->num_fields += 1; + } + + if (first_flow->pattern & F_IP4_PROTO) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP4; + key->proto_field[key->num_fields].field.ipv4 = + MV_NET_IP4_F_PROTO; + key->key_size += 1; + key->num_fields += 1; + } + + if (first_flow->pattern & F_IP6_SIP) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6; + key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_SA; + key->key_size += 16; + key->num_fields += 1; + } + + if (first_flow->pattern & F_IP6_DIP) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6; + key->proto_field[key->num_fields].field.ipv6 = MV_NET_IP6_F_DA; + key->key_size += 16; + key->num_fields += 1; + } + + if (first_flow->pattern & F_IP6_FLOW) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6; + key->proto_field[key->num_fields].field.ipv6 = + MV_NET_IP6_F_FLOW; + key->key_size += 3; + key->num_fields += 1; + } + + if (first_flow->pattern & F_IP6_NEXT_HDR) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_IP6; + key->proto_field[key->num_fields].field.ipv6 = + MV_NET_IP6_F_NEXT_HDR; + key->key_size += 1; + key->num_fields += 1; + } + + if (first_flow->pattern & F_TCP_SPORT) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP; + key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_SP; + key->key_size += 2; + key->num_fields += 1; + } + + if (first_flow->pattern & F_TCP_DPORT) { + key->proto_field[key->num_fields].proto = MV_NET_PROTO_TCP; + key->proto_field[key->num_fields].field.tcp = MV_NET_TCP_F_DP; + key->key_size += 2; + key->num_fields += 1; + } 
+
+ if (first_flow->pattern & F_UDP_SPORT) {
+ key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
+ key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_SP;
+ key->key_size += 2;
+ key->num_fields += 1;
+ }
+
+ if (first_flow->pattern & F_UDP_DPORT) {
+ key->proto_field[key->num_fields].proto = MV_NET_PROTO_UDP;
+ key->proto_field[key->num_fields].field.udp = MV_NET_UDP_F_DP;
+ key->key_size += 2;
+ key->num_fields += 1;
+ }
+
+ ret = pp2_cls_tbl_init(&priv->cls_tbl_params, &priv->cls_tbl);
+ if (!ret)
+ priv->cls_tbl_pattern = first_flow->pattern;
+
+ return ret;
+}
+
+/**
+ * Check whether new flow can be added to the table.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param flow Pointer to the new flow.
+ * @return 1 in case flow can be added, 0 otherwise.
+ */
+static inline int
+mrvl_flow_can_be_added(struct mrvl_priv *priv, const struct rte_flow *flow)
+{
+ return flow->pattern == priv->cls_tbl_pattern &&
+ mrvl_engine_type(flow) == priv->cls_tbl_params.type;
+}
+
+/**
+ * DPDK flow create callback called when flow is to be created.
+ *
+ * @param dev Pointer to the device.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param error Pointer to the flow error.
+ * @returns Pointer to the created flow in case of success, NULL otherwise.
+ */
+static struct rte_flow *
+mrvl_flow_create(struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ const struct rte_flow_item pattern[],
+ const struct rte_flow_action actions[],
+ struct rte_flow_error *error)
+{
+ struct mrvl_priv *priv = dev->data->dev_private;
+ struct rte_flow *flow, *first;
+ int ret;
+
+ if (!dev->data->dev_started) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Port must be started first\n");
+ return NULL;
+ }
+
+ flow = rte_zmalloc_socket(NULL, sizeof(*flow), 0, rte_socket_id());
+ if (!flow)
+ return NULL;
+
+ ret = mrvl_flow_parse(priv, attr, pattern, actions, flow, error);
+ if (ret)
+ goto out;
+
+ /*
+ * Four cases here:
+ *
+ * 1. In case the table does not exist - create one.
+ * 2. In case the table exists, is empty, and the new flow cannot be
+ * added, recreate the table.
+ * 3. In case the table is not empty and the new flow matches the
+ * table format, add it.
+ * 4. Otherwise the flow cannot be added.
+ */
+ first = LIST_FIRST(&priv->flows);
+ if (!priv->cls_tbl) {
+ ret = mrvl_create_cls_table(dev, flow);
+ } else if (!first && !mrvl_flow_can_be_added(priv, flow)) {
+ ret = mrvl_create_cls_table(dev, flow);
+ } else if (mrvl_flow_can_be_added(priv, flow)) {
+ ret = 0;
+ } else {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Pattern does not match cls table format\n");
+ goto out;
+ }
+
+ if (ret) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to create cls table\n");
+ goto out;
+ }
+
+ ret = pp2_cls_tbl_add_rule(priv->cls_tbl, &flow->rule, &flow->action);
+ if (ret) {
+ rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "Failed to add rule\n");
+ goto out;
+ }
+
+ LIST_INSERT_HEAD(&priv->flows, flow, next);
+
+ return flow;
+out:
+ rte_free(flow);
+ return NULL;
+}
+
+/**
+ * Remove classifier rule associated with given flow.
+ *
+ * @param priv Pointer to the port's private data.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_remove(struct mrvl_priv *priv, struct rte_flow *flow,
+		 struct rte_flow_error *error)
+{
+	int ret;
+
+	if (!priv->cls_tbl) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Classifier table not initialized");
+		return -rte_errno;
+	}
+
+	ret = pp2_cls_tbl_remove_rule(priv->cls_tbl, &flow->rule);
+	if (ret) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Failed to remove rule");
+		return -rte_errno;
+	}
+
+	mrvl_free_all_key_mask(&flow->rule);
+
+	return 0;
+}
+
+/**
+ * DPDK flow destroy callback called when a flow is to be removed.
+ *
+ * @param dev Pointer to the device.
+ * @param flow Pointer to the flow.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *flow,
+		  struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+	struct rte_flow *f;
+	int ret;
+
+	LIST_FOREACH(f, &priv->flows, next) {
+		if (f == flow)
+			break;
+	}
+
+	if (!f) {
+		rte_flow_error_set(error, EINVAL,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+				   "Rule was not found");
+		return -rte_errno;
+	}
+
+	LIST_REMOVE(f, next);
+
+	ret = mrvl_flow_remove(priv, flow, error);
+	if (ret)
+		return ret;
+
+	rte_free(flow);
+
+	return 0;
+}
+
+/**
+ * DPDK flow callback called to verify given attribute, pattern and actions.
+ *
+ * @param dev Pointer to the device.
+ * @param attr Pointer to the flow attribute.
+ * @param pattern Pointer to the flow pattern.
+ * @param actions Pointer to the flow actions.
+ * @param error Pointer to the flow error.
+ * @returns 0 on success, negative value otherwise.
+ */
+static int
+mrvl_flow_validate(struct rte_eth_dev *dev,
+		   const struct rte_flow_attr *attr,
+		   const struct rte_flow_item pattern[],
+		   const struct rte_flow_action actions[],
+		   struct rte_flow_error *error)
+{
+	struct rte_flow *flow;
+
+	flow = mrvl_flow_create(dev, attr, pattern, actions, error);
+	if (!flow)
+		return -rte_errno;
+
+	mrvl_flow_destroy(dev, flow, error);
+
+	return 0;
+}
+
+/**
+ * DPDK flow flush callback called when flows are to be flushed.
+ *
+ * @param dev Pointer to the device.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+mrvl_flow_flush(struct rte_eth_dev *dev, struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	while (!LIST_EMPTY(&priv->flows)) {
+		struct rte_flow *flow = LIST_FIRST(&priv->flows);
+		int ret = mrvl_flow_remove(priv, flow, error);
+
+		if (ret)
+			return ret;
+
+		LIST_REMOVE(flow, next);
+		rte_free(flow);
+	}
+
+	return 0;
+}
+
+/**
+ * DPDK flow isolate callback called to isolate a port.
+ *
+ * @param dev Pointer to the device.
+ * @param enable Pass 0/1 to disable/enable port isolation.
+ * @param error Pointer to the flow error.
+ * @returns 0 in case of success, negative value otherwise.
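+ *
+ * Note: isolation can only be toggled while the port is stopped; the
+ * handler below rejects the call with EBUSY otherwise.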
+ */
+static int
+mrvl_flow_isolate(struct rte_eth_dev *dev, int enable,
+		  struct rte_flow_error *error)
+{
+	struct mrvl_priv *priv = dev->data->dev_private;
+
+	if (dev->data->dev_started) {
+		rte_flow_error_set(error, EBUSY,
+				   RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+				   NULL, "Port must be stopped first\n");
+		return -rte_errno;
+	}
+
+	priv->isolated = enable;
+
+	return 0;
+}
+
+const struct rte_flow_ops mrvl_flow_ops = {
+	.validate = mrvl_flow_validate,
+	.create = mrvl_flow_create,
+	.destroy = mrvl_flow_destroy,
+	.flush = mrvl_flow_flush,
+	.isolate = mrvl_flow_isolate
+};
diff --git a/drivers/net/mvpp2/mrvl_qos.c b/drivers/net/mvpp2/mrvl_qos.c
new file mode 100644
index 0000000000..741d3da7a3
--- /dev/null
+++ b/drivers/net/mvpp2/mrvl_qos.c
@@ -0,0 +1,894 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Marvell International Ltd.
+ * Copyright(c) 2017 Semihalf.
+ * All rights reserved.
+ */
+
+#include <stdint.h>
+#include <stdlib.h>
+#include <string.h>
+
+#include <rte_common.h>
+#include <rte_cfgfile.h>
+#include <rte_log.h>
+#include <rte_lcore.h>
+#include <rte_malloc.h>
+#include <rte_string_fns.h>
+
+/* Unfortunately, container_of is defined by both DPDK and MUSDK,
+ * so we keep only one definition.
+ *
+ * Note that it is not used in this PMD anyway.
+ */
+#ifdef container_of
+#undef container_of
+#endif
+
+#include "mrvl_qos.h"
+
+/* Parsing tokens. Defined conveniently, so that any correction is easy. */
+#define MRVL_TOK_DEFAULT "default"
+#define MRVL_TOK_DEFAULT_TC "default_tc"
+#define MRVL_TOK_DSCP "dscp"
+#define MRVL_TOK_MAPPING_PRIORITY "mapping_priority"
+#define MRVL_TOK_IP "ip"
+#define MRVL_TOK_IP_VLAN "ip/vlan"
+#define MRVL_TOK_PCP "pcp"
+#define MRVL_TOK_PORT "port"
+#define MRVL_TOK_RXQ "rxq"
+#define MRVL_TOK_TC "tc"
+#define MRVL_TOK_TXQ "txq"
+#define MRVL_TOK_VLAN "vlan"
+#define MRVL_TOK_VLAN_IP "vlan/ip"
+
+/* egress specific configuration tokens */
+#define MRVL_TOK_BURST_SIZE "burst_size"
+#define MRVL_TOK_RATE_LIMIT "rate_limit"
+#define MRVL_TOK_RATE_LIMIT_ENABLE "rate_limit_enable"
+#define MRVL_TOK_SCHED_MODE "sched_mode"
+#define MRVL_TOK_SCHED_MODE_SP "sp"
+#define MRVL_TOK_SCHED_MODE_WRR "wrr"
+#define MRVL_TOK_WRR_WEIGHT "wrr_weight"
+
+/* policer specific configuration tokens */
+#define MRVL_TOK_PLCR_ENABLE "policer_enable"
+#define MRVL_TOK_PLCR_UNIT "token_unit"
+#define MRVL_TOK_PLCR_UNIT_BYTES "bytes"
+#define MRVL_TOK_PLCR_UNIT_PACKETS "packets"
+#define MRVL_TOK_PLCR_COLOR "color_mode"
+#define MRVL_TOK_PLCR_COLOR_BLIND "blind"
+#define MRVL_TOK_PLCR_COLOR_AWARE "aware"
+#define MRVL_TOK_PLCR_CIR "cir"
+#define MRVL_TOK_PLCR_CBS "cbs"
+#define MRVL_TOK_PLCR_EBS "ebs"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR "default_color"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN "green"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW "yellow"
+#define MRVL_TOK_PLCR_DEFAULT_COLOR_RED "red"
+
+/** Number of tokens in a range "a-b" = 2. */
+#define MAX_RNG_TOKENS 2
+
+/** Maximum possible value of PCP. */
+#define MAX_PCP 7
+
+/** Maximum possible value of DSCP. */
+#define MAX_DSCP 63
+
+/** Global QoS configuration. */
+struct mrvl_qos_cfg *mrvl_qos_cfg;
+
+/**
+ * Convert a string to uint32_t with extra checks for result correctness.
+ *
+ * @param string String to convert.
+ * @param val Conversion result.
+ * @returns 0 in case of success, negative value otherwise.
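+ *
+ * For example, "42", "0x2a" and "052" all parse to 42 (strtoul with
+ * base 0), while "" or "42abc" are rejected.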
+ */
+static int
+get_val_securely(const char *string, uint32_t *val)
+{
+	char *endptr;
+	size_t len = strlen(string);
+
+	if (len == 0)
+		return -1;
+
+	errno = 0;
+	*val = strtoul(string, &endptr, 0);
+	if (errno != 0 || RTE_PTR_DIFF(endptr, string) != len)
+		return -2;
+
+	return 0;
+}
+
+/**
+ * Read out-queue configuration from file.
+ *
+ * @param file Path to the configuration file.
+ * @param port Port number.
+ * @param outq Out queue number.
+ * @param cfg Pointer to the Marvell QoS configuration structure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+get_outq_cfg(struct rte_cfgfile *file, int port, int outq,
+	     struct mrvl_qos_cfg *cfg)
+{
+	char sec_name[32];
+	const char *entry;
+	uint32_t val;
+
+	snprintf(sec_name, sizeof(sec_name), "%s %d %s %d",
+		 MRVL_TOK_PORT, port, MRVL_TOK_TXQ, outq);
+
+	/* Skip non-existing sections. */
+	if (rte_cfgfile_num_sections(file, sec_name, strlen(sec_name)) <= 0)
+		return 0;
+
+	/* Read scheduling mode. */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_SCHED_MODE);
+	if (entry) {
+		if (!strncmp(entry, MRVL_TOK_SCHED_MODE_SP,
+		    strlen(MRVL_TOK_SCHED_MODE_SP))) {
+			cfg->port[port].outq[outq].sched_mode =
+				PP2_PPIO_SCHED_M_SP;
+		} else if (!strncmp(entry, MRVL_TOK_SCHED_MODE_WRR,
+			   strlen(MRVL_TOK_SCHED_MODE_WRR))) {
+			cfg->port[port].outq[outq].sched_mode =
+				PP2_PPIO_SCHED_M_WRR;
+		} else {
+			RTE_LOG(ERR, PMD, "Unknown token: %s\n", entry);
+			return -1;
+		}
+	}
+
+	/* Read WRR weight. */
+	if (cfg->port[port].outq[outq].sched_mode == PP2_PPIO_SCHED_M_WRR) {
+		entry = rte_cfgfile_get_entry(file, sec_name,
+					      MRVL_TOK_WRR_WEIGHT);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			cfg->port[port].outq[outq].weight = val;
+		}
+	}
+
+	/*
+	 * There's no point in setting rate limiting for a specific outq,
+	 * as the global port rate limiting has priority.
+	 */
+	if (cfg->port[port].rate_limit_enable) {
+		RTE_LOG(WARNING, PMD, "Port %d rate limiting already enabled\n",
+			port);
+		return 0;
+	}
+
+	entry = rte_cfgfile_get_entry(file, sec_name,
+				      MRVL_TOK_RATE_LIMIT_ENABLE);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_enable = val;
+	}
+
+	if (!cfg->port[port].outq[outq].rate_limit_enable)
+		return 0;
+
+	/* Read CBS (in kB). */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_BURST_SIZE);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_params.cbs = val;
+	}
+
+	/* Read CIR (in kbps). */
+	entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_RATE_LIMIT);
+	if (entry) {
+		if (get_val_securely(entry, &val) < 0)
+			return -1;
+		cfg->port[port].outq[outq].rate_limit_params.cir = val;
+	}
+
+	return 0;
+}
+
+/**
+ * Get multiple-entry values and place them in a table.
+ *
+ * An entry can be anything, e.g. "1 2-3 5 6 7-9". This needs to be converted
+ * to table entries, respectively: {1, 2, 3, 5, 6, 7, 8, 9}.
+ * As all result table elements are always 1 byte long, we do not
+ * overcomplicate the function, but we keep the API generic, verify that
+ * the element size has not been changed, and make it simple to extend to
+ * other sizes.
+ *
+ * This function is purely a utility: it does not print any errors, it only
+ * returns distinct error codes.
+ *
+ * @param entry[in] Values string to parse.
+ * @param tab[out] Results table.
+ * @param elem_sz[in] Element size (in bytes).
+ * @param max_elems[in] Number of results table elements available.
+ * @param max_val[in] Maximum value allowed.
+ * @returns Number of correctly parsed elements in case of success.
+ * @retval -1 Wrong element size.
+ * @retval -2 More tokens than the results table allows.
+ * @retval -3 Wrong range syntax.
+ * @retval -4 Wrong range values.
+ * @retval -5 Maximum value exceeded.
+ */
+static int
+get_entry_values(const char *entry, uint8_t *tab,
+	size_t elem_sz, uint8_t max_elems, uint8_t max_val)
+{
+	/* There should not be more tokens than max elements.
+	 * Add 1 for error trap.
+	 */
+	char *tokens[max_elems + 1];
+
+	/* Begin, End + error trap = 3. */
+	char *rng_tokens[MAX_RNG_TOKENS + 1];
+	long beg, end;
+	uint32_t token_val;
+	int nb_tokens, nb_rng_tokens;
+	int i;
+	int values = 0;
+	char val;
+	char entry_cpy[CFG_VALUE_LEN];
+
+	if (elem_sz != 1)
+		return -1;
+
+	/* Copy the entry to safely use rte_strsplit(). */
+	snprintf(entry_cpy, RTE_DIM(entry_cpy), "%s", entry);
+
+	/*
+	 * If there are more tokens than the array size, rte_strsplit will
+	 * not return an error, just the array size.
+	 */
+	nb_tokens = rte_strsplit(entry_cpy, strlen(entry_cpy),
+				 tokens, max_elems + 1, ' ');
+
+	/* Quick check, will be refined later. */
+	if (nb_tokens > max_elems)
+		return -2;
+
+	for (i = 0; i < nb_tokens; ++i) {
+		if (strchr(tokens[i], '-') != NULL) {
+			/*
+			 * Split into begin and end tokens.
+			 * We want to catch error cases too, thus we leave
+			 * room for the number of tokens to be more than 2.
+			 */
+			nb_rng_tokens = rte_strsplit(tokens[i],
+					strlen(tokens[i]), rng_tokens,
+					RTE_DIM(rng_tokens), '-');
+			if (nb_rng_tokens != 2)
+				return -3;
+
+			/* Range and sanity checks. */
+			if (get_val_securely(rng_tokens[0], &token_val) < 0)
+				return -4;
+			beg = (long)token_val;
+			if (get_val_securely(rng_tokens[1], &token_val) < 0)
+				return -4;
+			end = (long)token_val;
+			if (beg < 0 || beg > UCHAR_MAX ||
+			    end < 0 || end > UCHAR_MAX || end < beg)
+				return -4;
+			if (end > max_val)
+				return -5;
+
+			for (val = beg; val <= end; ++val) {
+				if (values >= max_elems)
+					return -2;
+
+				*tab = val;
+				tab = RTE_PTR_ADD(tab, elem_sz);
+				++values;
+			}
+		} else {
+			/* Single values. */
+			if (get_val_securely(tokens[i], &token_val) < 0)
+				return -5;
+			if (token_val > max_val)
+				return -5;
+			val = (char)token_val;
+			if (values >= max_elems)
+				return -2;
+
+			*tab = val;
+			tab = RTE_PTR_ADD(tab, elem_sz);
+			++values;
+		}
+	}
+
+	return values;
+}
+
+/**
+ * Parse a Traffic Class's mapping configuration.
+ *
+ * @param file Config file handle.
+ * @param port Which port to look for.
+ * @param tc Which Traffic Class to look for.
+ * @param cfg[out] Parsing results.
+ * @returns 0 in case of success, negative value otherwise.
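+ *
+ * An example section in the configuration file (illustrative values):
+ *
+ *   [port 0 tc 0]
+ *   rxq = 0 1
+ *   pcp = 0 1 2-3
+ *   dscp = 0-15
+ *   default_color = green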
+ */ +static int +parse_tc_cfg(struct rte_cfgfile *file, int port, int tc, + struct mrvl_qos_cfg *cfg) +{ + char sec_name[32]; + const char *entry; + int n; + + snprintf(sec_name, sizeof(sec_name), "%s %d %s %d", + MRVL_TOK_PORT, port, MRVL_TOK_TC, tc); + + /* Skip non-existing */ + if (rte_cfgfile_num_sections(file, sec_name, strlen(sec_name)) <= 0) + return 0; + + entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_RXQ); + if (entry) { + n = get_entry_values(entry, + cfg->port[port].tc[tc].inq, + sizeof(cfg->port[port].tc[tc].inq[0]), + RTE_DIM(cfg->port[port].tc[tc].inq), + MRVL_PP2_RXQ_MAX); + if (n < 0) { + RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", + n, entry); + return n; + } + cfg->port[port].tc[tc].inqs = n; + } + + entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_PCP); + if (entry) { + n = get_entry_values(entry, + cfg->port[port].tc[tc].pcp, + sizeof(cfg->port[port].tc[tc].pcp[0]), + RTE_DIM(cfg->port[port].tc[tc].pcp), + MAX_PCP); + if (n < 0) { + RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", + n, entry); + return n; + } + cfg->port[port].tc[tc].pcps = n; + } + + entry = rte_cfgfile_get_entry(file, sec_name, MRVL_TOK_DSCP); + if (entry) { + n = get_entry_values(entry, + cfg->port[port].tc[tc].dscp, + sizeof(cfg->port[port].tc[tc].dscp[0]), + RTE_DIM(cfg->port[port].tc[tc].dscp), + MAX_DSCP); + if (n < 0) { + RTE_LOG(ERR, PMD, "Error %d while parsing: %s\n", + n, entry); + return n; + } + cfg->port[port].tc[tc].dscps = n; + } + + entry = rte_cfgfile_get_entry(file, sec_name, + MRVL_TOK_PLCR_DEFAULT_COLOR); + if (entry) { + if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN, + sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_GREEN))) { + cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_GREEN; + } else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW, + sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_YELLOW))) { + cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_YELLOW; + } else if (!strncmp(entry, MRVL_TOK_PLCR_DEFAULT_COLOR_RED, + sizeof(MRVL_TOK_PLCR_DEFAULT_COLOR_RED))) { + cfg->port[port].tc[tc].color = PP2_PPIO_COLOR_RED; + } else { + RTE_LOG(ERR, PMD, "Error while parsing: %s\n", entry); + return -1; + } + } + + return 0; +} + +/** + * Parse QoS configuration - rte_kvargs_process handler. + * + * Opens configuration file and parses its content. + * + * @param key Unused. + * @param path Path to config file. + * @param extra_args Pointer to configuration structure. + * @returns 0 in case of success, exits otherwise. + */ +int +mrvl_get_qoscfg(const char *key __rte_unused, const char *path, + void *extra_args) +{ + struct mrvl_qos_cfg **cfg = extra_args; + struct rte_cfgfile *file = rte_cfgfile_load(path, 0); + uint32_t val; + int n, i, ret; + const char *entry; + char sec_name[32]; + + if (file == NULL) + rte_exit(EXIT_FAILURE, "Cannot load configuration %s\n", path); + + /* Create configuration. This is never accessed on the fast path, + * so we can ignore socket. + */ + *cfg = rte_zmalloc("mrvl_qos_cfg", sizeof(struct mrvl_qos_cfg), 0); + if (*cfg == NULL) + rte_exit(EXIT_FAILURE, "Cannot allocate configuration %s\n", + path); + + n = rte_cfgfile_num_sections(file, MRVL_TOK_PORT, + sizeof(MRVL_TOK_PORT) - 1); + + if (n == 0) { + /* This is weird, but not bad. */ + RTE_LOG(WARNING, PMD, "Empty configuration file?\n"); + return 0; + } + + /* Use the number of ports given as vdev parameters. 
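+	 * In practice this iterates over every possible ppio index
+	 * (PP2_NUM_ETH_PPIO per packet processor); ports without their own
+	 * section simply fall back to the global defaults.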
+	 */
+	for (n = 0; n < (PP2_NUM_ETH_PPIO * PP2_NUM_PKT_PROC); ++n) {
+		snprintf(sec_name, sizeof(sec_name), "%s %d %s",
+			 MRVL_TOK_PORT, n, MRVL_TOK_DEFAULT);
+
+		/* Skip ports non-existing in configuration. */
+		if (rte_cfgfile_num_sections(file, sec_name,
+					     strlen(sec_name)) <= 0) {
+			(*cfg)->port[n].use_global_defaults = 1;
+			(*cfg)->port[n].mapping_priority =
+				PP2_CLS_QOS_TBL_VLAN_IP_PRI;
+			continue;
+		}
+
+		entry = rte_cfgfile_get_entry(file, sec_name,
+					      MRVL_TOK_DEFAULT_TC);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0 ||
+			    val > UCHAR_MAX)
+				return -1;
+			(*cfg)->port[n].default_tc = (uint8_t)val;
+		} else {
+			RTE_LOG(ERR, PMD,
+				"Default Traffic Class required in custom configuration!\n");
+			return -1;
+		}
+
+		entry = rte_cfgfile_get_entry(file, sec_name,
+					      MRVL_TOK_PLCR_ENABLE);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			(*cfg)->port[n].policer_enable = val;
+		}
+
+		if ((*cfg)->port[n].policer_enable) {
+			enum pp2_cls_plcr_token_unit unit;
+
+			/* Read policer token unit. */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+						      MRVL_TOK_PLCR_UNIT);
+			if (entry) {
+				if (!strncmp(entry, MRVL_TOK_PLCR_UNIT_BYTES,
+					sizeof(MRVL_TOK_PLCR_UNIT_BYTES))) {
+					unit = PP2_CLS_PLCR_BYTES_TOKEN_UNIT;
+				} else if (!strncmp(entry,
+					MRVL_TOK_PLCR_UNIT_PACKETS,
+					sizeof(MRVL_TOK_PLCR_UNIT_PACKETS))) {
+					unit = PP2_CLS_PLCR_PACKETS_TOKEN_UNIT;
+				} else {
+					RTE_LOG(ERR, PMD, "Unknown token: %s\n",
+						entry);
+					return -1;
+				}
+				(*cfg)->port[n].policer_params.token_unit =
+					unit;
+			}
+
+			/* Read policer color mode. */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+						      MRVL_TOK_PLCR_COLOR);
+			if (entry) {
+				enum pp2_cls_plcr_color_mode mode;
+
+				if (!strncmp(entry, MRVL_TOK_PLCR_COLOR_BLIND,
+					sizeof(MRVL_TOK_PLCR_COLOR_BLIND))) {
+					mode = PP2_CLS_PLCR_COLOR_BLIND_MODE;
+				} else if (!strncmp(entry,
+					MRVL_TOK_PLCR_COLOR_AWARE,
+					sizeof(MRVL_TOK_PLCR_COLOR_AWARE))) {
+					mode = PP2_CLS_PLCR_COLOR_AWARE_MODE;
+				} else {
+					RTE_LOG(ERR, PMD,
+						"Error in parsing: %s\n",
+						entry);
+					return -1;
+				}
+				(*cfg)->port[n].policer_params.color_mode =
+					mode;
+			}
+
+			/* Read policer cir. */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+						      MRVL_TOK_PLCR_CIR);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.cir = val;
+			}
+
+			/* Read policer cbs. */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+						      MRVL_TOK_PLCR_CBS);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.cbs = val;
+			}
+
+			/* Read policer ebs. */
+			entry = rte_cfgfile_get_entry(file, sec_name,
+						      MRVL_TOK_PLCR_EBS);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].policer_params.ebs = val;
+			}
+		}
+
+		/*
+		 * Read per-port rate limiting. Setting that will
+		 * disable per-queue rate limiting.
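+		 * (get_outq_cfg() skips the per-queue limit and warns
+		 * when the port-level limit is already enabled.)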
+		 */
+		entry = rte_cfgfile_get_entry(file, sec_name,
+					      MRVL_TOK_RATE_LIMIT_ENABLE);
+		if (entry) {
+			if (get_val_securely(entry, &val) < 0)
+				return -1;
+			(*cfg)->port[n].rate_limit_enable = val;
+		}
+
+		if ((*cfg)->port[n].rate_limit_enable) {
+			entry = rte_cfgfile_get_entry(file, sec_name,
+						      MRVL_TOK_BURST_SIZE);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].rate_limit_params.cbs = val;
+			}
+
+			entry = rte_cfgfile_get_entry(file, sec_name,
+						      MRVL_TOK_RATE_LIMIT);
+			if (entry) {
+				if (get_val_securely(entry, &val) < 0)
+					return -1;
+				(*cfg)->port[n].rate_limit_params.cir = val;
+			}
+		}
+
+		entry = rte_cfgfile_get_entry(file, sec_name,
+					      MRVL_TOK_MAPPING_PRIORITY);
+		if (entry) {
+			if (!strncmp(entry, MRVL_TOK_VLAN_IP,
+				     sizeof(MRVL_TOK_VLAN_IP)))
+				(*cfg)->port[n].mapping_priority =
+					PP2_CLS_QOS_TBL_VLAN_IP_PRI;
+			else if (!strncmp(entry, MRVL_TOK_IP_VLAN,
+					  sizeof(MRVL_TOK_IP_VLAN)))
+				(*cfg)->port[n].mapping_priority =
+					PP2_CLS_QOS_TBL_IP_VLAN_PRI;
+			else if (!strncmp(entry, MRVL_TOK_IP,
+					  sizeof(MRVL_TOK_IP)))
+				(*cfg)->port[n].mapping_priority =
+					PP2_CLS_QOS_TBL_IP_PRI;
+			else if (!strncmp(entry, MRVL_TOK_VLAN,
+					  sizeof(MRVL_TOK_VLAN)))
+				(*cfg)->port[n].mapping_priority =
+					PP2_CLS_QOS_TBL_VLAN_PRI;
+			else
+				rte_exit(EXIT_FAILURE,
+					 "Error in parsing %s value (%s)!\n",
+					 MRVL_TOK_MAPPING_PRIORITY, entry);
+		} else {
+			(*cfg)->port[n].mapping_priority =
+				PP2_CLS_QOS_TBL_VLAN_IP_PRI;
+		}
+
+		for (i = 0; i < MRVL_PP2_RXQ_MAX; ++i) {
+			ret = get_outq_cfg(file, n, i, *cfg);
+			if (ret < 0)
+				rte_exit(EXIT_FAILURE,
+					 "Error %d parsing port %d outq %d!\n",
+					 ret, n, i);
+		}
+
+		for (i = 0; i < MRVL_PP2_TC_MAX; ++i) {
+			ret = parse_tc_cfg(file, n, i, *cfg);
+			if (ret < 0)
+				rte_exit(EXIT_FAILURE,
+					 "Error %d parsing port %d tc %d!\n",
+					 ret, n, i);
+		}
+	}
+
+	return 0;
+}
+
+/**
+ * Set up a Traffic Class.
+ *
+ * Fill in the TC parameters in a single MUSDK TC config entry.
+ * @param param TC parameters entry.
+ * @param inqs Number of MUSDK in-queues in this TC.
+ * @param bpool Bpool for this TC.
+ * @param color Default color for this TC.
+ * @returns 0 in case of success, -ENOMEM otherwise.
+ */
+static int
+setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
+	 struct pp2_bpool *bpool, enum pp2_ppio_color color)
+{
+	struct pp2_ppio_inq_params *inq_params;
+
+	param->pkt_offset = MRVL_PKT_OFFS;
+	param->pools[0] = bpool;
+	param->default_color = color;
+
+	inq_params = rte_zmalloc_socket("inq_params",
+					inqs * sizeof(*inq_params),
+					0, rte_socket_id());
+	if (!inq_params)
+		return -ENOMEM;
+
+	param->num_in_qs = inqs;
+
+	/* Release the old config if necessary. */
+	if (param->inqs_params)
+		rte_free(param->inqs_params);
+
+	param->inqs_params = inq_params;
+
+	return 0;
+}
+
+/**
+ * Set up an ingress policer.
+ *
+ * @param priv Port's private data.
+ * @param params Pointer to the policer's configuration.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+static int
+setup_policer(struct mrvl_priv *priv, struct pp2_cls_plcr_params *params)
+{
+	char match[16];
+	int ret;
+
+	snprintf(match, sizeof(match), "policer-%d:%d",
+		 priv->pp_id, priv->ppio_id);
+	params->match = match;
+
+	ret = pp2_cls_plcr_init(params, &priv->policer);
+	if (ret) {
+		RTE_LOG(ERR, PMD, "Failed to setup %s\n", match);
+		return -1;
+	}
+
+	priv->ppio_params.inqs_params.plcr = priv->policer;
+
+	return 0;
+}
+
+/**
+ * Configure RX Queues in a given port.
+ *
+ * Sets up RX queues, their Traffic Classes and the DPDK rxq->(TC,inq)
+ * mapping.
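+ *
+ * If the QoS config file is absent or has no section for the port, a single
+ * default TC is used and the queues are mapped 1:1.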
+ *
+ * @param priv Port's private data.
+ * @param portid DPDK port ID.
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
+		    uint16_t max_queues)
+{
+	size_t i, tc;
+
+	if (mrvl_qos_cfg == NULL ||
+	    mrvl_qos_cfg->port[portid].use_global_defaults) {
+		/*
+		 * No port configuration, use the defaults: 1 TC, no QoS,
+		 * TC color set to green.
+		 */
+		priv->ppio_params.inqs_params.num_tcs = 1;
+		setup_tc(&priv->ppio_params.inqs_params.tcs_params[0],
+			 max_queues, priv->bpool, PP2_PPIO_COLOR_GREEN);
+
+		/* Direct mapping of queues, i.e. 0->0, 1->1 etc. */
+		for (i = 0; i < max_queues; ++i) {
+			priv->rxq_map[i].tc = 0;
+			priv->rxq_map[i].inq = i;
+		}
+		return 0;
+	}
+
+	/* We need only a subset of the configuration. */
+	struct port_cfg *port_cfg = &mrvl_qos_cfg->port[portid];
+
+	priv->qos_tbl_params.type = port_cfg->mapping_priority;
+
+	/*
+	 * We need to reverse the mapping, from tc->pcp (better from the
+	 * usability point of view) to pcp->tc (configurable in MUSDK).
+	 * First, set all map elements to "default".
+	 */
+	for (i = 0; i < RTE_DIM(priv->qos_tbl_params.pcp_cos_map); ++i)
+		priv->qos_tbl_params.pcp_cos_map[i].tc = port_cfg->default_tc;
+
+	/* Then, fill in all known values. */
+	for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) {
+		if (port_cfg->tc[tc].pcps > RTE_DIM(port_cfg->tc[0].pcp)) {
+			/* Better safe than sorry. */
+			RTE_LOG(ERR, PMD,
+				"Too many PCPs configured in TC %zu!\n", tc);
+			return -1;
+		}
+		for (i = 0; i < port_cfg->tc[tc].pcps; ++i) {
+			priv->qos_tbl_params.pcp_cos_map[
+				port_cfg->tc[tc].pcp[i]].tc = tc;
+		}
+	}
+
+	/*
+	 * The same logic goes for DSCP.
+	 * First, set all map elements to "default".
+	 */
+	for (i = 0; i < RTE_DIM(priv->qos_tbl_params.dscp_cos_map); ++i)
+		priv->qos_tbl_params.dscp_cos_map[i].tc =
+			port_cfg->default_tc;
+
+	/* Fill in all known values. */
+	for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) {
+		if (port_cfg->tc[tc].dscps > RTE_DIM(port_cfg->tc[0].dscp)) {
+			/* Better safe than sorry. */
+			RTE_LOG(ERR, PMD,
+				"Too many DSCPs configured in TC %zu!\n", tc);
+			return -1;
+		}
+		for (i = 0; i < port_cfg->tc[tc].dscps; ++i) {
+			priv->qos_tbl_params.dscp_cos_map[
+				port_cfg->tc[tc].dscp[i]].tc = tc;
+		}
+	}
+
+	/*
+	 * A similar approach applies to the queue mapping.
+	 * We only need to store the qid->tc mapping,
+	 * to know the TC when a queue is read.
+	 */
+	for (i = 0; i < RTE_DIM(priv->rxq_map); ++i)
+		priv->rxq_map[i].tc = MRVL_UNKNOWN_TC;
+
+	/* Set up the DPDKq->(TC,inq) mapping. */
+	for (tc = 0; tc < RTE_DIM(port_cfg->tc); ++tc) {
+		if (port_cfg->tc[tc].inqs > RTE_DIM(port_cfg->tc[0].inq)) {
+			/* Overflow. */
+			RTE_LOG(ERR, PMD,
+				"Too many RX queues configured per TC %zu!\n",
+				tc);
+			return -1;
+		}
+		for (i = 0; i < port_cfg->tc[tc].inqs; ++i) {
+			uint8_t idx = port_cfg->tc[tc].inq[i];
+
+			if (idx >= RTE_DIM(priv->rxq_map)) {
+				RTE_LOG(ERR, PMD, "Bad queue index %d!\n", idx);
+				return -1;
+			}
+
+			priv->rxq_map[idx].tc = tc;
+			priv->rxq_map[idx].inq = i;
+		}
+	}
+
+	/*
+	 * Set up the TC configuration. TCs need to be sequenced: 0, 1, 2
+	 * with no gaps. An empty TC means the end of processing.
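+	 * For example, if TCs 0 and 1 have queues assigned but TC 2 does not,
+	 * num_tcs ends up as 2 and any TCs after the gap are ignored.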
+	 */
+	for (i = 0; i < MRVL_PP2_TC_MAX; ++i) {
+		if (port_cfg->tc[i].inqs == 0)
+			break;
+		setup_tc(&priv->ppio_params.inqs_params.tcs_params[i],
+			 port_cfg->tc[i].inqs,
+			 priv->bpool, port_cfg->tc[i].color);
+	}
+
+	priv->ppio_params.inqs_params.num_tcs = i;
+
+	if (port_cfg->policer_enable)
+		return setup_policer(priv, &port_cfg->policer_params);
+
+	return 0;
+}
+
+/**
+ * Configure TX Queues in a given port.
+ *
+ * Sets up the TX queues' egress scheduler and limiter.
+ *
+ * @param priv Port's private data.
+ * @param portid DPDK port ID.
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid,
+		    uint16_t max_queues)
+{
+	struct port_cfg *port_cfg;
+	int i;
+
+	if (mrvl_qos_cfg == NULL)
+		return 0;
+
+	/* We need only a subset of the configuration. */
+	port_cfg = &mrvl_qos_cfg->port[portid];
+
+	priv->ppio_params.rate_limit_enable = port_cfg->rate_limit_enable;
+	if (port_cfg->rate_limit_enable)
+		priv->ppio_params.rate_limit_params =
+			port_cfg->rate_limit_params;
+
+	for (i = 0; i < max_queues; i++) {
+		struct pp2_ppio_outq_params *params =
+			&priv->ppio_params.outqs_params.outqs_params[i];
+
+		params->sched_mode = port_cfg->outq[i].sched_mode;
+		params->weight = port_cfg->outq[i].weight;
+		params->rate_limit_enable = port_cfg->outq[i].rate_limit_enable;
+		params->rate_limit_params = port_cfg->outq[i].rate_limit_params;
+	}
+
+	return 0;
+}
+
+/**
+ * Start QoS mapping.
+ *
+ * Finalize the QoS table configuration and initialize it in the SDK. This can
+ * be done only after the port is started, so that we have a valid ppio
+ * reference.
+ *
+ * @param priv Port's private (configuration) data.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_start_qos_mapping(struct mrvl_priv *priv)
+{
+	size_t i;
+
+	if (priv->ppio == NULL) {
+		RTE_LOG(ERR, PMD, "ppio must not be NULL here!\n");
+		return -1;
+	}
+
+	for (i = 0; i < RTE_DIM(priv->qos_tbl_params.pcp_cos_map); ++i)
+		priv->qos_tbl_params.pcp_cos_map[i].ppio = priv->ppio;
+
+	for (i = 0; i < RTE_DIM(priv->qos_tbl_params.dscp_cos_map); ++i)
+		priv->qos_tbl_params.dscp_cos_map[i].ppio = priv->ppio;
+
+	/* Initialize the Classifier QoS table. */
+	return pp2_cls_qos_tbl_init(&priv->qos_tbl_params, &priv->qos_tbl);
+}
diff --git a/drivers/net/mvpp2/mrvl_qos.h b/drivers/net/mvpp2/mrvl_qos.h
new file mode 100644
index 0000000000..fa9ddecb86
--- /dev/null
+++ b/drivers/net/mvpp2/mrvl_qos.h
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017 Marvell International Ltd.
+ * Copyright(c) 2017 Semihalf.
+ * All rights reserved.
+ */
+
+#ifndef _MRVL_QOS_H_
+#define _MRVL_QOS_H_
+
+#include <rte_common.h>
+
+#include "mrvl_ethdev.h"
+
+/** Code Points per Traffic Class. Equals max(DSCP, PCP). */
+#define MRVL_CP_PER_TC (64)
+
+/** Value used as "unknown". */
+#define MRVL_UNKNOWN_TC (0xFF)
+
+/* QoS config.
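+ * One port_cfg entry is kept per DPDK port, indexed by port ID
+ * (up to RTE_MAX_ETHPORTS).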
+ */
+struct mrvl_qos_cfg {
+	struct port_cfg {
+		int rate_limit_enable;
+		struct pp2_ppio_rate_limit_params rate_limit_params;
+		struct {
+			uint8_t inq[MRVL_PP2_RXQ_MAX];
+			uint8_t dscp[MRVL_CP_PER_TC];
+			uint8_t pcp[MRVL_CP_PER_TC];
+			uint8_t inqs;
+			uint8_t dscps;
+			uint8_t pcps;
+			enum pp2_ppio_color color;
+		} tc[MRVL_PP2_TC_MAX];
+		struct {
+			enum pp2_ppio_outq_sched_mode sched_mode;
+			uint8_t weight;
+			int rate_limit_enable;
+			struct pp2_ppio_rate_limit_params rate_limit_params;
+		} outq[MRVL_PP2_RXQ_MAX];
+		enum pp2_cls_qos_tbl_type mapping_priority;
+		uint16_t inqs;
+		uint16_t outqs;
+		uint8_t default_tc;
+		uint8_t use_global_defaults;
+		struct pp2_cls_plcr_params policer_params;
+		uint8_t policer_enable;
+	} port[RTE_MAX_ETHPORTS];
+};
+
+/** Global QoS configuration. */
+extern struct mrvl_qos_cfg *mrvl_qos_cfg;
+
+/**
+ * Parse QoS configuration - rte_kvargs_process handler.
+ *
+ * Opens the configuration file and parses its content.
+ *
+ * @param key Unused.
+ * @param path Path to the config file.
+ * @param extra_args Pointer to the configuration structure.
+ * @returns 0 in case of success, exits otherwise.
+ */
+int
+mrvl_get_qoscfg(const char *key __rte_unused, const char *path,
+		void *extra_args);
+
+/**
+ * Configure RX Queues in a given port.
+ *
+ * Sets up RX queues, their Traffic Classes and the DPDK rxq->(TC,inq)
+ * mapping.
+ *
+ * @param priv Port's private data.
+ * @param portid DPDK port ID.
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_rxqs(struct mrvl_priv *priv, uint16_t portid,
+		    uint16_t max_queues);
+
+/**
+ * Configure TX Queues in a given port.
+ *
+ * Sets up the TX queues' egress scheduler and limiter.
+ *
+ * @param priv Port's private data.
+ * @param portid DPDK port ID.
+ * @param max_queues Maximum number of queues to configure.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_configure_txqs(struct mrvl_priv *priv, uint16_t portid,
+		    uint16_t max_queues);
+
+/**
+ * Start QoS mapping.
+ *
+ * Finalize the QoS table configuration and initialize it in the SDK. This can
+ * be done only after the port is started, so that we have a valid ppio
+ * reference.
+ *
+ * @param priv Port's private (configuration) data.
+ * @returns 0 in case of success, negative value otherwise.
+ */
+int
+mrvl_start_qos_mapping(struct mrvl_priv *priv);
+
+#endif /* _MRVL_QOS_H_ */
diff --git a/drivers/net/mvpp2/rte_pmd_mrvl_version.map b/drivers/net/mvpp2/rte_pmd_mrvl_version.map
new file mode 100644
index 0000000000..a753031720
--- /dev/null
+++ b/drivers/net/mvpp2/rte_pmd_mrvl_version.map
@@ -0,0 +1,3 @@
+DPDK_17.11 {
+	local: *;
+};
diff --git a/mk/rte.app.mk b/mk/rte.app.mk
index 94525dc80f..a9b4b0502f 100644
--- a/mk/rte.app.mk
+++ b/mk/rte.app.mk
@@ -164,7 +164,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += -lrte_pmd_mlx5 -ldl
 else
 _LDLIBS-$(CONFIG_RTE_LIBRTE_MLX5_PMD) += -lrte_pmd_mlx5 -libverbs -lmlx5
 endif
-_LDLIBS-$(CONFIG_RTE_LIBRTE_MRVL_PMD) += -lrte_pmd_mrvl -L$(LIBMUSDK_PATH)/lib -lmusdk
+_LDLIBS-$(CONFIG_RTE_LIBRTE_MVPP2_PMD) += -lrte_pmd_mvpp2 -L$(LIBMUSDK_PATH)/lib -lmusdk
 _LDLIBS-$(CONFIG_RTE_LIBRTE_NFP_PMD) += -lrte_pmd_nfp
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_NULL) += -lrte_pmd_null
 _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_PCAP) += -lrte_pmd_pcap -lpcap