section of :ref:`the DPDK documentation <linux_gsg>` or refer to *DPDK
Release Notes*.
-Build options
--------------
-
-The default PMD configuration available in the common_linuxapp configuration file:
-
-CONFIG_RTE_LIBRTE_PMD_SOFTNIC=y
-
-Once the DPDK is built, all the DPDK applications include support for the
-Soft NIC PMD.
Soft NIC PMD arguments
----------------------
#. ``tm_n_queues``: number of traffic manager's scheduler queues. The traffic manager
is based on DPDK *librte_sched* library. (Optional: yes, Default value: 65,536 queues)
-#. ``tm_qsize0``: size of scheduler queue 0 per traffic class of the pipes/subscribers.
+#. ``tm_qsize0``: size of scheduler queue 0 (traffic class 0) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize1``: size of scheduler queue 1 (traffic class 1) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize2``: size of scheduler queue 2 (traffic class 2) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize3``: size of scheduler queue 3 (traffic class 3) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize4``: size of scheduler queue 4 (traffic class 4) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize5``: size of scheduler queue 5 (traffic class 5) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize6``: size of scheduler queue 6 (traffic class 6) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize7``: size of scheduler queue 7 (traffic class 7) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize8``: size of scheduler queue 8 (traffic class 8) of the pipes/subscribers.
+ (Optional: yes, Default: 64)
+
+#. ``tm_qsize9``: size of scheduler queue 9 (traffic class 9) of the pipes/subscribers.
(Optional: yes, Default: 64)
-#. ``tm_qsize1``: size of scheduler queue 1 per traffic class of the pipes/subscribers.
+#. ``tm_qsize10``: size of scheduler queue 10 (traffic class 10) of the pipes/subscribers.
(Optional: yes, Default: 64)
-#. ``tm_qsize2``: size of scheduler queue 2 per traffic class of the pipes/subscribers.
+#. ``tm_qsize11``: size of scheduler queue 11 (traffic class 11) of the pipes/subscribers.
(Optional: yes, Default: 64)
-#. ``tm_qsize3``: size of scheduler queue 3 per traffic class of the pipes/subscribers.
+#. ``tm_qsize12``: size of scheduler queue 12 (traffic class 12) of the pipes/subscribers.
(Optional: yes, Default: 64)
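+
+For example, a subset of these arguments can be restated on the ``--vdev``
+option (the values below are illustrative and simply repeat the defaults):
+
+.. code-block:: console
+
+   --vdev 'net_softnic0,firmware=firmware.cli,tm_n_queues=65536,tm_qsize0=64'
+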
Soft NIC testing
----------------
-* Run testpmd application in Soft NIC forwarding mode with loopback feature
+* Run testpmd application with the Soft NIC device and the loopback feature
enabled on Soft NIC port:
.. code-block:: console
- ./testpmd -c 0x3 --vdev 'net_softnic0,firmware=<script path>/firmware.cli,cpu_id=0,conn_port=8086' -- -i
- --forward-mode=softnic --portmask=0x2
+   ./dpdk-testpmd -c 0x7 -s 0x4 --vdev 'net_softnic0,firmware=<script path>/firmware.cli,cpu_id=0,conn_port=8086' \
+      -- -i --portmask=0x2
.. code-block:: console
pipeline TX table match stub
pipeline TX port in 0 table 0
- thread 1 pipeline RX enable
- thread 1 pipeline TX enable
+ thread 2 pipeline RX enable
+ thread 2 pipeline TX enable
Port 1: 00:00:00:00:00:00
Checking link statuses...
Done
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
-* Start remote client (e.g. telnet) to communicate with the softnic device:
+* The Soft NIC device can be configured using a remote client (e.g. telnet).
+  Note, however, that the testpmd application does not support configuration
+  through telnet:
.. code-block:: console
.. code-block:: console
- thread 1 pipeline RX enable (Soft NIC rx pipeline enable on cpu thread id 1)
- thread 1 pipeline TX enable (Soft NIC tx pipeline enable on cpu thread id 1)
+ thread 2 pipeline RX enable (Soft NIC rx pipeline enable on cpu thread id 2)
+ thread 2 pipeline TX enable (Soft NIC tx pipeline enable on cpu thread id 2)
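+
+When the hosting application services the connection port, a remote client on
+the same host can attach with (assuming the default ``conn_port=8086`` shown
+above):
+
+.. code-block:: console
+
+   telnet 127.0.0.1 8086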
+
+QoS API Support
+---------------
+
+The SoftNIC PMD implements the ethdev traffic management API (``rte_tm.h``),
+which allows building and committing a traffic manager hierarchy and
+configuring the hierarchy nodes of the Quality of Service (QoS) scheduler
+supported by the DPDK *librte_sched* library. The PMD also supports run-time
+updates to the traffic manager hierarchy.
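+
+For illustration, the same API is exposed through the testpmd ``port tm``
+command family; once the hierarchy nodes have been added on the Soft NIC
+port (port id 2 below is illustrative), the hierarchy is activated with:
+
+.. code-block:: console
+
+   testpmd> port tm hierarchy commit 2 no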
+
+The SoftNIC PMD also implements the ethdev traffic metering and policing API
+(``rte_mtr.h``), which enables metering packets and marking them with the
+appropriate color (green, yellow or red) according to the traffic metering
+algorithm. For each meter output color, a policer action can be configured:
+keep the packet color, change the packet color, or drop the packet.
+
+.. Note::
+
+   The SoftNIC does not support meter objects shared by several flows; a
+   meter object can only be private to a single flow. Once the meter object
+   is successfully created, it can be linked to a specific flow by specifying
+   the ``meter`` flow action in the flow rule.
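+
+For instance, assuming a meter object with ``mtr_id`` 0 has already been
+created on Soft NIC port 2 (both ids are illustrative), it can be attached
+to a flow from the testpmd prompt:
+
+.. code-block:: console
+
+   flow create 2 group 0 ingress pattern eth / end \
+      actions meter mtr_id 0 / queue index 0 / end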
+
+Flow API support
+----------------
+
+The SoftNIC PMD implements the ethdev flow API (``rte_flow.h``), which allows
+validating flow rules, adding flow rules to the SoftNIC pipeline as table
+rules, and deleting and querying flow rules. The PMD provides a new CLI
+command for creating a flow group and mapping it to a SoftNIC pipeline and
+table. This CLI command should be configured as part of the firmware file:
+
+ .. code-block:: console
+
+ flowapi map group <group_id> ingress | egress pipeline <pipeline_name> \
+ table <table_id>
+
+From the flow attributes, the PMD uses the group id to identify the mapped
+pipeline and table. The PMD supports a number of flow actions such as
+``JMP``, ``QUEUE``, ``RSS``, ``DROP``, ``COUNT``, ``METER`` and ``VXLAN``.
+
+.. Note::
+
+   The flow must have exactly one terminating action, i.e. ``JMP``, ``RSS``,
+   ``QUEUE`` or ``DROP``. The underlying PMD does not yet support the
+   ``COUNT`` and ``DROP`` actions, so their use is not recommended.
+
+The flow API can be tested with the help of the testpmd application. The
+SoftNIC firmware specifies CLI commands for port configuration, pipeline
+creation, action profile creation and table creation. Once the application
+is initialized, flow rules can be added through the testpmd CLI.
+The PMD translates the flow rules into SoftNIC pipeline table rules.
+
+Example
+~~~~~~~
+
+This example demonstrates the flow queue action using the SoftNIC firmware
+and testpmd commands.
+
+* Prepare SoftNIC firmware
+
+ .. code-block:: console
+
+ link LINK0 dev 0000:83:00.0
+ link LINK1 dev 0000:81:00.0
+ pipeline RX period 10 offset_port_id 0
+ pipeline RX port in bsz 32 link LINK0 rxq 0
+ pipeline RX port in bsz 32 link LINK1 rxq 0
+ pipeline RX port out bsz 32 swq RXQ0
+ pipeline RX port out bsz 32 swq RXQ1
+ table action profile AP0 ipv4 offset 278 fwd
+      pipeline RX table match hash ext key 16 mask \
+         00FF0000FFFFFFFFFFFFFFFFFFFFFFFF \
+         offset 278 buckets 16K size 65K action AP0
+ pipeline RX port in 0 table 0
+ pipeline RX port in 1 table 0
+ flowapi map group 0 ingress pipeline RX table 0
+ pipeline TX period 10 offset_port_id 0
+ pipeline TX port in bsz 32 swq TXQ0
+ pipeline TX port in bsz 32 swq TXQ1
+ pipeline TX port out bsz 32 link LINK0 txq 0
+ pipeline TX port out bsz 32 link LINK1 txq 0
+      pipeline TX table match hash ext key 16 mask \
+         00FF0000FFFFFFFFFFFFFFFFFFFFFFFF \
+         offset 278 buckets 16K size 65K action AP0
+ pipeline TX port in 0 table 0
+ pipeline TX port in 1 table 0
+      pipeline TX table 0 rule add match hash ipv4_5tuple \
+         1.10.11.12 2.20.21.22 100 200 6 action fwd port 0
+      pipeline TX table 0 rule add match hash ipv4_5tuple \
+         1.10.11.13 2.20.21.23 100 200 6 action fwd port 1
+ thread 2 pipeline RX enable
+ thread 2 pipeline TX enable
+
+* Run testpmd:
+
+ .. code-block:: console
+
+      ./<build_dir>/app/dpdk-testpmd -c 0x7 -s 0x4 -n 4 \
+         --vdev 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=1,conn_port=8086' \
+         -- -i --rxq=2 --txq=2 --disable-rss --portmask=0x4
+
+* Configure flow rules on softnic:
+
+ .. code-block:: console
+
+      flow create 2 group 0 ingress pattern eth / ipv4 proto mask 255 src \
+         mask 255.255.255.255 dst mask 255.255.255.255 src spec \
+         1.10.11.12 dst spec 2.20.21.22 proto spec 6 / tcp src mask 65535 \
+         dst mask 65535 src spec 100 dst spec 200 / end actions queue \
+         index 0 / end
+ flow create 2 group 0 ingress pattern eth / ipv4 proto mask 255 src \
+ mask 255.255.255.255 dst mask 255.255.255.255 src spec 1.10.11.13 \
+ dst spec 2.20.21.23 proto spec 6 / tcp src mask 65535 dst mask \
+ 65535 src spec 100 dst spec 200 / end actions queue index 1 / end
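+
+* List and, if required, remove the installed flow rules (port id 2 matches
+  the Soft NIC port used above; rule id 0 is illustrative):
+
+  .. code-block:: console
+
+     flow list 2
+     flow destroy 2 rule 0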