.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2016 Cavium, Inc
ThunderX NICVF Poll Mode Driver
===============================
The ThunderX NICVF PMD (**librte_pmd_thunderx_nicvf**) provides poll mode driver
support for the inbuilt NIC found in the **Cavium ThunderX** SoC family
as well as their virtual functions (VF) in SR-IOV context.

More information can be found at `Cavium, Inc Official Website
<http://www.cavium.com/ThunderX_ARM_Processors.html>`_.
Features of the ThunderX PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Port hardware statistics
- Link state information
- Scatter and gather for TX and RX
- Multi queue set support (up to 96 queues (12 queue sets)) per port
Supported ThunderX SoCs
-----------------------
- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.
Pre-Installation Configuration
------------------------------

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.
- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_thunderx_nicvf`` driver.

- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX`` (default ``n``)

  Toggle asserts of receive fast path.

- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX`` (default ``n``)

  Toggle asserts of transmit fast path.
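For example, a ``config`` file fragment enabling the driver and the RX fast path
asserts for debugging might look like the following (a sketch; the exact file
depends on the chosen build target):

.. code-block:: console

   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD=y
   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=y
   CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX=n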
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.
To compile the ThunderX NICVF PMD for Linux arm64 gcc,
use ``arm64-thunderx-linux-gcc`` as the target.
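Assuming the make-based build system of this DPDK era, the build step would look
like the following (a sketch; ``<DPDK-source-directory>`` is a placeholder for your
source tree):

.. code-block:: console

   cd <DPDK-source-directory>
   make config T=arm64-thunderx-linux-gcc install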
SR-IOV: Prerequisites and sample Application Notes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
automatically to a virtual function (VF) and present them as PCIe-like SR-IOV devices.
This section provides instructions to configure SR-IOV with Linux OS.
#. Verify PF devices capabilities using ``lspci``:

   .. code-block:: console

      0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
      Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
      Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
      Kernel driver in use: thunder-nic
   .. note::

      Unless the ``thunder-nic`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_PF`` setting.
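   Whether PF support is built into the running kernel can be checked, for
   example, like this (a sketch, assuming ``/proc/config.gz`` or a ``/boot``
   config file is available on the system):

   .. code-block:: console

      zcat /proc/config.gz | grep THUNDER_NIC
      grep THUNDER_NIC /boot/config-$(uname -r)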
#. Verify VF devices capabilities and drivers using ``lspci``:

   .. code-block:: console

      0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
      Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
      Kernel driver in use: thunder-nicvf

      0002:01:00.2 Ethernet controller: Cavium Networks Device 0011 (rev 01)
      Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
      Kernel driver in use: thunder-nicvf
   .. note::

      Unless the ``thunder-nicvf`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_VF`` setting.
#. Pass VF device to VM context (PCIe Passthrough):

   The VF devices may be passed through to the guest VM using qemu,
   virt-manager, virsh etc.

   Example qemu guest launch command:

   .. code-block:: console

      sudo qemu-system-aarch64 -name vm1 \
      -machine virt,gic_version=3,accel=kvm,usb=off \
      -smp 4,sockets=1,cores=8,threads=1 \
      -nographic -nodefaults \
      -kernel <kernel image> \
      -append "root=/dev/vda console=ttyAMA0 rw hugepagesz=512M hugepages=3" \
      -device vfio-pci,host=0002:01:00.1 \
      -drive file=<rootfs.ext3>,if=none,id=disk1,format=raw \
      -device virtio-blk-device,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
      -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup_thunder \
      -device virtio-net-device,netdev=net0
#. Enable **VFIO-NOIOMMU** mode (optional):

   .. code-block:: console

      echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode
   .. note::

      **VFIO-NOIOMMU** is required only when running in VM context and should not be enabled otherwise.
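   Inside the guest, the setting can be confirmed by reading the parameter back;
   a value of ``1`` means the mode is enabled:

   .. code-block:: console

      cat /sys/module/vfio/parameters/enable_unsafe_noiommu_mode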
Follow instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
to run testpmd.
.. code-block:: console

   ./arm64-thunderx-linux-gcc/app/testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
      -- -i --no-flush-rx

   PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
   EAL: probe driver: 177d:11 rte_nicvf_pmd
   EAL: using IOMMU type 1 (Type 1)
   EAL: PCI memory mapped at 0x3ffade50000
   EAL: Trying to map BAR 4 that contains the MSI-X table.
   Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
   EAL: PCI memory mapped at 0x3ffadc60000
   PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
   PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
        loopback_supported=true
   PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
   Interactive-mode selected
   Configuring Port 0 (socket 0)
   PMD: nicvf_dev_configure(): Configured ethdev port0 hwcap=0x0
   Port 0: A6:C6:D9:17:78:01
   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex
Multiple Queue Set per DPDK port configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There are two types of VFs:

- Primary VF
- Secondary VF
Each port consists of a primary VF and n secondary VF(s). Each VF provides 8 Tx/Rx queues to a port.
When a given port is configured to use more than 8 queues, it requires one (or more) secondary VF.
Each secondary VF adds 8 additional queues to the queue set.
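The arithmetic above can be sketched with a little shell; the 8-queues-per-VF
figure is from this section, while ``nb_queues`` is a hypothetical requested
queue count for one port:

.. code-block:: shell

   # Queues provided by each VF (one queue set), per this section.
   queues_per_vf=8
   # Hypothetical number of queues requested for one port.
   nb_queues=24
   # Total queue sets needed, rounding up.
   qsets=$(( (nb_queues + queues_per_vf - 1) / queues_per_vf ))
   # The primary VF serves the first queue set; the rest come from secondary VFs.
   secondary_vfs=$(( qsets - 1 ))
   echo "$secondary_vfs"   # 2 secondary VFs for a 24-queue port

This matches the binding example later in this section, where each 24-queue
port is given one primary and two secondary VFs.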
During PMD initialization, the primary VFs are enumerated by checking the
specific flag (see the sqs message in the DPDK boot log; sqs indicates a secondary queue set).
They are at the beginning of the VF list (the remaining ones are secondary VFs).
The primary VFs are used as master queue sets. Secondary VFs provide
additional queue sets for the primary ones. If a port is configured for more than
8 queues, it will request additional queues from secondary VFs.
Secondary VFs cannot be shared between primary VFs.
Primary VFs are present at the beginning of the 'Network devices using kernel
driver' list; secondary VFs are on the remaining part of the list.
.. note::

   The VNIC driver in the multiqueue setup works differently than other drivers like `ixgbe`.
   We need to bind each queue set device separately with the ``usertools/dpdk-devbind.py`` utility.
.. note::

   Depending on the hardware used, the kernel driver sets a threshold ``vf_id``. VFs that try to attach with an id below or equal to
   this boundary are considered primary VFs. VFs that try to attach with an id above this boundary are considered secondary VFs.
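As a sketch of that classification (the threshold value below is hypothetical;
the real value is chosen by the kernel driver for the given hardware):

.. code-block:: shell

   # Hypothetical threshold set by the kernel driver for this hardware.
   vf_id_threshold=11
   # A VF attaching with this id falls above the boundary...
   vf_id=14
   if [ "$vf_id" -le "$vf_id_threshold" ]; then
      echo "primary"
   else
      echo "secondary"   # ...so it is treated as a secondary VF
   fi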
Example device binding
~~~~~~~~~~~~~~~~~~~~~~
If a system has three interfaces, a total of 18 VF devices will be created
on a non-NUMA machine.

.. note::

   NUMA systems have 12 VFs per port and non-NUMA 6 VFs per port.
.. code-block:: console

   # usertools/dpdk-devbind.py --status

   Network devices using DPDK-compatible driver
   ============================================

   Network devices using kernel driver
   ===================================
   0000:01:10.0 'Device a026' if= drv=thunder-BGX unused=vfio-pci,uio_pci_generic
   0000:01:10.1 'Device a026' if= drv=thunder-BGX unused=vfio-pci,uio_pci_generic
   0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci,uio_pci_generic
   0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.3 'Device 0011' if=eth2 drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.4 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.5 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.6 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.7 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.0 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.1 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.2 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.3 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.4 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.5 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.6 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.7 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:02.0 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:02.1 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:02.2 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic

   Other network devices
   =====================
   0002:00:03.0 'Device a01f' unused=vfio-pci,uio_pci_generic
We want to bind two physical interfaces with 24 queues each; to do so we attach two primary VFs
and four secondary VFs. In our example we choose two 10G interfaces eth1 (0002:01:00.2) and eth2 (0002:01:00.3).
We will choose four secondary queue sets from the end of the list (0002:01:01.7-0002:01:02.2).
#. Bind two primary VFs to the ``vfio-pci`` driver:

   .. code-block:: console

      usertools/dpdk-devbind.py -b vfio-pci 0002:01:00.2
      usertools/dpdk-devbind.py -b vfio-pci 0002:01:00.3
#. Bind four secondary VFs to the ``vfio-pci`` driver:

   .. code-block:: console

      usertools/dpdk-devbind.py -b vfio-pci 0002:01:01.7
      usertools/dpdk-devbind.py -b vfio-pci 0002:01:02.0
      usertools/dpdk-devbind.py -b vfio-pci 0002:01:02.1
      usertools/dpdk-devbind.py -b vfio-pci 0002:01:02.2
The nicvf thunderx driver will make use of the attached secondary VFs automatically during the interface configuration stage.
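With the six VFs bound as above, an application can then be launched with more
than 8 queues per port. For example (a sketch reusing the testpmd invocation
style shown earlier; the queue counts are illustrative):

.. code-block:: console

   ./arm64-thunderx-linux-gcc/app/testpmd -l 0-3 -n 4 \
      -w 0002:01:00.2 -w 0002:01:00.3 \
      -- -i --rxq=24 --txq=24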
This feature is used to create a hole between the HEADROOM and the actual data. The size of the hole is
specified in bytes as the ``skip_data_bytes`` device argument to the PMD.
This scheme is useful when an application would like to insert a VLAN header without disturbing the HEADROOM.
.. code-block:: console

   -w 0002:01:00.2,skip_data_bytes=8
The ThunderX SoC family NICs strip the CRC for every packet coming into the
host interface irrespective of the offload configuration.
Maximum packet length
~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
Maximum packet segments
~~~~~~~~~~~~~~~~~~~~~~~
The ThunderX SoC family NICs support up to 12 segments per packet when working
in scatter/gather mode. So, setting the MTU will result in ``EINVAL`` when the
frame size does not fit in the maximum number of segments.
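For instance, with a hypothetical mbuf data room of 2048 bytes (the real value
depends on the mempool configuration), the segment count for a maximum-size
frame can be estimated as follows, and stays well within the 12-segment limit:

.. code-block:: shell

   # Hypothetical mbuf data room size; actual value depends on the mempool setup.
   mbuf_data_room=2048
   # Maximum frame size supported by the NIC (see above).
   frame_size=9200
   # Number of segments needed, rounding up.
   segs=$(( (frame_size + mbuf_data_room - 1) / mbuf_data_room ))
   echo "$segs"   # 5 segments, within the 12-segment limit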
The maximum limit of ``skip_data_bytes`` is 128 bytes and the number of bytes should be a multiple of 8.