..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Cavium, Inc
ThunderX NICVF Poll Mode Driver
===============================
The ThunderX NICVF PMD (**librte_pmd_thunderx_nicvf**) provides poll mode driver
support for the inbuilt NIC found in the **Cavium ThunderX** SoC family
as well as their virtual functions (VF) in SR-IOV context.

More information can be found at `Cavium, Inc Official Website
<http://www.cavium.com/ThunderX_ARM_Processors.html>`_.
Features
--------

Features of the ThunderX PMD are:
- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Port hardware statistics
- Link state information
- Scatter/gather for TX and RX
- Multi queue set support (up to 96 queues (12 queue sets)) per port
Supported ThunderX SoCs
-----------------------
Prerequisites
-------------

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.
Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.
- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_thunderx_nicvf`` driver.

- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX`` (default ``n``)

  Toggle asserts of receive fast path.

- ``CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_TX`` (default ``n``)

  Toggle asserts of transmit fast path.
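As a sketch, a debug option can be flipped with ``sed`` before building. The path ``config/common_base`` is where these options usually live in make-based DPDK trees (an assumption; adjust for your tree). The example below works on a scratch copy so it can be tried safely:

```shell
# Demo on a scratch copy; on a real tree, point CFG at the build config
# instead (e.g. config/common_base -- an assumption, adjust as needed).
CFG=/tmp/common_base
printf 'CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n\n' > "$CFG"

# Turn on the RX fast path asserts.
sed -i 's/^CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=n$/CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX=y/' "$CFG"

grep CONFIG_RTE_LIBRTE_THUNDERX_NICVF_DEBUG_RX "$CFG"
```

Remember to rebuild the driver after changing any of these options.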
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

To compile the ThunderX NICVF PMD for Linux arm64 gcc,
use ``arm64-thunderx-linuxapp-gcc`` as the target.
SR-IOV: Prerequisites and sample Application Notes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Current ThunderX NIC PF/VF kernel modules map each physical Ethernet port
automatically to a virtual function (VF) and present them as PCIe-like SR-IOV devices.
This section provides instructions to configure SR-IOV with Linux OS.
#. Verify PF devices capabilities using ``lspci``:

   .. code-block:: console

      lspci -s 0002:01:00.0 -vv

   Example output:

   .. code-block:: console

      0002:01:00.0 Ethernet controller: Cavium Networks Device a01e (rev 01)
      ...
      Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
      ...
      Capabilities: [180 v1] Single Root I/O Virtualization (SR-IOV)
      ...
      Kernel driver in use: thunder-nic
      ...
   .. note::

      Unless the ``thunder-nic`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_PF`` setting.
#. Verify VF devices capabilities and drivers using ``lspci``:

   .. code-block:: console

      lspci -s 0002:01:00.1 -vv
      lspci -s 0002:01:00.2 -vv

   Example output:

   .. code-block:: console

      0002:01:00.1 Ethernet controller: Cavium Networks Device 0011 (rev 01)
      ...
      Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
      ...
      Kernel driver in use: thunder-nicvf
      ...
      0002:01:00.2 Ethernet controller: Cavium Networks Device 0011 (rev 01)
      ...
      Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
      ...
      Kernel driver in use: thunder-nicvf
      ...
   .. note::

      Unless the ``thunder-nicvf`` driver is in use, make sure your kernel config includes the ``CONFIG_THUNDER_NIC_VF`` setting.
#. Pass VF device to VM context (PCIe Passthrough):

   The VF devices may be passed through to the guest VM using qemu,
   virt-manager, virsh, etc.

   Example qemu guest launch command:

   .. code-block:: console

      sudo qemu-system-aarch64 -name vm1 \
      -machine virt,gic_version=3,accel=kvm,usb=off \
      -smp 4,sockets=1,cores=8,threads=1 \
      -nographic -nodefaults \
      -kernel <kernel image> \
      -append "root=/dev/vda console=ttyAMA0 rw hugepagesz=512M hugepages=3" \
      -device vfio-pci,host=0002:01:00.1 \
      -drive file=<rootfs.ext3>,if=none,id=disk1,format=raw \
      -device virtio-blk-device,scsi=off,drive=disk1,id=virtio-disk1,bootindex=1 \
      -netdev tap,id=net0,ifname=tap0,script=/etc/qemu-ifup_thunder \
      -device virtio-net-device,netdev=net0
#. Enable **VFIO-NOIOMMU** mode (optional):

   .. code-block:: console

      echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

   .. note::

      **VFIO-NOIOMMU** is required only when running in VM context and should not be enabled otherwise.
Follow instructions available in the document
:ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
to run testpmd.

Example output:

.. code-block:: console

   ./arm64-thunderx-linuxapp-gcc/app/testpmd -l 0-3 -n 4 -w 0002:01:00.2 \
     -- -i --no-flush-rx

   ...
   PMD: rte_nicvf_pmd_init(): librte_pmd_thunderx nicvf version 1.0
   ...
   EAL: probe driver: 177d:11 rte_nicvf_pmd
   EAL: using IOMMU type 1 (Type 1)
   EAL: PCI memory mapped at 0x3ffade50000
   EAL: Trying to map BAR 4 that contains the MSI-X table.
   Trying offsets: 0x40000000000:0x0000, 0x10000:0x1f0000
   EAL: PCI memory mapped at 0x3ffadc60000
   PMD: nicvf_eth_dev_init(): nicvf: device (177d:11) 2:1:0:2
   PMD: nicvf_eth_dev_init(): node=0 vf=1 mode=tns-bypass sqs=false
   loopback_supported=true
   PMD: nicvf_eth_dev_init(): Port 0 (177d:11) mac=a6:c6:d9:17:78:01
   Interactive-mode selected
   Configuring Port 0 (socket 0)
   ...
   PMD: nicvf_dev_configure(): Configured ethdev port0 hwcap=0x0
   Port 0: A6:C6:D9:17:78:01
   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex
Multiple Queue Set per DPDK port configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are two types of VFs:

- Primary VF
- Secondary VF
Each port consists of a primary VF and n secondary VF(s). Each VF provides 8 Tx/Rx queues to a port.
When a given port is configured to use more than 8 queues, it requires one (or more) secondary VF.
Each secondary VF adds 8 additional queues to the queue set.
During PMD driver initialization, the primary VFs are enumerated by checking the
specific flag (see the sqs message in the DPDK boot log - sqs indicates secondary queue set).
They are at the beginning of the VF list (the remaining ones are secondary VFs).

The primary VFs are used as master queue sets. Secondary VFs provide
additional queue sets for the primary ones. If a port is configured for more than
8 queues, it will request additional queues from secondary VFs.
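With 8 queues per VF, the number of secondary VFs a port needs follows directly from the configured queue count. A quick sketch (the helper function is ours for illustration, not part of any DPDK tool):

```shell
# Secondary VFs needed for a given per-port queue count, assuming
# 8 Tx/Rx queues per VF (one queue set per VF): ceil(q / 8) - 1.
secondary_vfs_needed() {
    q=$1
    echo $(( (q + 7) / 8 - 1 ))
}

secondary_vfs_needed 8    # -> 0 (the primary VF alone is enough)
secondary_vfs_needed 24   # -> 2
secondary_vfs_needed 96   # -> 11 (12 queue sets total, the per-port maximum)
```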
Secondary VFs cannot be shared between primary VFs.
Primary VFs are present at the beginning of the 'Network devices using kernel
driver' list; secondary VFs make up the remaining part of the list.
.. note::

   The VNIC driver in the multiqueue setup works differently than other drivers like `ixgbe`.
   We need to bind each specific queue set device separately with the ``usertools/dpdk-devbind.py`` utility.
.. note::

   Depending on the hardware used, the kernel driver sets a threshold ``vf_id``. VFs that try to attach with an id below or equal to
   this boundary are considered primary VFs. VFs that try to attach with an id above this boundary are considered secondary VFs.
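The boundary rule can be sketched as a trivial check. Note that ``THRESHOLD`` below is only an illustrative placeholder; the real value depends on the hardware and kernel driver in use:

```shell
# Classify a VF by its id against the kernel driver's threshold.
# THRESHOLD is a made-up placeholder, not a real driver constant.
THRESHOLD=12

vf_kind() {
    if [ "$1" -le "$THRESHOLD" ]; then echo primary; else echo secondary; fi
}

vf_kind 3    # -> primary
vf_kind 20   # -> secondary
```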
Example device binding
~~~~~~~~~~~~~~~~~~~~~~

If a system has three interfaces, a total of 18 VF devices will be created
on a non-NUMA machine.

.. note::

   NUMA systems have 12 VFs per port and non-NUMA 6 VFs per port.
.. code-block:: console

   # usertools/dpdk-devbind.py --status

   Network devices using DPDK-compatible driver
   ============================================

   Network devices using kernel driver
   ===================================
   0000:01:10.0 'Device a026' if= drv=thunder-BGX unused=vfio-pci,uio_pci_generic
   0000:01:10.1 'Device a026' if= drv=thunder-BGX unused=vfio-pci,uio_pci_generic
   0002:01:00.0 'Device a01e' if= drv=thunder-nic unused=vfio-pci,uio_pci_generic
   0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.3 'Device 0011' if=eth2 drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.4 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.5 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.6 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:00.7 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.0 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.1 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.2 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.3 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.4 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.5 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.6 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:01.7 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:02.0 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:02.1 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic
   0002:01:02.2 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci,uio_pci_generic

   Other network devices
   =====================
   0002:00:03.0 'Device a01f' unused=vfio-pci,uio_pci_generic
We want to bind two physical interfaces with 24 queues each, so we attach two primary VFs
and four secondary VFs. In our example we choose two 10G interfaces, eth1 (0002:01:00.2) and eth2 (0002:01:00.3).
We will choose four secondary queue sets from the end of the list (0002:01:01.7-0002:01:02.2).
#. Bind two primary VFs to the ``vfio-pci`` driver:

   .. code-block:: console

      usertools/dpdk-devbind.py -b vfio-pci 0002:01:00.2
      usertools/dpdk-devbind.py -b vfio-pci 0002:01:00.3
#. Bind four secondary VFs to the ``vfio-pci`` driver:

   .. code-block:: console

      usertools/dpdk-devbind.py -b vfio-pci 0002:01:01.7
      usertools/dpdk-devbind.py -b vfio-pci 0002:01:02.0
      usertools/dpdk-devbind.py -b vfio-pci 0002:01:02.1
      usertools/dpdk-devbind.py -b vfio-pci 0002:01:02.2
The nicvf thunderx driver will use the attached secondary VFs automatically during the interface configuration stage.
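Picking candidate secondary queue sets from the ``--status`` listing can be scripted: unused VFs are the ``thunder-nicvf`` lines whose ``if=`` field is empty. The sketch below filters an embedded sample of the listing so it runs stand-alone; on a real system, pipe the output of ``usertools/dpdk-devbind.py --status`` instead:

```shell
# Select thunder-nicvf devices with no kernel interface name (if= empty).
# The here-doc is a captured sample; replace it with the real command output.
candidates=$(awk '/drv=thunder-nicvf/ && / if= / { print $1 }' <<'EOF'
0002:01:00.1 'Device 0011' if=eth0 drv=thunder-nicvf unused=vfio-pci
0002:01:00.2 'Device 0011' if=eth1 drv=thunder-nicvf unused=vfio-pci
0002:01:01.7 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci
0002:01:02.0 'Device 0011' if= drv=thunder-nicvf unused=vfio-pci
EOF
)
echo "$candidates"

# Bind each candidate (needs root and real hardware, so left commented out):
# for bdf in $candidates; do usertools/dpdk-devbind.py -b vfio-pci "$bdf"; done
```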
Limitations
-----------

CRC stripping
~~~~~~~~~~~~~

The ThunderX SoC family NICs strip the CRC for every packet coming into the
host interface irrespective of the offload configuration.
Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The ThunderX SoC family NICs support a maximum of a 9K jumbo frame. The value
is fixed and cannot be changed. So, even when the ``rxmode.max_rx_pkt_len``
member of ``struct rte_eth_conf`` is set to a value lower than 9200, frames
up to 9200 bytes can still reach the host interface.
Maximum packet segments
~~~~~~~~~~~~~~~~~~~~~~~

The ThunderX SoC family NICs support up to 12 segments per packet when working
in scatter/gather mode. So, setting the MTU will result in ``EINVAL`` when the
frame size does not fit in the maximum number of segments.