.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2019 Intel Corporation

Intel(R) FPGA 5GNR FEC Poll Mode Driver
=======================================

The BBDEV FPGA 5GNR FEC poll mode driver (PMD) supports an FPGA implementation of a VRAN
LDPC Encode / Decode 5GNR wireless acceleration function, using Intel's PCIe and FPGA
based Vista Creek device.

Features
--------

FPGA 5GNR FEC PMD supports the following features:

- LDPC Encode in the DL
- LDPC Decode in the UL
- 8 VFs per PF (physical device)
- Maximum of 32 UL queues per VF
- Maximum of 32 DL queues per VF
- PCIe Gen-3 x8 Interface

FPGA 5GNR FEC PMD supports the following BBDEV capabilities:

* For the LDPC encode operation:

   - ``RTE_BBDEV_LDPC_CRC_24B_ATTACH`` : set to attach CRC24B to CB(s)
   - ``RTE_BBDEV_LDPC_RATE_MATCH`` : if set then do not do Rate Match bypass

* For the LDPC decode operation:

   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK`` : check CRC24B from CB(s)
   - ``RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE`` : disable early termination
   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP`` : drops CRC24B bits appended while decoding
   - ``RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE`` : provides an input for HARQ combining
   - ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE`` : provides an output for HARQ combining
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE`` : HARQ memory input is internal
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE`` : HARQ memory output is internal
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK`` : loopback data to/from HARQ memory
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS`` : HARQ memory includes the filler bits

Limitations
-----------

FPGA 5GNR FEC does not support the following:

- Scatter-Gather function

Installation
------------

Section 3 of the DPDK manual provides instructions on installing and compiling DPDK. The
default set of bbdev compile flags may be found in config/common_base, where for example
the flag to build the FPGA 5GNR FEC device, ``CONFIG_RTE_LIBRTE_PMD_BBDEV_FPGA_5GNR_FEC``,
is already set. It is assumed DPDK has been compiled using for instance:

.. code-block:: console

  make install T=x86_64-native-linuxapp-gcc

DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual.
The bbdev test application has been tested with a configuration of 40 x 1GB hugepages. The
hugepage configuration of a server may be examined using:

.. code-block:: console

  grep Huge* /proc/meminfo
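
The 40 x 1GB figure above corresponds to the ``HugePages_Total`` and ``Hugepagesize``
fields of ``/proc/meminfo``. As a rough sketch (the ``hugepage_total_kb`` helper below is
illustrative, not part of DPDK), the total reserved hugepage memory can be derived from
that output:

```shell
# Illustrative helper: total hugepage memory in kB from meminfo-style input.
# Reads "HugePages_Total" (page count) and "Hugepagesize" (kB per page).
hugepage_total_kb() {
    awk '/^HugePages_Total:/ {n = $2}
         /^Hugepagesize:/    {sz = $2}
         END                 {print n * sz}'
}

# Example: 40 x 1GB hugepages, as used for the bbdev test application.
printf 'HugePages_Total: 40\nHugepagesize: 1048576 kB\n' | hugepage_total_kb
```

For the 40 x 1GB configuration this yields 41943040 kB, i.e. 40GB reserved.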

Initialization
--------------

When the device first powers up, its PCI Physical Functions (PF) can be listed through this command:

.. code-block:: console

  sudo lspci -vd8086:0d8f

The physical and virtual functions are compatible with Linux UIO drivers:
``vfio`` and ``igb_uio``. However, the FPGA 5GNR FEC device first needs to
be bound to one of these Linux drivers through DPDK.

Bind PF UIO driver(s)
~~~~~~~~~~~~~~~~~~~~~

Install the DPDK igb_uio driver, bind it with the PF PCI device ID and use
``lspci`` to confirm the PF device is under use by ``igb_uio`` DPDK UIO driver.

The igb_uio driver may be bound to the PF PCI device using one of three methods:

1. PCI functions (physical or virtual, depending on the use case) can be bound to
   the UIO driver by repeating this command for every function.

   .. code-block:: console

      cd <dpdk-top-level-directory>
      insmod ./build/kmod/igb_uio.ko
      echo "8086 0d8f" > /sys/bus/pci/drivers/igb_uio/new_id

2. Another way to bind PF with DPDK UIO driver is by using the ``dpdk-devbind.py`` tool

   .. code-block:: console

      cd <dpdk-top-level-directory>
      ./usertools/dpdk-devbind.py -b igb_uio 0000:06:00.0

   where the PCI device ID (example: 0000:06:00.0) is obtained using ``lspci -vd8086:0d8f``

3. A third way to bind is to use ``dpdk-setup.sh`` tool

   .. code-block:: console

      cd <dpdk-top-level-directory>
      ./usertools/dpdk-setup.sh

   select 'Bind Ethernet/Crypto/Baseband device to IGB UIO module'
   or
   select 'Bind Ethernet/Crypto/Baseband device to VFIO module' depending on driver required
   then
   select 'Display current Ethernet/Crypto/Baseband device settings' to confirm binding

In the same way the FPGA 5GNR FEC PF can be bound with vfio, but vfio driver does not
support SR-IOV configuration right out of the box, so it will need to be patched.

Enable Virtual Functions
~~~~~~~~~~~~~~~~~~~~~~~~

Now, it should be visible in the printouts that PCI PF is under igb_uio control
"``Kernel driver in use: igb_uio``"
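
This check can also be scripted. The snippet below is a sketch: the ``driver_in_use``
helper and the sample text (mimicking ``lspci -v`` output for the device) are
illustrative, not part of DPDK:

```shell
# Illustrative check: extract the "Kernel driver in use" field from
# lspci -v style output, so a script can confirm the binding.
driver_in_use() {
    sed -n 's/^[[:space:]]*Kernel driver in use: //p'
}

# Sample output as it would appear once the PF is bound to igb_uio.
sample='06:00.0 Processing accelerators: Intel Corporation Device 0d8f
	Subsystem: Intel Corporation Device 0001
	Kernel driver in use: igb_uio'

printf '%s\n' "$sample" | driver_in_use
```

For the sample above this prints ``igb_uio``, confirming the PF is under UIO control.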

To show the number of available VFs on the device, read ``sriov_totalvfs`` file:

.. code-block:: console

  cat /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_totalvfs

where 0000\:<b>\:<d>.<f> is the PCI device ID

To enable VFs via igb_uio, echo the number of virtual functions intended to
enable to ``max_vfs`` file:

.. code-block:: console

  echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/max_vfs

Afterwards, all VFs must be bound to appropriate UIO drivers as required, same
way it was done with the physical function previously.
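
Each enabled VF appears as a ``virtfn<N>`` symlink under the PF's sysfs directory. The
sketch below resolves those links to VF PCI addresses; the ``list_vfs`` helper and the
mock sysfs tree are illustrative, not part of DPDK:

```shell
# Illustrative helper: print the PCI address of every VF under a PF by
# resolving the virtfn* symlinks in its sysfs directory.
list_vfs() {
    pf_dir=$1
    for link in "$pf_dir"/virtfn*; do
        [ -e "$link" ] || continue
        basename "$(readlink -f "$link")"
    done
}

# Demonstrate against a mock sysfs layout (no hardware needed).
root=$(mktemp -d)
mkdir -p "$root/0000:06:00.0" "$root/0000:06:00.1" "$root/0000:06:00.2"
ln -s "$root/0000:06:00.1" "$root/0000:06:00.0/virtfn0"
ln -s "$root/0000:06:00.2" "$root/0000:06:00.0/virtfn1"

list_vfs "$root/0000:06:00.0"   # one VF PCI address per line
```

Each printed address can then be bound in turn, e.g. with
``./usertools/dpdk-devbind.py -b igb_uio <address>``.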

Enabling SR-IOV via vfio driver is pretty much the same, except that the number of
virtual functions is written to the ``sriov_numvfs`` file instead:

.. code-block:: console

  echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_numvfs

Configure the VFs through PF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The PCI virtual functions must be configured before working or getting assigned
to VMs/Containers. The configuration involves allocating the number of hardware
queues, priorities, load balance, bandwidth and other settings necessary for the
device to perform FEC functions.

This configuration needs to be executed at least once after reboot or PCI FLR and can
be achieved by using the function ``fpga_5gnr_fec_configure()``, which sets up the
parameters defined in ``fpga_5gnr_fec_conf`` structure:

.. code-block:: c

  struct fpga_5gnr_fec_conf {
      bool pf_mode_en;
      uint8_t vf_ul_queues_number[FPGA_5GNR_FEC_NUM_VFS];
      uint8_t vf_dl_queues_number[FPGA_5GNR_FEC_NUM_VFS];
      uint8_t ul_bandwidth;
      uint8_t dl_bandwidth;
      uint8_t ul_load_balance;
      uint8_t dl_load_balance;
      uint16_t flr_time_out;
  };

- ``pf_mode_en``: identifies whether only PF is to be used, or the VFs. PF and
  VFs are mutually exclusive and cannot run simultaneously.
  Set to 1 for PF mode enabled.
  If PF mode is enabled all queues available in the device are assigned
  exclusively to PF and 0 queues given to VFs.

- ``vf_*l_queues_number``: defines the hardware queue mapping for every VF.

- ``*l_bandwidth``: in case of congestion on PCIe interface. The device
  allocates different bandwidth to UL and DL. The weight is configured by this
  setting. The unit of weight is 3 code blocks. For example, if the code block
  cbps (code block per second) ratio between UL and DL is 12:1, then the
  configuration value should be set to 36:3. The schedule algorithm is based
  on code block regardless of the length of each block.

- ``*l_load_balance``: hardware queues are load-balanced in a round-robin
  fashion. Queues get filled first-in first-out until they reach a pre-defined
  watermark level; if exceeded, they won't get assigned new code blocks.
  This watermark is defined by this setting.

  If all hardware queues exceed the watermark, no code blocks will be
  streamed in from UL/DL code block FIFO.

- ``flr_time_out``: specifies the FLR timeout in units of 16.384us, i.e.
  time_out = flr_time_out x 16.384us. For instance, to set a 10ms FLR timeout,
  set this parameter to 0x262 (610).

An example configuration code calling the function ``fpga_5gnr_fec_configure()`` is shown
below:

.. code-block:: c

  struct fpga_5gnr_fec_conf conf;
  unsigned int i;

  memset(&conf, 0, sizeof(struct fpga_5gnr_fec_conf));

  for (i = 0; i < FPGA_5GNR_FEC_NUM_VFS; ++i) {
      conf.vf_ul_queues_number[i] = 4;
      conf.vf_dl_queues_number[i] = 4;
  }
  conf.ul_bandwidth = 12;
  conf.dl_bandwidth = 5;
  conf.dl_load_balance = 64;
  conf.ul_load_balance = 64;

  ret = fpga_5gnr_fec_configure(info->dev_name, &conf);
  TEST_ASSERT_SUCCESS(ret,
      "Failed to configure 5GNR FPGA PF for bbdev %s",
      info->dev_name);

Test Application
----------------

BBDEV provides a test application, ``test-bbdev.py`` and a range of test data for testing
the functionality of FPGA 5GNR FEC encode and decode, depending on the device's
capabilities. The test application is located under app->test-bbdev folder and has the
following options:

.. code-block:: console

  "-p", "--testapp-path": specifies path to the bbdev test app.
  "-e", "--eal-params"  : EAL arguments which are passed to the test app.
  "-t", "--timeout"     : Timeout in seconds (default=300).
  "-c", "--test-cases"  : Defines test cases to run. Run all if not specified.
  "-v", "--test-vector" : Test vector path (default=dpdk_path+/app/test-bbdev/test_vectors/bbdev_null.data).
  "-n", "--num-ops"     : Number of operations to process on device (default=32).
  "-b", "--burst-size"  : Operations enqueue/dequeue burst size (default=32).
  "-l", "--num-lcores"  : Number of lcores to run (default=16).
  "-i", "--init-device" : Initialise PF device with default values.

To execute the test application tool using simple decode or encode data,
type one of the following:

.. code-block:: console

  ./test-bbdev.py -c validation -n 64 -b 1 -v ./ldpc_dec_default.data
  ./test-bbdev.py -c validation -n 64 -b 1 -v ./ldpc_enc_default.data
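
When several vectors need to be exercised, the two commands above generalise to a loop.
The ``run_vectors`` wrapper below is a convenience sketch, not part of bbdev; with
``DRY_RUN=1`` it prints the commands instead of executing them, so the logic can be
tried without hardware:

```shell
# Illustrative wrapper: run test-bbdev.py over a list of test vectors.
# With DRY_RUN=1 the commands are printed rather than executed.
run_vectors() {
    for vec in "$@"; do
        cmd="./test-bbdev.py -c validation -n 64 -b 1 -v $vec"
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "$cmd"
        else
            $cmd || return 1
        fi
    done
}

# Dry run: show the commands for the two default LDPC vectors.
DRY_RUN=1 run_vectors ./ldpc_dec_default.data ./ldpc_enc_default.data
```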

The test application ``test-bbdev.py`` supports configuring the PF device with
a default set of values, if the ``-i`` or ``--init-device`` option is included. The default values
are defined in test_bbdev_perf.c as:

- VF_UL_QUEUE_VALUE 4
- VF_DL_QUEUE_VALUE 4
- UL_LOAD_BALANCE 128
- DL_LOAD_BALANCE 128

Test Vectors
~~~~~~~~~~~~

In addition to the simple LDPC decoder and LDPC encoder tests, bbdev also provides
a range of additional tests under the test_vectors folder, which may be useful. The results
of these tests will depend on the FPGA 5GNR FEC capabilities.