.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2019 Netcope Technologies
NFB poll mode driver library
============================
The NFB poll mode driver library implements support for the Netcope
FPGA Boards (**NFB-40G2, NFB-100G2, NFB-200G2QL**) and the Silicom **FB2CGG3**
card, which are FPGA-based programmable NICs. The NFB PMD uses the interface
provided by the libnfb library to communicate with these cards over the nfb
layer.
More information about the
`NFB cards <https://www.liberouter.org/technologies/cards/>`_
and the technology used
(`Network Development Kit <https://www.liberouter.org/ndk/>`_)
can be found on the `Liberouter website <http://www.liberouter.org/>`_.
Currently the driver is supported only on the x86_64 architecture.
Only x86_64 versions of the external libraries are provided.
Prerequisites
-------------

This PMD requires kernel modules which are responsible for initialization and
allocation of resources needed for the nfb layer to function.
Communication between the PMD and the kernel modules is mediated by the libnfb
library. These kernel modules and the library are not part of DPDK and must be
installed separately:
*  **libnfb library**

   The library provides an API for initialization of nfb transfers and for
   receiving and transmitting data segments.
*  **Kernel modules**

   The kernel modules manage initialization of the hardware, and allocation
   and sharing of resources for user space applications.
Dependencies can be found here:
`Netcope common <https://github.com/CESNET/ndk-sw>`_.
Versions of the packages
~~~~~~~~~~~~~~~~~~~~~~~~

The minimum version of the provided packages:
Timestamps
----------

The PMD supports hardware timestamps of frame receipt on the physical network
interface. In order to use the timestamps, the hardware timestamping unit must
be enabled (follow the documentation of the NFB products). The standard
``RTE_ETH_RX_OFFLOAD_TIMESTAMP`` flag can be used for this feature.
When the timestamps are enabled, a timestamp validity flag is set in the mbufs
containing received frames, and the timestamp is inserted into the ``rte_mbuf``
struct.
The timestamp is a ``uint64_t`` field. Its lower 32 bits represent the
*seconds* portion of the timestamp (the number of seconds elapsed since
1.1.1970 00:00:00 UTC) and its upper 32 bits represent the *nanoseconds*
portion (the number of nanoseconds elapsed since the beginning of the second
in the *seconds* portion).
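As a quick illustration of this layout, the two portions can be unpacked from
the raw 64-bit value with plain shifts and masks. The helper names below are
hypothetical, and how the raw value is retrieved from the mbuf depends on the
DPDK version; only the bit layout is taken from the description above.

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Lower 32 bits: seconds portion of the timestamp. */
   static uint32_t ts_seconds(uint64_t ts)
   {
       return (uint32_t)(ts & 0xffffffffu);
   }

   /* Upper 32 bits: nanoseconds portion of the timestamp. */
   static uint32_t ts_nanoseconds(uint64_t ts)
   {
       return (uint32_t)(ts >> 32);
   }

   int main(void)
   {
       /* Example raw value: 500000000 ns into second 1700000000. */
       uint64_t ts = ((uint64_t)500000000u << 32) | 1700000000u;

       printf("%u.%09u\n", ts_seconds(ts), ts_nanoseconds(ts));
       return 0;
   }

This prints ``1700000000.500000000``.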
Using the NFB PMD
-----------------

Kernel modules have to be loaded before running the DPDK application.
NFB card architecture
---------------------

The NFB cards are multi-port, multi-queue cards, where (generally) data from
any Ethernet port may be sent to any queue.
They are represented in DPDK as a single port.
The NFB-200G2QL card employs an add-on cable which allows it to be connected
to two physical PCI-E slots at the same time (see the diagram below).
This is done to allow 200 Gbps of traffic to be transferred through the PCI-E
bus (note that a single PCI-E 3.0 x16 slot provides only 125 Gbps of
theoretical bandwidth).
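The 125 Gbps figure can be sanity-checked from the PCI-E 3.0 link parameters
(8 GT/s per lane, 128b/130b encoding, 16 lanes); the short program below
reproduces the approximate number, ignoring protocol overhead.

.. code-block:: c

   #include <stdio.h>

   int main(void)
   {
       /* PCI-E 3.0: 8 GT/s per lane with 128b/130b encoding, x16 slot.
        * TLP/DLLP protocol overhead is ignored, so the real usable figure
        * is slightly lower -- roughly the 125 Gbps quoted above. */
       double per_lane_gbps = 8.0 * 128.0 / 130.0; /* ~7.88 Gbps per lane */
       double slot_gbps = 16.0 * per_lane_gbps;

       printf("%.1f Gbps\n", slot_gbps);
       return 0;
   }

This prints ``126.0 Gbps``, in line with the ~125 Gbps figure once protocol
overhead is accounted for.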
Although each slot may be connected to a different CPU, and therefore to a
different NUMA node, the card is represented as a single port in DPDK. To work
with data from the individual queues on the correct NUMA node, check the NUMA
node of the first and the last queue (each NUMA node serves half of the
queues).
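Under the half/half split described above, the queue-to-node mapping can be
sketched as follows. The helper is purely illustrative (it is not part of the
PMD API), and the NUMA nodes of the two PCI-E endpoints are assumed to be
known, e.g. from sysfs.

.. code-block:: c

   #include <assert.h>

   /* Illustrative assumption: the first half of the queues belongs to the
    * endpoint on node0, the second half to the endpoint on node1. */
   static int queue_numa_node(unsigned int queue, unsigned int nb_queues,
                              int node0, int node1)
   {
       return (queue < nb_queues / 2) ? node0 : node1;
   }

   int main(void)
   {
       /* 8 queues, PCI-E endpoints on NUMA nodes 0 and 1. */
       assert(queue_numa_node(0, 8, 0, 1) == 0);
       assert(queue_numa_node(3, 8, 0, 1) == 0);
       assert(queue_numa_node(4, 8, 0, 1) == 1);
       assert(queue_numa_node(7, 8, 0, 1) == 1);
       return 0;
   }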
Limitations
-----------

The driver is usable only on Linux, namely on CentOS.
Since a card is always represented as a single port, but can be connected to
two NUMA nodes, a manual check of where the master and slave endpoints are
connected is needed.
Example of usage
----------------

Read packets from the 0th and 1st receive queues and write them to the 0th and
1st transmit queues:
.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 2 \
      -- --port-topology=chained --rxq=2 --txq=2 --nb-cores=2 -i -a
Example output:

.. code-block:: console

   EAL: PCI device 0000:06:00.0 on NUMA socket -1
   EAL:   probe driver: 1b26:c1c1 net_nfb
   PMD: Initializing NFB device (0000:06:00.0)
   PMD: Available DMA queues RX: 8 TX: 8
   PMD: NFB device (0000:06:00.0) successfully initialized
   Interactive-mode selected

   Configuring Port 0 (socket 0)
   Port 0: 00:11:17:00:00:00
   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex

   Start automatic packet forwarding
     io packet forwarding - CRC stripping disabled - packets/burst=32
     nb forwarding cores=2 - nb forwarding ports=1
     RX queues=2 - RX desc=128 - RX free threshold=0
     RX threshold registers: pthresh=0 hthresh=0 wthresh=0
     TX queues=2 - TX desc=512 - TX free threshold=0
     TX threshold registers: pthresh=0 hthresh=0 wthresh=0
     TX RS bit threshold=0 - TXQ flags=0x0