..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Intel Corporation.

The i40e PMD (librte_pmd_i40e) provides poll mode driver support for
10/25/40 Gbps Intel® Ethernet 700 Series Network Adapters based on
the Intel Ethernet Controller X710/XL710/XXV710 and the Intel Ethernet
Connection X722 (which supports only a subset of the features below).

Features
--------

Features of the i40e PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- VLAN/QinQ stripping and inserting
- Port hardware statistics
- Link state information
- Mirror on port, VLAN and VSI
- Interrupt mode for RX
- Scatter and gather for TX and RX
- Vector Poll mode driver
- IEEE1588/802.1AS timestamping
- VF Daemon (VFD) - EXPERIMENTAL
- Dynamic Device Personalization (DDP)
- Queue region configuration
- Virtual Function Port Representors

Prerequisites
-------------

- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
  <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ and the `Intel® Ethernet NVM Update Tool: Quick Usage Guide for EFI <https://www.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-efi-usage-guide.html>`_ if needed.
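
  The firmware version currently running on an adapter can be checked through
  the kernel driver before deciding whether an update is needed. A minimal
  example, assuming the kernel has named the interface ``ens802f0``::

      ethtool -i ens802f0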

Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_I40E_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_i40e`` driver.

- ``CONFIG_RTE_LIBRTE_I40E_DEBUG_*`` (default ``n``)

  Toggle display of generic debugging messages.

- ``CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC`` (default ``y``)

  Toggle bulk allocation for RX.

- ``CONFIG_RTE_LIBRTE_I40E_INC_VECTOR`` (default ``n``)

  Toggle the use of the Vector PMD instead of the normal RX/TX path.
  To enable the vector PMD for RX, bulk allocation for RX must be allowed.

- ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` (default ``n``)

  Toggle the use of 16-byte RX descriptors; by default 32-byte RX descriptors are used.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)

  Number of queues reserved for the PF.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)

  Number of queues reserved for each VMDQ pool.

Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~

- ``Number of Queues per VF`` (default ``4``)

  The number of queues per VF is determined by its host PF. If the PCI address
  of an i40e PF is aaaa:bb.cc, the number of queues per VF can be configured
  with the EAL parameter ``-w aaaa:bb.cc,queue-num-per-vf=n``. The value n can be
  1, 2, 4, 8 or 16. If no such parameter is configured, the number of queues
  per VF is 4 by default.
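
  As an illustration, a full command line using this option (the core list and
  memory channel count below are only examples) that limits each VF of that PF
  to 8 queues could be::

      ./app/testpmd -l 0-3 -n 4 -w aaaa:bb.cc,queue-num-per-vf=8 -- -i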

- ``Support multiple driver`` (default ``disable``)

  There was a multiple driver support issue when using 700 Series Ethernet
  Adapters with both the Linux kernel driver and the DPDK PMD. To fix this issue,
  the ``devargs`` parameter ``support-multi-driver`` is introduced, for example::

    -w 84:00.0,support-multi-driver=1

  With the above configuration, the DPDK PMD will not change global registers, and
  will switch the PF interrupt from IntN to Int0 to avoid interrupt conflicts between
  DPDK and the Linux kernel.

- ``Support VF Port Representor`` (default ``not enabled``)

  The i40e PF PMD supports the creation of VF port representors for the control
  and monitoring of i40e virtual function devices. Each port representor
  corresponds to a single virtual function of that device. Using the ``devargs``
  option ``representor`` the user can specify which virtual functions to create
  port representors for on initialization of the PF PMD by passing the VF IDs of
  the VFs which are required::

    -w DBDF,representor=[0,1,4]

  Currently hot-plugging of representor ports is not supported, so all the required
  representors must be specified on the creation of the PF.
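
  For example, a hypothetical invocation that creates representors for VFs 0
  and 1 of PF ``0000:84:00.0`` (after those VFs have been created) could be::

      ./app/testpmd -l 0-3 -n 4 -w 84:00.0,representor=[0,1] -- -i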

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

SR-IOV: Prerequisites and sample Application Notes
--------------------------------------------------

#. Load the kernel module:

   .. code-block:: console

      modprobe i40e

   Check the output in dmesg:

   .. code-block:: console

      i40e 0000:83:00.1 ens802f0: renamed from eth0

#. Bring up the PF ports:

   .. code-block:: console

      ifconfig ens802f0 up

#. Create VF device(s):

   Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry
   of the parent PF, for example:

   .. code-block:: console

      echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs
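
   One way to confirm that the VFs were created is to list the newly added PCI
   functions (the exact device string depends on the adapter)::

      lspci | grep "Virtual Function"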

#. Assign VF MAC address:

   Assign a MAC address to the VF using the iproute2 utility. The syntax is:

   .. code-block:: console

      ip link set <PF netdev id> vf <VF id> mac <macaddr>

   For example:

   .. code-block:: console

      ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0
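
   The assignment can be verified by listing the VFs of the PF::

      ip link show ens802f0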

#. Assign VF to VM, and bring up the VM.
   Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.

#. Running testpmd:

   Follow the instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ...
      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL: probe driver: 8086:1572 rte_i40e_pmd
      EAL: PCI memory mapped at 0x7f7f80000000
      EAL: PCI memory mapped at 0x7f7f80800000
      PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      ...
      PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
      satisfied.Rx Burst Bulk Alloc function will be used on port=0, queue=0.
      ...
      Port 0: 68:05:CA:26:85:84
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex

Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

Vlan filter only works when Promiscuous mode is off.

To start ``testpmd``, and add vlan 10 to port 0:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i --forward-mode=mac
   ...

   testpmd> set promisc 0 off
   testpmd> rx_vlan add 10 0

Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
The Flow Director filters can match different fields depending on the packet type: flow type, a flow-type-specific input set and flexible payload.

The default input set of each flow type is::

   ipv4-other : src_ip_address, dst_ip_address
   ipv4-frag  : src_ip_address, dst_ip_address
   ipv4-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   ipv6-other : src_ip_address, dst_ip_address
   ipv6-frag  : src_ip_address, dst_ip_address
   ipv6-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   l2_payload : ether_type

The flex payload is selected from offset 0 to 15 of the packet's payload by default, while it is masked out from matching.

Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i --disable-rss --pkt-filter-mode=perfect \
                 --rxq=8 --txq=8 --nb-cores=8 --nb-ports=1

Add a rule to direct an ``ipv4-udp`` packet whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:

.. code-block:: console

   testpmd> flow_director_filter 0 mode IP add flow ipv4-udp \
            src 2.2.2.3 32 dst 2.2.2.5 32 vlan 0 flexbytes () \
            fwd pf queue 1 fd_id 1

Check the flow director status:

.. code-block:: console

   testpmd> show port fdir 0

   ######################## FDIR infos for port 0 ####################
     SUPPORTED FLOW TYPE: ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
                          ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
     max_len: 16 payload_limit: 480
     payload_unit: 2 payload_seg: 3
     bitmask_unit: 2 bitmask_num: 2
     src_ipv4: 0x00000000,
     dst_ipv4: 0x00000000,
     src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
     dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
     FLEX PAYLOAD SRC OFFSET:
     L2_PAYLOAD: 0 1 2 3 4 5 6 ...
     L3_PAYLOAD: 0 1 2 3 4 5 6 ...
     L4_PAYLOAD: 0 1 2 3 4 5 6 ...
     ipv4-udp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-tcp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-sctp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv4-frag: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-udp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-tcp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-sctp: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     ipv6-frag: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     l2_payload: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     guarant_count: 1 best_count: 0
     guarant_space: 512 best_space: 7168

Delete all flow director rules on a port:

.. code-block:: console

   testpmd> flush_flow_director 0

Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet 700 Series supports a feature called "Floating VEB".

A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
for functionality that allows local switching between virtual endpoints within
a physical endpoint and also with an external bridge/network.

A "Floating" VEB doesn't have an uplink connection to the outside world, so all
switching is done internally and remains within the host. As such, this
feature provides security benefits.

In addition, a Floating VEB overcomes a limitation of normal VEBs, which
cannot forward packets when the physical link is down. Floating VEBs don't need
to connect to the NIC port, so they can still forward traffic from VF to VF
even when the physical link is down.

Therefore, with this feature enabled, VFs can be limited to communicating with
each other but not an outside network, and they can do so even when there is
no physical uplink on the associated NIC port.

To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::

    -w 84:00.0,enable_floating_veb=1

In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.

Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::

    -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4

In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while the other VFs connect to the normal VEB.

The current implementation only supports one floating VEB and one regular
VEB. VFs can connect to a floating VEB or a regular VEB according to the
configuration passed on the EAL command line.

The floating VEB functionality requires a NIC firmware version of 5.0
or greater.

Dynamic Device Personalization (DDP)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel® Ethernet 700 Series, except for the Intel Ethernet Connection
X722, supports a feature called "Dynamic Device Personalization (DDP)",
which is used to configure hardware by downloading a profile to support
protocols/filters which are not supported by default. The DDP
functionality requires a NIC firmware version of 6.0 or greater.

The current implementation supports GTP-C/GTP-U/PPPoE/PPPoL2TP profiles;
steering for these protocols can be done with the rte_flow API
(a sketch is shown at the end of this section).

Load a profile which supports GTP and store the backup profile:

.. code-block:: console

   testpmd> ddp add 0 ./gtp.pkgo,./backup.pkgo

Delete a GTP profile and restore the backup profile:

.. code-block:: console

   testpmd> ddp del 0 ./backup.pkgo

Get the loaded DDP package info list:

.. code-block:: console

   testpmd> ddp get list 0

Display information about a GTP profile:

.. code-block:: console

   testpmd> ddp get info ./gtp.pkgo
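
As a sketch of how steering could be combined with a loaded GTP profile, a
``testpmd`` flow rule along the following lines might direct GTP-U packets with
a particular TEID to a queue. The TEID value and queue index are only examples,
and the pattern items that actually match depend on the loaded profile and the
DPDK version:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 0x12345678 / end actions queue index 3 / end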

Input set configuration
~~~~~~~~~~~~~~~~~~~~~~~

The input set for any PCTYPE can be configured with a user-defined configuration.
For example, to use only a 48-bit prefix of the IPv6 source address for IPv6 TCP RSS:

.. code-block:: console

   testpmd> port config 0 pctype 43 hash_inset clear all
   testpmd> port config 0 pctype 43 hash_inset set field 13
   testpmd> port config 0 pctype 43 hash_inset set field 14
   testpmd> port config 0 pctype 43 hash_inset set field 15

Queue region configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel® Ethernet 700 Series supports a feature of queue region
configuration for RSS in the PF, so that different traffic classes or
different packet classification types can be separated into different
queues in different queue regions. There is an API for configuring
queue regions in RSS from the command line. It can parse the parameters
of the region index, queue number, queue start index, user priority, traffic
classes and so on. Depending on commands from the command line, it will call
i40e private APIs and start the process of setting or flushing the queue
region configuration. As this feature is specific to i40e, only private
APIs are used. The new ``testpmd`` commands are shown below. For
details please refer to :doc:`../testpmd_app_ug/index`.

.. code-block:: console

   testpmd> set port (port_id) queue-region region_id (value) \
            queue_start_index (value) queue_num (value)
   testpmd> set port (port_id) queue-region region_id (value) flowtype (value)
   testpmd> set port (port_id) queue-region UP (value) region_id (value)
   testpmd> set port (port_id) queue-region flush (on|off)
   testpmd> show port (port_id) queue-region
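
For illustration, a hypothetical sequence that places queues 0-7 into region 0,
maps flow type 31 to that region, commits the configuration and then displays it
could be (the flow type value and queue range are only examples):

.. code-block:: console

   testpmd> set port 0 queue-region region_id 0 queue_start_index 0 queue_num 8
   testpmd> set port 0 queue-region region_id 0 flowtype 31
   testpmd> set port 0 queue-region flush on
   testpmd> show port 0 queue-region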

Limitations or Known issues
---------------------------

MPLS packet classification
~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like:

.. code-block:: console

   testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
            0x8847 flexbytes () fwd pf queue <N> fd_id <M>

With NIC firmware version 5.0 or greater, some limited MPLS support
is added: native MPLS (MPLS in Ethernet) skip is implemented, while no
new packet type, classification or offload is possible. With this change,
the L2 Payload flow type in flow director can no longer be used to classify MPLS
packets as with previous firmware versions. Instead, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like:

.. code-block:: console

   testpmd> ethertype_filter 0 add mac_ignr 00:00:00:00:00:00 ethertype \
            0x8847 fwd queue <M>

16 Byte RX Descriptor setting on DPDK VF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently the VF's RX descriptor mode is decided by the PF. There is no PF-VF
interface for the VF to request an RX descriptor mode, and no interface to notify
the VF of the RX descriptor mode chosen for it.
The available versions of the Linux i40e kernel driver do not support the 16 byte
RX descriptor. Therefore, if the Linux i40e kernel driver is used as the host driver
while the DPDK i40e PMD is used as the VF driver, DPDK cannot choose the 16 byte
receive descriptor: the RX descriptor is already set to 32 bytes by
the i40e kernel driver. That is to say, the user should keep
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n`` in the config file.
If, in the future, the Linux i40e driver supports the 16 byte RX descriptor, the user
should make sure the DPDK VF uses the same RX descriptor mode, 16 byte or 32
byte, as the PF driver.

The same rule applies for DPDK PF + DPDK VF: the PF and VF should use the same RX
descriptor mode, or VF RX will not work.

Receive packets with Ethertype 0x88A8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to an FW limitation, the PF can receive packets with Ethertype 0x88A8
only when the floating VEB is disabled.

Incorrect Rx statistics when packet is oversize
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a packet exceeds the maximum frame size, the packet is dropped.
However, the Rx statistics returned by ``rte_eth_stats_get`` incorrectly
show it as received.

VF & TC max bandwidth setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The per-VF max bandwidth and per-TC max bandwidth cannot be enabled in parallel,
and the behavior differs between the two settings.
When enabling per-VF max bandwidth, SW checks whether per-TC max bandwidth is
enabled. If so, it returns failure.
When enabling per-TC max bandwidth, SW checks whether per-VF max bandwidth
is enabled. If so, it disables per-VF max bandwidth and continues with the per-TC max
bandwidth setting.

TC TX scheduling mode setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 2 TX scheduling modes for TCs, round robin mode and strict priority mode.
If a TC is set to strict priority mode, it can consume unlimited bandwidth.
This means that if the application has set a max bandwidth for that TC, it has no
effect.
It is suggested to set strict priority mode for a TC that is latency
sensitive but not consuming much bandwidth.

VF performance is impacted by PCI extended tag setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To reach maximum NIC performance in the VF the PCI extended tag must be
enabled. The DPDK i40e PF driver will set this feature during initialization,
but the kernel PF driver does not. So when running traffic on a VF which is
managed by the kernel PF driver, a significant NIC performance downgrade has
been observed (for 64 byte packets, there is about a 25% line-rate downgrade for
a 25GbE device and about 35% for a 40GbE device).

For kernel version >= 4.11, the kernel's PCI driver will enable the extended
tag if it detects that the device supports it. So by default, this is not an
issue. For kernels older than 4.11, or when the PCI extended tag is otherwise
disabled, it can be enabled using the steps below.

#. Get the current value of the PCI configuration register::

      setpci -s <XX:XX.X> a8.w

#. Set bit 8::

      value = value | 0x100

#. Set the PCI configuration register with the new value::

      setpci -s <XX:XX.X> a8.w=<value>
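
As an optional check (the device address below is only an example), ``lspci``
can be used to confirm that the extended tag is reported as enabled for the
device::

   lspci -s 82:00.0 -vvv | grep -i exttag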

Vlan strip of VF
~~~~~~~~~~~~~~~~

The VF vlan strip function is only supported by the i40e kernel driver >= 2.1.26.

DCB function
~~~~~~~~~~~~

DCB works only when RSS is enabled.

Global configuration warning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The I40E PMD will set some global registers to enable certain functions or
configurations. Consequently, when using different ports of the same NIC with the
Linux kernel and DPDK, the port driven by the Linux kernel will be affected by the
port using DPDK.
For example, the register I40E_GL_SWT_L2TAGCTRL is used to control the L2 tag, and the
i40e PMD uses I40E_GL_SWT_L2TAGCTRL to set the vlan TPID. If the TPID is set on port A
with DPDK, the configuration will also affect port B on the same NIC using the
kernel driver, which may not want that TPID.
The PMD therefore reports a warning to clarify what is changed by writing a global register.

High Performance of Small Packets on 40GbE NIC
----------------------------------------------

As the latest firmware images may contain fixes that improve performance, a
firmware update may be needed to achieve high performance.
Check the Intel support website for the latest firmware updates.
Users should consult the release notes specific to a DPDK release to identify
the validated firmware version for a NIC using the i40e driver.

Use 16 Bytes RX Descriptor Size
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The i40e PMD supports both 16 and 32 byte RX descriptor sizes, and the 16 byte size
helps achieve high performance with small packets.
The ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` option in the config file can be changed
to use 16 byte RX descriptors.
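
For example, with the make-based build system the option can be switched in
``config/common_base`` before building (assuming the option is still at its
default value of ``n``)::

   sed -i 's/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n/CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=y/' config/common_base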

Example of getting best performance with l3fwd example
--------------------------------------------------------

The following is an example of running the DPDK ``l3fwd`` sample application to get high performance on a
server with Intel Xeon processors and Intel Ethernet CNA XL710 adapters.

The example scenario is to get the best performance with two Intel Ethernet CNA XL710 40GbE ports.
See :numref:`figure_intel_perf_test_setup` for the performance test setup.

.. _figure_intel_perf_test_setup:

.. figure:: img/intel_perf_test_setup.*

   Performance Test Setup


1. Add two Intel Ethernet CNA XL710 cards to the platform, and use one port per card to get the best performance.
   The reason for using two NICs is to overcome a PCIe v3.0 limitation since it cannot provide 80GbE bandwidth
   for two 40GbE ports, but two different PCIe v3.0 x8 slots can.
   Refer to the sample NICs output below; we can select ``82:00.0`` and ``85:00.0`` as the test ports::

      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]

2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.

3. Check the NUMA node (socket id) of the PCI devices and get the core numbers on that socket.
   In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, and the cores on socket 1 in the referenced platform
   are 18-35 and 54-71 (see the example commands after the note below).
   Note: Don't use 2 logical cores on the same core (e.g. core18 has 2 logical cores, core18 and core54), instead, use 2 logical
   cores from different cores (e.g. core18 and core19).
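
   The NUMA node of a PCI device and the CPU-to-socket mapping can be checked,
   for example, with::

      cat /sys/bus/pci/devices/0000:82:00.0/numa_node
      lscpu | grep "NUMA node"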

4. Bind these two ports to igb_uio, as shown in the example below.
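
   For example, assuming the ``igb_uio`` module is already loaded, the ports can be
   bound with the ``dpdk-devbind.py`` tool (path relative to the DPDK source tree)::

      ./usertools/dpdk-devbind.py --bind=igb_uio 82:00.0 85:00.0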

5. For each Intel Ethernet CNA XL710 40GbE port, we need at least two queue pairs to achieve the best performance, so two queues per port
   are required, and each queue pair needs a dedicated CPU core for receiving/transmitting packets.

6. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
   Compile the ``l3fwd`` sample with the default LPM mode.

7. The command line for running l3fwd would be something like the following::

      ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'

   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.

8. Configure the traffic at a traffic generator.

   * Start creating a stream on the packet generator.

   * Set the Ethernet II type to 0x0800.