.. **Read this first.**

   The text in the sections below explains how to update the release notes.

   Use proper spelling, capitalization, and punctuation in all sections.

   Variable and config names should be quoted as fixed width text:
   ``LIKE_THIS``.

   Build the docs and view the output file to ensure the changes are
   correct::

      make doc-guides-html

      xdg-open build/doc/html/guides/rel_notes/release_17_05.html

New Features
------------

.. This section should contain new features added in this release. Sample
   format:

   * **Add a title in the past tense with a full stop.**

     Add a short 1-2 sentence description in the past tense. The description
     should be enough to allow someone scanning the release notes to
     understand the new feature.

     If the feature adds a lot of sub-features you can use a bullet list like
     this:

     * Added feature foo to do something.
     * Enhanced feature bar to do something else.

   Refer to the previous release notes for examples.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **Reorganized mbuf structure.**

  The mbuf structure has been reorganized as follows:

  * Align fields to facilitate the writing of ``data_off``, ``refcnt``, and
    ``nb_segs`` in one operation.
  * Use 2 bytes for port and number of segments.
  * Move the sequence number to the second cache line.
  * Add a timestamp field.
  * Set default values for ``refcnt``, ``next``, and ``nb_segs`` at mbuf
    free.

* **Added mbuf raw free API.**

  Moved the ``rte_mbuf_raw_free()`` and ``rte_pktmbuf_prefree_seg()``
  functions to the public API.

* **Added free Tx mbuf on demand API.**

  Added a new function ``rte_eth_tx_done_cleanup()`` which allows an
  application to request the driver to release mbufs that are no longer in
  use from a Tx ring, independent of whether or not the ``tx_rs_thresh``
  has been crossed.

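  A hedged sketch of how an application might use the new function (the
  port/queue ids, count, and trigger condition below are illustrative):

  .. code-block:: c

     #include <rte_ethdev.h>

     /* Ask the driver to free up to 32 transmitted mbufs from Tx queue 0
      * of port 0, e.g. when an application-level mbuf pool runs low. */
     static void
     reclaim_tx_mbufs(void)
     {
         int nb_freed = rte_eth_tx_done_cleanup(0 /* port */, 0 /* queue */, 32);

         if (nb_freed < 0) {
             /* Not supported by this driver, or invalid arguments. */
         }
     }
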
* **Added device removal interrupt.**

  Added a new ethdev event ``RTE_ETH_DEV_INTR_RMV`` to signify the sudden
  removal of a device. This event can be advertised by PCI drivers and
  enabled accordingly.

* **Added EAL dynamic log framework.**

  Added new APIs to dynamically register named log types, and control the
  level of each type independently.

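  A minimal sketch of the new APIs (the log type name is illustrative):

  .. code-block:: c

     #include <rte_log.h>

     static int my_logtype;

     static void
     setup_logging(void)
     {
         /* Register a named log type, then raise its level independently
          * of all other log types. */
         my_logtype = rte_log_register("myapp.init");
         if (my_logtype >= 0)
             rte_log_set_level(my_logtype, RTE_LOG_DEBUG);

         rte_log(RTE_LOG_DEBUG, my_logtype, "dynamic logging enabled\n");
     }
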
* **Added descriptor status ethdev API.**

  Added a new API to get the status of a descriptor.

  For Rx, it is similar to the ``rx_descriptor_done`` API, except that it
  also differentiates descriptors which are held by the driver and not
  returned to the hardware. For Tx, it is a new API.

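  A hedged sketch of the Rx side (the port, queue, and offset values are
  illustrative):

  .. code-block:: c

     #include <rte_ethdev.h>

     /* Probe the descriptor 16 entries ahead of the next one the driver
      * will process on Rx queue 0 of port 0. */
     static void
     check_rx_progress(void)
     {
         int status = rte_eth_rx_descriptor_status(0, 0, 16);

         switch (status) {
         case RTE_ETH_RX_DESC_AVAIL:   /* owned by HW, awaiting a packet */
         case RTE_ETH_RX_DESC_DONE:    /* filled by HW, not yet processed */
         case RTE_ETH_RX_DESC_UNAVAIL: /* held by the driver */
             break;
         default:                      /* negative: error or unsupported */
             break;
         }
     }
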
* **Increased number of next hops for LPM IPv6 to 2^21.**

  The ``next_hop`` field has been extended from 8 bits to 21 bits for IPv6.

* **Added VFIO hotplug support.**

  Added hotplug support for VFIO in addition to the existing UIO support.

* **Added PowerPC support to PCI probing for vfio-pci devices.**

  Enabled sPAPR IOMMU based PCI probing for vfio-pci devices.

* **Kept consistent PMD batching behavior.**

  Removed the limit on the fm10k/i40e/ixgbe Tx burst size and the vhost
  Rx/Tx burst size in order to support the same "make a best effort to
  Rx/Tx packets" policy for DPDK PMDs.

* **Updated the ixgbe base driver.**

  Updated the ixgbe base driver, including the following changes:

  * Add link block check for KR.
  * Complete HW initialization even if SFP is not present.
  * Add VF xcast promiscuous mode.

* **Added PowerPC support for i40e and its vector PMD.**

  Enabled the i40e PMD and its vector PMD by default on PowerPC.

* **Added VF max bandwidth setting in i40e.**

  Enabled capability to set the max bandwidth for a VF in i40e.

* **Added VF TC min and max bandwidth setting in i40e.**

  Enabled capability to set the min and max allocated bandwidth for a TC on
  a VF in i40e.

* **Added TC strict priority mode setting on i40e.**

  There are two Tx scheduling modes supported for TCs by the i40e HW: round
  robin mode and strict priority mode. By default the round robin mode is
  used. It is now possible to change the Tx scheduling mode for a TC. This
  is a global setting on a physical port.

* **Added i40e dynamic device personalization support.**

  * Added dynamic device personalization processing to i40e firmware.

* **Added Cloud Filter for QinQ steering to i40e.**

  * Added a QinQ cloud filter on the i40e PMD, for steering traffic to a VM
    using both VLAN tags. Note that this feature is not supported in vector
    mode.

* **Updated mlx5 PMD.**

  Updated the mlx5 driver, including the following changes:

  * Added Generic flow API support for classification according to ether
    type.
  * Extended Generic flow API support for classification of IPv6 flow
    according to Vtc flow, Protocol and Hop limit.
  * Added Generic flow API support for the FLAG action.
  * Added Generic flow API support for the RSS action.
  * Added support for TSO for non-tunneled and VXLAN packets.
  * Added support for hardware Tx checksum offloads for VXLAN packets.
  * Added support for user space Rx interrupt mode.
  * Improved ConnectX-5 single core and maximum performance.

* **Updated mlx4 PMD.**

  Updated the mlx4 driver, including the following changes:

  * Added support for Generic flow API basic flow items and actions.
  * Added support for device removal event.

* **Updated the sfc_efx driver.**

  * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and
    TCP pattern items with QUEUE action for ingress traffic.

  * Added support for virtual functions (VFs).

* **Added LiquidIO network PMD.**

  Added poll mode driver support for Cavium LiquidIO II server adapter VFs.

* **Added Atomic Rules Arkville PMD.**

  Added a new poll mode driver for the Arkville family of devices from
  Atomic Rules. The net/ark PMD supports line-rate agnostic, multi-queue
  data movement on Arkville core FPGA instances.

* **Added support for NXP DPAA2 - FSLMC bus.**

  Added the new bus "fslmc" driver for NXP DPAA2 devices. See the
  "Network Interface Controller Drivers" document for more details of this
  new driver.

* **Added support for NXP DPAA2 Network PMD.**

  Added the new "dpaa2" net driver for NXP DPAA2 devices. See the
  "Network Interface Controller Drivers" document for more details of this
  new driver.

* **Added support for the Wind River Systems AVP PMD.**

  Added a new networking driver for the AVP device type. These devices are
  specific to the Wind River Systems virtualization platforms.

* **Added vmxnet3 version 3 support.**

  Added support for vmxnet3 version 3, which includes several performance
  enhancements such as a configurable Tx data ring, a Rx data ring, and the
  ability to register memory regions.

* **Updated the TAP driver.**

  Updated the TAP PMD to:

  * Support MTU modification.
  * Support packet type for Rx.
  * Support segmented packets on Rx and Tx.
  * Speed up Rx on TAP when no packets are available.
  * Support capturing traffic from another netdevice.
  * Dynamically change link status when the underlying interface state
    changes.
  * Support the Generic Flow API for Ethernet, VLAN, IPv4, IPv6, UDP and
    TCP pattern items with DROP, QUEUE and PASSTHRU actions for ingress
    traffic.

* **Added MTU feature support to Virtio and Vhost.**

  Implemented the new Virtio MTU feature in Vhost and Virtio:

  * Added the ``rte_vhost_mtu_get()`` API to the Vhost library.
  * Enabled the Vhost PMD's MTU get feature.
  * The Virtio PMD now gets the max MTU value from the host.

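  A hedged sketch of the new Vhost API (the calling context is
  illustrative):

  .. code-block:: c

     #include <stdio.h>
     #include <rte_vhost.h>

     /* Query the MTU negotiated for a connected vhost device id (vid). */
     static void
     print_mtu(int vid)
     {
         uint16_t mtu;

         if (rte_vhost_mtu_get(vid, &mtu) == 0)
             printf("vhost device %d MTU: %u\n", vid, mtu);
     }
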
* **Added interrupt mode support for virtio-user.**

  Implemented Rxq interrupt mode and LSC support for virtio-user as a
  virtual device. Supported cases:

  * Rxq interrupt for virtio-user + vhost-user as the backend.
  * Rxq interrupt for virtio-user + vhost-kernel as the backend.
  * LSC interrupt for virtio-user + vhost-user as the backend.

* **Added event driven programming model library (rte_eventdev).**

  This API introduces an event driven programming model.

  In a polling model, lcores poll ethdev ports and associated Rx queues
  directly to look for a packet. By contrast, in an event driven model,
  lcores call a scheduler that selects packets for them based on
  programmer-specified criteria. The eventdev library adds support for this
  model, which offers applications automatic multicore scaling, dynamic
  load balancing, pipelining, packet ingress order maintenance and
  synchronization services to simplify application packet processing.

  With the introduction of the event driven programming model, DPDK
  supports both polling and event driven models for packet processing, and
  applications are free to choose whichever model (or combination of the
  two) best suits their needs.

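  A hedged sketch of a worker loop under this model (the device/port ids
  and pipeline step are illustrative; device and port setup are omitted):

  .. code-block:: c

     #include <rte_eventdev.h>

     static int
     worker(void *arg)
     {
         const uint8_t dev_id = 0;  /* assumed eventdev id */
         const uint8_t port_id = 0; /* assumed event port of this lcore */
         struct rte_event ev;

         (void)arg;

         for (;;) {
             /* Let the scheduler pick the next event for this lcore. */
             if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0)
                 continue;

             /* ... process ev.mbuf ... */

             /* Forward the event to the next stage of the pipeline. */
             ev.queue_id++;
             ev.op = RTE_EVENT_OP_FORWARD;
             rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
         }
         return 0;
     }
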
* **Added Software Eventdev PMD.**

  Added support for the software eventdev PMD. The software eventdev is a
  software based scheduler device that implements the eventdev API. This
  PMD allows an application to configure a pipeline using the eventdev
  library, and run the scheduling workload on a CPU core.

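  For example, the device can be instantiated at startup with an EAL
  ``--vdev`` argument (the application name below is illustrative):

  .. code-block:: console

     ./my_app --vdev=event_sw0
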
* **Added Cavium OCTEONTX Eventdev PMD.**

  Added the new octeontx ssovf eventdev driver for OCTEONTX devices. See
  the "Event Device Drivers" document for more details on this new driver.

* **Added information metrics library.**

  Added a library that allows information metrics to be added and updated
  by producers, typically other libraries, for later retrieval by
  consumers such as applications. It is intended to provide a reporting
  mechanism that is independent of other libraries such as ethdev.

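  A hedged producer-side sketch (the metric name and value are
  illustrative):

  .. code-block:: c

     #include <rte_lcore.h>
     #include <rte_metrics.h>

     static void
     register_and_update_metric(void)
     {
         int key;

         /* One-time setup of the metrics storage. */
         rte_metrics_init(rte_socket_id());

         /* Register a metric name, then publish a non-port-specific
          * value under the returned key. */
         key = rte_metrics_reg_name("my_counter");
         if (key >= 0)
             rte_metrics_update_value(RTE_METRICS_GLOBAL, key, 42);
     }
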
* **Added bit-rate calculation library.**

  Added a library that can be used to calculate device bit-rates. The
  calculated bit-rates are reported using the metrics library.

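  A hedged sketch of the intended flow (the polled port id is
  illustrative, and ``rte_metrics_init()`` is assumed to have been called
  already):

  .. code-block:: c

     #include <rte_bitrate.h>

     static void
     poll_bitrates(void)
     {
         struct rte_stats_bitrates *stats;

         stats = rte_stats_bitrate_create();

         /* Register the bit-rate metric names with the metrics library. */
         rte_stats_bitrate_reg(stats);

         /* Call periodically, e.g. from a timer, for each port of
          * interest; results are pushed to the metrics library. */
         rte_stats_bitrate_calc(stats, 0 /* port id */);
     }
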
* **Added latency stats library.**

  Added a library that measures packet latency. The collected statistics
  are jitter and latency. For latency, the minimum, average, and maximum
  values are reported.

* **Added NXP DPAA2 SEC crypto PMD.**

  A new "dpaa2_sec" hardware based crypto PMD for NXP DPAA2 devices has
  been added. See the "Crypto Device Drivers" document for more details on
  this new driver.

* **Updated the Cryptodev Scheduler PMD.**

  * Added a packet-size based distribution mode, which distributes the
    enqueued crypto operations between two slaves, based on their data
    lengths.
  * Added a fail-over scheduling mode, which enqueues crypto operations to
    a primary slave first. Any operation that cannot be enqueued there is
    then enqueued to a secondary slave.
  * Added mode specific option support, so that each scheduling mode can
    now be configured individually by the new API.

* **Updated the QAT PMD.**

  The QAT PMD has been updated with additional support for:

  * AES DOCSIS BPI algorithm.
  * DES DOCSIS BPI algorithm.
  * ZUC EEA3/EIA3 algorithms.

* **Updated the AESNI MB PMD.**

  The AESNI MB PMD has been updated with additional support for:

  * AES DOCSIS BPI algorithm.

* **Updated the OpenSSL PMD.**

  The OpenSSL PMD has been updated with additional support for:

  * DES DOCSIS BPI algorithm.

Resolved Issues
---------------

.. This section should contain bug fixes added to the relevant
   sections. Sample format:

   * **code/section: Fixed issue in the past tense with a full stop.**

     Add a short 1-2 sentence description of the resolved issue in the past
     tense.

     The title should contain the code/lib section like a commit message.

     Add the entries in alphabetic order in the relevant sections below.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **l2fwd-keepalive: Fixed unclean shutdowns.**

  Added clean shutdown to l2fwd-keepalive so that it can free up stale
  resources used for inter-process communication.

Known Issues
------------

.. This section should contain new known issues in this release. Sample
   format:

   * **Add title in present tense with full stop.**

     Add a short 1-2 sentence description of the known issue in the present
     tense. Add information on any known workarounds.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **LSC interrupt doesn't work for virtio-user + vhost-kernel.**

  The LSC interrupt is not detected when the backend tap device is set up
  or down, as there is currently no way to monitor such an event.

API Changes
-----------

.. This section should contain API changes. Sample format:

   * Add a short 1-2 sentence description of the API change. Use fixed width
     quotes for ``rte_function_names`` or ``rte_struct_names``. Use the past
     tense.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* The LPM ``next_hop`` field is extended from 8 bits to 21 bits for IPv6
  while keeping ABI compatibility.

* **Reworked rte_ring library.**

  The rte_ring library has been reworked and updated. The following changes
  have been made to it:

  * Removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``.
  * Removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``.
  * Removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``.
  * Removed the function ``rte_ring_set_water_mark`` as part of a general
    removal of watermarks support in the library.
  * Added an extra parameter to the burst/bulk enqueue functions to
    return the number of free spaces in the ring after enqueue. This can
    be used by an application to implement its own watermark functionality.
  * Added an extra parameter to the burst/bulk dequeue functions to return
    the number of elements remaining in the ring after dequeue.
  * Changed the return value of the enqueue and dequeue bulk functions to
    match that of the burst equivalents. In all cases, ring functions which
    operate on multiple packets now return the number of elements enqueued
    or dequeued, as appropriate. The updated functions are:

    - ``rte_ring_mp_enqueue_bulk``
    - ``rte_ring_sp_enqueue_bulk``
    - ``rte_ring_enqueue_bulk``
    - ``rte_ring_mc_dequeue_bulk``
    - ``rte_ring_sc_dequeue_bulk``
    - ``rte_ring_dequeue_bulk``

  NOTE: the above functions all have different parameters as well as
  different return values, due to the other changes listed above. This
  means that all instances of the functions in existing code will be
  flagged by the compiler. The return value usage should be checked
  while fixing the compiler error due to the extra parameter.

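  A hedged sketch of the updated signatures (ring creation and error
  handling are omitted; the burst size is illustrative):

  .. code-block:: c

     #include <rte_ring.h>

     static void
     ring_roundtrip(struct rte_ring *r, void **objs, unsigned int n)
     {
         unsigned int free_space, available;
         unsigned int enq, deq;

         /* Returns the number of objects enqueued; 'free_space' reports
          * the room left in the ring, usable for watermark-style logic. */
         enq = rte_ring_enqueue_burst(r, objs, n, &free_space);

         /* Returns the number of objects dequeued; 'available' reports
          * how many elements remain in the ring. */
         deq = rte_ring_dequeue_burst(r, objs, n, &available);

         (void)enq;
         (void)deq;
     }
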
* **Reworked rte_vhost library.**

  The rte_vhost library has been reworked to make it generic enough so that
  the user could build other vhost-user drivers on top of it. To achieve
  this the following changes have been made:

  * The following vhost-pmd APIs are removed:

    * ``rte_eth_vhost_feature_disable``
    * ``rte_eth_vhost_feature_enable``
    * ``rte_eth_vhost_feature_get``

  * The vhost API ``rte_vhost_driver_callback_register(ops)`` is reworked
    to be per vhost-user socket file. Thus, it takes one more argument:
    ``rte_vhost_driver_callback_register(path, ops)``.

  * The vhost API ``rte_vhost_get_queue_num`` is deprecated. Instead,
    ``rte_vhost_get_vring_num`` should be used.

  * The following macros are removed in ``rte_virtio_net.h``:

    * ``VIRTIO_RXQ``
    * ``VIRTIO_TXQ``
    * ``VIRTIO_QNUM``

  * The following net specific header files are removed in
    ``rte_virtio_net.h``:

    * ``linux/virtio_net.h``
    * ``sys/socket.h``
    * ``linux/if.h``
    * ``rte_ether.h``

  * The vhost struct ``virtio_net_device_ops`` is renamed to
    ``vhost_device_ops``.

  * The vhost API ``rte_vhost_driver_session_start`` is removed. Instead,
    ``rte_vhost_driver_start`` should be used, and there is no need to
    create a thread to call it.

  * The vhost public header file ``rte_virtio_net.h`` is renamed to
    ``rte_vhost.h``.

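  A hedged sketch of the reworked, per-socket driver setup (the socket
  path and callback bodies are illustrative):

  .. code-block:: c

     #include <rte_vhost.h>

     static int
     new_device(int vid)
     {
         (void)vid;
         return 0;
     }

     static void
     destroy_device(int vid)
     {
         (void)vid;
     }

     static const struct vhost_device_ops ops = {
         .new_device = new_device,
         .destroy_device = destroy_device,
     };

     static void
     start_vhost(void)
     {
         const char *path = "/tmp/vhost-user.sock";

         rte_vhost_driver_register(path, 0);

         /* Callbacks are now registered per socket file. */
         rte_vhost_driver_callback_register(path, &ops);

         /* Replaces rte_vhost_driver_session_start(); no dedicated
          * thread is needed any more. */
         rte_vhost_driver_start(path);
     }
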
ABI Changes
-----------

.. This section should contain ABI changes. Sample format:

   * Add a short 1-2 sentence description of the ABI change that was
     announced in the previous releases and made in this release. Use fixed
     width quotes for ``rte_function_names`` or ``rte_struct_names``. Use
     the past tense.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **Reorganized the mbuf structure.**

  The order and size of the fields in the ``mbuf`` structure changed,
  as described in the `New Features`_ section.

* The ``rte_cryptodev_info.sym`` structure has a new field
  ``max_nb_sessions_per_qp`` to support drivers which may support a limited
  number of sessions per queue pair.

Removed Items
-------------

.. This section should contain removed items in this release. Sample format:

   * Add a short 1-2 sentence description of the removed item in the past
     tense.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* KNI vhost support has been removed.

* The dpdk_qat sample application has been removed.

Shared Library Versions
-----------------------

.. Update any library version updated in this release and prepend with a
   ``+`` sign, like this:

   + librte_cfgfile.so.2

   This section is a comment. Do not overwrite or remove it.
   =========================================================

The libraries prepended with a plus sign were incremented in this version.

   + librte_bitratestats.so.1
     librte_cryptodev.so.2
     librte_distributor.so.1
   + librte_eventdev.so.1
   + librte_latencystats.so.1
   + librte_metrics.so.1

Tested Platforms
----------------

.. This section should contain a list of platforms that were tested with
   this release. Sample format:

   * <vendor> platform with <vendor> <type of devices> combinations

     * List of CPU
     * List of OS
     * List of devices
     * Other relevant details...

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* Intel(R) platforms with Intel(R) NICs combinations

  * CPU:

    * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
    * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
    * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
    * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
    * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.3
    * SUSE Enterprise Linux 12

  * NICs:

    * Intel(R) 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 4.0.1-k (ixgbe)

    * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800001cf
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 4.2.5 (ixgbe)

    * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 5.05
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 1.5.23 (i40e)

    * Intel(R) Ethernet Converged Network Adapter X710-DA2 (2x10G)

      * Firmware version: 5.05
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 1.5.23 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA1 (1x40G)

      * Firmware version: 5.05
      * Device id (pf/vf): 8086:1584 / 8086:154c
      * Driver version: 1.5.23 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2x40G)

      * Firmware version: 5.05
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 1.5.23 (i40e)

    * Intel(R) Corporation I350 Gigabit Network Connection

      * Firmware version: 1.48, 0x800006e7
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.2.13-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

  * CPU:

    * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)

  * MLNX_OFED: 4.0-2.0.0.0

  * NICs:

    * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.40.5030

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.18.2000

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.18.2000

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.19.1200

    * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.19.1200

* IBM(R) Power8(R) with Mellanox(R) NICs combinations

  * Processor: POWER8E (raw), AltiVec supported
  * type-model: 8247-22L
  * Firmware FW810.21 (SV810_108)
  * OS: Ubuntu 16.04 LTS PPC le
  * MLNX_OFED: 4.0-2.0.0.0

  * NICs:

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000