.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2021 Marvell.
Marvell cnxk SSO Eventdev Driver
================================
The SSO PMD (**librte_event_cnxk**) provides poll mode
eventdev driver support for the inbuilt event device found in the
**Marvell OCTEON cnxk** SoC family.
More information about OCTEON cnxk SoC can be found at `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors/>`_.
Supported OCTEON cnxk SoCs
--------------------------

- CN9XX
- CN10XX

Features
--------

Features of the OCTEON cnxk SSO PMD are:
- 26 (dual) and 52 (single) Event ports on CN9XX
- 52 Event ports on CN10XX
- Supports 1M flows per event queue
- Flow based event pipelining
- Flow pinning support in flow based event pipelining
- Queue based event pipelining
- Supports ATOMIC, ORDERED, PARALLEL schedule types per flow
- Event scheduling QoS based on event queue priority
- Open system with configurable amount of outstanding events limited only by
  DRAM
- HW accelerated dequeue timeout support to enable power management
- HW managed event timers support through TIM, with high precision and
  time granularity of 2.5us on CN9K and 1us on CN10K.
- Up to 256 TIM rings, a.k.a. event timer adapters.
- Up to 8 rings traversed in parallel.
- HW managed packets enqueued from ethdev to eventdev exposed through event eth
  Rx adapter.
- N:1 ethernet device Rx queue to Event queue mapping.
- Lockfree Tx from event eth Tx adapter using ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE``
  capability while maintaining receive packet order.
- Full Rx/Tx offload support defined through ethdev queue configuration.
- HW managed event vectorization on CN10K for packets enqueued from ethdev to
  eventdev configurable per each Rx queue in Rx adapter.
- Event vector transmission via Tx adapter.
Prerequisites and Compilation procedure
---------------------------------------

See :doc:`../platform/cnxk` for setup information.
Runtime Config Options
----------------------
- ``Maximum number of in-flight events`` (default ``8192``)

  In **Marvell OCTEON cnxk** the max number of in-flight events is limited
  only by DRAM size; the ``xae_cnt`` devargs parameter provides an
  upper limit for in-flight events.

  For example::

    -a 0002:0e:00.0,xae_cnt=16384
- ``CN9K Getwork mode``

  The CN9K ``single_ws`` devargs parameter selects single workslot
  mode in SSO and disables the default dual workslot mode.

  For example::

    -a 0002:0e:00.0,single_ws=1
- ``CN10K Getwork mode``

  CN10K supports multiple getwork prefetch modes; by default the prefetch
  mode is set to none.

  For example::

    -a 0002:0e:00.0,gw_mode=1
- ``Event Group QoS support``

  SSO GGRPs, i.e. queues, use DRAM & SRAM buffers to hold in-flight
  events. By default the buffers are assigned to the SSO GGRPs to
  satisfy minimum HW requirements. SSO is free to assign the remaining
  buffers to GGRPs based on a preconfigured threshold.
  We can control the QoS of an SSO GGRP by modifying the above mentioned
  thresholds. GGRPs that have higher importance can be assigned higher
  thresholds than the rest. The dictionary format is as follows
  [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] expressed in percentages, 0 represents
  the default value.

  For example::

    -a 0002:0e:00.0,qos=[1-50-50-50]
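  As the [Qx-XAQ-TAQ-IAQ][Qz-XAQ-TAQ-IAQ] format above suggests, thresholds
  for more than one GGRP can be passed as consecutive dict entries; the queue
  IDs and percentages below are illustrative::

    -a 0002:0e:00.0,qos=[1-50-50-50][2-30-30-30]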
- ``Force Rx Back pressure``

  Force Rx back pressure when the same mempool is used across the ethernet
  devices connected to the event device.

  For example::

    -a 0002:0e:00.0,force_rx_bp=1
- ``TIM disable NPA``

  By default chunks are allocated from NPA, so TIM can automatically free
  them when traversing the list of chunks. The ``tim_disable_npa`` devargs
  parameter disables NPA and uses a software mempool to manage chunks.

  For example::

    -a 0002:0e:00.0,tim_disable_npa=1
- ``TIM modify chunk slots``

  The ``tim_chnk_slots`` devargs can be used to modify the number of chunk
  slots. Chunks are used to store event timers; a chunk can be visualised as
  an array where the last element points to the next chunk and the rest are
  used to store events. TIM traverses the list of chunks and enqueues the
  event timers to SSO. The default value is 255 and the max value is 4095.

  For example::

    -a 0002:0e:00.0,tim_chnk_slots=1023
- ``TIM enable arm/cancel statistics``

  The ``tim_stats_ena`` devargs can be used to enable arm and cancel stats of
  the event timer adapter.

  For example::

    -a 0002:0e:00.0,tim_stats_ena=1
- ``TIM limit max rings reserved``

  The ``tim_rings_lmt`` devargs can be used to limit the max number of TIM
  rings, i.e. event timer adapters, reserved on probe. Since TIM rings are HW
  resources we can avoid starving other applications by not grabbing all the
  rings.

  For example::

    -a 0002:0e:00.0,tim_rings_lmt=5
- ``TIM ring control internal parameters``

  When using multiple TIM rings the ``tim_ring_ctl`` devargs can be used to
  control each TIM ring's internal parameters uniquely. The following dict
  format is expected [ring-chnk_slots-disable_npa-stats_ena]. 0 represents
  default values.

  For example::

    -a 0002:0e:00.0,tim_ring_ctl=[2-1023-1-0]
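  Since the option controls each ring uniquely, parameters for more than one
  ring can be supplied as consecutive dict entries; the ring numbers and
  values below are illustrative::

    -a 0002:0e:00.0,tim_ring_ctl=[0-1023-0-0][2-1023-1-0]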
- ``TIM external clock frequency``

  The ``tim_eclk_freq`` devargs can be used to pass external clock frequencies
  when an external clock source is selected.

  External clock frequencies are mapped as follows::

    RTE_EVENT_TIMER_ADAPTER_EXT_CLK0 = TIM_CLK_SRC_10NS,
    RTE_EVENT_TIMER_ADAPTER_EXT_CLK1 = TIM_CLK_SRC_GPIO,
    RTE_EVENT_TIMER_ADAPTER_EXT_CLK2 = TIM_CLK_SRC_PTP,
    RTE_EVENT_TIMER_ADAPTER_EXT_CLK3 = TIM_CLK_SRC_SYNCE

  The order of frequencies supplied to device args should be GPIO-PTP-SYNCE.

  For example::

    -a 0002:0e:00.0,tim_eclk_freq=122880000-1000000000-0
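Several of the runtime config options above can be supplied to the same device
as comma separated key=value pairs in a single ``-a`` device argument; the
combination below is illustrative::

   -a 0002:0e:00.0,xae_cnt=16384,tim_disable_npa=1,tim_chnk_slots=1023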
Debugging Options
-----------------

.. _table_octeon_cnxk_event_debug_options:

.. table:: OCTEON cnxk event device debug options

   +---+------------+-------------------------------------------------------+
   | # | Component  | EAL log command                                       |
   +===+============+=======================================================+
   | 1 | SSO        | --log-level='pmd\.event\.cnxk,8'                      |
   +---+------------+-------------------------------------------------------+
   | 2 | TIM        | --log-level='pmd\.event\.cnxk\.timer,8'               |
   +---+------------+-------------------------------------------------------+
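For instance, SSO debug logs can be enabled when launching an eventdev
application such as ``dpdk-test-eventdev``; the device address and test
parameters below are illustrative::

   dpdk-test-eventdev -a 0002:0e:00.0 --log-level='pmd\.event\.cnxk,8' -- \
        --test=order_queue --plcores=1 --wlcores=2,3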
Limitations
-----------

Rx adapter support
~~~~~~~~~~~~~~~~~~

Using the same mempool for all the ethernet device ports connected to the
event device would cause back pressure to be asserted only on the first
ethernet device.
Back pressure is automatically disabled when the same mempool is used for all
the ethernet devices connected to the event device; to override this,
applications can use the ``force_rx_bp=1`` device argument.
Using a unique mempool per ethernet device is recommended when they are
connected to the event device.