..  BSD LICENSE

    Copyright(c) 2016-2017 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Cryptography Device Library
===========================
The cryptodev library provides a Crypto device framework for management and
provisioning of hardware and software Crypto poll mode drivers, defining generic
APIs which support a number of different Crypto operations. The framework
currently only supports cipher, authentication, chained cipher/authentication
and AEAD symmetric Crypto operations.
Design Principles
-----------------

The cryptodev library follows the same basic principles as those used in DPDK's
Ethernet Device framework. The Crypto framework provides a generic Crypto device
framework which supports both physical (hardware) and virtual (software) Crypto
devices, as well as a generic Crypto API which allows Crypto devices to be
managed and configured, and supports Crypto operations to be provisioned on
Crypto poll mode drivers.
Device Management
-----------------

Device Creation
~~~~~~~~~~~~~~~

Physical Crypto devices are discovered during the PCI probe/enumeration
performed by the EAL at DPDK initialization, based on their PCI device
identifier: each unique PCI BDF (bus/bridge, device, function) identifies a
device. Specific physical Crypto devices, like other physical devices in DPDK,
can be white-listed or black-listed using the EAL command line options.
Virtual devices can be created by two mechanisms, either using the EAL command
line options or from within the application using an EAL API directly.

From the command line using the --vdev EAL option

.. code-block:: console

    --vdev 'crypto_aesni_mb0,max_nb_queue_pairs=2,max_nb_sessions=1024,socket_id=0'
Or using the rte_vdev_init API within the application code.

.. code-block:: c

    rte_vdev_init("crypto_aesni_mb",
                  "max_nb_queue_pairs=2,max_nb_sessions=1024,socket_id=0")
All virtual Crypto devices support the following initialization parameters:

* ``max_nb_queue_pairs`` - maximum number of queue pairs supported by the device.
* ``max_nb_sessions`` - maximum number of sessions supported by the device.
* ``socket_id`` - socket on which to allocate the device resources.
Device Identification
~~~~~~~~~~~~~~~~~~~~~

Each device, whether virtual or physical, is uniquely designated by two
identifiers:

- A unique device index used to designate the Crypto device in all functions
  exported by the cryptodev API.

- A device name used to designate the Crypto device in console messages, for
  administration or debugging purposes. For ease of use, the port name includes
  the port number.
Device Configuration
~~~~~~~~~~~~~~~~~~~~

The configuration of each Crypto device includes the following operations:

- Allocation of resources, including hardware resources if a physical device.
- Resetting the device into a well-known default state.
- Initialization of statistics counters.
The rte_cryptodev_configure API is used to configure a Crypto device.

.. code-block:: c

    int rte_cryptodev_configure(uint8_t dev_id,
                                struct rte_cryptodev_config *config)

The ``rte_cryptodev_config`` structure is used to pass the configuration
parameters for socket selection and number of queue pairs.

.. code-block:: c

    struct rte_cryptodev_config {
        int socket_id;
        /**< Socket to allocate resources on */
        uint16_t nb_queue_pairs;
        /**< Number of queue pairs to configure on device */
    };
Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each Crypto device's queue pair is individually configured through the
``rte_cryptodev_queue_pair_setup`` API.
Each queue pair's resources may be allocated on a specified socket.

.. code-block:: c

    int rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
            const struct rte_cryptodev_qp_conf *qp_conf,
            int socket_id, struct rte_mempool *session_pool)

    struct rte_cryptodev_qp_conf {
        uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
    };
Logical Cores, Memory and Queue Pair Relationships
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Crypto device library, like the Poll Mode Driver library, supports NUMA for
the case where a processor's logical cores and interfaces utilize their local
memory. Therefore Crypto operations, and in the case of symmetric Crypto
operations, the session and the mbuf being operated on, should be allocated
from memory pools created in the local memory. The buffers should, if possible,
remain on the local processor to obtain the best performance results, and buffer
descriptors should be populated with mbufs allocated from a mempool allocated
from local memory.

The run-to-completion model also performs better, especially in the case of
virtual Crypto devices, if the Crypto operation, session and data buffer are
in local memory instead of a remote processor's memory. This is also true for
the pipe-line model provided all logical cores used are located on the same
processor socket.
Multiple logical cores should never share the same queue pair for enqueuing
operations or dequeuing operations on the same Crypto device since this would
require global locks and hinder performance. It is however possible to use a
different logical core to dequeue an operation on a queue pair from the logical
core on which it was enqueued. This means that the crypto burst enqueue/dequeue
APIs are a logical place to transition from one logical core to another in a
packet processing pipeline.
Device Features and Capabilities
--------------------------------

Crypto devices define their functionality through two mechanisms: global device
features and algorithm capabilities. Global device features identify device-wide
features which are applicable to the whole device, such as the device having
hardware acceleration or supporting symmetric Crypto operations.

The capabilities mechanism defines the individual algorithms/functions which
the device supports, such as a specific symmetric Crypto cipher,
authentication operation or Authenticated Encryption with Associated Data
(AEAD) operation.
Device Features
~~~~~~~~~~~~~~~

Currently the following Crypto device features are defined:

* Symmetric Crypto operations
* Asymmetric Crypto operations
* Chaining of symmetric Crypto operations
* SSE accelerated SIMD vector operations
* AVX accelerated SIMD vector operations
* AVX2 accelerated SIMD vector operations
* AESNI accelerated instructions
* Hardware off-load processing
Device Operation Capabilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Crypto capabilities which identify a particular algorithm which the Crypto PMD
supports are defined by the operation type, the operation transform, the
transform identifier and then the particulars of the transform. For the full
scope of the Crypto capability see the definition of the structure in the
*DPDK API Reference*.

.. code-block:: c

    struct rte_cryptodev_capabilities;
Each Crypto poll mode driver defines its own private array of capabilities
for the operations it supports. Below is an example of the capabilities for a
PMD which supports the authentication algorithm SHA1_HMAC and the cipher
algorithm AES_CBC.

.. code-block:: c

    static const struct rte_cryptodev_capabilities pmd_capabilities[] = {
        {    /* SHA1 HMAC */
            .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
            .sym = {
                .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
                .auth = {
                    .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
                    .block_size = 64,
                    .key_size = {
                        .min = 64,
                        .max = 64,
                        .increment = 0
                    },
                    .digest_size = {
                        .min = 12,
                        .max = 12,
                        .increment = 0
                    },
                    .aad_size = { 0 },
                    .iv_size = { 0 }
                }
            }
        },
        {    /* AES CBC */
            .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
            .sym = {
                .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
                .cipher = {
                    .algo = RTE_CRYPTO_CIPHER_AES_CBC,
                    .block_size = 16,
                    .key_size = {
                        .min = 16,
                        .max = 32,
                        .increment = 8
                    },
                    .iv_size = {
                        .min = 16,
                        .max = 16,
                        .increment = 0
                    }
                }
            }
        }
    };
Capabilities Discovery
~~~~~~~~~~~~~~~~~~~~~~

Discovering the features and capabilities of a Crypto device poll mode driver
is achieved through the ``rte_cryptodev_info_get`` function.

.. code-block:: c

    void rte_cryptodev_info_get(uint8_t dev_id,
                                struct rte_cryptodev_info *dev_info);

This allows the user to query a specific Crypto PMD and get all the device
features and capabilities. The ``rte_cryptodev_info`` structure contains all the
relevant information for the device.

.. code-block:: c

    struct rte_cryptodev_info {
        const char *driver_name;
        /**< Driver name */
        struct rte_pci_device *pci_dev;
        /**< PCI information */
        uint64_t feature_flags;
        /**< Feature flags exposed by the device */
        const struct rte_cryptodev_capabilities *capabilities;
        /**< Array of devices supported capabilities */
        unsigned max_nb_queue_pairs;
        /**< Maximum number of queue pairs supported by device */
        unsigned max_nb_sessions;
        /**< Maximum number of sessions supported by device */
    };
Device Operation
----------------

Scheduling of Crypto operations on DPDK's application data path is
performed using a burst oriented asynchronous API set. A queue pair on a Crypto
device accepts a burst of Crypto operations using the enqueue burst API. On
physical Crypto devices the enqueue burst API will place the operations to be
processed on the device's hardware input queue; for virtual devices the
processing of the Crypto operations is usually completed during the enqueue
call to the Crypto device. The dequeue burst API will retrieve any processed
operations available from the queue pair on the Crypto device; for physical
devices this is usually directly from the device's processed queue, and for
virtual devices from a ``rte_ring`` where processed operations are placed after
being processed on the enqueue call.
Enqueue / Dequeue Burst APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The burst enqueue API uses a Crypto device identifier and a queue pair
identifier to specify the Crypto device queue pair to schedule the processing on.
The ``nb_ops`` parameter is the number of operations to process which are
supplied in the ``ops`` array of ``rte_crypto_op`` structures.
The enqueue function returns the number of operations it actually enqueued for
processing; a return value equal to ``nb_ops`` means that all packets have been
enqueued.

.. code-block:: c

    uint16_t rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
                                         struct rte_crypto_op **ops, uint16_t nb_ops)

The dequeue API uses the same format as the enqueue API, but
the ``nb_ops`` and ``ops`` parameters are now used to specify the maximum number
of processed operations the user wishes to retrieve and the location in which to
store them. The API call returns the actual number of processed operations
returned; this can never be larger than ``nb_ops``.

.. code-block:: c

    uint16_t rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
                                         struct rte_crypto_op **ops, uint16_t nb_ops)
Operation Representation
~~~~~~~~~~~~~~~~~~~~~~~~

A Crypto operation is represented by an ``rte_crypto_op`` structure, which is a
generic metadata container for all necessary information required for the
Crypto operation to be processed on a particular Crypto device poll mode driver.

.. figure:: img/crypto_op.*

The operation structure includes the operation type, the operation status
and the session type (session-based/less), a reference to the operation
specific data, which can vary in size and content depending on the operation
being provisioned. It also contains the source mempool for the operation,
if it was allocated from a mempool.

If Crypto operations are allocated from a Crypto operation mempool (see next
section), there is also the ability to allocate private memory with the
operation for application purposes.

Application software is responsible for specifying all the operation specific
fields in the ``rte_crypto_op`` structure which are then used by the Crypto PMD
to process the requested operation.
Operation Management and Allocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The cryptodev library provides an API set for managing Crypto operations which
utilize the Mempool Library to allocate operation buffers. Therefore, it ensures
that the crypto operation is interleaved optimally across the channels and
ranks for optimal processing.
A ``rte_crypto_op`` contains a field indicating the pool that it originated from.
When calling ``rte_crypto_op_free(op)``, the operation returns to its original pool.

.. code-block:: c

    extern struct rte_mempool *
    rte_crypto_op_pool_create(const char *name, enum rte_crypto_op_type type,
                              unsigned nb_elts, unsigned cache_size, uint16_t priv_size,
                              int socket_id);

During pool creation ``rte_crypto_op_init()`` is called as a constructor to
initialize each Crypto operation which subsequently calls
``__rte_crypto_op_reset()`` to configure any operation type specific fields based
on the type parameter.

``rte_crypto_op_alloc()`` and ``rte_crypto_op_bulk_alloc()`` are used to allocate
Crypto operations of a specific type from a given Crypto operation mempool.
``__rte_crypto_op_reset()`` is called on each operation before it is returned to
the user, so the operation is always in a known state before use.

.. code-block:: c

    struct rte_crypto_op *rte_crypto_op_alloc(struct rte_mempool *mempool,
                                              enum rte_crypto_op_type type)

    unsigned rte_crypto_op_bulk_alloc(struct rte_mempool *mempool,
                                      enum rte_crypto_op_type type,
                                      struct rte_crypto_op **ops, uint16_t nb_ops)

``rte_crypto_op_free()`` is called by the application to return an operation to
its mempool.

.. code-block:: c

    void rte_crypto_op_free(struct rte_crypto_op *op)
Symmetric Cryptography Support
------------------------------

The cryptodev library currently provides support for the following symmetric
Crypto operations: cipher and authentication, including chaining of these
operations, as well as AEAD operations.
Session and Session Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sessions are used in symmetric cryptographic processing to store the immutable
data defined in a cryptographic transform which is used in the operation
processing of a packet flow. Sessions are used to manage information such as
expanded cipher keys and HMAC IPADs and OPADs, which need to be calculated for a
particular Crypto operation, but are immutable on a packet to packet basis for
a flow. Crypto sessions cache this immutable data in an optimal way for the
underlying PMD and this allows further acceleration of the offload of Crypto
workloads.
.. figure:: img/cryptodev_sym_sess.*

The Crypto device framework provides APIs to allocate and initialize sessions
for crypto devices, where sessions are mempool objects.
It is the application's responsibility to create and manage the session mempools.
This approach allows for different scenarios, such as having a single session
mempool for all crypto devices (where the mempool object size is big
enough to hold the private session of any crypto device), as well as having
multiple session mempools of different sizes for better memory usage.

An application can use ``rte_cryptodev_get_private_session_size()`` to
get the private session size of a given crypto device. This function allows
an application to calculate the max device session size of all crypto devices
to create a single session mempool.
If instead an application creates multiple session mempools, the Crypto device
framework also provides ``rte_cryptodev_get_header_session_size`` to get
the size of an uninitialized session.
Once the session mempools have been created, ``rte_cryptodev_sym_session_create()``
is used to allocate an uninitialized session from the given mempool.
The session then must be initialized using ``rte_cryptodev_sym_session_init()``
for each of the required crypto devices. A symmetric transform chain
is used to specify the operation and its parameters. See the section below for
details on transforms.

When a session is no longer used, the user must call ``rte_cryptodev_sym_session_clear()``
for each of the crypto devices that are using the session, to free all driver
private session data. Once this is done, the session should be freed using
``rte_cryptodev_sym_session_free``, which returns it to its mempool.
Transforms and Transform Chaining
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Symmetric Crypto transforms (``rte_crypto_sym_xform``) are the mechanism used
to specify the details of the Crypto operation. For chaining of symmetric
operations such as cipher encrypt and authentication generate, the next pointer
allows transforms to be chained together. Crypto devices which support chaining
must publish the chaining of symmetric Crypto operations feature flag.

Currently there are three transform types: cipher, authentication and AEAD.
Also it is important to note that the order in which the
transforms are passed indicates the order of the chaining.

.. code-block:: c

    struct rte_crypto_sym_xform {
        struct rte_crypto_sym_xform *next;
        /**< next xform in chain */
        enum rte_crypto_sym_xform_type type;
        /**< xform type */
        union {
            struct rte_crypto_auth_xform auth;
            /**< Authentication / hash xform */
            struct rte_crypto_cipher_xform cipher;
            /**< Cipher xform */
            struct rte_crypto_aead_xform aead;
            /**< AEAD xform */
        };
    };
The API does not place a limit on the number of transforms that can be chained
together, but this will be limited by the underlying Crypto device poll mode
driver which is processing the operation.

.. figure:: img/crypto_xform_chain.*
Symmetric Crypto Operations
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The symmetric Crypto operation structure contains all the mutable data relating
to performing symmetric cryptographic processing on a referenced mbuf data
buffer. It is used for cipher, authentication, AEAD and chained operations.

As a minimum the symmetric operation must have a source data buffer (``m_src``),
a valid session (or transform chain if in session-less mode) and the minimum
authentication/cipher/AEAD parameters required depending on the type of operation
specified in the session or the transform chain.

.. code-block:: c

    struct rte_crypto_sym_op {
        struct rte_mbuf *m_src;
        struct rte_mbuf *m_dst;

        union {
            struct rte_cryptodev_sym_session *session;
            /**< Handle for the initialised session context */
            struct rte_crypto_sym_xform *xform;
            /**< Session-less API Crypto operation parameters */
        };

        union {
            struct {
                struct {
                    uint32_t offset;
                    uint32_t length;
                } data; /**< Data offsets and length for AEAD */

                struct {
                    uint8_t *data;
                    rte_iova_t phys_addr;
                } digest; /**< Digest parameters */

                struct {
                    uint8_t *data;
                    rte_iova_t phys_addr;
                } aad;
                /**< Additional authentication parameters */
            } aead;

            struct {
                struct {
                    struct {
                        uint32_t offset;
                        uint32_t length;
                    } data; /**< Data offsets and length for ciphering */
                } cipher;

                struct {
                    struct {
                        uint32_t offset;
                        uint32_t length;
                    } data;
                    /**< Data offsets and length for authentication */

                    struct {
                        uint8_t *data;
                        rte_iova_t phys_addr;
                    } digest; /**< Digest parameters */
                } auth;
            };
        };
    };
Sample code
-----------

There are various sample applications that show how to use the cryptodev library,
such as the L2fwd with Crypto sample application (L2fwd-crypto) and
the IPsec Security Gateway application (ipsec-secgw).

While these applications demonstrate how an application can be created to perform
generic crypto operations, the required complexity hides the basic steps of
how to use the cryptodev APIs.

The following sample code shows the basic steps to encrypt several buffers
with AES-CBC (although performing other crypto operations is similar),
using one of the crypto PMDs available in DPDK.
.. code-block:: c

    /*
     * Simple example to encrypt several buffers with AES-CBC using
     * the Cryptodev APIs.
     */

    #define MAX_SESSIONS         1024
    #define NUM_MBUFS            1024
    #define POOL_CACHE_SIZE      128
    #define BURST_SIZE           32
    #define BUFFER_SIZE          1024
    #define AES_CBC_IV_LENGTH    16
    #define AES_CBC_KEY_LENGTH   16
    #define IV_OFFSET            (sizeof(struct rte_crypto_op) + \
                                 sizeof(struct rte_crypto_sym_op))

    struct rte_mempool *mbuf_pool, *crypto_op_pool, *session_pool;
    unsigned int session_size;
    int ret;

    /* Initialize EAL. */
    ret = rte_eal_init(argc, argv);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");

    uint8_t socket_id = rte_socket_id();

    /* Create the mbuf pool. */
    mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool",
                                    NUM_MBUFS,
                                    POOL_CACHE_SIZE,
                                    0,
                                    RTE_MBUF_DEFAULT_BUF_SIZE,
                                    socket_id);
    if (mbuf_pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");

    /*
     * The IV is always placed after the crypto operation,
     * so some private data is required to be reserved.
     */
    unsigned int crypto_op_private_data = AES_CBC_IV_LENGTH;

    /* Create crypto operation pool. */
    crypto_op_pool = rte_crypto_op_pool_create("crypto_op_pool",
                                    RTE_CRYPTO_OP_TYPE_SYMMETRIC,
                                    NUM_MBUFS,
                                    POOL_CACHE_SIZE,
                                    crypto_op_private_data,
                                    socket_id);
    if (crypto_op_pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");

    /* Create the virtual crypto device. */
    char args[128];
    const char *crypto_name = "crypto_aesni_mb0";
    snprintf(args, sizeof(args), "socket_id=%d", socket_id);
    ret = rte_vdev_init(crypto_name, args);
    if (ret != 0)
        rte_exit(EXIT_FAILURE, "Cannot create virtual device");

    uint8_t cdev_id = rte_cryptodev_get_dev_id(crypto_name);

    /* Get private session data size. */
    session_size = rte_cryptodev_get_private_session_size(cdev_id);

    /*
     * Create session mempool, with two objects per session,
     * one for the session header and another one for the
     * private session data for the crypto device.
     */
    session_pool = rte_mempool_create("session_pool",
                                    MAX_SESSIONS * 2,
                                    session_size,
                                    POOL_CACHE_SIZE,
                                    0, NULL, NULL, NULL,
                                    NULL, socket_id,
                                    0);

    /* Configure the crypto device. */
    struct rte_cryptodev_config conf = {
        .nb_queue_pairs = 1,
        .socket_id = socket_id
    };
    struct rte_cryptodev_qp_conf qp_conf = {
        .nb_descriptors = 2048
    };

    if (rte_cryptodev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure cryptodev %u", cdev_id);

    if (rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
                            socket_id, session_pool) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

    if (rte_cryptodev_start(cdev_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to start device\n");

    /* Create the crypto transform. */
    uint8_t cipher_key[16] = {0};
    struct rte_crypto_sym_xform cipher_xform = {
        .next = NULL,
        .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
        .cipher = {
            .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
            .algo = RTE_CRYPTO_CIPHER_AES_CBC,
            .key = {
                .data = cipher_key,
                .length = AES_CBC_KEY_LENGTH
            },
            .iv = {
                .offset = IV_OFFSET,
                .length = AES_CBC_IV_LENGTH
            }
        }
    };

    /* Create crypto session and initialize it for the crypto device. */
    struct rte_cryptodev_sym_session *session;
    session = rte_cryptodev_sym_session_create(session_pool);
    if (session == NULL)
        rte_exit(EXIT_FAILURE, "Session could not be created\n");

    if (rte_cryptodev_sym_session_init(cdev_id, session,
                    &cipher_xform, session_pool) < 0)
        rte_exit(EXIT_FAILURE, "Session could not be initialized "
                    "for the crypto device\n");

    /* Get a burst of crypto operations. */
    struct rte_crypto_op *crypto_ops[BURST_SIZE];
    if (rte_crypto_op_bulk_alloc(crypto_op_pool,
                            RTE_CRYPTO_OP_TYPE_SYMMETRIC,
                            crypto_ops, BURST_SIZE) == 0)
        rte_exit(EXIT_FAILURE, "Not enough crypto operations available\n");

    /* Get a burst of mbufs. */
    struct rte_mbuf *mbufs[BURST_SIZE];
    if (rte_pktmbuf_alloc_bulk(mbuf_pool, mbufs, BURST_SIZE) < 0)
        rte_exit(EXIT_FAILURE, "Not enough mbufs available");

    /* Initialize the mbufs and append them to the crypto operations. */
    unsigned int i;
    for (i = 0; i < BURST_SIZE; i++) {
        if (rte_pktmbuf_append(mbufs[i], BUFFER_SIZE) == NULL)
            rte_exit(EXIT_FAILURE, "Not enough room in the mbuf\n");
        crypto_ops[i]->sym->m_src = mbufs[i];
    }

    /* Set up the crypto operations. */
    for (i = 0; i < BURST_SIZE; i++) {
        struct rte_crypto_op *op = crypto_ops[i];
        /* Modify bytes of the IV at the end of the crypto operation */
        uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
                                                IV_OFFSET);

        generate_random_bytes(iv_ptr, AES_CBC_IV_LENGTH);

        op->sym->cipher.data.offset = 0;
        op->sym->cipher.data.length = BUFFER_SIZE;

        /* Attach the crypto session to the operation */
        rte_crypto_op_attach_sym_session(op, session);
    }

    /* Enqueue the crypto operations in the crypto device. */
    uint16_t num_enqueued_ops = rte_cryptodev_enqueue_burst(cdev_id, 0,
                                            crypto_ops, BURST_SIZE);

    /*
     * Dequeue the crypto operations until all the operations
     * are processed in the crypto device.
     */
    uint16_t num_dequeued_ops, total_num_dequeued_ops = 0;
    do {
        struct rte_crypto_op *dequeued_ops[BURST_SIZE];
        num_dequeued_ops = rte_cryptodev_dequeue_burst(cdev_id, 0,
                                        dequeued_ops, BURST_SIZE);
        total_num_dequeued_ops += num_dequeued_ops;

        /* Check if operation was processed successfully */
        for (i = 0; i < num_dequeued_ops; i++) {
            if (dequeued_ops[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
                rte_exit(EXIT_FAILURE,
                        "Some operations were not processed correctly");
        }

        rte_mempool_put_bulk(crypto_op_pool, (void **)dequeued_ops,
                            num_dequeued_ops);
    } while (total_num_dequeued_ops < num_enqueued_ops);
Asymmetric Cryptography
-----------------------

Asymmetric functionality is currently not supported by the cryptodev API.

Crypto Device API
~~~~~~~~~~~~~~~~~

The cryptodev Library API is described in the *DPDK API Reference* document.