..  BSD LICENSE

    Copyright(c) 2016-2017 Intel Corporation. All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Cryptography Device Library
===========================
The cryptodev library provides a Crypto device framework for the management and
provisioning of hardware and software Crypto poll mode drivers, and defines
generic APIs which support a number of different Crypto operations. The
framework currently supports only cipher, authentication, chained
cipher/authentication and AEAD symmetric Crypto operations.
Design Principles
-----------------

The cryptodev library follows the same basic principles as those used in DPDK's
Ethernet Device framework. The Crypto framework provides a generic Crypto device
framework which supports both physical (hardware) and virtual (software) Crypto
devices, as well as a generic Crypto API which allows Crypto devices to be
managed and configured, and supports Crypto operations to be provisioned on
Crypto poll mode drivers.
Device Management
-----------------

Device Creation
~~~~~~~~~~~~~~~

Physical Crypto devices are discovered during the PCI probe/enumeration phase
of EAL initialization, based on their PCI device identifier, each unique PCI
BDF (bus/bridge, device, function). Specific physical Crypto devices, like
other physical devices in DPDK, can be white-listed or black-listed using the
EAL command line options.

Virtual devices can be created by two mechanisms, either using the EAL command
line options or from within the application using an EAL API directly.
From the command line using the --vdev EAL option

.. code-block:: console

   --vdev 'crypto_aesni_mb0,max_nb_queue_pairs=2,max_nb_sessions=1024,socket_id=0'
.. Note::

   * If a DPDK application requires multiple software crypto PMD devices then the
     required number of ``--vdev`` arguments with the appropriate libraries must
     be added.

   * An application with crypto PMD instances sharing the same library requires a
     unique ID for each instance.

   Example: ``--vdev 'crypto_aesni_mb0' --vdev 'crypto_aesni_mb1'``
Or by using the ``rte_vdev_init`` API within the application code.

.. code-block:: c

   rte_vdev_init("crypto_aesni_mb",
                 "max_nb_queue_pairs=2,max_nb_sessions=1024,socket_id=0")
All virtual Crypto devices support the following initialization parameters:

* ``max_nb_queue_pairs`` - maximum number of queue pairs supported by the device.
* ``max_nb_sessions`` - maximum number of sessions supported by the device.
* ``socket_id`` - socket on which to allocate the device resources.
Device Identification
~~~~~~~~~~~~~~~~~~~~~

Each device, whether virtual or physical, is uniquely designated by two
identifiers:

- A unique device index used to designate the Crypto device in all functions
  exported by the cryptodev API.

- A device name used to designate the Crypto device in console messages, for
  administration or debugging purposes. For ease of use, the port name includes
  the port index.
Device Configuration
~~~~~~~~~~~~~~~~~~~~

The configuration of each Crypto device includes the following operations:

- Allocation of resources, including hardware resources if a physical device.
- Resetting the device into a well-known default state.
- Initialization of statistics counters.

The ``rte_cryptodev_configure`` API is used to configure a Crypto device.

.. code-block:: c

   int rte_cryptodev_configure(uint8_t dev_id,
                               struct rte_cryptodev_config *config)

The ``rte_cryptodev_config`` structure is used to pass the configuration
parameters for socket selection and number of queue pairs.

.. code-block:: c

   struct rte_cryptodev_config {
       int socket_id;
       /**< Socket to allocate resources on */
       uint16_t nb_queue_pairs;
       /**< Number of queue pairs to configure on device */
   };
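As an illustrative sketch only (not a complete program), a device can be
configured on the local socket right after EAL initialization; ``cdev_id`` is
assumed to have been obtained earlier, for example via
``rte_cryptodev_get_dev_id()``:

.. code-block:: c

   /* Sketch: configure a crypto device with a single queue pair on the
    * local socket. cdev_id is assumed to exist in the application. */
   struct rte_cryptodev_config conf = {
       .socket_id = rte_socket_id(),
       .nb_queue_pairs = 1
   };

   if (rte_cryptodev_configure(cdev_id, &conf) < 0)
       rte_exit(EXIT_FAILURE, "Failed to configure cryptodev %u", cdev_id);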
Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each Crypto device's queue pairs are individually configured through the
``rte_cryptodev_queue_pair_setup`` API.
Each queue pair's resources may be allocated on a specified socket.

.. code-block:: c

   int rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,
           const struct rte_cryptodev_qp_conf *qp_conf,
           int socket_id, struct rte_mempool *session_pool)

   struct rte_cryptodev_qp_conf {
       uint32_t nb_descriptors; /**< Number of descriptors per queue pair */
   };
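For example, all of a device's configured queue pairs can be set up in a loop
after ``rte_cryptodev_configure()`` has returned. This is a sketch only;
``cdev_id``, ``nb_queue_pairs`` and ``session_pool`` are assumed to come from
the surrounding application:

.. code-block:: c

   /* Sketch: set up every configured queue pair on the local socket.
    * cdev_id, nb_queue_pairs and session_pool are assumed to exist. */
   struct rte_cryptodev_qp_conf qp_conf = {
       .nb_descriptors = 2048
   };
   uint16_t qp_id;

   for (qp_id = 0; qp_id < nb_queue_pairs; qp_id++)
       if (rte_cryptodev_queue_pair_setup(cdev_id, qp_id, &qp_conf,
                       rte_socket_id(), session_pool) < 0)
           rte_exit(EXIT_FAILURE, "Failed to setup queue pair %u\n", qp_id);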
Logical Cores, Memory and Queue Pair Relationships
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Crypto device library, like the Poll Mode Driver library, supports NUMA,
whereby a processor's logical cores and interfaces utilize their local memory.
Therefore Crypto operations, and in the case of symmetric Crypto operations,
the session and the mbuf being operated on, should be allocated from memory
pools created in the local memory. The buffers should, if possible, remain on
the local processor to obtain the best performance results, and buffer
descriptors should be populated with mbufs allocated from a mempool allocated
from local memory.

The run-to-completion model also performs better, especially in the case of
virtual Crypto devices, if the Crypto operation, session and data buffer are
in local memory instead of a remote processor's memory. This is also true for
the pipe-line model provided all logical cores used are located on the same
processor.
Multiple logical cores should never share the same queue pair for enqueuing
operations or dequeuing operations on the same Crypto device since this would
require global locks and hinder performance. It is however possible to use a
different logical core to dequeue an operation on a queue pair from the logical
core on which it was enqueued. This means that the crypto burst enqueue/dequeue
APIs are a logical place to transition from one logical core to another in a
packet processing pipeline.
Device Features and Capabilities
--------------------------------

Crypto devices define their functionality through two mechanisms, global device
features and algorithm capabilities. Global device features identify device-wide
level features which are applicable to the whole device, such as
the device having hardware acceleration or supporting symmetric Crypto
operations.

The capabilities mechanism defines the individual algorithms/functions which
the device supports, such as a specific symmetric Crypto cipher,
authentication operation or Authenticated Encryption with Associated Data
(AEAD) operation.
Device Features
~~~~~~~~~~~~~~~

Currently the following Crypto device features are defined:

* Symmetric Crypto operations
* Asymmetric Crypto operations
* Chaining of symmetric Crypto operations
* SSE accelerated SIMD vector operations
* AVX accelerated SIMD vector operations
* AVX2 accelerated SIMD vector operations
* AESNI accelerated instructions
* Hardware off-load processing
Device Operation Capabilities
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Crypto capabilities which identify particular algorithms which the Crypto PMD
supports are defined by the operation type, the operation transform, the
transform identifier and then the particulars of the transform. For the full
scope of the Crypto capability see the definition of the structure in the
*DPDK API Reference*.

.. code-block:: c

   struct rte_cryptodev_capabilities;
Each Crypto poll mode driver defines its own private array of capabilities
for the operations it supports. Below is an example of the capabilities for a
PMD which supports the authentication algorithm SHA1_HMAC and the cipher
algorithm AES_CBC (the size-range fields of each capability entry are elided
here for brevity).

.. code-block:: c

   static const struct rte_cryptodev_capabilities pmd_capabilities[] = {
       {    /* SHA1 HMAC */
           .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
           .sym = {
               .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH,
               .auth = {
                   .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
                   /* block, key, digest, AAD and IV size ranges */
               }
           }
       },
       {    /* AES CBC */
           .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC,
           .sym = {
               .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER,
               .cipher = {
                   .algo = RTE_CRYPTO_CIPHER_AES_CBC,
                   /* key and IV size ranges */
               }
           }
       }
   };
Capabilities Discovery
~~~~~~~~~~~~~~~~~~~~~~

Discovering the features and capabilities of a Crypto device poll mode driver
is achieved through the ``rte_cryptodev_info_get`` function.

.. code-block:: c

   void rte_cryptodev_info_get(uint8_t dev_id,
                               struct rte_cryptodev_info *dev_info);

This allows the user to query a specific Crypto PMD and get all the device
features and capabilities. The ``rte_cryptodev_info`` structure contains all the
relevant information for the device.

.. code-block:: c

   struct rte_cryptodev_info {
       const char *driver_name;
       /**< Driver name */
       struct rte_pci_device *pci_dev;
       /**< PCI information */
       uint64_t feature_flags;
       /**< Feature flags */
       const struct rte_cryptodev_capabilities *capabilities;
       /**< Array of devices supported capabilities */
       unsigned max_nb_queue_pairs;
       /**< Maximum number of queue pairs supported by device */
       unsigned max_nb_sessions;
       /**< Maximum number of sessions supported by device */
   };
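As an illustration, an application can walk the returned capabilities array,
which is terminated by an entry whose ``op`` field is
``RTE_CRYPTO_OP_TYPE_UNDEFINED``, to check whether a device supports a given
algorithm. This is a sketch only; a valid ``cdev_id`` is assumed:

.. code-block:: c

   /* Sketch: check whether a device supports the AES-CBC cipher by walking
    * its capabilities array. cdev_id is assumed to exist. */
   struct rte_cryptodev_info dev_info;
   const struct rte_cryptodev_capabilities *cap;
   int aes_cbc_supported = 0;

   rte_cryptodev_info_get(cdev_id, &dev_info);

   for (cap = dev_info.capabilities;
           cap->op != RTE_CRYPTO_OP_TYPE_UNDEFINED; cap++) {
       if (cap->op == RTE_CRYPTO_OP_TYPE_SYMMETRIC &&
               cap->sym.xform_type == RTE_CRYPTO_SYM_XFORM_CIPHER &&
               cap->sym.cipher.algo == RTE_CRYPTO_CIPHER_AES_CBC)
           aes_cbc_supported = 1;
   }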
Operation Processing
--------------------

Scheduling of Crypto operations on DPDK's application data path is
performed using a burst-oriented asynchronous API set. A queue pair on a Crypto
device accepts a burst of Crypto operations using the enqueue burst API. On
physical Crypto devices the enqueue burst API will place the operations to be
processed on the device's hardware input queue; for virtual devices the
processing of the Crypto operations is usually completed during the enqueue
call to the Crypto device. The dequeue burst API will retrieve any processed
operations available from the queue pair on the Crypto device. For physical
devices this is usually directly from the device's processed queue, and for
virtual devices from a ``rte_ring`` where processed operations are placed
after being processed on the enqueue call.
Enqueue / Dequeue Burst APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The burst enqueue API uses a Crypto device identifier and a queue pair
identifier to specify the Crypto device queue pair to schedule the processing on.
The ``nb_ops`` parameter is the number of operations to process, which are
supplied in the ``ops`` array of ``rte_crypto_op`` structures.
The enqueue function returns the number of operations it actually enqueued for
processing; a return value equal to ``nb_ops`` means that all packets have been
enqueued.

.. code-block:: c

   uint16_t rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,
                                        struct rte_crypto_op **ops, uint16_t nb_ops)

The dequeue API uses the same format as the enqueue API, but
the ``nb_ops`` and ``ops`` parameters are now used to specify the maximum number
of processed operations the user wishes to retrieve and the location in which
to store them. The API call returns the actual number of processed operations
returned; this can never be larger than ``nb_ops``.

.. code-block:: c

   uint16_t rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,
                                        struct rte_crypto_op **ops, uint16_t nb_ops)
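Because the two calls are asynchronous, a typical data path couples them in a
loop, polling until everything that was accepted by the enqueue call has been
retrieved. A minimal sketch, assuming ``cdev_id``, queue pair ``0``, a prepared
``ops`` array of ``nb_ops`` operations and a ``BURST_SIZE`` constant:

.. code-block:: c

   /* Sketch: enqueue a burst and poll the same queue pair until all
    * accepted operations have been retrieved. Each dequeued operation's
    * status field should be checked before the result is used. */
   uint16_t enq = rte_cryptodev_enqueue_burst(cdev_id, 0, ops, nb_ops);
   uint16_t deq_total = 0;
   struct rte_crypto_op *done[BURST_SIZE];

   while (deq_total < enq)
       deq_total += rte_cryptodev_dequeue_burst(cdev_id, 0,
                       done, BURST_SIZE);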
Operation Representation
~~~~~~~~~~~~~~~~~~~~~~~~

A Crypto operation is represented by an ``rte_crypto_op`` structure, which is a
generic metadata container for all necessary information required for the
Crypto operation to be processed on a particular Crypto device poll mode driver.

.. figure:: img/crypto_op.*

The operation structure includes the operation type, the operation status
and the session type (session-based/less), a reference to the operation
specific data, which can vary in size and content depending on the operation
being provisioned. It also contains the source mempool for the operation,
if it was allocated from a mempool.

If Crypto operations are allocated from a Crypto operation mempool, see next
section, there is also the ability to allocate private memory with the
operation for the application's purposes.

Application software is responsible for specifying all the operation specific
fields in the ``rte_crypto_op`` structure which are then used by the Crypto PMD
to process the requested operation.
Operation Management and Allocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The cryptodev library provides an API set for managing Crypto operations which
utilize the Mempool Library to allocate operation buffers. Therefore, it ensures
that the crypto operation is interleaved optimally across the channels and
ranks for optimal processing.
A ``rte_crypto_op`` contains a field indicating the pool that it originated from.
When calling ``rte_crypto_op_free(op)``, the operation returns to its original pool.

.. code-block:: c

   extern struct rte_mempool *
   rte_crypto_op_pool_create(const char *name, enum rte_crypto_op_type type,
                             unsigned nb_elts, unsigned cache_size,
                             uint16_t priv_size, int socket_id);

During pool creation ``rte_crypto_op_init()`` is called as a constructor to
initialize each Crypto operation which subsequently calls
``__rte_crypto_op_reset()`` to configure any operation type specific fields based
on the type parameter.
``rte_crypto_op_alloc()`` and ``rte_crypto_op_bulk_alloc()`` are used to allocate
Crypto operations of a specific type from a given Crypto operation mempool.
``__rte_crypto_op_reset()`` is called on each operation before it is returned to
the user, so the operation is always in a known good state before use
by the application.

.. code-block:: c

   struct rte_crypto_op *rte_crypto_op_alloc(struct rte_mempool *mempool,
                                             enum rte_crypto_op_type type)

   unsigned rte_crypto_op_bulk_alloc(struct rte_mempool *mempool,
                                     enum rte_crypto_op_type type,
                                     struct rte_crypto_op **ops, uint16_t nb_ops)

``rte_crypto_op_free()`` is called by the application to return an operation to
its mempool.

.. code-block:: c

   void rte_crypto_op_free(struct rte_crypto_op *op)
Symmetric Cryptography Support
------------------------------

The cryptodev library currently provides support for the following symmetric
Crypto operations: cipher, authentication, including chaining of these
operations, as well as also supporting AEAD operations.
Session and Session Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Sessions are used in symmetric cryptographic processing to store the immutable
data defined in a cryptographic transform which is used in the operation
processing of a packet flow. Sessions are used to manage information such as
expanded cipher keys and HMAC IPADs and OPADs, which need to be calculated for
a particular Crypto operation, but are immutable on a packet to packet basis
for a flow. Crypto sessions cache this immutable data in an optimal way for the
underlying PMD and this allows further acceleration of the offload of Crypto
workloads.
.. figure:: img/cryptodev_sym_sess.*

The Crypto device framework provides APIs to allocate and initialize sessions
for crypto devices, where sessions are mempool objects.
It is the application's responsibility to create and manage the session mempools.
This approach allows for different scenarios such as having a single session
mempool for all crypto devices (where the mempool object size is big
enough to hold the private session of any crypto device), as well as having
multiple session mempools of different sizes for better memory usage.

An application can use ``rte_cryptodev_get_private_session_size()`` to
get the private session size of a given crypto device. This function allows
an application to calculate the maximum device session size of all crypto
devices to create a single session mempool.
If instead an application creates multiple session mempools, the Crypto device
framework also provides ``rte_cryptodev_get_header_session_size`` to get
the size of an uninitialized session.
Once the session mempools have been created, ``rte_cryptodev_sym_session_create()``
is used to allocate an uninitialized session from the given mempool.
The session then must be initialized using ``rte_cryptodev_sym_session_init()``
for each of the required crypto devices. A symmetric transform chain
is used to specify the operation and its parameters. See the section below for
details on transforms.

When a session is no longer used, the user must call
``rte_cryptodev_sym_session_clear()`` for each of the crypto devices that are
using the session, to free all driver private session data. Once this is done,
the session should be freed using ``rte_cryptodev_sym_session_free``, which
returns it to its mempool.
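The lifecycle described above can be sketched as follows; ``session_pool``,
``cdev_id`` and a populated transform ``xform`` are assumed to exist already:

.. code-block:: c

   /* Sketch: allocate, initialize, clear and free a symmetric session.
    * session_pool, cdev_id and xform are assumed to exist. */
   struct rte_cryptodev_sym_session *sess;

   sess = rte_cryptodev_sym_session_create(session_pool);
   if (sess == NULL)
       rte_exit(EXIT_FAILURE, "Could not allocate session\n");

   if (rte_cryptodev_sym_session_init(cdev_id, sess, &xform,
                   session_pool) < 0)
       rte_exit(EXIT_FAILURE, "Could not initialize session\n");

   /* ... enqueue/dequeue operations using the session ... */

   rte_cryptodev_sym_session_clear(cdev_id, sess);
   rte_cryptodev_sym_session_free(sess);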
Transforms and Transform Chaining
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Symmetric Crypto transforms (``rte_crypto_sym_xform``) are the mechanism used
to specify the details of the Crypto operation. For chaining of symmetric
operations such as cipher encrypt and authentication generate, the next pointer
allows transforms to be chained together. Crypto devices which support chaining
must publish the chaining of symmetric Crypto operations feature flag.

Currently there are three transform types: cipher, authentication and AEAD.
Also it is important to note that the order in which the
transforms are passed indicates the order of the chaining.
.. code-block:: c

   struct rte_crypto_sym_xform {
       struct rte_crypto_sym_xform *next;
       /**< next xform in chain */
       enum rte_crypto_sym_xform_type type;
       /**< xform type */
       union {
           struct rte_crypto_auth_xform auth;
           /**< Authentication / hash xform */
           struct rte_crypto_cipher_xform cipher;
           /**< Cipher xform */
           struct rte_crypto_aead_xform aead;
           /**< AEAD xform */
       };
   };
The API does not place a limit on the number of transforms that can be chained
together, but this will be limited by the underlying Crypto device poll mode
driver which is processing the operation.

.. figure:: img/crypto_xform_chain.*
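For instance, a cipher-then-authentication chain can be built by linking two
transforms through the ``next`` pointer; the first element of the chain is the
first operation performed. This is a sketch only; key, IV and digest
parameters are elided:

.. code-block:: c

   /* Sketch: chain an AES-CBC encrypt transform with a SHA1-HMAC generate
    * transform. Key, IV and digest parameters are omitted for brevity. */
   struct rte_crypto_sym_xform cipher_xform = {
       .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
       .cipher = {
           .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
           .algo = RTE_CRYPTO_CIPHER_AES_CBC,
       },
   };
   struct rte_crypto_sym_xform auth_xform = {
       .type = RTE_CRYPTO_SYM_XFORM_AUTH,
       .auth = {
           .op = RTE_CRYPTO_AUTH_OP_GENERATE,
           .algo = RTE_CRYPTO_AUTH_SHA1_HMAC,
       },
   };

   /* Cipher first, then authenticate the ciphered result. */
   cipher_xform.next = &auth_xform;
   auth_xform.next = NULL;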
Symmetric Operations
~~~~~~~~~~~~~~~~~~~~

The symmetric Crypto operation structure contains all the mutable data relating
to performing symmetric cryptographic processing on a referenced mbuf data
buffer. It is used for cipher, authentication, AEAD and chained operations.

As a minimum the symmetric operation must have a source data buffer (``m_src``),
a valid session (or transform chain if in session-less mode) and the minimum
authentication/cipher/AEAD parameters required depending on the type of
operation specified in the session or the transform chain.
.. code-block:: c

   struct rte_crypto_sym_op {
       struct rte_mbuf *m_src;
       struct rte_mbuf *m_dst;

       union {
           struct rte_cryptodev_sym_session *session;
           /**< Handle for the initialised session context */
           struct rte_crypto_sym_xform *xform;
           /**< Session-less API Crypto operation parameters */
       };

       union {
           struct {
               struct {
                   uint32_t offset;
                   uint32_t length;
               } data; /**< Data offsets and length for AEAD */

               struct {
                   uint8_t *data;
                   rte_iova_t phys_addr;
               } digest; /**< Digest parameters */

               struct {
                   uint8_t *data;
                   rte_iova_t phys_addr;
               } aad;
               /**< Additional authentication parameters */
           } aead;

           struct {
               struct {
                   struct {
                       uint32_t offset;
                       uint32_t length;
                   } data; /**< Data offsets and length for ciphering */
               } cipher;

               struct {
                   struct {
                       uint32_t offset;
                       uint32_t length;
                   } data;
                   /**< Data offsets and length for authentication */

                   struct {
                       uint8_t *data;
                       rte_iova_t phys_addr;
                   } digest; /**< Digest parameters */
               } auth;
           };
       };
   };
Sample code
-----------

There are various sample applications that show how to use the cryptodev library,
such as the L2fwd with Crypto sample application (L2fwd-crypto) and
the IPSec Security Gateway application (ipsec-secgw).

While these applications demonstrate how an application can be created to perform
generic crypto operations, the required complexity hides the basic steps of
how to use the cryptodev APIs.

The following sample code shows the basic steps to encrypt several buffers
with AES-CBC (although performing other crypto operations is similar),
using one of the crypto PMDs available in DPDK.
.. code-block:: c

   /*
    * Simple example to encrypt several buffers with AES-CBC using
    * the Cryptodev APIs.
    */

   #define MAX_SESSIONS         1024
   #define NUM_MBUFS            1024
   #define POOL_CACHE_SIZE      128
   #define BURST_SIZE           32
   #define BUFFER_SIZE          1024
   #define AES_CBC_IV_LENGTH    16
   #define AES_CBC_KEY_LENGTH   16
   #define IV_OFFSET            (sizeof(struct rte_crypto_op) + \
                                sizeof(struct rte_crypto_sym_op))

   struct rte_mempool *mbuf_pool, *crypto_op_pool, *session_pool;
   unsigned int session_size;
   int ret;

   /* Initialize EAL. */
   ret = rte_eal_init(argc, argv);
   if (ret < 0)
       rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");

   uint8_t socket_id = rte_socket_id();

   /* Create the mbuf pool. */
   mbuf_pool = rte_pktmbuf_pool_create("mbuf_pool",
                   NUM_MBUFS,
                   POOL_CACHE_SIZE,
                   0,
                   RTE_MBUF_DEFAULT_BUF_SIZE,
                   socket_id);
   if (mbuf_pool == NULL)
       rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");

   /*
    * The IV is always placed after the crypto operation,
    * so some private data is required to be reserved.
    */
   unsigned int crypto_op_private_data = AES_CBC_IV_LENGTH;

   /* Create crypto operation pool. */
   crypto_op_pool = rte_crypto_op_pool_create("crypto_op_pool",
                   RTE_CRYPTO_OP_TYPE_SYMMETRIC,
                   NUM_MBUFS,
                   POOL_CACHE_SIZE,
                   crypto_op_private_data,
                   socket_id);
   if (crypto_op_pool == NULL)
       rte_exit(EXIT_FAILURE, "Cannot create crypto op pool\n");

   /* Create the virtual crypto device. */
   char args[128];
   const char *crypto_name = "crypto_aesni_mb0";
   snprintf(args, sizeof(args), "socket_id=%d", socket_id);
   ret = rte_vdev_init(crypto_name, args);
   if (ret != 0)
       rte_exit(EXIT_FAILURE, "Cannot create virtual device");

   uint8_t cdev_id = rte_cryptodev_get_dev_id(crypto_name);

   /* Get private session data size. */
   session_size = rte_cryptodev_get_private_session_size(cdev_id);

   /*
    * Create session mempool, with two objects per session,
    * one for the session header and another one for the
    * private session data for the crypto device.
    */
   session_pool = rte_mempool_create("session_pool",
                   MAX_SESSIONS * 2,
                   session_size,
                   POOL_CACHE_SIZE,
                   0, NULL, NULL, NULL,
                   NULL, socket_id,
                   0);

   /* Configure the crypto device. */
   struct rte_cryptodev_config conf = {
       .nb_queue_pairs = 1,
       .socket_id = socket_id
   };
   struct rte_cryptodev_qp_conf qp_conf = {
       .nb_descriptors = 2048
   };

   if (rte_cryptodev_configure(cdev_id, &conf) < 0)
       rte_exit(EXIT_FAILURE, "Failed to configure cryptodev %u", cdev_id);

   if (rte_cryptodev_queue_pair_setup(cdev_id, 0, &qp_conf,
                           socket_id, session_pool) < 0)
       rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

   if (rte_cryptodev_start(cdev_id) < 0)
       rte_exit(EXIT_FAILURE, "Failed to start device\n");

   /* Create the crypto transform. */
   uint8_t cipher_key[16] = {0};
   struct rte_crypto_sym_xform cipher_xform = {
       .next = NULL,
       .type = RTE_CRYPTO_SYM_XFORM_CIPHER,
       .cipher = {
           .op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
           .algo = RTE_CRYPTO_CIPHER_AES_CBC,
           .key = {
               .data = cipher_key,
               .length = AES_CBC_KEY_LENGTH
           },
           .iv = {
               .offset = IV_OFFSET,
               .length = AES_CBC_IV_LENGTH
           }
       }
   };

   /* Create crypto session and initialize it for the crypto device. */
   struct rte_cryptodev_sym_session *session;
   session = rte_cryptodev_sym_session_create(session_pool);
   if (session == NULL)
       rte_exit(EXIT_FAILURE, "Session could not be created\n");

   if (rte_cryptodev_sym_session_init(cdev_id, session,
                   &cipher_xform, session_pool) < 0)
       rte_exit(EXIT_FAILURE, "Session could not be initialized "
                   "for the crypto device\n");

   /* Get a burst of crypto operations. */
   struct rte_crypto_op *crypto_ops[BURST_SIZE];
   if (rte_crypto_op_bulk_alloc(crypto_op_pool,
                   RTE_CRYPTO_OP_TYPE_SYMMETRIC,
                   crypto_ops, BURST_SIZE) == 0)
       rte_exit(EXIT_FAILURE, "Not enough crypto operations available\n");

   /* Get a burst of mbufs. */
   struct rte_mbuf *mbufs[BURST_SIZE];
   if (rte_pktmbuf_alloc_bulk(mbuf_pool, mbufs, BURST_SIZE) < 0)
       rte_exit(EXIT_FAILURE, "Not enough mbufs available");

   /* Initialize the mbufs and append them to the crypto operations. */
   unsigned int i;
   for (i = 0; i < BURST_SIZE; i++) {
       if (rte_pktmbuf_append(mbufs[i], BUFFER_SIZE) == NULL)
           rte_exit(EXIT_FAILURE, "Not enough room in the mbuf\n");
       crypto_ops[i]->sym->m_src = mbufs[i];
   }

   /* Set up the crypto operations. */
   for (i = 0; i < BURST_SIZE; i++) {
       struct rte_crypto_op *op = crypto_ops[i];
       /* Modify bytes of the IV at the end of the crypto operation */
       uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
                                                   IV_OFFSET);

       generate_random_bytes(iv_ptr, AES_CBC_IV_LENGTH);

       op->sym->cipher.data.offset = 0;
       op->sym->cipher.data.length = BUFFER_SIZE;

       /* Attach the crypto session to the operation */
       rte_crypto_op_attach_sym_session(op, session);
   }

   /* Enqueue the crypto operations in the crypto device. */
   uint16_t num_enqueued_ops = rte_cryptodev_enqueue_burst(cdev_id, 0,
                                           crypto_ops, BURST_SIZE);

   /*
    * Dequeue the crypto operations until all the operations
    * are processed in the crypto device.
    */
   uint16_t num_dequeued_ops, total_num_dequeued_ops = 0;
   do {
       struct rte_crypto_op *dequeued_ops[BURST_SIZE];
       num_dequeued_ops = rte_cryptodev_dequeue_burst(cdev_id, 0,
                                       dequeued_ops, BURST_SIZE);
       total_num_dequeued_ops += num_dequeued_ops;

       /* Check if operation was processed successfully */
       for (i = 0; i < num_dequeued_ops; i++) {
           if (dequeued_ops[i]->status != RTE_CRYPTO_OP_STATUS_SUCCESS)
               rte_exit(EXIT_FAILURE,
                       "Some operations were not processed correctly");
       }

       rte_mempool_put_bulk(crypto_op_pool, (void **)dequeued_ops,
                               (unsigned int) num_dequeued_ops);
   } while (total_num_dequeued_ops < num_enqueued_ops);
Asymmetric Cryptography
-----------------------

Asymmetric functionality is currently not supported by the cryptodev API.


API Reference
-------------

The cryptodev Library API is described in the *DPDK API Reference* document.