docs: core-api: document the IOVA-based API

Add an explanation of the newly added IOVA-based mapping API.

Signed-off-by: Christoph Hellwig <hch@lst.de>
Tested-by: Jens Axboe <axboe@kernel.dk>
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

@@ -530,6 +530,77 @@ routines, e.g.:::
....
}

Part Ie - IOVA-based DMA mappings
---------------------------------

These APIs allow a very efficient mapping when using an IOMMU. They are an
optional path that requires extra code and are only recommended for drivers
where DMA mapping performance, or the space usage for storing the DMA
addresses, matters. All the considerations from the previous section apply
here as well.

::

    bool dma_iova_try_alloc(struct device *dev, struct dma_iova_state *state,
                            phys_addr_t phys, size_t size);

Is used to try to allocate IOVA space for a mapping operation. If it returns
false, this API can't be used for the given device and the normal streaming
DMA mapping API should be used instead. The ``struct dma_iova_state`` is
allocated by the driver and must be kept around until unmap time.
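
For illustration only (this sketch is not part of the patch), a driver might
embed the state in its own per-request structure and fall back to the
streaming API when the allocation fails; ``struct my_request`` and
``my_request_map_streaming()`` are hypothetical names:

::

    #include <linux/dma-mapping.h>

    /* Hypothetical per-request driver state embedding the IOVA state. */
    struct my_request {
            struct dma_iova_state state;
            size_t len;
    };

    static int my_request_map(struct device *dev, struct my_request *req,
                              phys_addr_t phys)
    {
            if (!dma_iova_try_alloc(dev, &req->state, phys, req->len)) {
                    /* No usable IOVA path: fall back to dma_map_page() etc. */
                    return my_request_map_streaming(dev, req, phys);
            }

            /* IOVA space reserved; ranges are added with dma_iova_link(). */
            return 0;
    }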

::

    static inline bool dma_use_iova(struct dma_iova_state *state)

Can be used by the driver to check if the IOVA-based API is used after a
call to ``dma_iova_try_alloc()``. This can be useful in the unmap path.
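
For example (a sketch reusing the hypothetical ``struct my_request`` from
above; both helpers are made-up names), the unmap path might branch on it:

::

    static void my_request_unmap(struct device *dev, struct my_request *req,
                                 enum dma_data_direction dir)
    {
            if (dma_use_iova(&req->state)) {
                    /* Tear down the IOVA mappings, e.g. with
                     * dma_iova_destroy() as described below. */
                    my_request_unmap_iova(dev, req, dir);
            } else {
                    /* Undo the streaming mappings made in the fallback path. */
                    my_request_unmap_streaming(dev, req, dir);
            }
    }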

::

    int dma_iova_link(struct device *dev, struct dma_iova_state *state,
                      phys_addr_t phys, size_t offset, size_t size,
                      enum dma_data_direction dir, unsigned long attrs);

Is used to link ranges into the IOVA space previously allocated. The start of
all but the first range linked for a given state must be aligned to the DMA
merge boundary returned by ``dma_get_merge_boundary()``, and the size of all
but the last range must be aligned to the DMA merge boundary as well.
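
As an illustrative sketch (not taken from the patch), linking two physically
discontiguous ranges back to back could look like the helper below; note which
range must honour the merge boundary:

::

    /*
     * Hypothetical helper: link two discontiguous physical ranges into the
     * IOVA space reserved by dma_iova_try_alloc().  size1 must be a multiple
     * of dma_get_merge_boundary(dev) because another range follows it, and
     * phys2 must be aligned to that boundary because it is not the first
     * linked range.
     */
    static int my_link_two_ranges(struct device *dev,
                                  struct dma_iova_state *state,
                                  phys_addr_t phys1, size_t size1,
                                  phys_addr_t phys2, size_t size2)
    {
            int ret;

            ret = dma_iova_link(dev, state, phys1, 0, size1,
                                DMA_TO_DEVICE, 0);
            if (ret)
                    return ret;

            /* The second range starts right after the first in IOVA space. */
            return dma_iova_link(dev, state, phys2, size1, size2,
                                 DMA_TO_DEVICE, 0);
    }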

::

    int dma_iova_sync(struct device *dev, struct dma_iova_state *state,
                      size_t offset, size_t size);

Must be called to sync the IOMMU page tables for the IOVA range mapped by one
or more calls to ``dma_iova_link()``.
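
Continuing the hypothetical sketches above, the sync is typically issued once
after the last link call and covers everything that was linked:

::

    ret = my_link_two_ranges(dev, &req->state, phys1, size1, phys2, size2);
    if (ret)
            goto err_cleanup;   /* linked ranges and IOVA space must be released */

    /* Make the IOMMU page table updates visible to the device. */
    ret = dma_iova_sync(dev, &req->state, 0, size1 + size2);
    if (ret)
            goto err_cleanup;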

For drivers that use a one-shot mapping, all ranges can be unmapped and the
IOVA freed by calling:

::

    void dma_iova_destroy(struct device *dev, struct dma_iova_state *state,
                          size_t mapped_len, enum dma_data_direction dir,
                          unsigned long attrs);
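
For instance (a sketch carrying over the hypothetical names used earlier), the
one-shot teardown collapses into a single call:

::

    static void my_request_unmap_iova(struct device *dev,
                                      struct my_request *req,
                                      enum dma_data_direction dir)
    {
            /*
             * req->len is the total length that was linked, passed as the
             * mapped_len argument; this unmaps all ranges and frees the
             * IOVA space in one go.
             */
            dma_iova_destroy(dev, &req->state, req->len, dir, 0);
    }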

Alternatively drivers can dynamically manage the IOVA space by unmapping
and mapping individual regions. In that case

::

    void dma_iova_unlink(struct device *dev, struct dma_iova_state *state,
                         size_t offset, size_t size, enum dma_data_direction dir,
                         unsigned long attrs);

is used to unmap a range previously mapped, and

::

    void dma_iova_free(struct device *dev, struct dma_iova_state *state);

is used to free the IOVA space. All regions must have been unmapped using
``dma_iova_unlink()`` before calling ``dma_iova_free()``.
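
A final sketch of the dynamic variant (again with hypothetical helpers):
individual regions are unlinked as they are retired, and the IOVA space is
freed only once nothing is mapped any more:

::

    /* Retire one previously linked region at the given offset into the
     * IOVA space. */
    static void my_retire_region(struct device *dev, struct my_request *req,
                                 size_t offset, size_t size,
                                 enum dma_data_direction dir)
    {
            dma_iova_unlink(dev, &req->state, offset, size, dir, 0);
    }

    /* Legal only after every region has been unlinked. */
    static void my_request_free_iova(struct device *dev, struct my_request *req)
    {
            dma_iova_free(dev, &req->state);
    }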

Part II - Non-coherent DMA allocations
--------------------------------------