.. SPDX-License-Identifier: GPL-2.0

==================================
Netmem Support for Network Drivers
==================================

This document outlines the requirements for network drivers to support netmem,
an abstract memory type that enables features like device memory TCP. By
supporting netmem, drivers can work with various underlying memory types with
little to no modification.

Benefits of Netmem:

* Flexibility: Netmem can be backed by different memory types (e.g., struct
  page, DMA-buf), allowing drivers to support various use cases such as device
  memory TCP.
* Future-proof: Drivers with netmem support are ready for upcoming features
  that rely on it.
* Simplified Development: Drivers interact with a consistent API, regardless
  of the underlying memory implementation.

Driver Requirements
===================

1. The driver must support page_pool.

2. The driver must support the tcp-data-split ethtool option.

3. The driver must use the page_pool netmem APIs for payload memory. The
   netmem APIs currently correspond one-to-one with the page APIs, so the
   conversion should be achievable by switching from the page APIs to the
   netmem APIs and tracking memory via netmem_refs in the driver rather than
   struct page pointers:

   - page_pool_alloc -> page_pool_alloc_netmem
   - page_pool_get_dma_addr -> page_pool_get_dma_addr_netmem
   - page_pool_put_page -> page_pool_put_netmem

   Not all page APIs have netmem equivalents at the moment. If your driver
   relies on a missing netmem API, feel free to add one and propose it to
   netdev@, or reach out to the maintainers and/or almasrymina@google.com for
   help adding it.

4. The driver must use the following PP_FLAGS, as shown in the first sketch
   below:

   - PP_FLAG_DMA_MAP: netmem is not dma-mappable by the driver. The driver
     must delegate the dma mapping to the page_pool, which knows when
     dma-mapping is (or is not) appropriate.
   - PP_FLAG_DMA_SYNC_DEV: a netmem dma addr is not necessarily dma-syncable
     by the driver. The driver must delegate the dma syncing to the page_pool,
     which knows when dma-syncing is (or is not) appropriate.
   - PP_FLAG_ALLOW_UNREADABLE_NETMEM: the driver must specify this flag iff
     tcp-data-split is enabled.

5. The driver must not assume the netmem is readable and/or backed by pages.
   The netmem returned by the page_pool may be unreadable, in which case
   netmem_address() will return NULL. The driver must correctly handle
   unreadable netmem, i.e. don't attempt to access its contents when
   netmem_address() is NULL (see the second sketch below).

   Ideally, drivers should not have to check the underlying netmem type via
   helpers like netmem_is_net_iov() or convert the netmem to any of its
   underlying types via netmem_to_page() or netmem_to_net_iov(). In most
   cases, netmem or page_pool helpers that abstract this complexity are
   provided (and more can be added).

6. The driver must use page_pool_dma_sync_netmem_for_cpu() in lieu of
   dma_sync_single_range_for_cpu(). For some memory providers the dma syncing
   for CPU will be done by the page_pool; for others (particularly the dmabuf
   memory provider), dma syncing for CPU is the responsibility of userspace,
   via the dmabuf APIs. The driver must delegate the entire dma-syncing
   operation to the page_pool, which will do it correctly.

7. Avoid implementing driver-specific recycling on top of the page_pool.
   Drivers cannot hold onto a struct page to do their own recycling, as the
   netmem may not be backed by a struct page. You may, however, hold onto a
   page_pool reference with page_pool_fragment_netmem() or
   page_pool_ref_netmem() for that purpose (see the third sketch below), but
   be mindful that some netmem types can have longer circulation times, such
   as when userspace holds a reference in zerocopy scenarios.
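The sketches below are illustrative only, not code from an in-tree driver.
First, a minimal pool setup using the flags from requirement 4. The my_rx_ring
struct, my_rx_ring_init(), and the pool_size/max_len values are hypothetical;
only the page_pool calls and flags are real API:

.. code-block:: c

   #include <net/page_pool/helpers.h>

   struct my_rx_ring {                     /* hypothetical driver state */
           struct page_pool *pool;
           struct device *dev;
           bool hds_enabled;               /* tcp-data-split state from ethtool */
   };

   static int my_rx_ring_init(struct my_rx_ring *ring)
   {
           struct page_pool_params pp = {
                   /* Delegate dma mapping and device syncing to the
                    * pool; the driver cannot do either for all netmem
                    * types.
                    */
                   .flags     = PP_FLAG_DMA_MAP | PP_FLAG_DMA_SYNC_DEV,
                   .pool_size = 1024,
                   .nid       = NUMA_NO_NODE,
                   .dev       = ring->dev,
                   .dma_dir   = DMA_FROM_DEVICE,
                   .max_len   = PAGE_SIZE,
           };

           /* Unreadable netmem may only be handed out while
            * tcp-data-split keeps payloads off the header buffers.
            */
           if (ring->hds_enabled)
                   pp.flags |= PP_FLAG_ALLOW_UNREADABLE_NETMEM;

           ring->pool = page_pool_create(&pp);
           if (IS_ERR(ring->pool))
                   return PTR_ERR(ring->pool);
           return 0;
   }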
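Next, a sketch of a refill and completion path using the netmem APIs from
requirements 3, 5, and 6. my_post_rx_buffer(), my_handle_rx(), MY_RX_BUF_LEN,
and the descriptor handling are again made up; the netmem calls mirror the
page calls they replace:

.. code-block:: c

   #define MY_RX_BUF_LEN 2048              /* hypothetical buffer size */

   static int my_post_rx_buffer(struct my_rx_ring *ring)
   {
           unsigned int offset, size = MY_RX_BUF_LEN;
           netmem_ref netmem;
           dma_addr_t dma;

           /* page_pool_alloc() -> page_pool_alloc_netmem() */
           netmem = page_pool_alloc_netmem(ring->pool, &offset, &size,
                                           GFP_ATOMIC);
           if (!netmem)
                   return -ENOMEM;

           /* Never dma_map_page() here; the pool owns the mapping. */
           dma = page_pool_get_dma_addr_netmem(netmem) + offset;

           /* ... write dma and size into the RX descriptor ... */
           return 0;
   }

   static void my_handle_rx(struct my_rx_ring *ring, struct sk_buff *skb,
                            netmem_ref netmem, unsigned int offset,
                            unsigned int len)
   {
           /* Let the pool decide whether a CPU sync is needed; for the
            * dmabuf provider it is userspace's job, not the driver's.
            */
           page_pool_dma_sync_netmem_for_cpu(ring->pool, netmem, offset,
                                             len);

           /* Unreadable netmem has no kernel mapping: only touch the
            * payload when netmem_address() is non-NULL.
            */
           if (netmem_address(netmem)) {
                   /* e.g. inspect an inline header */
           }

           skb_add_rx_frag_netmem(skb, skb_shinfo(skb)->nr_frags, netmem,
                                  offset, len, MY_RX_BUF_LEN);
   }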
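Finally, a sketch for requirement 7: a buffer that must outlive the RX path
(for example across an async operation) is kept alive with a page_pool
reference, never a page reference. my_pending_buf and both helpers are
hypothetical; page_pool_put_full_netmem() is one of the put_netmem variants
that mirror the put_page ones:

.. code-block:: c

   struct my_pending_buf {                 /* hypothetical driver state */
           netmem_ref netmem;
   };

   static void my_buf_hold(struct my_pending_buf *buf, netmem_ref netmem)
   {
           /* get_page() would assume a struct page exists; take a
            * page_pool reference on the netmem instead.
            */
           page_pool_ref_netmem(netmem);
           buf->netmem = netmem;
   }

   static void my_buf_release(struct my_rx_ring *ring,
                              struct my_pending_buf *buf)
   {
           /* Drop the extra reference; the pool recycles the buffer
            * once the last reference is gone. Beware that netmem held
            * by userspace (e.g. devmem TCP zerocopy) can stay out of
            * circulation for a long time.
            */
           page_pool_put_full_netmem(ring->pool, buf->netmem, false);
           buf->netmem = 0;
   }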