Lines Matching full:vmap

6  *  Major rework to support vmap/vunmap, Christoph Hellwig, SGI, August 2002
795 * Walk a vmap address to the struct page it maps. Huge vmap mappings will
797 * matches small vmap mappings.
889 * This augmented red-black tree represents the free vmap space.
891 * address. It is used for allocation and merging when a vmap
932 * Effective vmap-node logic. Users make use of nodes instead
956 * is fully disabled. Later on, after vmap is initialized these
963 /* A simple iterator over all vmap-nodes. */
1237 * there is no free vmap space. Normally it does not in get_va_next_sibling()
1764 * Also we can hit this path in case of regular "vmap" in va_clip()
2183 * vmap activity will not scale linearly with CPUs. Also, I want to be
2199 * Serialize vmap purging. There is no actual critical section protected
2339 * Purges all lazily-freed vmap areas.
2422 * Reclaim vmap areas by purging fragmented blocks and purge_vmap_area_list.
2441 * Free a vmap area, caller ensuring that the area has been unmapped,
2478 * Free and unmap a vmap area
2556 * vmap space is limited especially on 32 bit architectures. Ensure there is
2557 * room for at least 16 percpu vmap blocks per CPU.
2617 /* Queue of free and dirty vmap blocks, for allocation and flushing purposes */
2641 * |------|------|------|------|------|------|...<vmap address space>
2674 * out of partially filled vmap blocks. However vmap block sizing should be
3002 * vm_unmap_aliases - unmap outstanding lazy aliases in the vmap layer
3004 * The vmap/vmalloc layer lazily flushes kernel virtual mappings primarily
3007 * address by the vmap layer and so there might be some CPUs with TLB entries
3012 * from the vmap layer.
3061 * faster than vmap so it's good. But if you mix long-life and short-life
3135 * vm_area_add_early - add vmap area early during boot
3161 * vm_area_register_early - register vmap area early during boot
3490 * vunmap - release virtual mapping obtained by vmap()
3494 * which was created from the page array passed to vmap().
3518 * vmap - map an array of pages into virtually contiguous space
3527 * are transferred from the caller to vmap(), and will be freed / dropped when
3532 void *vmap(struct page **pages, unsigned int count, in vmap() function
3572 EXPORT_SYMBOL(vmap);
5267 seq_puts(m, " vmap"); in vmalloc_info_show()
5471 * Now we can initialize a free vmap space. in vmalloc_init()
5476 vmap_node_shrinker = shrinker_alloc(0, "vmap-node"); in vmalloc_init()
5478 pr_err("Failed to allocate vmap-node shrinker!\n"); in vmalloc_init()
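The matches above revolve around the vmap()/vunmap() pair (declared around line 3490 and exported at line 3572), which maps an array of possibly discontiguous pages into one virtually contiguous kernel range. As a hedged illustration only — this sketch is not taken from vmalloc.c, the helper names are invented, and error handling is simplified — a kernel-module caller might use the pair like this:

```c
/* Illustrative kernel-context sketch (assumed helpers, simplified paths).
 * vmap() may sleep, so this must not run in interrupt context. */
#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/slab.h>
#include <linux/vmalloc.h>

static void *map_scattered_pages(struct page **pages, unsigned int count)
{
	/* Map @count discontiguous pages into one contiguous,
	 * writable kernel virtual range. The caller retains
	 * ownership of @pages and the page references here;
	 * with VM_MAP_PUT_PAGES (see the comment near line 3527)
	 * ownership would instead transfer to vmap(). */
	return vmap(pages, count, VM_MAP, PAGE_KERNEL);
}

static void unmap_scattered_pages(void *addr, struct page **pages,
				  unsigned int count)
{
	unsigned int i;

	/* Release the virtual mapping first (see vunmap(), line 3490),
	 * then drop the page references the caller still owns. */
	vunmap(addr);
	for (i = 0; i < count; i++)
		__free_page(pages[i]);
	kfree(pages);
}
```

Note that, per the comments matched at lines 3004-3012, unmapping is lazy: the TLB flush may be deferred until vm_unmap_aliases() or a later purge of the lazily freed vmap areas, which is why stale aliases can briefly remain visible to other CPUs.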