Searched refs:hmm_range_fault (Results 1 – 9 of 9) sorted by path
162 int hmm_range_fault(struct hmm_range *range);
184 ret = hmm_range_fault(&range);
224 and calls hmm_range_fault() as described above. This will fill fault all pages
239 After hmm_range_fault completes the flag bits are set to the current state of
129 int hmm_range_fault(struct hmm_range *range);
156 ret = hmm_range_fault(&range);
192 and calls hmm_range_fault() as described above. This will fill all pages in the range with at least read permission.
205 After hmm_range_fault completes, the flag bits are set to the current state of the page tables, i.e. HMM_PFN_VALID | if the page
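The two documentation hits above describe the caller-side retry loop around hmm_range_fault(). Below is a minimal sketch of that pattern, assuming a hypothetical driver_populate_range() wrapper and a per-device update_lock; only hmm_range_fault(), mmap_read_lock()/mmap_read_unlock() and the mmu_interval_notifier helpers are real kernel APIs.

    #include <linux/hmm.h>
    #include <linux/mm.h>
    #include <linux/mmu_notifier.h>
    #include <linux/mutex.h>

    struct driver_ctx {                 /* hypothetical per-device state */
            struct mutex update_lock;   /* serializes device page-table updates */
    };

    static int driver_populate_range(struct driver_ctx *drv, struct hmm_range *range)
    {
            struct mm_struct *mm = range->notifier->mm;
            int ret;

    again:
            /* Snapshot the notifier sequence before walking the CPU page tables. */
            range->notifier_seq = mmu_interval_read_begin(range->notifier);

            mmap_read_lock(mm);
            ret = hmm_range_fault(range);
            mmap_read_unlock(mm);
            if (ret) {
                    if (ret == -EBUSY)
                            goto again;     /* range was invalidated while faulting */
                    return ret;
            }

            mutex_lock(&drv->update_lock);
            if (mmu_interval_read_retry(range->notifier, range->notifier_seq)) {
                    mutex_unlock(&drv->update_lock);
                    goto again;             /* a concurrent invalidation raced with us */
            }

            /* range->hmm_pfns[] is now stable; program the device page table here. */

            mutex_unlock(&drv->update_lock);
            return 0;
    }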
212 r = hmm_range_fault(hmm_range); in amdgpu_hmm_range_get_pages()
684 ret = hmm_range_fault(&range); in nouveau_range_fault()
222 ret = hmm_range_fault(&hmm_range); in xe_hmm_userptr_populate_range()
398 ret = hmm_range_fault(&range); in ib_umem_odp_map_dma_and_lock()
105 int hmm_range_fault(struct hmm_range *range);
219 * Since we asked for hmm_range_fault() to populate pages, in dmirror_do_fault()
303 ret = hmm_range_fault(range); in dmirror_range_fault()
1137 ret = hmm_range_fault(range); in dmirror_range_snapshot()
533 * devices directly cannot be handled by hmm_range_fault(). in hmm_vma_walk_test()
564 * hmm_range_fault - try to fault some address in a virtual address range
582 int hmm_range_fault(struct hmm_range *range)
609 EXPORT_SYMBOL(hmm_range_fault); in hmm_range_fault()
587 int hmm_range_fault(struct hmm_range *range) hmm_range_fault() function
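After hmm_range_fault() returns 0, each entry of range->hmm_pfns[] carries the pfn plus output flags reflecting the current page-table state, as the documentation snippets above note. The sketch below shows one way a caller might consume those flags; the driver_map_pfns() name and loop body are illustrative assumptions, while HMM_PFN_VALID, HMM_PFN_WRITE and hmm_pfn_to_page() are the real flag and helper names from linux/hmm.h.

    #include <linux/hmm.h>
    #include <linux/mm.h>

    static void driver_map_pfns(struct hmm_range *range)
    {
            unsigned long npages = (range->end - range->start) >> PAGE_SHIFT;
            unsigned long i;

            for (i = 0; i < npages; i++) {
                    unsigned long hmm_pfn = range->hmm_pfns[i];
                    struct page *page;

                    if (!(hmm_pfn & HMM_PFN_VALID))
                            continue;       /* page not present at snapshot time */

                    page = hmm_pfn_to_page(hmm_pfn);
                    /* Map @page into the device, read-only unless writable. */
                    if (hmm_pfn & HMM_PFN_WRITE)
                            ;               /* device may also write this page */
            }
    }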