Lines Matching full:fault

43  * Returns 0 if mmiotrace is disabled, or if the fault is not
134 * If it was an exec (instruction fetch) fault on an NX page, then in is_prefetch()
135 * do not ignore the fault: in is_prefetch()
219 * Handle a fault on the vmalloc or module mapping area
230 * unhandled page-fault when they are accessed.
413 * The OS sees this as a page fault with the upper 32 bits of RIP cleared.
450 * We catch this in the page fault handler because these addresses
539 pr_alert("BUG: unable to handle page fault for address: %px\n", in show_fault_oops()
563 * contributory exception from user code and gets a page fault in show_fault_oops()
564 * during delivery, the page fault can be delivered as though in show_fault_oops()
658 * Stack overflow? During boot, we can fault near the initial in page_fault_oops()
667 * double-fault even before we get this far, in which case in page_fault_oops()
668 * we're fine: the double-fault handler will deal with it. in page_fault_oops()
671 * and then double-fault, though, because we're likely to in page_fault_oops()
684 * Buggy firmware could access regions which might page fault. If in page_fault_oops()
725 /* Are we prepared to handle this kernel fault? */ in kernelmode_fixup_or_oops()
730 * AMD erratum #91 manifests as a spurious page fault on a PREFETCH in kernelmode_fixup_or_oops()
801 * Valid to do another page fault here because this one came in __bad_area_nosemaphore()
881 * A protection key fault means that the PKRU value did not allow in bad_area_access_error()
888 * fault and that there was a VMA once we got in the fault in bad_area_access_error()
896 * 5. T1 : enters fault handler, takes mmap_lock, etc... in bad_area_access_error()
910 vm_fault_t fault) in do_sigbus() argument
919 /* User-space => ok to do another page fault: */ in do_sigbus()
931 if (fault & (VM_FAULT_HWPOISON|VM_FAULT_HWPOISON_LARGE)) { in do_sigbus()
936 "MCE: Killing %s:%d due to hardware memory corruption fault at %lx\n", in do_sigbus()
938 if (fault & VM_FAULT_HWPOISON_LARGE) in do_sigbus()
939 lsb = hstate_index_to_shift(VM_FAULT_GET_HINDEX(fault)); in do_sigbus()
940 if (fault & VM_FAULT_HWPOISON) in do_sigbus()
961 * Handle a spurious fault caused by a stale TLB entry.
976 * Returns non-zero if a spurious fault was handled, zero otherwise.
1059 * a follow-up action to resolve the fault, like a COW. in access_error()
1068 * fix the cause of the fault. Handle the fault as an access in access_error()
1148 * We can fault-in kernel-space virtual memory on-demand. The in do_kern_addr_fault()
1157 * fault is not any of the following: in do_kern_addr_fault()
1158 * 1. A fault on a PTE with a reserved bit set. in do_kern_addr_fault()
1159 * 2. A fault caused by a user-mode access. (Do not demand- in do_kern_addr_fault()
1160 * fault kernel memory due to user-mode accesses). in do_kern_addr_fault()
1161 * 3. A fault caused by a page-level protection violation. in do_kern_addr_fault()
1162 * (A demand fault would be on a non-present page which in do_kern_addr_fault()
1180 /* Was the fault spurious, caused by lazy TLB invalidation? */ in do_kern_addr_fault()
1191 * and handling kernel code that can fault, like get_user(). in do_kern_addr_fault()
1194 * fault we could otherwise deadlock: in do_kern_addr_fault()
1216 vm_fault_t fault; in do_user_addr_fault() local
1268 * in a region with pagefaults disabled then we must not take the fault in do_user_addr_fault()
1336 fault = handle_mm_fault(vma, address, flags | FAULT_FLAG_VMA_LOCK, regs); in do_user_addr_fault()
1337 if (!(fault & (VM_FAULT_RETRY | VM_FAULT_COMPLETED))) in do_user_addr_fault()
1340 if (!(fault & VM_FAULT_RETRY)) { in do_user_addr_fault()
1345 if (fault & VM_FAULT_MAJOR) in do_user_addr_fault()
1349 if (fault_signal_pending(fault, regs)) { in do_user_addr_fault()
1375 * If for any reason at all we couldn't handle the fault, in do_user_addr_fault()
1377 * the fault. Since we never set FAULT_FLAG_RETRY_NOWAIT, if in do_user_addr_fault()
1382 * repeat the page fault later with a VM_FAULT_NOPAGE retval in do_user_addr_fault()
1387 fault = handle_mm_fault(vma, address, flags, regs); in do_user_addr_fault()
1389 if (fault_signal_pending(fault, regs)) { in do_user_addr_fault()
1401 /* The fault is fully completed (including releasing mmap lock) */ in do_user_addr_fault()
1402 if (fault & VM_FAULT_COMPLETED) in do_user_addr_fault()
1410 if (unlikely(fault & VM_FAULT_RETRY)) { in do_user_addr_fault()
1417 if (likely(!(fault & VM_FAULT_ERROR))) in do_user_addr_fault()
1426 if (fault & VM_FAULT_OOM) { in do_user_addr_fault()
1437 * userspace (which will retry the fault, or kill us if we got in do_user_addr_fault()
1442 if (fault & (VM_FAULT_SIGBUS|VM_FAULT_HWPOISON| in do_user_addr_fault()
1444 do_sigbus(regs, error_code, address, fault); in do_user_addr_fault()
1445 else if (fault & VM_FAULT_SIGSEGV) in do_user_addr_fault()
1472 /* Was the fault on a kernel-controlled part of the address space? */ in handle_page_fault()
1478 * User address page fault handling might have reenabled in handle_page_fault()
1497 * (asynchronous page fault mechanism). The event happens when a in DEFINE_IDTENTRY_RAW_ERRORCODE()
1522 * be invoked because a kernel fault on a user space address might in DEFINE_IDTENTRY_RAW_ERRORCODE()
1525 * In case the fault hit an RCU idle region the conditional entry in DEFINE_IDTENTRY_RAW_ERRORCODE()
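
The do_user_addr_fault() hits above (lines 1216-1445) trace a retry pattern: handle_mm_fault() is first attempted with FAULT_FLAG_VMA_LOCK, and if the result carries VM_FAULT_RETRY rather than VM_FAULT_COMPLETED, the fault is repeated on the slower mmap_lock path before error bits such as VM_FAULT_OOM or VM_FAULT_SIGBUS are dealt with. Below is a minimal, self-contained userspace C mock of that control flow; the flag values and the mock_handle_mm_fault() helper are invented for illustration and are not the kernel's definitions.

#include <stdio.h>

/* Invented flag values for illustration only; not the kernel's definitions. */
#define VM_FAULT_RETRY      0x1   /* retry needed on the slower locked path */
#define VM_FAULT_COMPLETED  0x2   /* fault fully handled, lock already dropped */
#define VM_FAULT_ERROR      0x4   /* error class (OOM/SIGBUS/SIGSEGV-like) */

/* Hypothetical stand-in for handle_mm_fault(): the first attempt asks for a retry. */
static unsigned int mock_handle_mm_fault(int attempt)
{
	return attempt == 0 ? VM_FAULT_RETRY : VM_FAULT_COMPLETED;
}

int main(void)
{
	/* First attempt: the fast path taken with FAULT_FLAG_VMA_LOCK above. */
	unsigned int fault = mock_handle_mm_fault(0);

	if (fault & VM_FAULT_RETRY) {
		/* The fast path asked for a retry: repeat the fault on the fallback path. */
		fault = mock_handle_mm_fault(1);
	}

	if (fault & VM_FAULT_COMPLETED)
		printf("fault fully completed\n");
	else if (fault & VM_FAULT_ERROR)
		printf("fault failed, a signal would be delivered\n");

	return 0;
}

In the real handler the second attempt runs under the mmap_lock and without FAULT_FLAG_VMA_LOCK; the mock collapses that into a second call purely to show the retry/complete/error branching visible in the listing.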