
Searched full:faults (Results 1 – 25 of 538) sorted by relevance


/linux/drivers/iommu/
io-pgfault.c
3 * Handle device page faults
46 list_for_each_entry_safe(iopf, next, &group->faults, list) { in __iopf_free_group()
99 INIT_LIST_HEAD(&group->faults); in iopf_group_alloc()
101 list_add(&group->last_fault.list, &group->faults); in iopf_group_alloc()
103 /* See if we have partial faults for this group */ in iopf_group_alloc()
108 list_move(&iopf->list, &group->faults); in iopf_group_alloc()
110 list_add(&group->pending_node, &iopf_param->faults); in iopf_group_alloc()
113 group->fault_count = list_count_nodes(&group->faults); in iopf_group_alloc()
134 * managed PASID table. Therefore page faults for in find_fault_handler()
181 * them before reporting faults. A PASID Stop Marker (LRW = 0b100) doesn't
[all …]
/linux/Documentation/admin-guide/mm/
userfaultfd.rst
10 memory page faults, something otherwise only the kernel code could do.
19 regions of virtual memory with it. Then, any page faults which occur within the
26 1) ``read/POLLIN`` protocol to notify a userland thread of the faults
58 handle kernel page faults have been a useful tool for exploiting the kernel).
63 - Any user can always create a userfaultfd which traps userspace page faults
67 - In order to also trap kernel page faults for the address space, either the
80 to /dev/userfaultfd can always create userfaultfds that trap kernel page faults;
102 other than page faults are supported. These events are described in more
127 bitmask will specify to the kernel which kind of faults to track for
132 hugetlbfs), or all types of intercepted faults.
[all …]
/linux/Documentation/userspace-api/media/v4l/
ext-ctrls-flash.rst
63 presence of some faults. See V4L2_CID_FLASH_FAULT.
106 control may not be possible in presence of some faults. See
129 some faults. See V4L2_CID_FLASH_FAULT.
137 Faults related to the flash. The faults tell about specific problems
138 in the flash chip itself or the LEDs attached to it. Faults may
141 if the fault affects the flash LED. Exactly which faults have such
142 an effect is chip dependent. Reading the faults resets the control
/linux/arch/powerpc/platforms/powernv/
vas-fault.c
24 * 8MB FIFO can be used if expects more faults for each VAS
57 * It can raise a single interrupt for multiple faults. Expects OS to
58 * process all valid faults and return credit for each fault on user
78 * VAS can interrupt with multiple page faults. So process all in vas_fault_thread_fn()
92 * fifo_in_progress is set. Means these new faults will be in vas_fault_thread_fn()
153 * NX sees faults only with user space windows. in vas_fault_thread_fn()
176 * NX can generate an interrupt for multiple faults. So the in vas_fault_handler()
178 * entry. In case if NX sees continuous faults, it is possible in vas_fault_handler()
197 * FIFO upon page faults.
/linux/Documentation/gpu/rfc/
i915_vm_bind.rst
96 newer VM_BIND mode, the VM_BIND mode with GPU page faults and possible future
98 The older execbuf mode and the newer VM_BIND mode without page faults manages
99 residency of backing storage using dma_fence. The VM_BIND mode with page faults
108 In future, when GPU page faults are supported, we can potentially use a
124 When GPU page faults are supported, the execbuf path do not take any of these
180 Where GPU page faults are not available, kernel driver upon buffer invalidation
210 GPU page faults
212 GPU page faults when supported (in future), will only be supported in the
214 binding will require using dma-fence to ensure residency, the GPU page faults
240 faults enabled.
/linux/Documentation/driver-api/
dma-buf.rst
308 Recoverable Hardware Page Faults Implications
311 Modern hardware supports recoverable page faults, which has a lot of
317 means any workload using recoverable page faults cannot use DMA fences for
324 faults. Specifically this means implicit synchronization will not be possible.
325 The exception is when page faults are only used as migration hints and never to
327 faults on GPUs are limited to pure compute workloads.
331 job with a DMA fence and a compute workload using recoverable page faults are
362 to guarantee all pending GPU page faults are flushed.
365 allocating memory to repair hardware page faults, either through separate
369 robust to limit the impact of handling hardware page faults to the specific
[all …]
/linux/Documentation/admin-guide/cgroup-v1/
hugetlb.rst
25 …rsvd.max_usage_in_bytes # show max "hugepagesize" hugetlb reservations and no-reserve faults
26 …svd.usage_in_bytes # show current reservations and no-reserve faults for "hugepagesize"…
28 …tlb.<hugepagesize>.limit_in_bytes # set/show limit of "hugepagesize" hugetlb faults
116 For shared HugeTLB memory, both HugeTLB reservation and page faults are charged
127 When a HugeTLB cgroup goes offline with some reservations or faults still
138 complex compared to the tracking of HugeTLB faults, so it is significantly
/linux/drivers/gpu/drm/xe/
xe_vm_doc.h
214 * idle to ensure no faults. This done by waiting on all of VM's dma-resv slots.
311 * A VM in fault mode can be enabled on devices that support page faults. If
312 * page faults are enabled, using dma fences can potentially induce a deadlock:
331 * Page faults are received in the G2H worker under the CT lock which is in the
332 * path of dma fences (no memory allocations are allowed, faults require memory
333 * allocations) thus we cannot process faults under the CT lock. Another issue
334 * is faults issue TLB invalidations which require G2H credits and we cannot
339 * To work around the above issue with processing faults in the G2H worker, we
340 * sink faults to a buffer which is large enough to sink all possible faults on
341 * the GT (1 per hardware engine) and kick a worker to process the faults. Since
[all …]
xe_gt_types.h
243 * @usm.pf_queue: Page fault queue used to sync faults so faults can
245 * it can sync all possible faults (1 per physical engine).
246 * Multiple queues exists for page faults from different VMs are
262 * moved by worker which processes faults (consumer).
272 /** @usm.pf_queue.worker: to process page faults */
/linux/tools/testing/selftests/powerpc/mm/
stress_code_patching.sh
20 echo "Testing for spurious faults when mapping kernel memory..."
44 echo "FAILED: Mapping kernel memory causes spurious faults" 1>&2
47 echo "OK: Mapping kernel memory does not cause spurious faults"
pkey_exec_prot.c
52 /* Check if too many faults have occurred for a single test case */ in segv_handler()
54 sigsafe_err("got too many faults for the same address\n"); in segv_handler()
228 * This should generate two faults. First, a pkey fault in test()
256 * This should generate pkey faults based on IAMR bits which in test()
/linux/drivers/hwmon/
ltc4260.c
98 if (fault) /* Clear reported faults in chip register */ in ltc4260_bool_show()
110 * UV/OV faults are associated with the input voltage, and the POWER BAD and
111 * FET SHORT faults are associated with the output voltage.
156 /* Clear faults */ in ltc4260_probe()
ltc4222.c
112 if (fault) /* Clear reported faults in chip register */ in ltc4222_bool_show()
126 * UV/OV faults are associated with the input voltage, and power bad and fet
127 * faults are associated with the output voltage.
192 /* Clear faults */ in ltc4222_probe()
/linux/include/uapi/linux/
virtio_balloon.h
66 #define VIRTIO_BALLOON_S_MAJFLT 2 /* Number of major faults */
67 #define VIRTIO_BALLOON_S_MINFLT 3 /* Number of minor faults */
85 VIRTIO_BALLOON_S_NAMES_prefix "major-faults", \
86 VIRTIO_BALLOON_S_NAMES_prefix "minor-faults", \
userfaultfd.h
206 * UFFD_FEATURE_MINOR_HUGETLBFS indicates that minor faults
214 * faults would be provided and the offset within the page would not be
318 * NOTE: Write protecting a region (WP=1) is unrelated to page faults,
382 * Create a userfaultfd that can handle page faults only in user mode.
/linux/lib/
test_hmm_uapi.h
21 * @faults: (out) number of device page faults seen
28 __u64 faults; member
/linux/Documentation/ABI/testing/
sysfs-class-led-flash
54 Space separated list of flash faults that may have occurred.
55 Flash faults are re-read after strobing the flash. Possible
56 flash faults:
/linux/Documentation/virt/kvm/devices/
s390_flic.rst
18 - enable/disable for the guest transparent async page faults
58 Enables async page faults for the guest. So in case of a major page fault
64 Disables async page faults for the guest and waits until already pending
65 async page faults are done. This is necessary to trigger a completion interrupt
/linux/tools/perf/pmu-events/arch/x86/amdzen2/
floating-point.json
119 "BriefDescription": "Floating Point Dispatch Faults. YMM spill fault.",
125 "BriefDescription": "Floating Point Dispatch Faults. YMM fill fault.",
131 "BriefDescription": "Floating Point Dispatch Faults. XMM fill fault.",
137 "BriefDescription": "Floating Point Dispatch Faults. x87 fill fault.",
/linux/Documentation/arch/arm64/
memory-tagging-extension.rst
58 Tag Check Faults
75 thread, asynchronously following one or multiple tag check faults,
87 - ``PR_MTE_TCF_NONE``  - *Ignore* tag check faults
92 If no modes are specified, tag check faults are ignored. If a single
172 - No tag checking modes are selected (tag check faults ignored)
321 * tag check faults (based on per-CPU preference) and allow all
/linux/tools/perf/pmu-events/arch/x86/amdzen3/
floating-point.json
118 "BriefDescription": "Floating Point Dispatch Faults. YMM spill fault.",
124 "BriefDescription": "Floating Point Dispatch Faults. YMM fill fault.",
130 "BriefDescription": "Floating Point Dispatch Faults. XMM fill fault.",
136 "BriefDescription": "Floating Point Dispatch Faults. x87 fill fault.",
/linux/drivers/gpu/drm/nouveau/
nouveau_svm.c
344 * client, with replayable faults enabled). in nouveau_svmm_init()
754 /* Sort parsed faults by instance pointer to prevent unnecessary in nouveau_svm_fault()
756 * type to reduce the amount of work when handling the faults. in nouveau_svm_fault()
775 /* Process list of faults. */ in nouveau_svm_fault()
786 /* Cancel any faults from non-SVM channels. */ in nouveau_svm_fault()
793 /* We try and group handling of faults within a small in nouveau_svm_fault()
804 * permissions based on pending faults. in nouveau_svm_fault()
848 * same SVMM as faults are ordered by access type such in nouveau_svm_fault()
851 * ie. WRITE faults appear first, thus any handling of in nouveau_svm_fault()
852 * pending READ faults will already be satisfied. in nouveau_svm_fault()
[all …]
/linux/Documentation/i2c/
fault-codes.rst
11 Not all fault reports imply errors; "page faults" should be a familiar
13 faults. There may be fancier recovery schemes that are appropriate in
86 about probe faults other than ENXIO and ENODEV.)
/linux/arch/x86/include/asm/
kfence.h
50 * We need to avoid IPIs, as we may get KFENCE allocations or faults in kfence_protect_page()
53 * lazy fault handling takes care of faults after the page is PRESENT. in kfence_protect_page()
/linux/Documentation/scheduler/
sched-debug.rst
14 high then the rate the kernel samples for NUMA hinting faults may be
35 Higher scan rates incur higher system overhead as page faults must be
