
Searched refs:lazy (Results 1 – 25 of 34) sorted by relevance


/linux/Documentation/translations/zh_CN/mm/
active_mm.rst 64 counter, i.e. how many "real address space users" there are; the other is the "mm_count" counter, i.e. "lazy
68 a lazy user was still active, so what you actually get is a situation where an address space is used **only**
69 by lazy users. That is usually a short-lived state, because once that thread gets scheduled for a
73 "init_mm" should be considered just a "lazy context when no other context is available"; in fact it mainly
/linux/kernel/rcu/
tree_nocb.h 246 * can elapse before lazy callbacks are flushed. Lazy callbacks
248 * however, LAZY_FLUSH_JIFFIES will ensure no lazy callbacks are
315 unsigned long j, bool lazy) in rcu_nocb_do_flush_bypass()
332 * If the new CB requested was a lazy one, queue it onto the main in rcu_nocb_do_flush_bypass()
335 * the lazy CB is ordered with the existing CBs in the bypass list. in rcu_nocb_do_flush_bypass()
337 if (lazy && rhp) { in rcu_nocb_do_flush_bypass()
359 unsigned long j, bool lazy) in rcu_nocb_flush_bypass()
365 return rcu_nocb_do_flush_bypass(rdp, rhp, j, lazy);
383 * For lazy-only bypass queues, use the lazy flush
309 rcu_nocb_do_flush_bypass(struct rcu_data * rdp,struct rcu_head * rhp_in,unsigned long j,bool lazy) rcu_nocb_do_flush_bypass() argument
353 rcu_nocb_flush_bypass(struct rcu_data * rdp,struct rcu_head * rhp,unsigned long j,bool lazy) rcu_nocb_flush_bypass() argument
395 rcu_nocb_try_bypass(struct rcu_data * rdp,struct rcu_head * rhp,bool * was_alldone,unsigned long flags,bool lazy) rcu_nocb_try_bypass() argument
596 call_rcu_nocb(struct rcu_data * rdp,struct rcu_head * head,rcu_callback_t func,unsigned long flags,bool lazy) call_rcu_nocb() argument
656 bool lazy = false; nocb_gp_wait() local
1666 rcu_nocb_flush_bypass(struct rcu_data * rdp,struct rcu_head * rhp,unsigned long j,bool lazy) rcu_nocb_flush_bypass() argument
1672 call_rcu_nocb(struct rcu_data * rdp,struct rcu_head * head,rcu_callback_t func,unsigned long flags,bool lazy) call_rcu_nocb() argument
[all...]
tree.h 295 long lazy_len; /* Length of buffered lazy callbacks. */
504 unsigned long j, bool lazy);
506 rcu_callback_t func, unsigned long flags, bool lazy);
tree.c 3106 bool lazy; in __call_rcu_common()
3137 lazy = lazy_in && !rcu_async_should_hurry(); in __call_rcu_common()
3153 call_rcu_nocb(rdp, head, func, flags, lazy);
3165 * flush all lazy callbacks (including the new one) to the main ->cblist while
3194 * By default the callbacks are 'lazy' and are kept hidden from the main
3771 * if it's fully lazy. in rcu_barrier_entrain()
3815 * pending lazy RCU callbacks. in rcu_barrier()
3094 bool lazy; __call_rcu_common() local
/linux/Documentation/mm/
active_mm.rst 5 Note, the mm_count refcount may no longer include the "lazy" users
7 with CONFIG_MMU_LAZY_TLB_REFCOUNT=n. Taking and releasing these lazy
63 and a "mm_count" counter that is the number of "lazy" users (ie anonymous
67 user exited on another CPU while a lazy user was still active, so you do
69 lazy users. That is often a short-lived state, because once that thread
74 more. "init_mm" should be considered just a "lazy context when no other
/linux/drivers/opp/
of.c 139 list_del(&opp_table->lazy); in _opp_table_free_required_tables()
151 bool lazy = false; in _opp_table_alloc_required_tables() local
188 lazy = true; in _opp_table_alloc_required_tables()
192 if (lazy) { in _opp_table_alloc_required_tables()
198 list_add(&opp_table->lazy, &lazy_opp_tables); in _opp_table_alloc_required_tables()
355 list_for_each_entry_safe(opp_table, temp, &lazy_opp_tables, lazy) { in lazy_link_required_opp_table()
356 bool lazy = false; in lazy_link_required_opp_table() local
380 lazy = true; in lazy_link_required_opp_table()
390 lazy = false; in lazy_link_required_opp_table()
396 if (!lazy) { in lazy_link_required_opp_table()
[all …]
opp.h 207 struct list_head node, lazy; member
268 return unlikely(!list_empty(&opp_table->lazy)); in lazy_linking_pending()
/linux/drivers/crypto/intel/qat/qat_common/
icp_qat_hw_20_comp.h 71 __u16 lazy;
104 QAT_FIELD_SET(val32, csr.lazy, in ICP_QAT_FW_COMP_20_BUILD_CONFIG_UPPER()
69 __u16 lazy; global() member
/linux/Documentation/arch/arm/
kernel_mode_neon.rst 30 The NEON/VFP register file is managed using lazy preserve (on UP systems) and
31 lazy restore (on both SMP and UP systems). This means that the register file is
45 mode will hit the lazy restore trap upon next use. This is handled by the
/linux/mm/
vmalloc.c 944 struct rb_list lazy; member
2357 if (RB_EMPTY_ROOT(&vn->lazy.root)) in __purge_vmap_area_lazy()
2360 spin_lock(&vn->lazy.lock); in __purge_vmap_area_lazy()
2361 WRITE_ONCE(vn->lazy.root.rb_node, NULL); in __purge_vmap_area_lazy()
2362 list_replace_init(&vn->lazy.head, &vn->purge_list); in __purge_vmap_area_lazy()
2363 spin_unlock(&vn->lazy.lock); in __purge_vmap_area_lazy()
2460 spin_lock(&vn->lazy.lock); in free_vmap_area_noflush()
2461 insert_vmap_area(va, &vn->lazy.root, &vn->lazy.head); in free_vmap_area_noflush()
2462 spin_unlock(&vn->lazy.lock); in free_vmap_area_noflush()
[all...]
Kconfig 1457 The architecture uses the lazy MMU mode. This allows changes to
1462 tristate "KUnit tests for the lazy MMU mode" if !KUNIT_ALL_TESTS
1467 Enable this option to check that the lazy MMU mode interface behaves
/linux/drivers/gpu/drm/nouveau/
nouveau_fence.h 32 int nouveau_fence_wait(struct nouveau_fence *, bool lazy, bool intr);
nouveau_fence.c 324 nouveau_fence_wait(struct nouveau_fence *fence, bool lazy, bool intr) in nouveau_fence_wait()
328 if (!lazy) in nouveau_fence_wait()
323 nouveau_fence_wait(struct nouveau_fence * fence,bool lazy,bool intr) nouveau_fence_wait() argument
/linux/tools/perf/Documentation/
perf-probe.txt 171 3) Define event based on source file with lazy pattern
182 'FUNC' specifies a probed function name, and it may have one of the following options; '+OFFS' is the offset from function entry address in bytes, ':RLN' is the relative-line number from function entry line, and '%return' means that it probes function return. And ';PTN' means lazy matching pattern (see LAZY MATCHING). Note that ';PTN' must be the end of the probe point definition. In addition, '@SRC' specifies a source file which has that function.
183 It is also possible to specify a probe point by the source line number or lazy matching by using 'SRC:ALN' or 'SRC;PTN' syntax, where 'SRC' is the source file path, ':ALN' is the line number and ';PTN' is the lazy matching pattern.
235 The lazy line matching is similar to glob matching, but spaces are ignored in both the pattern and the target. So it accepts wildcards ('*', '?') and character classes (e.g. [a-z], [!A-Z]).
/linux/drivers/gpu/drm/vmwgfx/
vmwgfx_fence.c 230 int vmw_fence_obj_wait(struct vmw_fence_obj *fence, bool lazy, in vmw_fence_obj_wait()
465 ret = vmw_fence_obj_wait(fence, arg->lazy, true, timeout); in vmw_fence_obj_wait_ioctl()
229 vmw_fence_obj_wait(struct vmw_fence_obj * fence,bool lazy,bool interruptible,unsigned long timeout) vmw_fence_obj_wait() argument
vmwgfx_drv.h 1022 bool lazy,
/linux/include/uapi/drm/
vmwgfx_drm.h 647 __s32 lazy; member
/linux/Documentation/arch/parisc/
registers.rst 18 CR10 (CCR) lazy FPU saving*
/linux/Documentation/filesystems/fuse/
fuse.rst 27 umounted. Note that detaching (or lazy umounting) the filesystem
213 filesystem is still attached (it hasn't been lazy unmounted)
/linux/Documentation/filesystems/
autofs-mount-control.rst 23 Currently autofs uses "umount -l" (lazy umount) to clear active mounts
24 at restart. While using lazy umount works for most cases, anything that
/linux/Documentation/trace/rv/
monitor_sched.rst 187 This is not valid for the *lazy* variant of the flag, which causes only
/linux/Documentation/arch/powerpc/
transactional_memory.rst 94 Examples are glibc's getpid() and lazy symbol resolution.
/linux/include/hyperv/
hvgdk_mini.h 683 u64 lazy : 1; member
/linux/Documentation/admin-guide/mm/
numa_memory_policy.rst 138 support allocation at fault time--a.k.a lazy allocation--so hugetlbfs
140 Although hugetlbfs segments now support lazy allocation, their support
/linux/arch/sparc/lib/
checksum_32.S 411 addx %g5, %g0, %g5 ! I am now to lazy to optimize this (question it
