Lines Matching full:flush

48  *	More scalable flush, from Andi Kleen
50 * Implement flush IPI via CALL_FUNCTION_VECTOR, Alex Shi
61 /* Bits to set when tlbstate and flush are (re)initialized */
69 * its own ASID and flush/restart when we run out of ASID space.
190 * forces a TLB flush when the context is loaded.
206 /* No need to flush the current asid */ in clear_asid_other()
211 * this asid, we do a flush: in clear_asid_other()
293 * MAX_ASID_AVAILABLE, a global TLB flush guarantees that previously
296 * This way the global flush only needs to happen at ASID rollover
306 * The TLB flush above makes it safe to re-use the previously in reset_global_asid_space()
414 /* The global ASID can be re-used only after a flush at wrap-around. */ in mm_free_global_asid()
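The clear_asid_other(), reset_global_asid_space() and mm_free_global_asid() matches above all describe the same rule: address-space IDs are handed out until the space is exhausted, and a global TLB flush at wrap-around is what makes previously used IDs safe to hand out again. Below is a minimal userspace sketch of that rollover rule; the names (MAX_ASID, alloc_asid, flush_all_asids) are made up for illustration and are not the kernel's actual allocator.

#include <stdio.h>

/* Hypothetical limit; the real ASID/PCID limits live in the kernel. */
#define MAX_ASID 8                  /* pretend the hardware offers 8 ASIDs */

static unsigned int next_asid = 1;  /* 0 stays reserved                    */
static unsigned long global_flush_count;

/* Stand-in for the global TLB flush performed at rollover. */
static void flush_all_asids(void)
{
        global_flush_count++;
        printf("global flush #%lu: previously handed-out ASIDs are now stale\n",
               global_flush_count);
}

/*
 * Hand out the next ASID.  When the space runs out, flush everything
 * first so the old numbers can be re-used safely.
 */
static unsigned int alloc_asid(void)
{
        if (next_asid >= MAX_ASID) {
                flush_all_asids();      /* rollover: old ASIDs may be reused */
                next_asid = 1;
        }
        return next_asid++;
}

int main(void)
{
        for (int i = 0; i < 20; i++)
                printf("task %2d -> asid %u\n", i, alloc_asid());
        return 0;
}

The only point of the sketch is the ordering: the flush has to happen before any previously used ASID is reissued, which is why the global flush only needs to happen at rollover.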
486 * send a TLB flush IPI. The IPI should cause stragglers in finish_asid_transition()
540 * Given an ASID, flush the corresponding user ASID. We can delay this
591 * If so, our callers still expect us to flush the TLB, but there in leave_mm()
620 * affinity settings or CPU hotplug. This is part of the paranoid L1D flush
631 /* Flush L1D if the outgoing task requests it */ in l1d_flush_evaluate()
635 /* Check whether the incoming task opted in for L1D flush */ in l1d_flush_evaluate()
731 * Only flush when switching to a user space task with a in cond_mitigation()
741 * Flush L1D when the outgoing task requested it and/or in cond_mitigation()
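The l1d_flush_evaluate()/cond_mitigation() matches describe the opt-in L1D flush on context switch: flush when the outgoing task requested it and/or the incoming task opted in. A hedged sketch of that either-side check follows; the wants_l1d_flush flag is a made-up stand-in for the kernel's real per-task bookkeeping.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical per-task state. */
struct task {
        const char *name;
        bool wants_l1d_flush;   /* task opted in to L1D flushing */
};

/* Stand-in for the actual L1D flush sequence. */
static void flush_l1d(void)
{
        puts("  -> L1D flushed");
}

/*
 * Flush when the outgoing task requested it and/or the incoming task
 * opted in, mirroring the either-side rule in the comments above.
 */
static void maybe_flush_l1d(const struct task *prev, const struct task *next)
{
        printf("switch %s -> %s\n", prev->name, next->name);
        if (prev->wants_l1d_flush || next->wants_l1d_flush)
                flush_l1d();
}

int main(void)
{
        struct task a = { "editor",  false };
        struct task b = { "sandbox", true  };

        maybe_flush_l1d(&a, &b);        /* incoming opted in -> flush    */
        maybe_flush_l1d(&b, &a);        /* outgoing opted in -> flush    */
        maybe_flush_l1d(&a, &a);        /* neither           -> no flush */
        return 0;
}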
815 * a global flush to minimize the chance of corruption. in switch_mm_irqs_off()
879 * process. No TLB flush required. in switch_mm_irqs_off()
885 * Read the tlb_gen to check whether a flush is needed. in switch_mm_irqs_off()
1077 * flush.
1102 /* Disable LAM, force ASID 0 and force a TLB flush. */ in initialize_tlbstate_and_flush()
1119 * TLB fills that happen after we flush the TLB are ordered after we
1121 * because all x86 flush operations are serializing and the
1132 * - f->new_tlb_gen: the generation that the requester of the flush in flush_tlb_func()
1176 * We're in lazy mode. We need to at least flush our in flush_tlb_func()
1179 * slower than a minimal flush, just switch to init_mm. in flush_tlb_func()
1210 * happen if two concurrent flushes happen -- the first flush to in flush_tlb_func()
1212 * the second flush. in flush_tlb_func()
1222 * This does not strictly imply that we need to flush (it's in flush_tlb_func()
1224 * going to need to flush in the very near future, so we might in flush_tlb_func()
1227 * The only question is whether to do a full or partial flush. in flush_tlb_func()
1229 * We do a partial flush if requested and two extra conditions in flush_tlb_func()
1235 * f->new_tlb_gen == 3, then we know that the flush needed to bring in flush_tlb_func()
1236 * us up to date for tlb_gen 3 is the partial flush we're in flush_tlb_func()
1240 * are two concurrent flushes. The first is a full flush that in flush_tlb_func()
1242 * flush that changes context.tlb_gen from 2 to 3. If they get in flush_tlb_func()
1247 * 1 without the full flush that's needed for tlb_gen 2. in flush_tlb_func()
1252 * to do a partial flush if that won't bring our TLB fully up to in flush_tlb_func()
1253 * date. By doing a full flush instead, we can increase in flush_tlb_func()
1255 * avoid another flush in the very near future. in flush_tlb_func()
1260 /* Partial flush */ in flush_tlb_func()
1263 /* Partial flush cannot have invalid generations */ in flush_tlb_func()
1266 /* Partial flush must have valid mm */ in flush_tlb_func()
1278 /* Full flush. */ in flush_tlb_func()
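Taken together, the flush_tlb_func() matches lay out the generation bookkeeping: each CPU remembers the last generation it flushed to (local_tlb_gen), the mm carries the latest generation (mm_tlb_gen), and each request carries f->new_tlb_gen. A partial (ranged) flush is only valid when this single request closes the whole gap; otherwise a full flush is done, which also folds in any flushes already queued behind this one. The following is a simplified model of that decision, with assumed names (flush_req, use_partial_flush) rather than the kernel's own structures.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TLB_FLUSH_ALL  ((uintptr_t)-1)

/* Simplified view of the state the comments above reason about. */
struct flush_req {
        uint64_t  new_tlb_gen;  /* generation this request brings us to */
        uintptr_t start, end;   /* range, or TLB_FLUSH_ALL              */
};

/*
 * Decide between a partial (ranged) and a full flush: a partial flush
 * is only safe when this one request closes the entire gap between
 * what the CPU has seen (local_tlb_gen) and where the mm is (mm_tlb_gen).
 */
static bool use_partial_flush(uint64_t local_tlb_gen, uint64_t mm_tlb_gen,
                              const struct flush_req *f)
{
        if (f->end == TLB_FLUSH_ALL)
                return false;           /* the caller asked for everything */

        if (f->new_tlb_gen != local_tlb_gen + 1)
                return false;           /* taking only this range would
                                           skip an earlier (full) flush   */

        if (f->new_tlb_gen != mm_tlb_gen) {
                /*
                 * More flushes are already queued behind this one; a
                 * full flush catches us all the way up and may spare a
                 * flush in the very near future.
                 */
                return false;
        }
        return true;
}

int main(void)
{
        struct flush_req partial = {
                .new_tlb_gen = 3, .start = 0x1000, .end = 0x2000,
        };

        /* local 2 -> 3 and mm is at 3: the ranged flush is sufficient. */
        printf("%s\n", use_partial_flush(2, 3, &partial) ? "partial" : "full");

        /* local 1: gen 2 was a full flush we must not skip -> full.    */
        printf("%s\n", use_partial_flush(1, 3, &partial) ? "partial" : "full");

        /* mm already at 4: fold the pending work into one full flush.  */
        printf("%s\n", use_partial_flush(2, 4, &partial) ? "partial" : "full");
        return 0;
}

The three calls in main() reproduce the 1 -> 2 -> 3 example from the comments: the ranged flush is only taken when it provably brings the CPU fully up to date.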
1313 /* No mm means kernel memory flush. */ in should_flush_tlb()
1319 * either the prev or next mm. Assume the worst and flush. in should_flush_tlb()
1352 * cases in which a remote TLB flush will be traced, but eventually in native_flush_tlb_multi()
1364 * CPUs in lazy TLB mode. They will flush the CPU themselves in native_flush_tlb_multi()
1390 * flush is about 100 ns, so this caps the maximum overhead at
1420 * If the number of flushes is so large that a full flush in get_flush_tlb_info()
1421 * would be faster, do a full flush. in get_flush_tlb_info()
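The get_flush_tlb_info() matches describe the cutover heuristic: a single-page flush costs roughly 100 ns, so past some page count it is cheaper to flush everything. A sketch of that clamp is below; single_page_flush_ceiling here is an assumed, illustrative value, not necessarily the kernel's current tunable.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT     12
#define PAGE_SIZE      (1UL << PAGE_SHIFT)
#define TLB_FLUSH_ALL  ((uintptr_t)-1)

/* Assumed cutoff for illustration; the kernel tunes its own ceiling. */
static unsigned long single_page_flush_ceiling = 33;

/*
 * If flushing the range page by page would cost more than a full
 * flush, widen the request to a full flush instead.
 */
static void clamp_flush_range(uintptr_t *start, uintptr_t *end)
{
        unsigned long nr_pages = (*end - *start) >> PAGE_SHIFT;

        if (nr_pages > single_page_flush_ceiling) {
                *start = 0;
                *end = TLB_FLUSH_ALL;
        }
}

int main(void)
{
        uintptr_t s = 0x100000, e = s + 8 * PAGE_SIZE;

        clamp_flush_range(&s, &e);
        printf("small range: %s\n", e == TLB_FLUSH_ALL ? "full" : "ranged");

        s = 0x100000;
        e = s + 1024 * PAGE_SIZE;
        clamp_flush_range(&s, &e);
        printf("large range: %s\n", e == TLB_FLUSH_ALL ? "full" : "ranged");
        return 0;
}

This caps the maximum per-flush overhead, as the comment around the 100 ns figure notes.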
1465 * a local TLB flush is needed. Optimize this use-case by calling in flush_tlb_mm_range()
1504 /* Flush an arbitrarily large range of memory with INVLPGB. */
1514 * flush. Break up large flushes. in invlpgb_kernel_range_flush()
1528 /* flush the range one 'invlpg' at a time */ in do_kernel_range_flush()
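do_kernel_range_flush() walks the range and invalidates it page by page, while invlpgb_kernel_range_flush() has to break very large requests into chunks. A userspace model of the page-by-page walk is below; flush_one_page() is a stub standing in for the real 'invlpg' instruction.

#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)

/* Stub for the per-page invalidation ('invlpg' in the real code). */
static void flush_one_page(uintptr_t addr)
{
        printf("invlpg %#lx\n", (unsigned long)addr);
}

/* Walk the range and invalidate it one page at a time. */
static void kernel_range_flush(uintptr_t start, uintptr_t end)
{
        for (uintptr_t addr = start; addr < end; addr += PAGE_SIZE)
                flush_one_page(addr);
}

int main(void)
{
        kernel_range_flush(0xffff0000, 0xffff0000 + 4 * PAGE_SIZE);
        return 0;
}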
1589 * Flush one page in the kernel mapping
1602 * __flush_tlb_one_user() will flush the given address for the current in flush_tlb_one_kernel()
1604 * not flush it for other address spaces. in flush_tlb_one_kernel()
1612 * See above. We need to propagate the flush to all other address in flush_tlb_one_kernel()
1621 * Flush one page in the user mapping
1628 /* Flush 'addr' from the kernel PCID: */ in native_flush_tlb_one_user()
1631 /* If PTI is off there is no user PCID and nothing to flush. */ in native_flush_tlb_one_user()
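The native_flush_tlb_one_user() matches cover the PTI split: the address is always flushed from the kernel PCID, and only when PTI gives user space its own PCID is there a second flush to do. A toy model of that branch follows; invalidate() is a stand-in for the real per-PCID invalidation work.

#include <stdbool.h>
#include <stdio.h>

/* Toy stand-in for invalidating one address in one address space. */
static void invalidate(const char *space, unsigned long addr)
{
        printf("flush %#lx from %s PCID\n", addr, space);
}

/*
 * Flush one user-space address: always from the kernel PCID, and
 * additionally from the user PCID when PTI gives user space its own.
 */
static void flush_tlb_one_user(unsigned long addr, bool pti_enabled)
{
        invalidate("kernel", addr);

        if (!pti_enabled)
                return;         /* no user PCID, nothing more to flush */

        invalidate("user", addr);
}

int main(void)
{
        flush_tlb_one_user(0x7f0000001000UL, false);
        flush_tlb_one_user(0x7f0000001000UL, true);
        return 0;
}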
1655 * Flush everything
1685 * Flush the entire current user mapping
1708 * Flush everything
1722 * !PGE -> !PCID (setup_pcid()), thus every flush is total. in __flush_tlb_all()
1739 * a local TLB flush is needed. Optimize this use-case by calling in arch_tlbbatch_flush()
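The closing matches concern flushing everything: with PGE, global mappings survive a CR3 write, so a CR4.PGE toggle is needed to flush them; without PGE the kernel also runs without PCID (the __flush_tlb_all() comment notes !PGE -> !PCID), so a plain CR3 reload is already a total flush. A sketch of that choice, with stubs in place of the privileged register writes:

#include <stdbool.h>
#include <stdio.h>

/* Stubs for the privileged operations the real code performs. */
static void toggle_cr4_pge(void)
{
        puts("toggle CR4.PGE: flush everything, including global pages");
}

static void reload_cr3(void)
{
        puts("write CR3: flush the non-global TLB entries");
}

/*
 * Flush the whole TLB.  With PGE, global entries survive a CR3 write,
 * so the CR4.PGE toggle is required.  Without PGE there is no PCID
 * either, so a CR3 reload is already total.
 */
static void flush_tlb_all(bool have_pge)
{
        if (have_pge)
                toggle_cr4_pge();
        else
                reload_cr3();
}

int main(void)
{
        flush_tlb_all(true);
        flush_tlb_all(false);
        return 0;
}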