Lines Matching full:we
26 * We wait here for a poller to finish.
28 * If the poll runs on this CPU, then we yell loudly and return
32 * We wait until the poller is done and then recheck disabled and
33 * action (about to be disabled). Only if it's still active, we return
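
The four hits above quote the comment on the poller-wait helper in kernel/irq/spurious.c. As a rough illustration of the pattern they describe, here is a minimal userspace sketch: if the poll runs in the current context we complain and bail out, otherwise we spin until the poller is done and then recheck that the descriptor is still enabled and still has an action. wait_for_poll(), fake_desc and the atomics are made-up stand-ins for illustration, not the kernel's actual helper.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>
#include <sched.h>

struct fake_desc {
	atomic_bool in_progress;   /* poller currently running            */
	atomic_bool disabled;      /* about to be / already disabled      */
	atomic_bool has_action;    /* a handler is still registered       */
};

static _Thread_local bool this_thread_is_poller;

static bool wait_for_poll(struct fake_desc *desc)
{
	/* The poll runs in this context: yell loudly and return false. */
	if (this_thread_is_poller) {
		fprintf(stderr, "recursive poll detected\n");
		return false;
	}

	/* Wait until the poller is done ... */
	while (atomic_load(&desc->in_progress))
		sched_yield();

	/* ... then recheck disabled and action. Only if it's still
	 * active do we report success. */
	return !atomic_load(&desc->disabled) && atomic_load(&desc->has_action);
}
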
86 * All handlers must agree on IRQF_SHARED, so we test just the in try_one_irq()
209 * We need to take desc->lock here. note_interrupt() is called in __report_bad_irq()
210 * w/o desc->lock held, but IRQ_PROGRESS set. We might race in __report_bad_irq()
244 /* We didn't actually handle the IRQ - see if it was misrouted? */ in try_misrouted_irq()
249 * But for 'irqfixup == 2' we also do it for handled interrupts if in try_misrouted_irq()
260 * Since we don't get the descriptor lock, "action" can in try_misrouted_irq()
261 * change under us. We don't really care, but we don't in try_misrouted_irq()
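
The try_misrouted_irq() hits above describe a small decision: when the interrupt was not actually handled, see whether it was misrouted; with 'irqfixup == 2' the same check is also done for handled interrupts. A hedged userspace sketch of that decision follows; should_try_misroute() and is_poll_source are illustrative names inferred from the quoted comments, not the kernel's code.

#include <stdbool.h>

enum irqreturn { IRQ_NONE = 0, IRQ_HANDLED = 1 };

static int irqfixup;	/* 0 = off, 1 = fix up unhandled, 2 = poll even handled */

static bool should_try_misroute(enum irqreturn action_ret, bool is_poll_source)
{
	if (!irqfixup)
		return false;

	/* We didn't actually handle the IRQ - see if it was misrouted? */
	if (action_ret == IRQ_NONE)
		return true;

	/* For 'irqfixup == 2' we also do it for handled interrupts,
	 * here gated on an assumed "polled source" marker. */
	return irqfixup == 2 && is_poll_source;
}
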
286 * We cannot call note_interrupt from the threaded handler in note_interrupt()
287 * because we need to look at the compound of all handlers in note_interrupt()
289 * shared case we have no serialization against an incoming in note_interrupt()
290 * hardware interrupt while we are dealing with a threaded in note_interrupt()
293 * So in case a thread is woken, we just note the fact and in note_interrupt()
297 * handled an interrupt and we check whether that number in note_interrupt()
300 * We could handle all interrupts with the delayed by one in note_interrupt()
301 * mechanism, but for the non forced threaded case we'd just in note_interrupt()
309 * not we defer the spurious detection to the next in note_interrupt()
315 * We use bit 31 of thread_handled_last to in note_interrupt()
319 * and we have the guarantee that hard in note_interrupt()
331 * For simplicity we just set bit 31, as it is in note_interrupt()
332 * set in threads_handled_last as well. So we in note_interrupt()
333 * avoid extra masking. And we really do not in note_interrupt()
335 * count. We just care about the count being in note_interrupt()
336 * different than the one we saw before. in note_interrupt()
343 * Note: We keep the SPURIOUS_DEFERRED in note_interrupt()
344 * bit set. We are handling the in note_interrupt()
356 * We keep the SPURIOUS_DEFERRED bit in note_interrupt()
357 * set in threads_handled_last as we in note_interrupt()
366 * IRQ_HANDLED. So we don't care about the in note_interrupt()
370 * In theory we could/should check whether the in note_interrupt()
375 * handled we never trigger the spurious in note_interrupt()
378 * then we merely delay the spurious detection in note_interrupt()
387 * If we are seeing only the odd spurious IRQ caused by in note_interrupt()
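
The note_interrupt() hits above outline the deferred ("delayed by one") spurious detection for threaded handlers: the thread bumps a handled counter, bit 31 of threads_handled_last marks that a decision was deferred, and the next hard interrupt compares the counts. Below is a minimal userspace sketch of that bookkeeping, assuming simplified types and a boolean "count this as unhandled" return instead of the kernel's real flow; only SPURIOUS_DEFERRED and the threads_handled names come from the quoted comments.

#include <stdatomic.h>
#include <stdbool.h>

#define SPURIOUS_DEFERRED	0x80000000u	/* bit 31: decision deferred */

struct fake_desc {
	atomic_uint threads_handled;		/* bumped by the threaded handler */
	unsigned int threads_handled_last;	/* last count seen, plus bit 31   */
};

/* Threaded handler side: note that we actually handled an interrupt. */
static void thread_handled(struct fake_desc *desc)
{
	atomic_fetch_add(&desc->threads_handled, 1);
}

/*
 * Hard interrupt side. Returns true when this interrupt should be counted
 * as unhandled (a spurious candidate).
 */
static bool note_interrupt_sketch(struct fake_desc *desc, bool hard_handled,
				  bool thread_woken)
{
	if (thread_woken && !hard_handled) {
		unsigned int handled = atomic_load(&desc->threads_handled);

		/* For simplicity set bit 31 here too, as it is set in
		 * threads_handled_last as well, so no extra masking is
		 * needed; only the count changing matters. */
		handled |= SPURIOUS_DEFERRED;

		if (!(desc->threads_handled_last & SPURIOUS_DEFERRED)) {
			/* A thread was woken: just note the fact and defer
			 * the spurious decision to the next hard interrupt. */
			desc->threads_handled_last = handled;
			return false;
		}

		if (handled != desc->threads_handled_last) {
			/* The thread handled at least one interrupt since
			 * the last check: treat the deferred one as handled
			 * and keep SPURIOUS_DEFERRED set for the next round. */
			desc->threads_handled_last = handled;
			return false;
		}

		/* Count unchanged: the deferred interrupt went unhandled. */
		return true;
	}

	/* No deferred decision pending: the hard handler's result counts. */
	return !hard_handled;
}
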