Lines Matching full:poll
77 kstat_named_t polllistmiss; /* failed to find a cached poll list */
102 /* Contention lock & list for preventing deadlocks in recursive /dev/poll. */
118 * The per-thread poll state consists of
129 * poll when a cached fd is closed. This is protected by uf_lock.
148 * Whenever possible the design relies on the fact that the poll cache state
149 * is per thread; thus for both poll and exit it is self-synchronizing.
154 * The two key locks in poll proper are ps_lock and pc_lock.
156 * The ps_lock is used for synchronization between poll, (lwp_)exit and close
158 * This lock is held through most of poll() except where poll sleeps
160 * of poll.
162 * structures (which are accessed by poll, pollwakeup, and polltime)
165 * Those exceptions occur in poll when first allocating the per-thread state,
166 * when poll grows the number of polldat (never shrinks), and when
168 * pollheads or fpollinfo to the thread's poll state.
170 * The poll(2) system call is the only path on which ps_lock and pc_lock are both
175 * that poll acquires these locks in the order of pc_lock and then PHLOCK
179 * from exiting and freeing all of the poll related state. This is done
186 * issues. Poll holds ps_lock and/or pc_lock across calls to getf/releasef
187 * which acquire uf_lock. The poll cleanup in close needs to hold uf_lock
188 * to prevent poll or exit from doing a delfpollinfo after which the thread
190 * the poll cache state. The solution is to use pc_busy and do the close
193 * This prevents the per-thread poll related state from being freed.
198 * poll has been woken up and checked getf() again).
200 * When removing a polled fd from poll cache, the fd is always removed
230 * (which hold poll locks on entry to xx_poll(), then acquire foo)
231 * and pollwakeup() threads (which hold foo, then acquire poll locks).
233 * pollunlock(*cookie) releases whatever poll locks the current thread holds,
236 * pollrelock(cookie) reacquires previously dropped poll locks;
242 * /dev/poll is in progress and pollcache locks cannot be dropped. Callers
255 * of a recursive /dev/poll operation. in pollunlock()
264 * t_pollcache is set by /dev/poll and event ports (port_fd.c). in pollunlock()
265 * If the pollrelock/pollunlock is called as a result of poll(2), in pollunlock()
301 * t_pollcache is set by /dev/poll and event ports (port_fd.c). in pollrelock()
302 * If the pollrelock/pollunlock is called as a result of poll(2), in pollrelock()
382 * Check to see if this one just wants to use poll() as a timeout. in poll_common()
422 * NOTE: for performance, buffers are saved across poll() calls. in poll_common()
423 * The theory is that if a process polls heavily, it tends to poll in poll_common()
448 * poll lists are keyed by the address of the passed-in fds in poll_common()
450 * poll cache list entry. As such, we elect not to support in poll_common()
451 * NULL as a valid (user) memory address and fail the poll() in poll_common()
459 * If this thread polls for the first time, allocate ALL poll in poll_common()
460 * cache data structures and cache the poll fd list. This in poll_common()
462 * (i.e. using poll as timeout()) don't need this memory. in poll_common()
470 * poll and cache this poll fd list in ps_pcacheset[0]. in poll_common()
481 * Not first time polling. Select a cached poll list by in poll_common()
494 * difference of the current poll in poll_common()
498 * of cached poll list and cache content. in poll_common()
519 * this poll list. in poll_common()
532 * We failed to find a matching cached poll fd list. in poll_common()
555 * the pcache before updating poll bitmap. in poll_common()
578 * If you get here, the poll of fds was unsuccessful. in poll_common()
601 * Continue around and poll fds again. in poll_common()
660 * This is the system call trap that poll(),
706 * Clean up any state left around by poll(2). Called when a thread exits.
718 * free up all cached poll fds in pollcleanup()
721 /* this pollstate is used by /dev/poll */ in pollcleanup()
760 * pollwakeup() - poke threads waiting in poll() for some event
860 * and is trying to get out of poll(). in pollwakeup()
903 * thread attempting to poll this port are blocked. There can be in pollwakeup()
1030 * walk through the poll fd lists to see if they are identical. This is an
1031 * expensive operation and should not be done more than once for each poll()
1036 * Zeroing out the revents field of each entry in current poll list is
1037 * required by the poll man page.
1039 * Since the events field of cached list has illegal poll events filtered
1058 * Filter out invalid poll events while we are in in pcacheset_cmp()
1081 * This routine returns a pointer to a cached poll fd entry, or NULL if it
1249 * we come here because of an earlier close() on this cached poll fd. in pcacheset_invalidate()
1290 * Insert poll fd into the pollcache, and add poll registration.
1309 * The poll caching uses the existing VOP_POLL interface. If there in pcache_insert()
1311 * one is sleeping in poll" flag. When the polled events happen in pcache_insert()
1341 * If this entry was used to cache a poll fd which was closed, and in pcache_insert()
1373 * by different threads. Unless this is a new first poll(), pd_events in pcache_insert()
1375 * is no way to cancel that event. In that case, poll degrades to its in pcache_insert()
1376 * old form -- polling on this fd every time poll() is called. The in pcache_insert()
1385 * appearance in poll list. If this is called from pcacheset_cache_list, in pcache_insert()
1398 * xf_position records the fd's first appearance in poll list in pcache_insert()
1578 * resolve the difference between the current poll list and a cached one.
1610 * the length of the poll list has changed. Allocate a new in pcacheset_resolve()
1619 * The comparison is done on the current poll list and the in pcacheset_resolve()
1621 * cached list for next poll. in pcacheset_resolve()
1675 * poll list. Find the next one on the in pcacheset_resolve()
1724 * after we updated cache poll list in pcacheset_resolve()
1732 * entry in cached poll list in pcacheset_resolve()
1739 * effect on next poll(). in pcacheset_resolve()
1854 * will be called on this fd in next poll. in pcacheset_resolve()
1860 * make sure cross reference between cached poll lists and cached in pcacheset_resolve()
1861 * poll fds are correct. in pcacheset_resolve()
1898 * read the bitmap and poll on fds corresponding to the '1' bits. The ps_lock
1928 * only poll fds which may have events in pcache_poll()
1943 * A bitmap caches poll state information of in pcache_poll()
1944 * multiple poll lists. Call VOP_POLL only if in pcache_poll()
1945 * the bit corresponds to an fd in this poll in pcache_poll()
1961 * blocked in poll. This ensures that we don't in pcache_poll()
1972 * in the poll list. Find all of them. in pcache_poll()
1996 * in the poll list. Find all of them. in pcache_poll()
2011 * Since we no longer hold poll head lock across in pcache_poll()
2018 * flag when it sees the poll may block. Pollwakeup() in pcache_poll()
2025 * the pd_events is the union of all cached poll events in pcache_poll()
2027 * how the polled device sets the "poll pending" in pcache_poll()
2035 * poll entry. To prevent close() coming in to clear in pcache_poll()
2067 * in the poll list. This is rare but in pcache_poll()
2106 * wakeup on this device. Re-poll once. in pcache_poll()
2115 * in the poll list. This is rare but in pcache_poll()
2144 * Going through the poll list without much locking. Poll all fds and
2164 * cache the new poll list in pollcacheset. in pcacheset_cache_list()
2173 * We have saved a copy of current poll fd list in one pollcacheset. in pcacheset_cache_list()
2181 * We also filter out the illegal poll events in the event in pcacheset_cache_list()
2182 * field for the cached poll list/set. in pcacheset_cache_list()
2195 * invalidate this cache entry in the cached poll list in pcacheset_cache_list()
2210 * poll list. Undo everything. in pcacheset_cache_list()
2249 * set the event in cached poll lists to POLLCLOSED. This invalidates in pcache_clean_entry()
2250 * the cached poll fd entry in that poll list, which will force a in pcache_clean_entry()
2251 * removal of this cached entry in the next poll(). The cleanup is done in pcache_clean_entry()
2425 * to calculate a final depth for the local /dev/poll in pollstate_contend()
2492 * called on a recursion-enabled /dev/poll handle from outside in pollstate_enter()
2493 * the poll() or /dev/poll codepaths. in pollstate_enter()
2556 * Complete phase 2 of cached poll fd cleanup. Call pcache_clean_entry to mark
2557 * the pcacheset events field POLLCLOSED to force the next poll() to remove
2559 * lwp block in poll() needs the info to return. Wakeup anyone blocked in
2560 * poll and let the exiting lwp go. No lock is held upon entry. So it's OK for
2613 * This routine implements the poll cache list replacement policy.
2645 * (2) /dev/poll.
2646 * When a polled fd is cached in /dev/poll, its polldat will remain on the
2670 * to synchronize this lwp with any other /dev/poll in pollhead_clean()
2788 * for resolved set poll list, the xref info in the pcache should be
2789 * consistent with this poll list.
2900 * current poll list should be 0.
2954 * allocate enough bits for the poll fd list in pcache_create()
3034 * Check each duplicated poll fd in the poll list. It may be necessary to
3035 * VOP_POLL the same fd again using different poll events. getf() has been
3037 * entire poll fd list. It returns -1 if the underlying vnode has changed during