Lines Matching defs:cyclics
118 * cyclic_juggle() <-- Juggles cyclics away from a CPU
139 * The cyclic subsystem is designed to minimize interference between cyclics
150 * The cyclics are kept sorted by expiration time in the cyc_cpu's heap. The
183 * To see the heap by example, assume our cyclics array has the following
226 * fewer than sixteen cyclics in the heap, downheaps on UltraSPARC miss at
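The heap referenced in these matches is an array of indices into the cyclics array, packed with the root at element 0 and the children of element n at 2n + 1 and 2n + 2. A minimal, compilable sketch of that index arithmetic (the CYC_HEAP_* names follow the file's convention; these exact definitions are reconstructed, not quoted):

    #include <stdio.h>

    /* Binary heap packed into an array, root at element 0. */
    #define CYC_HEAP_PARENT(ndx)    (((ndx) - 1) >> 1)
    #define CYC_HEAP_LEFT(ndx)      ((((ndx) + 1) << 1) - 1)
    #define CYC_HEAP_RIGHT(ndx)     (((ndx) + 1) << 1)

    int
    main(void)
    {
        /* Element 3's children are elements 7 and 8; its parent is 1. */
        printf("left=%d right=%d parent=%d\n",
            CYC_HEAP_LEFT(3), CYC_HEAP_RIGHT(3), CYC_HEAP_PARENT(3));
        return (0);
    }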
307 * For CY_HIGH_LEVEL cyclics, this is trivial; cyclic_expire() simply needs
310 * For CY_LOCK_LEVEL and CY_LOW_LEVEL cyclics, however, there exists a
327 * If we wish to avoid a linear scan of the cyclics array at soft interrupt
328 * level, cyclic_softint() must be able to quickly determine which cyclics
391 * CY_LOCK/LOW_LEVEL cyclics could thereby induce jitter in CY_HIGH_LEVEL
392 * cyclics.
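The matches above reference the structure that lets cyclic_softint() avoid a linear scan: each soft level owns a producer/consumer buffer of cyclic indices, filled at CY_HIGH_LEVEL by cyclic_expire() and drained at the lower level. A user-space model of one such buffer, assuming a power-of-two size; softbuf_t and the sb_* names are invented stand-ins for the file's cyc_softbuf_t machinery:

    #include <stdint.h>
    #include <stdio.h>
    #include <assert.h>

    #define SOFTBUF_SIZE    8               /* must be a power of two */
    #define SOFTBUF_MASK    (SOFTBUF_SIZE - 1)

    /* Model of one soft level's producer/consumer buffer. */
    typedef struct softbuf {
        uint32_t sb_prodndx;        /* bumped only by the CY_HIGH_LEVEL producer */
        uint32_t sb_consndx;        /* bumped only by the soft-level consumer */
        uint32_t sb_buf[SOFTBUF_SIZE];  /* indices into the cyclics array */
    } softbuf_t;

    /* Producer: record that the cyclic at index `ndx' has expired. */
    void
    softbuf_produce(softbuf_t *sb, uint32_t ndx)
    {
        assert(sb->sb_prodndx - sb->sb_consndx < SOFTBUF_SIZE);
        sb->sb_buf[sb->sb_prodndx++ & SOFTBUF_MASK] = ndx;
    }

    /* Consumer: fetch the next expired index, if any. */
    int
    softbuf_consume(softbuf_t *sb, uint32_t *ndxp)
    {
        if (sb->sb_consndx == sb->sb_prodndx)
            return (0);             /* nothing pending at this level */
        *ndxp = sb->sb_buf[sb->sb_consndx++ & SOFTBUF_MASK];
        return (1);
    }

    int
    main(void)
    {
        softbuf_t sb = { 0, 0, { 0 } };
        uint32_t ndx;

        softbuf_produce(&sb, 3);
        while (softbuf_consume(&sb, &ndx))
            printf("consume cyclic %u\n", (unsigned)ndx);
        return (0);
    }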
436 * All of the discussion thus far has assumed a static number of cyclics.
445 * CPUs. Pending cyclics may not be dropped during a resize operation.
447 * Three key cyc_cpu data structures need to be resized: the cyclics array,
457 * 5. The old cyclics array is bzero()'d
473 * As under normal operation, cyclic_softint() will consume cyclics from
488 * all of the old buffers (the heap, the cyclics array and the producer/
497 * count in the old cyclics array. By zeroing the old cyclics array in
502 * count from the new cyclics array, and re-attempt the compare&swap.
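The zeroing in step 5 above is what makes the retry observable: if the consumer's compare&swap on a pend count fails and the re-read count is zero, it must be looking at the bzero()'d old array, so it reloads the array pointer and retries against the new array. A user-space model of that loop, with C11 atomics standing in for the kernel's primitives and all names invented; it assumes the count is nonzero in whichever array is current:

    #include <stdatomic.h>
    #include <stdint.h>

    typedef struct cyclic {
        _Atomic uint32_t cy_pend;   /* outstanding expirations */
    } cyclic_t;

    /*
     * Model of the consumer-side decrement.  `cyclicsp' is the CPU's
     * current cyclics-array pointer, which a concurrent resize may swap.
     */
    void
    pend_decrement(cyclic_t *_Atomic *cyclicsp, int ndx)
    {
        cyclic_t *cyclics = atomic_load(cyclicsp);
        uint32_t old = atomic_load(&cyclics[ndx].cy_pend);

        for (;;) {
            if (old == 0) {
                /*
                 * Zero means the array was bzero()'d by a resize:
                 * reload the pointer and re-read from the new array.
                 */
                cyclics = atomic_load(cyclicsp);
                old = atomic_load(&cyclics[ndx].cy_pend);
                continue;
            }
            if (atomic_compare_exchange_weak(&cyclics[ndx].cy_pend,
                &old, old - 1))
                return;             /* decrement succeeded */
        }
    }

    int
    main(void)
    {
        cyclic_t cyclics[1] = { { 2 } };
        cyclic_t *_Atomic cur = cyclics;

        pend_decrement(&cur, 0);    /* 2 -> 1 */
        pend_decrement(&cur, 0);    /* 1 -> 0 */
        return (0);
    }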
573 * cyclic_fire() is done with all expired cyclics. To deal with this, such
574 * cyclics can be specified with a special interval of CY_INFINITY (INT64_MAX).
588 * done from the bottom of the heap to the top as reprogrammable cyclics
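Consumers use this through the reprogramming interface: add the cyclic with cyt_interval set to CY_INFINITY, then call cyclic_reprogram() each time the next deadline becomes known. A sketch of that usage against the illumos cyclic API; the handler, the CY_LOW_LEVEL choice, and using CY_INFINITY for cyt_when (to keep the cyclic quiet until armed) are illustrative assumptions:

    #include <sys/cyclic.h>
    #include <sys/cpuvar.h>

    static cyclic_id_t my_cyclic = CYCLIC_NONE;

    /* Hypothetical one-shot handler; fires once per reprogram. */
    static void
    my_oneshot(void *arg)
    {
        /* ... do work; the next expiration is not yet known ... */
    }

    static void
    my_oneshot_init(void)
    {
        cyc_handler_t hdlr;
        cyc_time_t when;

        hdlr.cyh_func = my_oneshot;
        hdlr.cyh_arg = NULL;
        hdlr.cyh_level = CY_LOW_LEVEL;

        when.cyt_when = CY_INFINITY;     /* stay quiet until armed */
        when.cyt_interval = CY_INFINITY; /* one-shot: never self-rearm */

        mutex_enter(&cpu_lock);          /* cyclic_add() requires cpu_lock */
        my_cyclic = cyclic_add(&hdlr, &when);
        mutex_exit(&cpu_lock);
    }

    /* Called when the next deadline becomes known. */
    static void
    my_oneshot_arm(hrtime_t deadline)
    {
        (void) cyclic_reprogram(my_cyclic, deadline);
    }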
736 cyclic_t *cyclics;
745 cyclics = cpu->cyp_cyclics;
756 if (cyclics[current].cy_expire >= cyclics[parent].cy_expire)
779 cyclic_t *cyclics = cpu->cyp_cyclics;
813 if (cyclics[right].cy_expire < cyclics[left].cy_expire) {
819 if (cyclics[me].cy_expire <= cyclics[right].cy_expire)
838 if (cyclics[me].cy_expire <= cyclics[left].cy_expire)
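The comparisons quoted above act on a heap of indices: a swap moves two entries of the heap array, never the cyclic_t structures themselves. A compact user-space model of the upheap, with the types reduced to the one field the comparisons need:

    #include <stdint.h>
    #include <stdio.h>

    #define CYC_HEAP_PARENT(ndx)    (((ndx) - 1) >> 1)

    typedef struct cyclic {
        int64_t cy_expire;          /* absolute expiration time */
    } cyclic_t;

    /*
     * Restore heap order after the element at heap index `ndx' acquired
     * an earlier expiration (e.g., a newly inserted cyclic).
     */
    void
    upheap(int *heap, cyclic_t *cyclics, int ndx)
    {
        while (ndx > 0) {
            int parent = CYC_HEAP_PARENT(ndx);
            int me = heap[ndx], p = heap[parent];

            /* No earlier than our parent: heap order holds. */
            if (cyclics[me].cy_expire >= cyclics[p].cy_expire)
                return;

            /* Swap indices; the cyclics themselves never move. */
            heap[parent] = me;
            heap[ndx] = p;
            ndx = parent;
        }
    }

    int
    main(void)
    {
        cyclic_t cyclics[3] = { { 30 }, { 20 }, { 10 } };
        int heap[3] = { 1, 0, 2 };

        /* Cyclic 2 (expire 10) was just placed at heap index 2. */
        upheap(heap, cyclics, 2);
        printf("root is cyclic %d\n", heap[0]);     /* prints 2 */
        return (0);
    }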
859 * need to worry about the pend count for CY_HIGH_LEVEL cyclics.
898 * UINT32_MAX. Yes, cyclics can be lost in this case.
946 cyclic_t *cyclic, *cyclics = cpu->cyp_cyclics;
965 cyclic = &cyclics[ndx];
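On the producer side, cyclic_expire() bumps cy_pend with a compare&swap and caps it at UINT32_MAX rather than wrapping, which is the "cyclics can be lost" case quoted above. A user-space model with C11 atomics (names invented):

    #include <stdatomic.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Bump a pend count, saturating at UINT32_MAX instead of wrapping. */
    void
    pend_increment(_Atomic uint32_t *pend)
    {
        uint32_t old = atomic_load(pend);

        do {
            if (old == UINT32_MAX)
                return;     /* saturated: this expiration is lost */
        } while (!atomic_compare_exchange_weak(pend, &old, old + 1));
    }

    int
    main(void)
    {
        _Atomic uint32_t pend = 0;

        pend_increment(&pend);
        pend_increment(&pend);
        printf("pend = %u\n", (unsigned)atomic_load(&pend));
        return (0);
    }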
1087 * cyclic_softint() will call the handlers for cyclics pending at the
1089 * cyclics at the specified level have been dealt with; intervening
1090 * CY_HIGH_LEVEL interrupts which enqueue cyclics at the specified level
1127 cyclic_t *cyclics = cpu->cyp_cyclics;
1130 CYC_TRACE(cpu, level, "softint", cyclics, 0);
1144 CYC_TRACE(cpu, level, "softint-top", cyclics, pc);
1149 cyclic_t *cyclic = &cyclics[buf[consmasked]];
1165 * are in this case, and read the new cyclics buffer
1209 CYC_TRACE(cpu, level, "resize-int", cyclics, 0);
1211 ASSERT(cyclics != cpu->cyp_cyclics);
1215 cyclics = cpu->cyp_cyclics;
1216 cyclic = &cyclics[buf[consmasked]];
1233 * (b) The cyclics array has been yanked out
1251 ((cyclics != cpu->cyp_cyclics &&
1286 * If our cached cyclics pointer doesn't match cyp_cyclics,
1290 if (cpu->cyp_cyclics != cyclics) {
1292 cyclics = cpu->cyp_cyclics;
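Taken together, these matches trace the consumer loop: cyclic_softint() pulls an index from the per-level buffer, calls the handler while draining cy_pend, and treats a mismatch between its cached cyclics pointer and cyp_cyclics as the resize signal. A much-simplified, single-threaded model; the field names are pared down, and the real loop additionally handles interrupt resumption and the compare&swap shown earlier:

    #include <stdint.h>
    #include <stdio.h>

    #define BUFSIZE 8
    #define BUFMASK (BUFSIZE - 1)

    typedef struct cyclic {
        void (*cy_handler)(void *);
        void *cy_arg;
        uint32_t cy_pend;
    } cyclic_t;

    typedef struct cyc_cpu {
        cyclic_t *cyp_cyclics;      /* may be swapped by a resize */
        uint32_t cyp_buf[BUFSIZE];  /* pending indices for one level */
        uint32_t cyp_prodndx, cyp_consndx;
    } cyc_cpu_t;

    /* Simplified, single-threaded model of the consumer side. */
    void
    softint(cyc_cpu_t *cpu)
    {
        cyclic_t *cyclics = cpu->cyp_cyclics;   /* cached pointer */

        while (cpu->cyp_consndx != cpu->cyp_prodndx) {
            uint32_t ndx = cpu->cyp_buf[cpu->cyp_consndx & BUFMASK];
            cyclic_t *cyclic = &cyclics[ndx];

            while (cyclic->cy_pend != 0) {
                cyclic->cy_handler(cyclic->cy_arg);
                cyclic->cy_pend--;

                if (cpu->cyp_cyclics != cyclics) {
                    /* Resized under us: chase the new array. */
                    cyclics = cpu->cyp_cyclics;
                    cyclic = &cyclics[ndx];
                }
            }
            cpu->cyp_consndx++;
        }
    }

    static void
    tick(void *arg)
    {
        (void) arg;
        printf("tick\n");
    }

    int
    main(void)
    {
        cyclic_t cyclics[1] = { { tick, NULL, 2 } };
        cyc_cpu_t cpu = { cyclics, { 0 }, 1, 0 };

        softint(&cpu);      /* prints "tick" twice */
        return (0);
    }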
1348 cyclic_t *cyclics = cpu->cyp_cyclics, *new_cyclics = arg->cyx_cyclics;
1357 * take care of consuming any pending cyclics in the old buffer.
1371 bcopy(cyclics, new_cyclics, sizeof (cyclic_t) * size);
1374 * Now run through the old cyclics array, setting pend to 0. To
1380 cyclics[i].cy_pend = 0;
1383 * Set up the free list, and set all of the new cyclics to be CYF_FREE.
1399 * We've switched over the heap and the cyclics array. Now we need
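The expansion cross-call, per the fragments above, copies the old array into the new one, zeroes the old pend counts (arming the stale-array detection sketched earlier), and marks the slots beyond the old size free. A condensed user-space model; the CYF_FREE value and the struct layout are illustrative:

    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    #define CYF_FREE    0x0001      /* flag value illustrative */

    typedef struct cyclic {
        uint16_t cy_flags;
        uint32_t cy_pend;
        /* ... handler, level, expiration, etc. ... */
    } cyclic_t;

    /*
     * Model of the resize hand-off: `size' is the old array length; the
     * new array is twice as large.  The old array is still visible to a
     * racing consumer, which is why its pend counts are zeroed after the
     * copy.
     */
    void
    expand(cyclic_t *cyclics, cyclic_t *new_cyclics, int size)
    {
        int i;

        /* Step 4: copy the old contents into the new array. */
        memcpy(new_cyclics, cyclics, sizeof (cyclic_t) * size);

        /* Step 5: zero the old pend counts to flag the stale array. */
        for (i = 0; i < size; i++)
            cyclics[i].cy_pend = 0;

        /* Entries beyond the old size go on the free list. */
        for (i = size; i < 2 * size; i++)
            new_cyclics[i].cy_flags = CYF_FREE;
    }

    int
    main(void)
    {
        cyclic_t oldc[2] = { { 0, 5 }, { 0, 0 } };
        cyclic_t newc[4];

        expand(oldc, newc, 2);
        printf("old pend %u, new pend %u, slot 3 free? %d\n",
            (unsigned)oldc[0].cy_pend, (unsigned)newc[0].cy_pend,
            (newc[3].cy_flags & CYF_FREE) != 0);
        return (0);
    }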
1975 * Reprogrammed cyclics are typically one-shot ones that get
2753 * online and offline handlers. Omnipresent cyclics run on all online
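Omnipresent cyclics are added through cyclic_add_omni(), whose online callback fills in a handler and time for each CPU as it comes online. A usage sketch against the illumos API; the handler bodies, the one-second interval, and the CY_LOW_LEVEL choice are illustrative assumptions:

    #include <sys/cyclic.h>
    #include <sys/cpuvar.h>
    #include <sys/time.h>

    /* Hypothetical per-CPU handler. */
    static void
    my_percpu_tick(void *arg)
    {
        /* ... runs on every online CPU ... */
    }

    static void
    my_online(void *arg, cpu_t *c, cyc_handler_t *hdlr, cyc_time_t *when)
    {
        hdlr->cyh_func = my_percpu_tick;
        hdlr->cyh_arg = c;
        hdlr->cyh_level = CY_LOW_LEVEL;

        when->cyt_when = 0;             /* start immediately */
        when->cyt_interval = NANOSEC;   /* illustrative: once a second */
    }

    static void
    my_offline(void *arg, cpu_t *c, void *oarg)
    {
        /* ... undo any per-CPU state set up in my_online() ... */
    }

    static cyclic_id_t
    my_omni_init(void)
    {
        cyc_omni_handler_t omni;
        cyclic_id_t id;

        omni.cyo_online = my_online;
        omni.cyo_offline = my_offline;
        omni.cyo_arg = NULL;

        mutex_enter(&cpu_lock);     /* cyclic_add_omni() requires cpu_lock */
        id = cyclic_add_omni(&omni);
        mutex_exit(&cpu_lock);

        return (id);
    }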
2947 * If a CPU with bound cyclics is transitioned into the P_NOINTR state,
2948 * only cyclics not bound to the CPU can be juggled away; CPU-bound cyclics
2949 * will continue to fire on the P_NOINTR CPU. A CPU with bound cyclics
2951 * Likewise, cyclics may not be bound to an offline CPU; if the caller
2965 * panic. A CPU partition with bound cyclics cannot be destroyed (attempts
2967 * partition-bound cyclics is transitioned into the P_NOINTR state, cyclics
3173 * cyclic_juggle() juggles as many cyclics as possible away from the
3174 * specified CPU; all remaining cyclics on the CPU will either be CPU-
3179 * The only argument to cyclic_juggle() is the CPU from which cyclics
3180 * should be juggled. CPU-bound cyclics are never juggled; partition-bound
3181 * cyclics are only juggled if the specified CPU is in the P_NOINTR state
3188 * cyclic_juggle() returns a non-zero value if all cyclics were able to
3189 * be juggled away from the CPU, and zero if one or more cyclics could
3241 * cyclic_offline() will attempt to juggle cyclics away from the specified
3246 * cyclic_offline() returns 1 if all cyclics on the CPU were juggled away
3248 * cyclic_offline() returns 0 if some cyclics remain, blocking the cyclic
3249 * offline operation. All remaining cyclics on the CPU will either be
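These return conventions make cyclic_offline() a gate on the offline path: zero means bound cyclics remain and the CPU must stay online. A caller-side sketch of honoring that contract; the helper name and the EBUSY mapping are invented for illustration:

    #include <sys/cyclic.h>
    #include <sys/cpuvar.h>
    #include <sys/debug.h>
    #include <sys/errno.h>

    /* Hypothetical helper on the offline path, cpu_lock held. */
    static int
    my_cpu_offline(cpu_t *c)
    {
        ASSERT(MUTEX_HELD(&cpu_lock));

        /*
         * If any cyclic is bound to this CPU (or partition-bound with
         * nowhere else to go), cyclic_offline() returns 0 and the CPU
         * must stay online.
         */
        if (!cyclic_offline(c))
            return (EBUSY);

        /* ... proceed with the rest of the offline work ... */
        return (0);
    }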
3309 * cyclics.
3335 * all omnipresent cyclics on it.
3358 * will juggle all partition-bound, CPU-unbound cyclics to the specified
3384 * Look for CYF_PART_BOUND cyclics in the new partition. If
3387 * interrupts enabled), we'll juggle those cyclics over here.
3399 * Omnipresent cyclics are exempt from juggling.
3440 * cyclics. If the specified CPU is the last CPU in a partition with
3441 * partition-bound cyclics, cyclic_move_out() will fail. If there exists
3446 * partition-bound cyclics; CPU-bound cyclics which are not partition-bound
3447 * and unbound cyclics are not affected by changing the partition
3452 * cyclic_move_out() returns 1 if all partition-bound cyclics on the CPU
3453 * were juggled away; 0 if some cyclics remain.
3471 cyclic_t *cyclic, *cyclics = cpu->cyp_cyclics;
3478 * If there are any CYF_PART_BOUND cyclics on this CPU, we need
3486 cyclic = &cyclics[idp->cyi_ndx];
3497 * other cyclics).
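The scan quoted above walks the global cyclic id list rather than the per-CPU heap: each cyc_id records the CPU and array index of its cyclic, so partition-bound entries on the departing CPU can be found directly. A simplified model of that walk; the flag value and the trimmed-down structures are illustrative:

    #include <stddef.h>

    #define CYF_PART_BOUND  0x0002      /* flag value illustrative */

    typedef struct cyclic {
        unsigned short cy_flags;
    } cyclic_t;

    typedef struct cyc_cpu {
        cyclic_t *cyp_cyclics;
    } cyc_cpu_t;

    typedef struct cyc_id {
        struct cyc_id *cyi_next;
        cyc_cpu_t *cyi_cpu;         /* CPU this cyclic lives on */
        int cyi_ndx;                /* index into cyp_cyclics */
    } cyc_id_t;

    /* Returns nonzero iff `cpu' has at least one partition-bound cyclic. */
    int
    has_part_bound(cyc_id_t *cyclic_id_head, cyc_cpu_t *cpu)
    {
        cyc_id_t *idp;

        for (idp = cyclic_id_head; idp != NULL; idp = idp->cyi_next) {
            cyclic_t *cyclic;

            if (idp->cyi_cpu != cpu)
                continue;           /* lives on some other CPU */

            cyclic = &cpu->cyp_cyclics[idp->cyi_ndx];
            if (cyclic->cy_flags & CYF_PART_BOUND)
                return (1);
        }
        return (0);
    }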
3532 * cyclics at the time of cyclic_suspend(). Callers concerned with more