Lines Matching defs:slab
58 * objects in a slab are of the same type, so they have the same lifetime
60 * objects at slab granularity reduces the likelihood of an entire page being
65 * anticipate the distribution of long-lived objects within the allocator's slab
74 * an entire slab from being reclaimed, and any object handed out by
83 * The kmem slab consolidator therefore adds a move callback to the
134 * to the callback, since kmem wants to free them directly to the slab layer
140 * unused copy destination). kmem also marks the slab of the old
142 * that object as long as the slab remains on the partial slab list.
143 * (The system won't be getting the slab back as long as the
148 * attempts to move other objects off the slab, since it expects to
149 * succeed in clearing the slab in a later callback. The client
169 * that will be reaped (thereby liberating the slab). Because it
186 * 1. Uninitialized on the slab
187 * 2. Allocated from the slab but not constructed (still uninitialized)
188 * 3. Allocated from the slab, constructed, but not yet ready for business
196 * 9. Freed to the slab
211 * to a magazine appear allocated from the point of view of the slab layer, so
218 * objects are never populated directly from the slab layer (which contains raw,
221 * satisfied from the slab layer (creating a new slab if necessary). kmem calls
222 * the object constructor only when allocating from the slab layer, and only in
239 * callback is pending. When the last object on a slab is freed, if there is a
240 * pending move, kmem puts the slab on a per-cache dead list and defers freeing
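
The excerpted lines above describe how the magazine layer relates to the slab layer: objects cached in magazines are already constructed, so the constructor runs only when an allocation misses the magazines and falls through to the slab layer. The following is a minimal sketch of that fall-through under simplified assumptions; the my_* names and the single-magazine cache are hypothetical stand-ins, not the kmem implementation.

    #include <stddef.h>
    #include <stdlib.h>

    /* Hypothetical, simplified stand-ins for the real kmem structures. */
    typedef struct my_magazine {
        void  *mag_round[16];            /* cached, already-constructed objects */
        int    mag_rounds;               /* number of valid entries */
    } my_magazine_t;

    typedef struct my_cache {
        my_magazine_t  cache_loaded;     /* one loaded magazine (simplified) */
        size_t         cache_bufsize;
        int          (*cache_constructor)(void *buf, void *private, int kmflag);
        void          *cache_private;
    } my_cache_t;

    /* Placeholder for the slab layer: returns a raw, unconstructed buffer. */
    static void *
    my_slab_alloc(my_cache_t *cp)
    {
        return (malloc(cp->cache_bufsize));
    }

    /*
     * Objects taken from a magazine are already constructed, so the
     * constructor is applied only when we fall through to the slab layer.
     */
    void *
    my_cache_alloc(my_cache_t *cp, int kmflag)
    {
        my_magazine_t *mp = &cp->cache_loaded;
        void *buf;

        if (mp->mag_rounds > 0)          /* magazine layer hit */
            return (mp->mag_round[--mp->mag_rounds]);

        buf = my_slab_alloc(cp);         /* raw buffer from the slab layer */
        if (buf == NULL)
            return (NULL);
        if (cp->cache_constructor != NULL &&
            cp->cache_constructor(buf, cp->cache_private, kmflag) != 0) {
            free(buf);                   /* constructor failed */
            return (NULL);
        }
        return (buf);
    }
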
298 * view of the slab layer, making it a candidate for the move callback. Most
365 * that an object can hold a slab hostage. However, if there is a cache-specific
383 * application to hold a znode slab hostage with an open file descriptor.
386 * object, kmem eventually stops believing it and treats the slab as if the
387 * client had responded KMEM_CBRC_NO. Having marked the hostage slab, kmem can
388 * then focus on getting it off of the partial slab list by allocating rather
389 * than freeing all of its objects. (Either way of getting a slab off the
655 * usage. Each slab has a fixed number of objects. Depending on the slab's
656 * "color" (the offset of the first object from the beginning of the slab;
658 * the maximum number of objects per slab determined at cache creation time or
660 * after the initial offset. A completely allocated slab may contribute some
661 * internal fragmentation (per-slab overhead) but no external fragmentation, so
663 * objects have all been freed to the slab are released to the virtual memory
665 * slab is concerned). External fragmentation exists when there are slabs
666 * somewhere between these extremes. A partial slab has at least one but not all
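
Lines 655-666 distinguish internal fragmentation (per-slab overhead on completely allocated slabs) from external fragmentation (unused buffers stranded on partial slabs). As a rough illustration only, the sketch below expresses external fragmentation as the percentage of buffers held by the slab layer that are unused; the struct and function names are assumed, with only slab_chunks and slab_refcnt mirroring fields that appear in the source.

    #include <stddef.h>

    /* Simplified slab descriptor; slab_chunks and slab_refcnt mirror the source. */
    typedef struct sketch_slab {
        struct sketch_slab *slab_next;   /* next slab on the partial list */
        size_t slab_chunks;              /* total objects the slab can hold */
        size_t slab_refcnt;              /* objects currently allocated */
    } sketch_slab_t;

    /*
     * External fragmentation: unused buffers held by partial slabs, as a
     * percentage of all buffers backed by the cache's slabs.  Completely
     * allocated slabs contribute only internal (per-slab) overhead, and
     * completely free slabs are returned to the VM system, so only the
     * partial slabs add external fragmentation.
     */
    int
    sketch_frag_pct(const sketch_slab_t *partial_list, size_t complete_slab_bufs)
    {
        size_t total = complete_slab_bufs;
        size_t unused = 0;
        const sketch_slab_t *sp;

        for (sp = partial_list; sp != NULL; sp = sp->slab_next) {
            total += sp->slab_chunks;
            unused += sp->slab_chunks - sp->slab_refcnt;
        }
        return (total == 0 ? 0 : (int)((unused * 100) / total));
    }
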
676 * thousands of complete slabs and only a single partial slab), separating
677 * complete slabs improves the efficiency of partial slab ordering, since the
682 * Objects are always allocated from the first partial slab in the free list,
683 * where the allocation is most likely to eliminate a partial slab (by
685 * allocated slab is freed to the slab, that slab is added to the front of the
692 * completely allocated. For example, a slab with a single allocated object
695 * allocation happens just before the free could have released it. Another slab
707 * scan range. By making consecutive scan ranges overlap by one slab, the least
708 * allocated slab in the current range can be carried along from the end of one
713 * and allocations from the slab layer represent only a startup cost, the
715 * to the opportunity of reducing complexity by eliminating the partial slab
718 * slabs separately in a list. To avoid increasing the size of the slab
719 * structure, the AVL linkage pointers are reused for the slab's list linkage,
720 * since the slab will always be either partial or complete, never stored both
725 * or freeing a single object will change the slab's order, requiring a tree
736 * seconds. kmem maintains a running count of unallocated objects in the slab
744 * for a candidate slab to reclaim, starting at the end of the partial slab free
745 * list and scanning backwards. At first the consolidator is choosy: only a slab
748 * finding a candidate slab, kmem raises the allocation threshold incrementally,
751 * even in the worst case of every slab in the cache being almost 7/8 allocated.
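
Lines 736-751 describe how the consolidator hunts for a reclaim candidate: it scans backwards from the end of the partial slab list, is initially choosy, and raises the allocation threshold step by step when it comes up empty, so that it eventually finds a candidate even if every slab is almost 7/8 allocated. Below is a sketch of that backwards scan under assumed names (find_candidate, cand_slab_t, and the eighths-based threshold encoding are illustrative, not the kmem code).

    #include <stddef.h>

    /* Simplified partial slab descriptor (assumed names). */
    typedef struct cand_slab {
        struct cand_slab *slab_prev;     /* toward the front of the partial list */
        size_t slab_chunks;              /* total objects on the slab */
        size_t slab_refcnt;              /* objects currently allocated */
    } cand_slab_t;

    /*
     * Scan backwards from the tail of the partial slab list for a reclaim
     * candidate.  `numerator' expresses how choosy the caller is: a slab
     * qualifies only if at most numerator/8 of its objects are allocated.
     */
    cand_slab_t *
    find_candidate(cand_slab_t *tail, unsigned numerator, size_t max_scan)
    {
        cand_slab_t *sp;
        size_t scanned = 0;

        if (numerator > 7)
            numerator = 7;               /* never accept an almost-full slab */

        for (sp = tail; sp != NULL && scanned < max_scan;
            sp = sp->slab_prev, scanned++) {
            if (sp->slab_refcnt * 8 <= sp->slab_chunks * numerator)
                return (sp);             /* sparsely allocated enough to reclaim */
        }
        return (NULL);
    }

A caller would invoke this first with numerator == 1 and retry with a larger numerator, up to 7, after each pass that returns NULL.
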
758 * Once an eligible slab is chosen, a callback is generated for every allocated
759 * object on the slab, in the hope that the client will move everything off the
760 * slab and make it reclaimable. Objects selected as move destinations are
779 * slab layer.
784 * slowly, one sparsely allocated slab at a time during each maintenance
786 * starts at the last partial slab and enqueues callbacks for every allocated
787 * object on every partial slab, working backwards until it reaches the first
788 * partial slab. The first partial slab, meanwhile, advances in pace with the
792 * both ends and ending at the center with a single partial slab.
797 * marks the slab that supplied the stuck object non-reclaimable and moves it to
798 * the front of the free list. The slab remains marked as long as it remains on the
799 * free list, and it appears more allocated to the partial slab compare function
800 * than any unmarked slab, no matter how many of its objects are allocated.
801 * Since even one immovable object ties up the entire slab, the goal is to
802 * completely allocate any slab that cannot be completely freed. kmem does not
803 * bother generating callbacks to move objects from a marked slab unless the
807 * slab. If the client responds LATER too many times, kmem disbelieves and
808 * treats the response as a NO. The count is cleared when the slab is taken off
809 * the partial slab list or when the client moves one of the slab's objects.
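
Lines 797-809 describe the bookkeeping behind refused and deferred moves: a NO marks the slab unfreeable, repeated LATER responses are eventually disbelieved and treated as a NO, and the count is reset when the client does move one of the slab's objects (or when the slab leaves the partial slab list, which the sketch omits). The mv_* names and the disbelief limit below are assumptions, not the kmem identifiers.

    #include <stdbool.h>

    /* Per-slab move bookkeeping; the mv_* names are stand-ins, not kmem's. */
    typedef struct mv_slab {
        bool     slab_nomove;            /* marked: treat the slab as unfreeable */
        unsigned slab_later_count;       /* consecutive LATER responses so far */
    } mv_slab_t;

    /* Simplified subset of the client's possible responses to a move callback. */
    typedef enum { MV_YES, MV_NO, MV_LATER } mv_response_t;

    /* After this many LATERs kmem stops believing the client (assumed value). */
    #define MV_DISBELIEF 3

    /*
     * Update a slab's state from one move-callback response: NO marks the slab
     * unfreeable immediately, LATER is tolerated only MV_DISBELIEF times before
     * being treated like NO, and a successful move (YES) clears the count.
     */
    void
    mv_record_response(mv_slab_t *sp, mv_response_t resp)
    {
        switch (resp) {
        case MV_YES:
            sp->slab_later_count = 0;    /* the client moved one of the objects */
            break;
        case MV_LATER:
            if (++sp->slab_later_count >= MV_DISBELIEF)
                sp->slab_nomove = true;  /* disbelieve: treat it as a NO */
            break;
        case MV_NO:
            sp->slab_nomove = true;      /* at least one object is stuck */
            break;
        }
    }
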
881 kstat_named_t kmc_scan; /* attempts to defrag one partial slab */
1013 size_t kmem_slab_log_size; /* slab create log [4 pages per CPU] */
1055 * kmem slab consolidator thresholds (tunables)
1066 * Number of slabs to scan backwards from the end of the partial slab list
1084 uint32_t kmem_mtb_move = 60; /* defrag 1 slab (~15min) */
1131 kmem_slab_t *kmp_slab; /* slab according to kmem_findslab() */
1212 * Debugging support. Given a buffer address, find its slab.
1367 printf("thread=%p time=T-%ld.%09ld slab=%p cache: %s\n",
1482 * Create a new slab for cache cp.
1491 char *buf, *slab;
1503 slab = vmem_alloc(vmp, slabsize, kmflag & KM_VMFLAGS);
1505 if (slab == NULL)
1508 ASSERT(P2PHASE((uintptr_t)slab, vmp->vm_quantum) == 0);
1518 copy_pattern(KMEM_UNINITIALIZED_PATTERN, slab, slabsize);
1525 sp = KMEM_SLAB(cp, slab);
1532 sp->slab_base = buf = slab + color;
1570 kmem_log_event(kmem_slab_log, cp, sp, slab);
1584 vmem_free(vmp, slab, slabsize);
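
The fragments above come from kmem_slab_create(): slab memory is obtained from a vmem arena, the first object is offset by the slab's color, and the slab is carved into fixed-size buffers. The toy version below models that flow with malloc() standing in for vmem_alloc(); the toy_* names are assumptions, and color and chunksize are assumed to be multiples of the pointer alignment.

    #include <stdlib.h>

    /* Hypothetical, simplified model; the real bookkeeping is laid out differently. */
    typedef struct toy_bufctl {
        struct toy_bufctl *bc_next;      /* freelist linkage */
    } toy_bufctl_t;

    typedef struct toy_slab {
        char         *slab_base;         /* first object (after the color offset) */
        void         *slab_mem;          /* start of the backing memory */
        size_t        slab_chunks;       /* objects carved out of this slab */
        size_t        slab_refcnt;       /* objects currently allocated */
        toy_bufctl_t *slab_free;         /* freelist of unallocated objects */
    } toy_slab_t;

    toy_slab_t *
    toy_slab_create(size_t slabsize, size_t chunksize, size_t color)
    {
        toy_slab_t *sp;
        char *buf;
        size_t i, nchunks;

        if (chunksize < sizeof (toy_bufctl_t) || color + chunksize > slabsize)
            return (NULL);

        sp = malloc(sizeof (*sp));
        if (sp == NULL)
            return (NULL);
        sp->slab_mem = malloc(slabsize); /* vmem_alloc() in the real code */
        if (sp->slab_mem == NULL) {
            free(sp);
            return (NULL);
        }

        /* The color offsets the first object to spread cache-line usage. */
        sp->slab_base = buf = (char *)sp->slab_mem + color;
        nchunks = (slabsize - color) / chunksize;
        sp->slab_chunks = nchunks;
        sp->slab_refcnt = 0;
        sp->slab_free = NULL;

        /* Carve the slab into fixed-size chunks threaded on a freelist. */
        for (i = 0; i < nchunks; i++, buf += chunksize) {
            toy_bufctl_t *bcp = (toy_bufctl_t *)buf;
            bcp->bc_next = sp->slab_free;
            sp->slab_free = bcp;
        }
        return (sp);
    }
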
1595 * Destroy a slab.
1601 void *slab = (void *)P2ALIGN((uintptr_t)sp->slab_base, vmp->vm_quantum);
1614 vmem_free(vmp, slab, cp->cache_slabsize);
1626 * kmem_slab_alloc() drops cache_lock when it creates a new slab, so we
1628 * slab is newly created.
1663 ASSERT(sp->slab_chunks > 1); /* the slab was partial */
1692 * The slab is now more allocated than it was, so the
1700 * Allocate a raw (unconstructed) buffer from cp's slab layer.
1716 * The freelist is empty. Create a new slab.
1760 * Free a raw (unconstructed) buffer to cp's slab layer.
1801 * clearing the slab, we can reset the slab flags now that the
1825 * There are no outstanding allocations from this slab,
1838 * Defer releasing the slab to the virtual memory subsystem
1859 * are pending (list head) and a slab freed while the
1876 /* Transition the slab from completely allocated to partial. */
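
Lines 1760-1876 cover the slab-layer free path: freeing an object may transition a completely allocated slab to partial (placing it at the front of the partial list), and when the last object is freed the slab is either destroyed or, if move callbacks against it are still pending, parked on the per-cache dead list. A compact sketch of that decision follows, with hypothetical fr_* names and an intrusive singly linked list standing in for the real data structures.

    #include <stdbool.h>
    #include <stdlib.h>

    /* Simplified per-slab and per-cache state; the fr_* names are stand-ins. */
    typedef struct fr_slab {
        struct fr_slab *slab_next;       /* partial or dead list linkage */
        size_t slab_chunks;              /* total objects on the slab */
        size_t slab_refcnt;              /* objects still allocated */
    } fr_slab_t;

    typedef struct fr_cache {
        fr_slab_t *cache_partial;        /* head: allocations are taken from here */
        fr_slab_t *cache_dead;           /* slabs kept alive for pending moves */
    } fr_cache_t;

    /* Placeholder: the real code returns the slab's memory to its vmem arena. */
    static void
    fr_slab_destroy(fr_slab_t *sp)
    {
        free(sp);
    }

    static void
    fr_partial_remove(fr_cache_t *cp, fr_slab_t *sp)
    {
        fr_slab_t **pp = &cp->cache_partial;

        while (*pp != NULL && *pp != sp)
            pp = &(*pp)->slab_next;
        if (*pp == sp)
            *pp = sp->slab_next;
    }

    /*
     * Bookkeeping after one object has been freed back to slab sp: a slab
     * that was completely allocated becomes partial and goes to the front
     * of the partial list, and a slab whose last object was freed is either
     * destroyed or, if a move callback is still pending against it, parked
     * on the per-cache dead list so the callback can still access it.
     */
    void
    fr_slab_after_free(fr_cache_t *cp, fr_slab_t *sp, bool move_pending)
    {
        sp->slab_refcnt--;

        if (sp->slab_refcnt == 0) {
            fr_partial_remove(cp, sp);
            if (move_pending) {
                sp->slab_next = cp->cache_dead;  /* defer the release */
                cp->cache_dead = sp;
            } else {
                fr_slab_destroy(sp);             /* give the memory back */
            }
            return;
        }

        if (sp->slab_refcnt == sp->slab_chunks - 1) {
            /* Transition from completely allocated to partial. */
            sp->slab_next = cp->cache_partial;
            cp->cache_partial = sp;
        }
    }
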
2027 * Free each object in magazine mp to cp's slab layer, and free mp itself.
2473 break; /* fall back to slab layer */
2497 * so fall through to the slab layer.
2505 * so get a raw buffer from the slab layer and apply its constructor.
2636 * so fall through to the slab layer.
2707 * to the slab layer.
2726 * Completely allocate the newly created slab and put the pre-allocated
2728 * magazines must be returned to the slab.
2793 * the slab
3290 * update, magazine resizing, and slab consolidation.
3562 * fragmentation (the ratio of unused buffers held by the slab layer). There are
3563 * two ways to get a slab off of the freelist: 1) free all the buffers on the
3564 * slab, and 2) allocate all the buffers on the slab. It follows that we want
3568 * freed before another allocation can tie up the slab. For that reason a slab
3569 * with a higher slab_refcnt sorts less than a slab with a lower
3572 * However, if a slab has at least one buffer that is deemed unfreeable, we
3573 * would rather have that slab at the front of the list regardless of
3574 * slab_refcnt, since even one unfreeable buffer makes the entire slab
3576 * callback, the slab is marked unfreeable for as long as it remains on the
3595 /* weight of first slab */
3601 /* weight of second slab */
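
The two comments above come from the partial slab compare function, which assigns each slab a weight. The sketch below implements the ordering rules stated in lines 3562-3576 directly rather than the actual weight computation (which is not shown here): more heavily allocated slabs sort toward the front, and a slab marked unfreeable sorts ahead of any unmarked slab. The ord_* names are stand-ins.

    #include <stdbool.h>
    #include <stddef.h>

    /* Simplified partial-slab descriptor with only the fields the order needs. */
    typedef struct ord_slab {
        size_t slab_refcnt;              /* objects currently allocated */
        bool   slab_nomove;              /* marked unfreeable by a refused move */
    } ord_slab_t;

    /*
     * A slab that compares "less" sits nearer the front of the partial list,
     * where allocations are taken from.  More heavily allocated slabs sort
     * less so that sparsely allocated slabs drift toward the end, where they
     * have a chance to be completely freed, and a marked (unfreeable) slab
     * sorts less than any unmarked slab so that it gets completely allocated
     * instead.  A real tree comparator would also break ties, e.g. by address.
     */
    int
    ord_partial_slab_cmp(const ord_slab_t *s0, const ord_slab_t *s1)
    {
        if (s0->slab_nomove != s1->slab_nomove)
            return (s0->slab_nomove ? -1 : 1);

        if (s0->slab_refcnt > s1->slab_refcnt)   /* more allocated: front */
            return (-1);
        if (s0->slab_refcnt < s1->slab_refcnt)
            return (1);
        return (0);
    }
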
3635 vmem_t *vmp, /* vmem source for slab allocation */
3799 * Now that we know the chunk size, determine the optimal slab size.
3864 * Initialize the rest of the slab layer.
3872 /* reuse partial slab AVL linkage for complete slab list linkage */
3961 /* make it easier to find a candidate slab */
4010 /* reuse the slab's AVL linkage for deadlist linkage */
4505 * Return the slab of the allocated buffer, or NULL if the buffer is not
4506 * allocated. This function may be called with a known slab address to determine
4507 * whether or not the buffer is allocated, or with a NULL slab address to obtain
4508 * an allocated buffer's slab.
4549 * same slab (the only partial slab) even if allocating the destination
4550 * buffer resulted in a completely allocated slab.
4639 * since kmem wants to free them directly to the slab layer. The client response
4643 * NO kmem frees the new buffer, marks the slab of the old buffer
4650 * buffers (old and new) passed to the move callback function, the slab of the
4658 * zero for as long as the slab remains on the deadlist and until the slab is
4674 * The number of allocated buffers on the slab may have changed since we
4675 * last checked the slab's reclaimability (when the pending move was
4677 * another buffer on the same slab.
4686 * Checking the slab layer is easy, so we might as well do that here
4737 /* slab safe to access until kmem_move_end() */
4889 * front of the dead list except for any slab at the tail that
4913 * of the partial slab list. Scan at most max_scan candidate slabs and move
4915 * If desperate to reclaim memory, move buffers from any partial slab; otherwise
4931 int i, j; /* slab index, buffer index */
4933 int b; /* allocated (movable) buffers on reclaimable slab */
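
Lines 4913-4933 introduce the routine that walks the end of the partial slab list and generates a move callback for every allocated buffer on each chosen slab, bounded by max_scan and max_slabs. The sketch below captures only that loop structure, using an allocation bitmap and caller-supplied candidate-test and enqueue hooks; all of the scan_* names and the 64-object bitmap limit are assumptions, not the kmem code.

    #include <stddef.h>
    #include <stdint.h>

    /* Simplified slab with an allocation bitmap (the real code walks bufctls). */
    typedef struct scan_slab {
        struct scan_slab *slab_prev;     /* toward the front of the partial list */
        size_t   slab_chunks;            /* objects on the slab (at most 64 here) */
        uint64_t slab_allocmap;          /* bit i set: object i is allocated */
    } scan_slab_t;

    /* Hook: enqueue an asynchronous move request for object `index' of `sp'. */
    typedef void (*enqueue_move_t)(scan_slab_t *sp, size_t index, void *arg);

    /*
     * Walk backwards from the tail of the partial slab list, examining at
     * most max_scan slabs and generating a move request for every allocated
     * object on at most max_slabs qualifying slabs, in the hope that the
     * client moves everything off those slabs and makes them reclaimable.
     * A NULL candidate test corresponds to the "desperate" case in which
     * any partial slab will do.
     */
    size_t
    scan_move_buffers(scan_slab_t *tail, size_t max_scan, size_t max_slabs,
        int (*is_candidate)(const scan_slab_t *), enqueue_move_t enqueue, void *arg)
    {
        scan_slab_t *sp;
        size_t scanned = 0, chosen = 0;

        for (sp = tail; sp != NULL && scanned < max_scan && chosen < max_slabs;
            sp = sp->slab_prev, scanned++) {
            size_t i;

            if (is_candidate != NULL && !is_candidate(sp))
                continue;                /* not sparsely allocated enough */
            chosen++;
            for (i = 0; i < sp->slab_chunks; i++) {
                if (sp->slab_allocmap & ((uint64_t)1 << i))
                    enqueue(sp, i, arg); /* ask the client to move this object */
            }
        }
        return (chosen);
    }
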
4990 * Prevent the slab from being destroyed while we drop
4995 * the slab from being destroyed.
5016 * objects on the slab while the pending moves are still
5018 * flag causes the slab to be put at the end of the
5036 * destroy the slab, because even though
5039 * requires the slab to exist.
5042 * slab here, since it will get
5052 * Destroy the slab now if it was completely
5056 * pending moves from that slab are possible.
5081 * The slab's position changed while the lock was
5089 * destination buffer on the same slab. In that
5099 * buffer from the slab layer, bumping the first partial
5100 * slab if it is completely allocated. If the current
5101 * slab becomes the first partial slab as a result, we
5105 * destination buffer from the last partial slab, then
5106 * the buffer we're moving is on the same slab and our
5155 * notification if the slab is not marked by an earlier refusal
5268 * view of the slab layer. We want to know if the slab layer would
5313 * slab list (scan at most kmem_reclaim_scan_range slabs to find
5316 * the definition of a candidate slab if we're having trouble