/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2009 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */

/*
 * Kernel memory allocator, as described in the following two papers and a
 * statement about the consolidator:
 *
 * Jeff Bonwick,
 * The Slab Allocator: An Object-Caching Kernel Memory Allocator.
 * Proceedings of the Summer 1994 Usenix Conference.
 * Available as /shared/sac/PSARC/1994/028/materials/kmem.pdf.
 *
 * Jeff Bonwick and Jonathan Adams,
 * Magazines and vmem: Extending the Slab Allocator to Many CPUs and
 * Arbitrary Resources.
 * Proceedings of the 2001 Usenix Conference.
 * Available as /shared/sac/PSARC/2000/550/materials/vmem.pdf.
 *
 * kmem Slab Consolidator Big Theory Statement:
 *
 * 1. Motivation
 *
 * As stated in Bonwick94, slabs provide the following advantages over other
 * allocation structures in terms of memory fragmentation:
 *
 *  - Internal fragmentation (per-buffer wasted space) is minimal.
 *  - Severe external fragmentation (unused buffers on the free list) is
 *    unlikely.
 *
 * Segregating objects by size eliminates one source of external fragmentation,
 * and according to Bonwick:
 *
 *   The other reason that slabs reduce external fragmentation is that all
 *   objects in a slab are of the same type, so they have the same lifetime
 *   distribution. The resulting segregation of short-lived and long-lived
 *   objects at slab granularity reduces the likelihood of an entire page being
 *   held hostage due to a single long-lived allocation [Barrett93, Hanson90].
 *
 * While unlikely, severe external fragmentation remains possible. Clients that
 * allocate both short- and long-lived objects from the same cache cannot
 * anticipate the distribution of long-lived objects within the allocator's slab
 * implementation. Even a small percentage of long-lived objects distributed
 * randomly across many slabs can lead to a worst case scenario where the client
 * frees the majority of its objects and the system gets back almost none of the
 * slabs. Despite the client doing what it reasonably can to help the system
 * reclaim memory, the allocator cannot shake free enough slabs because of
 * lonely allocations stubbornly hanging on. Although the allocator is in a
 * position to diagnose the fragmentation, there is nothing that the allocator
 * by itself can do about it. It only takes a single allocated object to prevent
 * an entire slab from being reclaimed, and any object handed out by
 * kmem_cache_alloc() is by definition in the client's control. Conversely,
 * although the client is in a position to move a long-lived object, it has no
 * way of knowing if the object is causing fragmentation, and if so, where to
 * move it. A solution necessarily requires further cooperation between the
 * allocator and the client.
 *
 * 2. Move Callback
 *
 * The kmem slab consolidator therefore adds a move callback to the
 * allocator/client interface, improving worst-case external fragmentation in
 * kmem caches that supply a function to move objects from one memory location
 * to another. In a situation of low memory, kmem attempts to consolidate all of
 * a cache's slabs at once; otherwise it works slowly to bring external
 * fragmentation within the 1/8 limit guaranteed for internal fragmentation,
 * thereby helping to avoid a low memory situation in the future.
 *
 * The callback has the following signature:
 *
 *      kmem_cbrc_t move(void *old, void *new, size_t size, void *user_arg)
 *
 * It supplies the kmem client with two addresses: the allocated object that
 * kmem wants to move and a buffer selected by kmem for the client to use as the
 * copy destination. The callback is kmem's way of saying "Please get off of
 * this buffer and use this one instead." kmem knows where it wants to move the
 * object in order to best reduce fragmentation. All the client needs to know
 * about the second argument (void *new) is that it is an allocated, constructed
 * object ready to take the contents of the old object. When the move function
 * is called, the system is likely to be low on memory, and the new object
 * spares the client from having to worry about allocating memory for the
 * requested move. The third argument supplies the size of the object, in case a
 * single move function handles multiple caches whose objects differ only in
 * size (such as zio_buf_512, zio_buf_1024, etc.). Finally, the same optional
 * user argument passed to the constructor, destructor, and reclaim functions is
 * also passed to the move callback.
 *
 * 2.1 Setting the Move Callback
 *
 * The client sets the move callback after creating the cache and before
 * allocating from it:
 *
 *      object_cache = kmem_cache_create(...);
 *      kmem_cache_set_move(object_cache, object_move);
 *
 * 2.2 Move Callback Return Values
 *
 * Only the client knows about its own data and when is a good time to move it.
 * The client is cooperating with kmem to return unused memory to the system,
 * and kmem respectfully accepts this help at the client's convenience. When
 * asked to move an object, the client can respond with any of the following:
 *
 *      typedef enum kmem_cbrc {
 *              KMEM_CBRC_YES,
 *              KMEM_CBRC_NO,
 *              KMEM_CBRC_LATER,
 *              KMEM_CBRC_DONT_NEED,
 *              KMEM_CBRC_DONT_KNOW
 *      } kmem_cbrc_t;
 *
 * The client must not explicitly kmem_cache_free() either of the objects passed
 * to the callback, since kmem wants to free them directly to the slab layer
 * (bypassing the per-CPU magazine layer). The response tells kmem which of the
 * objects to free:
 *
 *       YES: (Did it) The client moved the object, so kmem frees the old one.
 *        NO: (Never) The client refused, so kmem frees the new object (the
 *            unused copy destination). kmem also marks the slab of the old
 *            object so as not to bother the client with further callbacks for
 *            that object as long as the slab remains on the partial slab list.
 *            (The system won't be getting the slab back as long as the
 *            immovable object holds it hostage, so there's no point in moving
 *            any of its objects.)
 *     LATER: The client is using the object and cannot move it now, so kmem
 *            frees the new object (the unused copy destination). kmem still
 *            attempts to move other objects off the slab, since it expects to
 *            succeed in clearing the slab in a later callback. The client
 *            should use LATER instead of NO if the object is likely to become
 *            movable very soon.
 * DONT_NEED: The client no longer needs the object, so kmem frees the old along
 *            with the new object (the unused copy destination). This response
 *            is the client's opportunity to be a model citizen and give back as
 *            much as it can.
 * DONT_KNOW: The client does not know about the object because
 *            a) the client has just allocated the object and not yet put it
 *               wherever it expects to find known objects,
 *            b) the client has removed the object from wherever it expects to
 *               find known objects and is about to free it, or
 *            c) the client has freed the object.
 *            In all these cases (a, b, and c) kmem frees the new object (the
 *            unused copy destination) and searches for the old object in the
 *            magazine layer. If found, the object is removed from the magazine
 *            layer and freed to the slab layer so it will no longer hold the
 *            slab hostage.
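 *
 * Putting these together, a move callback has roughly the following shape
 * (a sketch only; object_t, object_is_known(), object_in_use(), and
 * object_move_impl() stand for hypothetical client code, and the sections
 * below fill in what each piece must actually do):
 *
 *      static kmem_cbrc_t
 *      object_move(void *old, void *new, size_t size, void *user_arg)
 *      {
 *              object_t *op = old, *np = new;
 *
 *              if (!object_is_known(op))
 *                      return (KMEM_CBRC_DONT_KNOW);
 *              if (object_in_use(op))
 *                      return (KMEM_CBRC_LATER);
 *              object_move_impl(op, np);       // copy contents old -> new
 *              return (KMEM_CBRC_YES);         // kmem frees the old object
 *      }
 *
 * Note that the callback itself never frees either object; the return value
 * alone tells kmem which of the two to free.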
 *
 * 2.3 Object States
 *
 * Neither kmem nor the client can be assumed to know the object's whereabouts
 * at the time of the callback. An object belonging to a kmem cache may be in
 * any of the following states:
 *
 * 1. Uninitialized on the slab
 * 2. Allocated from the slab but not constructed (still uninitialized)
 * 3. Allocated from the slab, constructed, but not yet ready for business
 *    (not in a valid state for the move callback)
 * 4. In use (valid and known to the client)
 * 5. About to be freed (no longer in a valid state for the move callback)
 * 6. Freed to a magazine (still constructed)
 * 7. Allocated from a magazine, not yet ready for business (not in a valid
 *    state for the move callback), and about to return to state #4
 * 8. Deconstructed on a magazine that is about to be freed
 * 9. Freed to the slab
 *
 * Since the move callback may be called at any time while the object is in any
 * of the above states (except state #1), the client needs a safe way to
 * determine whether or not it knows about the object. Specifically, the client
 * needs to know whether or not the object is in state #4, the only state in
 * which a move is valid. If the object is in any other state, the client should
 * immediately return KMEM_CBRC_DONT_KNOW, since it is unsafe to access any of
 * the object's fields.
 *
 * Note that although an object may be in state #4 when kmem initiates the move
 * request, the object may no longer be in that state by the time kmem actually
 * calls the move function. Not only does the client free objects
 * asynchronously, but kmem itself also puts move requests on a queue where they
 * are pending until kmem processes them from another context. Also, objects
 * freed to a magazine appear allocated from the point of view of the slab
 * layer, so kmem may even initiate requests for objects in a state other than
 * state #4.
 *
 * 2.3.1 Magazine Layer
 *
 * An important insight revealed by the states listed above is that the magazine
 * layer is populated only by kmem_cache_free(). Magazines of constructed
 * objects are never populated directly from the slab layer (which contains raw,
 * unconstructed objects). Whenever an allocation request cannot be satisfied
 * from the magazine layer, the magazines are bypassed and the request is
 * satisfied from the slab layer (creating a new slab if necessary). kmem calls
 * the object constructor only when allocating from the slab layer, and only in
 * response to kmem_cache_alloc() or to prepare the destination buffer passed in
 * the move callback. kmem does not preconstruct objects in anticipation of
 * kmem_cache_alloc().
 *
 * 2.3.2 Object Constructor and Destructor
 *
 * If the client supplies a destructor, it must be valid to call the destructor
 * on a newly created object (immediately after the constructor).
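 *
 * For example, this constructor/destructor pair satisfies that requirement
 * (a sketch; object_t and its o_lock member are hypothetical client code):
 *
 *      static int
 *      object_constructor(void *buf, void *user_arg, int kmflags)
 *      {
 *              object_t *op = buf;
 *
 *              mutex_init(&op->o_lock, NULL, MUTEX_DEFAULT, NULL);
 *              return (0);
 *      }
 *
 *      static void
 *      object_destructor(void *buf, void *user_arg)
 *      {
 *              object_t *op = buf;
 *
 *              // must be safe immediately after object_constructor()
 *              mutex_destroy(&op->o_lock);
 *      }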
 *
 * 2.4 Recognizing Known Objects
 *
 * There is a simple test to determine safely whether or not the client knows
 * about a given object in the move callback. It relies on the fact that kmem
 * guarantees that the object of the move callback has only been touched by the
 * client itself or else by kmem. kmem does this by ensuring that none of the
 * cache's slabs are freed to the virtual memory (VM) subsystem while a move
 * callback is pending. When the last object on a slab is freed, if there is a
 * pending move, kmem puts the slab on a per-cache dead list and defers freeing
 * slabs on that list until all pending callbacks are completed. That way,
 * clients can be certain that the object of a move callback is in one of the
 * states listed above, making it possible to distinguish known objects (in
 * state #4) using the two low order bits of any pointer member (with the
 * exception of 'char *' or 'short *', which may not be 4-byte aligned on some
 * platforms).
 *
 * The test works as long as the client always transitions objects from state #4
 * (known, in use) to state #5 (about to be freed, invalid) by setting the low
 * order bit of the client-designated pointer member. Since kmem only writes
 * invalid memory patterns, such as 0xbaddcafe to uninitialized memory and
 * 0xdeadbeef to freed memory, any scribbling on the object done by kmem is
 * guaranteed to set at least one of the two low order bits. Therefore, given an
 * object with a back pointer to a 'container_t *o_container', the client can
 * test
 *
 *      container_t *container = object->o_container;
 *      if ((uintptr_t)container & 0x3) {
 *              return (KMEM_CBRC_DONT_KNOW);
 *      }
 *
 * Typically, an object will have a pointer to some structure with a list or
 * hash where objects from the cache are kept while in use. Assuming that the
 * client has some way of knowing that the container structure is valid and will
 * not go away during the move, and assuming that the structure includes a lock
 * to protect whatever collection is used, then the client would continue as
 * follows:
 *
 *      // Ensure that the container structure does not go away.
 *      if (container_hold(container) == 0) {
 *              return (KMEM_CBRC_DONT_KNOW);
 *      }
 *      mutex_enter(&container->c_objects_lock);
 *      if (container != object->o_container) {
 *              mutex_exit(&container->c_objects_lock);
 *              container_rele(container);
 *              return (KMEM_CBRC_DONT_KNOW);
 *      }
 *
 * At this point the client knows that the object cannot be freed as long as
 * c_objects_lock is held. Note that after acquiring the lock, the client must
 * recheck the o_container pointer in case the object was removed just before
 * acquiring the lock.
 *
 * When the client is about to free an object, it must first remove that object
 * from the list, hash, or other structure where it is kept. At that time, to
 * mark the object so it can be distinguished from the remaining, known objects,
 * the client sets the designated low order bit:
 *
 *      mutex_enter(&container->c_objects_lock);
 *      object->o_container = (void *)((uintptr_t)object->o_container | 0x1);
 *      list_remove(&container->c_objects, object);
 *      mutex_exit(&container->c_objects_lock);
 *
 * In the common case, the object is freed to the magazine layer, where it may
 * be reused on a subsequent allocation without the overhead of calling the
 * constructor. While in the magazine it appears allocated from the point of
 * view of the slab layer, making it a candidate for the move callback. Most
 * objects unrecognized by the client in the move callback fall into this
 * category and are cheaply distinguished from known objects by the test
 * described earlier. Since recognition is cheap for the client, and searching
 * magazines is expensive for kmem, kmem defers searching until the client first
 * returns KMEM_CBRC_DONT_KNOW. As long as the needed effort is reasonable, kmem
 * elsewhere does what it can to avoid bothering the client unnecessarily.
 *
 * Invalidating the designated pointer member before freeing the object marks
 * the object to be avoided in the callback, and conversely, assigning a valid
 * value to the designated pointer member after allocating the object makes the
 * object fair game for the callback:
 *
 *      ... allocate object ...
 *      ... set any initial state not set by the constructor ...
 *
 *      mutex_enter(&container->c_objects_lock);
 *      list_insert_tail(&container->c_objects, object);
 *      membar_producer();
 *      object->o_container = container;
 *      mutex_exit(&container->c_objects_lock);
 *
 * Note that everything else must be valid before setting o_container makes the
 * object fair game for the move callback. The membar_producer() call ensures
 * that all the object's state is written to memory before setting the pointer
 * that transitions the object from state #3 or #7 (allocated, constructed, not
 * yet in use) to state #4 (in use, valid). That's important because the move
 * function has to check the validity of the pointer before it can safely
 * acquire the lock protecting the collection where it expects to find known
 * objects.
 *
 * This method of distinguishing known objects observes the usual symmetry:
 * invalidating the designated pointer is the first thing the client does before
 * freeing the object, and setting the designated pointer is the last thing the
 * client does after allocating the object. Of course, the client is not
 * required to use this method. Fundamentally, how the client recognizes known
 * objects is completely up to the client, but this method is recommended as an
 * efficient and safe way to take advantage of the guarantees made by kmem. If
 * the entire object is arbitrary data without any markable bits from a suitable
 * pointer member, then the client must find some other method, such as
 * searching a hash table of known objects.
 *
 * 2.5 Preventing Objects From Moving
 *
 * Besides a way to distinguish known objects, the other thing that the client
 * needs is a strategy to ensure that an object will not move while the client
 * is actively using it. The details of satisfying this requirement tend to be
 * highly cache-specific. It might seem that the same rules that let a client
 * remove an object safely should also decide when an object can be moved
 * safely. However, any object state that makes a removal attempt invalid is
 * likely to be long-lasting for objects that the client does not expect to
 * remove. kmem knows nothing about the object state and is equally likely (from
 * the client's point of view) to request a move for any object in the cache,
 * whether prepared for removal or not. Even a low percentage of objects stuck
 * in place by unremovability will defeat the consolidator if the stuck objects
 * are the same long-lived allocations likely to hold slabs hostage.
 * Fundamentally, the consolidator is not aimed at common cases. Severe external
 * fragmentation is a worst case scenario manifested as sparsely allocated
 * slabs, by definition a low percentage of the cache's objects. When deciding
 * what makes an object movable, keep in mind the goal of the consolidator: to
 * bring worst-case external fragmentation within the limits guaranteed for
 * internal fragmentation. Removability is a poor criterion if it is likely to
 * exclude more than an insignificant percentage of objects for long periods of
 * time.
 *
 * A tricky general solution exists, and it has the advantage of letting you
 * move any object at almost any moment, practically eliminating the likelihood
 * that an object can hold a slab hostage. However, if there is a cache-specific
 * way to ensure that an object is not actively in use in the vast majority of
 * cases, a simpler solution that leverages this cache-specific knowledge is
 * preferred.
 *
 * 2.5.1 Cache-Specific Solution
 *
 * As an example of a cache-specific solution, the ZFS znode cache takes
 * advantage of the fact that the vast majority of znodes are only being
 * referenced from the DNLC. (A typical case might be a few hundred in active
 * use and a hundred thousand in the DNLC.) In the move callback, after the ZFS
 * client has established that it recognizes the znode and can access its fields
 * safely (using the method described earlier), it then tests whether the znode
 * is referenced by anything other than the DNLC. If so, it assumes that the
 * znode may be in active use and is unsafe to move, so it drops its locks and
 * returns KMEM_CBRC_LATER. The advantage of this strategy is that everywhere
 * else znodes are used, no change is needed to protect against the possibility
 * of the znode moving. The disadvantage is that it remains possible for an
 * application to hold a znode slab hostage with an open file descriptor.
 * However, this case ought to be rare and the consolidator has a way to deal
 * with it: If the client responds KMEM_CBRC_LATER repeatedly for the same
 * object, kmem eventually stops believing it and treats the slab as if the
 * client had responded KMEM_CBRC_NO. Having marked the hostage slab, kmem can
 * then focus on getting it off of the partial slab list by allocating rather
 * than freeing all of its objects. (Either way of getting a slab off the
 * free list reduces fragmentation.)
 *
 * 2.5.2 General Solution
 *
 * The general solution, on the other hand, requires an explicit hold everywhere
 * the object is used to prevent it from moving. To keep the client locking
 * strategy as uncomplicated as possible, kmem guarantees the simplifying
 * assumption that move callbacks are sequential, even across multiple caches.
 * Internally, a global queue processed by a single thread supports all caches
 * implementing the callback function. No matter how many caches supply a move
 * function, the consolidator never moves more than one object at a time, so the
 * client does not have to worry about tricky lock ordering involving several
 * related objects from different kmem caches.
 *
 * The general solution implements the explicit hold as a read-write lock, which
 * allows multiple readers to access an object from the cache simultaneously
 * while a single writer is excluded from moving it. A single rwlock for the
 * entire cache would lock out all threads from using any of the cache's objects
 * even though only a single object is being moved, so to reduce contention,
 * the client can fan out the single rwlock into an array of rwlocks hashed by
 * the object address, making it probable that moving one object will not
 * prevent other threads from using a different object. The rwlock cannot be a
 * member of the object itself, because the possibility of the object moving
 * makes it unsafe to access any of the object's fields until the lock is
 * acquired.
 *
 * Assuming a small, fixed number of locks, it's possible that multiple objects
 * will hash to the same lock. A thread that needs to use multiple objects in
 * the same function may acquire the same lock multiple times. Since rwlocks are
 * reentrant for readers, and since there is never more than a single writer at
 * a time (assuming that the client acquires the lock as a writer only when
 * moving an object inside the callback), there would seem to be no problem.
 * However, a client locking multiple objects in the same function must handle
 * one case of potential deadlock: Assume that thread A needs to prevent both
 * object 1 and object 2 from moving, and thread B, the callback, meanwhile
 * tries to move object 3. It's possible, if objects 1, 2, and 3 all hash to the
 * same lock, that thread A will acquire the lock for object 1 as a reader
 * before thread B sets the lock's write-wanted bit, preventing thread A from
 * reacquiring the lock for object 2 as a reader. Unable to make forward
 * progress, thread A will never release the lock for object 1, resulting in
 * deadlock.
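 *
 * Spelled out as a timeline (where objects 1, 2, and 3 all hash to the same
 * lock, here simply 'lock'):
 *
 *      rw_enter(lock, RW_READER);      // thread A: holds the lock (object 1)
 *      rw_enter(lock, RW_WRITER);      // thread B: blocks, setting the lock's
 *                                      // write-wanted bit (object 3)
 *      rw_enter(lock, RW_READER);      // thread A: blocks behind the waiting
 *                                      // writer (object 2); thread A never
 *                                      // releases object 1 -- deadlock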
 *
 * There are two ways of avoiding the deadlock just described. The first is to
 * use rw_tryenter() rather than rw_enter() in the callback function when
 * attempting to acquire the lock as a writer. If tryenter discovers that the
 * same object (or another object hashed to the same lock) is already in use, it
 * aborts the callback and returns KMEM_CBRC_LATER. The second way is to use
 * rprwlock_t (declared in common/fs/zfs/sys/rprwlock.h) instead of rwlock_t,
 * since it allows a thread to acquire the lock as a reader in spite of a
 * waiting writer. This second approach insists on moving the object now, no
 * matter how many readers the move function must wait for in order to do so,
 * and could delay the completion of the callback indefinitely (blocking
 * callbacks to other clients). In practice, a less insistent callback using
 * rw_tryenter() returns KMEM_CBRC_LATER infrequently enough that there seems
 * little reason to use anything else.
 *
 * Avoiding deadlock is not the only problem that an implementation using an
 * explicit hold needs to solve. Locking the object in the first place (to
 * prevent it from moving) remains a problem, since the object could move
 * between the time you obtain a pointer to the object and the time you acquire
 * the rwlock hashed to that pointer value. Therefore the client needs to
 * recheck the value of the pointer after acquiring the lock, drop the lock if
 * the value has changed, and try again. This requires a level of indirection:
 * something that points to the object rather than the object itself, that the
 * client can access safely while attempting to acquire the lock. (The object
 * itself cannot be referenced safely because it can move at any time.)
 * The following lock-acquisition function takes whatever is safe to reference
 * (arg), follows its pointer to the object (using function f), and tries as
 * often as necessary to acquire the hashed lock and verify that the object
 * still has not moved:
 *
 *      object_t *
 *      object_hold(object_f f, void *arg)
 *      {
 *              object_t *op;
 *
 *              op = f(arg);
 *              if (op == NULL) {
 *                      return (NULL);
 *              }
 *
 *              rw_enter(OBJECT_RWLOCK(op), RW_READER);
 *              while (op != f(arg)) {
 *                      rw_exit(OBJECT_RWLOCK(op));
 *                      op = f(arg);
 *                      if (op == NULL) {
 *                              break;
 *                      }
 *                      rw_enter(OBJECT_RWLOCK(op), RW_READER);
 *              }
 *
 *              return (op);
 *      }
 *
 * The OBJECT_RWLOCK macro hashes the object address to obtain the rwlock. The
 * lock reacquisition loop, while necessary, almost never executes. The function
 * pointer f (used to obtain the object pointer from arg) has the following type
 * definition:
 *
 *      typedef object_t *(*object_f)(void *arg);
 *
 * An object_f implementation is likely to be as simple as accessing a structure
 * member:
 *
 *      object_t *
 *      s_object(void *arg)
 *      {
 *              something_t *sp = arg;
 *              return (sp->s_object);
 *      }
 *
 * The flexibility of a function pointer allows the path to the object to be
 * arbitrarily complex and also supports the notion that depending on where you
 * are using the object, you may need to get it from someplace different.
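 *
 * For illustration, OBJECT_RWLOCK might hash the object address into a small
 * fixed array of rwlocks (a sketch; the array size and shift are arbitrary):
 *
 *      #define OBJECT_RWLOCK_COUNT     64      // power of two
 *      static krwlock_t object_rwlock[OBJECT_RWLOCK_COUNT];
 *      #define OBJECT_RWLOCK(op)                                       \
 *              (&object_rwlock[((uintptr_t)(op) >> 3) &                \
 *              (OBJECT_RWLOCK_COUNT - 1)])
 *
 * Shifting out the low order bits accounts for object alignment; any hash that
 * spreads object addresses evenly across the array will do.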
The caller 502 * of object_hold() only needs to know that the returned object pointer is valid 503 * if not NULL and that the object will not move until released. 504 * 505 * Although object_hold() prevents an object from moving, it does not prevent it 506 * from being freed. The caller must take measures before calling object_hold() 507 * (afterwards is too late) to ensure that the held object cannot be freed. The 508 * caller must do so without accessing the unsafe object reference, so any lock 509 * or reference count used to ensure the continued existence of the object must 510 * live outside the object itself. 511 * 512 * Obtaining a new object is a special case where an explicit hold is impossible 513 * for the caller. Any function that returns a newly allocated object (either as 514 * a return value, or as an in-out paramter) must return it already held; after 515 * the caller gets it is too late, since the object cannot be safely accessed 516 * without the level of indirection described earlier. The following 517 * object_alloc() example uses the same code shown earlier to transition a new 518 * object into the state of being recognized (by the client) as a known object. 519 * The function must acquire the hold (rw_enter) before that state transition 520 * makes the object movable: 521 * 522 * static object_t * 523 * object_alloc(container_t *container) 524 * { 525 * object_t *object = kmem_cache_alloc(object_cache, 0); 526 * ... set any initial state not set by the constructor ... 527 * rw_enter(OBJECT_RWLOCK(object), RW_READER); 528 * mutex_enter(&container->c_objects_lock); 529 * list_insert_tail(&container->c_objects, object); 530 * membar_producer(); 531 * object->o_container = container; 532 * mutex_exit(&container->c_objects_lock); 533 * return (object); 534 * } 535 * 536 * Functions that implicitly acquire an object hold (any function that calls 537 * object_alloc() to supply an object for the caller) need to be carefully noted 538 * so that the matching object_rele() is not neglected. Otherwise, leaked holds 539 * prevent all objects hashed to the affected rwlocks from ever being moved. 540 * 541 * The pointer to a held object can be hashed to the holding rwlock even after 542 * the object has been freed. Although it is possible to release the hold 543 * after freeing the object, you may decide to release the hold implicitly in 544 * whatever function frees the object, so as to release the hold as soon as 545 * possible, and for the sake of symmetry with the function that implicitly 546 * acquires the hold when it allocates the object. Here, object_free() releases 547 * the hold acquired by object_alloc(). Its implicit object_rele() forms a 548 * matching pair with object_hold(): 549 * 550 * void 551 * object_free(object_t *object) 552 * { 553 * container_t *container; 554 * 555 * ASSERT(object_held(object)); 556 * container = object->o_container; 557 * mutex_enter(&container->c_objects_lock); 558 * object->o_container = 559 * (void *)((uintptr_t)object->o_container | 0x1); 560 * list_remove(&container->c_objects, object); 561 * mutex_exit(&container->c_objects_lock); 562 * object_rele(object); 563 * kmem_cache_free(object_cache, object); 564 * } 565 * 566 * Note that object_free() cannot safely accept an object pointer as an argument 567 * unless the object is already held. Any function that calls object_free() 568 * needs to be carefully noted since it similarly forms a matching pair with 569 * object_hold(). 
 *
 * To complete the picture, the following callback function implements the
 * general solution by moving objects only if they are currently unheld:
 *
 *      static kmem_cbrc_t
 *      object_move(void *buf, void *newbuf, size_t size, void *arg)
 *      {
 *              object_t *op = buf, *np = newbuf;
 *              container_t *container;
 *
 *              container = op->o_container;
 *              if ((uintptr_t)container & 0x3) {
 *                      return (KMEM_CBRC_DONT_KNOW);
 *              }
 *
 *              // Ensure that the container structure does not go away.
 *              if (container_hold(container) == 0) {
 *                      return (KMEM_CBRC_DONT_KNOW);
 *              }
 *
 *              mutex_enter(&container->c_objects_lock);
 *              if (container != op->o_container) {
 *                      mutex_exit(&container->c_objects_lock);
 *                      container_rele(container);
 *                      return (KMEM_CBRC_DONT_KNOW);
 *              }
 *
 *              if (rw_tryenter(OBJECT_RWLOCK(op), RW_WRITER) == 0) {
 *                      mutex_exit(&container->c_objects_lock);
 *                      container_rele(container);
 *                      return (KMEM_CBRC_LATER);
 *              }
 *
 *              object_move_impl(op, np);       // critical section
 *              rw_exit(OBJECT_RWLOCK(op));
 *
 *              op->o_container = (void *)((uintptr_t)op->o_container | 0x1);
 *              list_link_replace(&op->o_link_node, &np->o_link_node);
 *              mutex_exit(&container->c_objects_lock);
 *              container_rele(container);
 *              return (KMEM_CBRC_YES);
 *      }
 *
 * Note that object_move() must invalidate the designated o_container pointer of
 * the old object in the same way that object_free() does, since kmem will free
 * the object in response to the KMEM_CBRC_YES return value.
 *
 * The lock order in object_move() differs from object_alloc(), which locks
 * OBJECT_RWLOCK first and &container->c_objects_lock second, but as long as the
 * callback uses rw_tryenter() (preventing the deadlock described earlier), it's
 * not a problem. Holding the lock on the object list in the example above
 * through the entire callback not only prevents the object from going away, it
 * also allows you to lock the list elsewhere and know that none of its elements
 * will move during iteration.
 *
 * Adding an explicit hold everywhere an object from the cache is used is tricky
 * and involves much more change to client code than a cache-specific solution
 * that leverages existing state to decide whether or not an object is
 * movable. However, this approach has the advantage that no object remains
 * immovable for any significant length of time, making it extremely unlikely
 * that long-lived allocations can continue holding slabs hostage; and it works
 * for any cache.
 *
 * 3. Consolidator Implementation
 *
 * Once the client supplies a move function that a) recognizes known objects and
 * b) avoids moving objects that are actively in use, the remaining work is up
 * to the consolidator to decide which objects to move and when to issue
 * callbacks.
 *
 * The consolidator relies on the fact that a cache's slabs are ordered by
 * usage. Each slab has a fixed number of objects. Depending on the slab's
 * "color" (the offset of the first object from the beginning of the slab;
 * offsets are staggered to mitigate false sharing of cache lines) it is either
 * the maximum number of objects per slab determined at cache creation time or
 * else the number closest to the maximum that fits within the space remaining
 * after the initial offset. A completely allocated slab may contribute some
 * internal fragmentation (per-slab overhead) but no external fragmentation, so
 * it is of no interest to the consolidator. At the other extreme, slabs whose
 * objects have all been freed to the slab are released to the virtual memory
 * (VM) subsystem (objects freed to magazines are still allocated as far as the
 * slab is concerned). External fragmentation exists when there are slabs
 * somewhere between these extremes. A partial slab has at least one but not all
 * of its objects allocated. The more partial slabs, and the fewer allocated
 * objects on each of them, the higher the fragmentation. Hence the
 * consolidator's overall strategy is to reduce the number of partial slabs by
 * moving allocated objects from the least allocated slabs to the most allocated
 * slabs.
 *
 * Partial slabs are kept in an AVL tree ordered by usage. Completely allocated
 * slabs are kept separately in an unordered list. Since the majority of slabs
 * tend to be completely allocated (a typical unfragmented cache may have
 * thousands of complete slabs and only a single partial slab), separating
 * complete slabs improves the efficiency of partial slab ordering, since the
 * complete slabs do not affect the depth or balance of the AVL tree. This
 * ordered sequence of partial slabs acts as a "free list" supplying objects for
 * allocation requests.
 *
 * Objects are always allocated from the first partial slab in the free list,
 * where the allocation is most likely to eliminate a partial slab (by
 * completely allocating it). Conversely, when a single object from a completely
 * allocated slab is freed to the slab, that slab is added to the front of the
 * free list. Since most free list activity involves highly allocated slabs
 * coming and going at the front of the list, slabs tend naturally toward the
 * ideal order: highly allocated at the front, sparsely allocated at the back.
 * Slabs with few allocated objects are likely to become completely free if they
 * keep a safe distance away from the front of the free list. Slab misorders
 * interfere with the natural tendency of slabs to become completely free or
 * completely allocated. For example, a slab with a single allocated object
 * needs only a single free to escape the cache; its natural desire is
 * frustrated when it finds itself at the front of the list where a second
 * allocation happens just before the free could have released it. Another slab
 * with all but one object allocated might have supplied the buffer instead, so
 * that both (as opposed to neither) of the slabs would have been taken off the
 * free list.
 *
 * Although slabs tend naturally toward the ideal order, misorders allowed by a
 * simple list implementation defeat the consolidator's strategy of merging
 * least- and most-allocated slabs. Without an AVL tree to guarantee order, kmem
 * needs another way to fix misorders to optimize its callback strategy. One
 * approach is to periodically scan a limited number of slabs, advancing a
 * marker to hold the current scan position, and to move extreme misorders to
 * the front or back of the free list and to the front or back of the current
 * scan range. By making consecutive scan ranges overlap by one slab, the least
 * allocated slab in the current range can be carried along from the end of one
 * scan to the start of the next.
 *
 * Maintaining partial slabs in an AVL tree relieves kmem of this additional
 * task, however. Since most of the cache's activity is in the magazine layer,
 * and allocations from the slab layer represent only a startup cost, the
 * overhead of maintaining a balanced tree is not a significant concern compared
 * to the opportunity of reducing complexity by eliminating the partial slab
 * scanner just described. The overhead of an AVL tree is minimized by
 * maintaining only partial slabs in the tree and keeping completely allocated
 * slabs separately in a list. To avoid increasing the size of the slab
 * structure the AVL linkage pointers are reused for the slab's list linkage,
 * since the slab will always be either partial or complete, never stored both
 * ways at the same time. To further minimize the overhead of the AVL tree the
 * compare function that orders partial slabs by usage divides the range of
 * allocated object counts into bins such that counts within the same bin are
 * considered equal. Binning partial slabs makes it less likely that allocating
 * or freeing a single object will change the slab's order, requiring a tree
 * reinsertion (an avl_remove() followed by an avl_add(), both potentially
 * requiring some rebalancing of the tree). Allocation counts closest to
 * completely free and completely allocated are left unbinned (finely sorted) to
 * better support the consolidator's strategy of merging slabs at either
 * extreme.
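 *
 * A binned compare function has roughly the following shape (a sketch of the
 * idea, not the exact binning used below; slab_refcnt is the slab's count of
 * allocated objects):
 *
 *      static int
 *      slab_usage_cmp(const void *p0, const void *p1)
 *      {
 *              const kmem_slab_t *s0 = p0;
 *              const kmem_slab_t *s1 = p1;
 *              size_t w0 = s0->slab_refcnt >> 3;       // bin by eights
 *              size_t w1 = s1->slab_refcnt >> 3;       // (illustrative)
 *
 *              if (w0 < w1)
 *                      return (-1);
 *              if (w0 > w1)
 *                      return (1);
 *              // Break ties by address so that distinct slabs never compare
 *              // equal, as the AVL tree requires.
 *              if ((uintptr_t)s0 < (uintptr_t)s1)
 *                      return (-1);
 *              if ((uintptr_t)s0 > (uintptr_t)s1)
 *                      return (1);
 *              return (0);
 *      }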
 *
 * 3.1 Assessing Fragmentation and Selecting Candidate Slabs
 *
 * The consolidator piggybacks on the kmem maintenance thread and is called on
 * the same interval as kmem_cache_update(), once per cache every fifteen
 * seconds. kmem maintains a running count of unallocated objects in the slab
 * layer (cache_bufslab). The consolidator checks whether that number exceeds
 * 12.5% (1/8) of the total objects in the cache (cache_buftotal), and whether
 * there is a significant number of slabs in the cache (arbitrarily a minimum
 * 101 total slabs). Unused objects that have fallen out of the magazine layer's
 * working set are included in the assessment, and magazines in the depot are
 * reaped if those objects would lift cache_bufslab above the fragmentation
 * threshold. Once the consolidator decides that a cache is fragmented, it looks
 * for a candidate slab to reclaim, starting at the end of the partial slab free
 * list and scanning backwards. At first the consolidator is choosy: only a slab
 * with fewer than 12.5% (1/8) of its objects allocated qualifies (or else a
 * single allocated object, regardless of percentage). If there is difficulty
 * finding a candidate slab, kmem raises the allocation threshold incrementally,
 * up to a maximum 87.5% (7/8), so that eventually the consolidator will reduce
 * external fragmentation (unused objects on the free list) below 12.5% (1/8),
 * even in the worst case of every slab in the cache being almost 7/8 allocated.
 * The threshold can also be lowered incrementally when candidate slabs are easy
 * to find, and the threshold is reset to the minimum 1/8 as soon as the cache
 * is no longer fragmented.
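 *
 * The initial fragmentation check amounts to the following (a simplified
 * sketch in terms of the tunables defined later in this file; as noted above,
 * the actual test also accounts for unused objects cached in magazines):
 *
 *      total_slabs = cp->cache_slab_create - cp->cache_slab_destroy;
 *      fragmented =
 *          (cp->cache_bufslab * kmem_frag_denom >
 *          cp->cache_buftotal * kmem_frag_numer) &&    // more than 1/8 unused
 *          (total_slabs >= kmem_frag_minslabs);        // at least 101 slabs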
Objects selected as move destinations are 747 * chosen from slabs at the front of the free list. Assuming slabs in the ideal 748 * order (most allocated at the front, least allocated at the back) and a 749 * cooperative client, the consolidator will succeed in removing slabs from both 750 * ends of the free list, completely allocating on the one hand and completely 751 * freeing on the other. Objects selected as move destinations are allocated in 752 * the kmem maintenance thread where move requests are enqueued. A separate 753 * callback thread removes pending callbacks from the queue and calls the 754 * client. The separate thread ensures that client code (the move function) does 755 * not interfere with internal kmem maintenance tasks. A map of pending 756 * callbacks keyed by object address (the object to be moved) is checked to 757 * ensure that duplicate callbacks are not generated for the same object. 758 * Allocating the move destination (the object to move to) prevents subsequent 759 * callbacks from selecting the same destination as an earlier pending callback. 760 * 761 * Move requests can also be generated by kmem_cache_reap() when the system is 762 * desperate for memory and by kmem_cache_move_notify(), called by the client to 763 * notify kmem that a move refused earlier with KMEM_CBRC_LATER is now possible. 764 * The map of pending callbacks is protected by the same lock that protects the 765 * slab layer. 766 * 767 * When the system is desperate for memory, kmem does not bother to determine 768 * whether or not the cache exceeds the fragmentation threshold, but tries to 769 * consolidate as many slabs as possible. Normally, the consolidator chews 770 * slowly, one sparsely allocated slab at a time during each maintenance 771 * interval that the cache is fragmented. When desperate, the consolidator 772 * starts at the last partial slab and enqueues callbacks for every allocated 773 * object on every partial slab, working backwards until it reaches the first 774 * partial slab. The first partial slab, meanwhile, advances in pace with the 775 * consolidator as allocations to supply move destinations for the enqueued 776 * callbacks use up the highly allocated slabs at the front of the free list. 777 * Ideally, the overgrown free list collapses like an accordion, starting at 778 * both ends and ending at the center with a single partial slab. 779 * 780 * 3.3 Client Responses 781 * 782 * When the client returns KMEM_CBRC_NO in response to the move callback, kmem 783 * marks the slab that supplied the stuck object non-reclaimable and moves it to 784 * front of the free list. The slab remains marked as long as it remains on the 785 * free list, and it appears more allocated to the partial slab compare function 786 * than any unmarked slab, no matter how many of its objects are allocated. 787 * Since even one immovable object ties up the entire slab, the goal is to 788 * completely allocate any slab that cannot be completely freed. kmem does not 789 * bother generating callbacks to move objects from a marked slab unless the 790 * system is desperate. 791 * 792 * When the client responds KMEM_CBRC_LATER, kmem increments a count for the 793 * slab. If the client responds LATER too many times, kmem disbelieves and 794 * treats the response as a NO. The count is cleared when the slab is taken off 795 * the partial slab list or when the client moves one of the slab's objects. 796 * 797 * 4. 
 * 4. Observability
 *
 * A kmem cache's external fragmentation is best observed with 'mdb -k' using
 * the ::kmem_slabs dcmd. For a complete description of the command, enter
 * '::help kmem_slabs' at the mdb prompt.
 */

#include <sys/kmem_impl.h>
#include <sys/vmem_impl.h>
#include <sys/param.h>
#include <sys/sysmacros.h>
#include <sys/vm.h>
#include <sys/proc.h>
#include <sys/tuneable.h>
#include <sys/systm.h>
#include <sys/cmn_err.h>
#include <sys/debug.h>
#include <sys/sdt.h>
#include <sys/mutex.h>
#include <sys/bitmap.h>
#include <sys/atomic.h>
#include <sys/kobj.h>
#include <sys/disp.h>
#include <vm/seg_kmem.h>
#include <sys/log.h>
#include <sys/callb.h>
#include <sys/taskq.h>
#include <sys/modctl.h>
#include <sys/reboot.h>
#include <sys/id32.h>
#include <sys/zone.h>
#include <sys/netstack.h>
#ifdef DEBUG
#include <sys/random.h>
#endif

extern void streams_msg_init(void);
extern int segkp_fromheap;
extern void segkp_cache_free(void);

struct kmem_cache_kstat {
	kstat_named_t	kmc_buf_size;
	kstat_named_t	kmc_align;
	kstat_named_t	kmc_chunk_size;
	kstat_named_t	kmc_slab_size;
	kstat_named_t	kmc_alloc;
	kstat_named_t	kmc_alloc_fail;
	kstat_named_t	kmc_free;
	kstat_named_t	kmc_depot_alloc;
	kstat_named_t	kmc_depot_free;
	kstat_named_t	kmc_depot_contention;
	kstat_named_t	kmc_slab_alloc;
	kstat_named_t	kmc_slab_free;
	kstat_named_t	kmc_buf_constructed;
	kstat_named_t	kmc_buf_avail;
	kstat_named_t	kmc_buf_inuse;
	kstat_named_t	kmc_buf_total;
	kstat_named_t	kmc_buf_max;
	kstat_named_t	kmc_slab_create;
	kstat_named_t	kmc_slab_destroy;
	kstat_named_t	kmc_vmem_source;
	kstat_named_t	kmc_hash_size;
	kstat_named_t	kmc_hash_lookup_depth;
	kstat_named_t	kmc_hash_rescale;
	kstat_named_t	kmc_full_magazines;
	kstat_named_t	kmc_empty_magazines;
	kstat_named_t	kmc_magazine_size;
	kstat_named_t	kmc_move_callbacks;
	kstat_named_t	kmc_move_yes;
	kstat_named_t	kmc_move_no;
	kstat_named_t	kmc_move_later;
	kstat_named_t	kmc_move_dont_need;
	kstat_named_t	kmc_move_dont_know;
	kstat_named_t	kmc_move_hunt_found;
} kmem_cache_kstat = {
	{ "buf_size",		KSTAT_DATA_UINT64 },
	{ "align",		KSTAT_DATA_UINT64 },
	{ "chunk_size",		KSTAT_DATA_UINT64 },
	{ "slab_size",		KSTAT_DATA_UINT64 },
	{ "alloc",		KSTAT_DATA_UINT64 },
	{ "alloc_fail",		KSTAT_DATA_UINT64 },
	{ "free",		KSTAT_DATA_UINT64 },
	{ "depot_alloc",	KSTAT_DATA_UINT64 },
	{ "depot_free",		KSTAT_DATA_UINT64 },
	{ "depot_contention",	KSTAT_DATA_UINT64 },
	{ "slab_alloc",		KSTAT_DATA_UINT64 },
	{ "slab_free",		KSTAT_DATA_UINT64 },
	{ "buf_constructed",	KSTAT_DATA_UINT64 },
	{ "buf_avail",		KSTAT_DATA_UINT64 },
	{ "buf_inuse",		KSTAT_DATA_UINT64 },
	{ "buf_total",		KSTAT_DATA_UINT64 },
	{ "buf_max",		KSTAT_DATA_UINT64 },
	{ "slab_create",	KSTAT_DATA_UINT64 },
	{ "slab_destroy",	KSTAT_DATA_UINT64 },
	{ "vmem_source",	KSTAT_DATA_UINT64 },
	{ "hash_size",		KSTAT_DATA_UINT64 },
	{ "hash_lookup_depth",	KSTAT_DATA_UINT64 },
	{ "hash_rescale",	KSTAT_DATA_UINT64 },
	{ "full_magazines",	KSTAT_DATA_UINT64 },
	{ "empty_magazines",	KSTAT_DATA_UINT64 },
	{ "magazine_size",	KSTAT_DATA_UINT64 },
	{ "move_callbacks",	KSTAT_DATA_UINT64 },
	{ "move_yes",		KSTAT_DATA_UINT64 },
	{ "move_no",		KSTAT_DATA_UINT64 },
	{ "move_later",		KSTAT_DATA_UINT64 },
	{ "move_dont_need",	KSTAT_DATA_UINT64 },
"move_dont_need", KSTAT_DATA_UINT64 }, 903 { "move_dont_know", KSTAT_DATA_UINT64 }, 904 { "move_hunt_found", KSTAT_DATA_UINT64 }, 905 }; 906 907 static kmutex_t kmem_cache_kstat_lock; 908 909 /* 910 * The default set of caches to back kmem_alloc(). 911 * These sizes should be reevaluated periodically. 912 * 913 * We want allocations that are multiples of the coherency granularity 914 * (64 bytes) to be satisfied from a cache which is a multiple of 64 915 * bytes, so that it will be 64-byte aligned. For all multiples of 64, 916 * the next kmem_cache_size greater than or equal to it must be a 917 * multiple of 64. 918 * 919 * We split the table into two sections: size <= 4k and size > 4k. This 920 * saves a lot of space and cache footprint in our cache tables. 921 */ 922 static const int kmem_alloc_sizes[] = { 923 1 * 8, 924 2 * 8, 925 3 * 8, 926 4 * 8, 5 * 8, 6 * 8, 7 * 8, 927 4 * 16, 5 * 16, 6 * 16, 7 * 16, 928 4 * 32, 5 * 32, 6 * 32, 7 * 32, 929 4 * 64, 5 * 64, 6 * 64, 7 * 64, 930 4 * 128, 5 * 128, 6 * 128, 7 * 128, 931 P2ALIGN(8192 / 7, 64), 932 P2ALIGN(8192 / 6, 64), 933 P2ALIGN(8192 / 5, 64), 934 P2ALIGN(8192 / 4, 64), 935 P2ALIGN(8192 / 3, 64), 936 P2ALIGN(8192 / 2, 64), 937 }; 938 939 static const int kmem_big_alloc_sizes[] = { 940 2 * 4096, 3 * 4096, 941 2 * 8192, 3 * 8192, 942 4 * 8192, 5 * 8192, 6 * 8192, 7 * 8192, 943 8 * 8192, 9 * 8192, 10 * 8192, 11 * 8192, 944 12 * 8192, 13 * 8192, 14 * 8192, 15 * 8192, 945 16 * 8192 946 }; 947 948 #define KMEM_MAXBUF 4096 949 #define KMEM_BIG_MAXBUF_32BIT 32768 950 #define KMEM_BIG_MAXBUF 131072 951 952 #define KMEM_BIG_MULTIPLE 4096 /* big_alloc_sizes must be a multiple */ 953 #define KMEM_BIG_SHIFT 12 /* lg(KMEM_BIG_MULTIPLE) */ 954 955 static kmem_cache_t *kmem_alloc_table[KMEM_MAXBUF >> KMEM_ALIGN_SHIFT]; 956 static kmem_cache_t *kmem_big_alloc_table[KMEM_BIG_MAXBUF >> KMEM_BIG_SHIFT]; 957 958 #define KMEM_ALLOC_TABLE_MAX (KMEM_MAXBUF >> KMEM_ALIGN_SHIFT) 959 static size_t kmem_big_alloc_table_max = 0; /* # of filled elements */ 960 961 static kmem_magtype_t kmem_magtype[] = { 962 { 1, 8, 3200, 65536 }, 963 { 3, 16, 256, 32768 }, 964 { 7, 32, 64, 16384 }, 965 { 15, 64, 0, 8192 }, 966 { 31, 64, 0, 4096 }, 967 { 47, 64, 0, 2048 }, 968 { 63, 64, 0, 1024 }, 969 { 95, 64, 0, 512 }, 970 { 143, 64, 0, 0 }, 971 }; 972 973 static uint32_t kmem_reaping; 974 static uint32_t kmem_reaping_idspace; 975 976 /* 977 * kmem tunables 978 */ 979 clock_t kmem_reap_interval; /* cache reaping rate [15 * HZ ticks] */ 980 int kmem_depot_contention = 3; /* max failed tryenters per real interval */ 981 pgcnt_t kmem_reapahead = 0; /* start reaping N pages before pageout */ 982 int kmem_panic = 1; /* whether to panic on error */ 983 int kmem_logging = 1; /* kmem_log_enter() override */ 984 uint32_t kmem_mtbf = 0; /* mean time between failures [default: off] */ 985 size_t kmem_transaction_log_size; /* transaction log size [2% of memory] */ 986 size_t kmem_content_log_size; /* content log size [2% of memory] */ 987 size_t kmem_failure_log_size; /* failure log [4 pages per CPU] */ 988 size_t kmem_slab_log_size; /* slab create log [4 pages per CPU] */ 989 size_t kmem_content_maxsave = 256; /* KMF_CONTENTS max bytes to log */ 990 size_t kmem_lite_minsize = 0; /* minimum buffer size for KMF_LITE */ 991 size_t kmem_lite_maxalign = 1024; /* maximum buffer alignment for KMF_LITE */ 992 int kmem_lite_pcs = 4; /* number of PCs to store in KMF_LITE mode */ 993 size_t kmem_maxverify; /* maximum bytes to inspect in debug routines */ 994 size_t kmem_minfirewall; /* 

static kmem_magtype_t kmem_magtype[] = {
	/* magazine size (rounds), alignment, minbuf, maxbuf (kmem_magtype_t) */
	{ 1,	8,	3200,	65536 },
	{ 3,	16,	256,	32768 },
	{ 7,	32,	64,	16384 },
	{ 15,	64,	0,	8192 },
	{ 31,	64,	0,	4096 },
	{ 47,	64,	0,	2048 },
	{ 63,	64,	0,	1024 },
	{ 95,	64,	0,	512 },
	{ 143,	64,	0,	0 },
};

static uint32_t kmem_reaping;
static uint32_t kmem_reaping_idspace;

/*
 * kmem tunables
 */
clock_t kmem_reap_interval;	/* cache reaping rate [15 * HZ ticks] */
int kmem_depot_contention = 3;	/* max failed tryenters per real interval */
pgcnt_t kmem_reapahead = 0;	/* start reaping N pages before pageout */
int kmem_panic = 1;		/* whether to panic on error */
int kmem_logging = 1;		/* kmem_log_enter() override */
uint32_t kmem_mtbf = 0;		/* mean time between failures [default: off] */
size_t kmem_transaction_log_size; /* transaction log size [2% of memory] */
size_t kmem_content_log_size;	/* content log size [2% of memory] */
size_t kmem_failure_log_size;	/* failure log [4 pages per CPU] */
size_t kmem_slab_log_size;	/* slab create log [4 pages per CPU] */
size_t kmem_content_maxsave = 256; /* KMF_CONTENTS max bytes to log */
size_t kmem_lite_minsize = 0;	/* minimum buffer size for KMF_LITE */
size_t kmem_lite_maxalign = 1024; /* maximum buffer alignment for KMF_LITE */
int kmem_lite_pcs = 4;		/* number of PCs to store in KMF_LITE mode */
size_t kmem_maxverify;		/* maximum bytes to inspect in debug routines */
size_t kmem_minfirewall;	/* hardware-enforced redzone threshold */

#ifdef _LP64
size_t kmem_max_cached = KMEM_BIG_MAXBUF;	/* maximum kmem_alloc cache */
#else
size_t kmem_max_cached = KMEM_BIG_MAXBUF_32BIT; /* maximum kmem_alloc cache */
#endif

#ifdef DEBUG
int kmem_flags = KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | KMF_CONTENTS;
#else
int kmem_flags = 0;
#endif
int kmem_ready;

static kmem_cache_t	*kmem_slab_cache;
static kmem_cache_t	*kmem_bufctl_cache;
static kmem_cache_t	*kmem_bufctl_audit_cache;

static kmutex_t		kmem_cache_lock;	/* inter-cache linkage only */
static list_t		kmem_caches;

static taskq_t		*kmem_taskq;
static kmutex_t		kmem_flags_lock;
static vmem_t		*kmem_metadata_arena;
static vmem_t		*kmem_msb_arena;	/* arena for metadata caches */
static vmem_t		*kmem_cache_arena;
static vmem_t		*kmem_hash_arena;
static vmem_t		*kmem_log_arena;
static vmem_t		*kmem_oversize_arena;
static vmem_t		*kmem_va_arena;
static vmem_t		*kmem_default_arena;
static vmem_t		*kmem_firewall_va_arena;
static vmem_t		*kmem_firewall_arena;

/*
 * Define KMEM_STATS to turn on statistic gathering. By default, it is only
 * turned on when DEBUG is also defined.
 */
#ifdef DEBUG
#define	KMEM_STATS
#endif	/* DEBUG */

#ifdef KMEM_STATS
#define	KMEM_STAT_ADD(stat)			((stat)++)
#define	KMEM_STAT_COND_ADD(cond, stat)		((void) (!(cond) || (stat)++))
#else
#define	KMEM_STAT_ADD(stat)			/* nothing */
#define	KMEM_STAT_COND_ADD(cond, stat)		/* nothing */
#endif	/* KMEM_STATS */

/*
 * kmem slab consolidator thresholds (tunables)
 */
static size_t kmem_frag_minslabs = 101;	/* minimum total slabs */
static size_t kmem_frag_numer = 1;	/* free buffers (numerator) */
static size_t kmem_frag_denom = KMEM_VOID_FRACTION; /* buffers (denominator) */
/*
 * Maximum number of slabs from which to move buffers during a single
 * maintenance interval while the system is not low on memory.
 */
static size_t kmem_reclaim_max_slabs = 1;
/*
 * Number of slabs to scan backwards from the end of the partial slab list
 * when searching for buffers to relocate.
 */
static size_t kmem_reclaim_scan_range = 12;

#ifdef KMEM_STATS
static struct {
	uint64_t kms_callbacks;
	uint64_t kms_yes;
	uint64_t kms_no;
	uint64_t kms_later;
	uint64_t kms_dont_need;
	uint64_t kms_dont_know;
	uint64_t kms_hunt_found_slab;
	uint64_t kms_hunt_found_mag;
	uint64_t kms_hunt_alloc_fail;
	uint64_t kms_hunt_lucky;
	uint64_t kms_notify;
	uint64_t kms_notify_callbacks;
	uint64_t kms_disbelief;
	uint64_t kms_already_pending;
	uint64_t kms_callback_alloc_fail;
	uint64_t kms_callback_taskq_fail;
	uint64_t kms_endscan_slab_destroyed;
	uint64_t kms_endscan_nomem;
	uint64_t kms_endscan_slab_all_used;
	uint64_t kms_endscan_refcnt_changed;
	uint64_t kms_endscan_nomove_changed;
	uint64_t kms_endscan_freelist;
	uint64_t kms_avl_update;
	uint64_t kms_avl_noupdate;
	uint64_t kms_no_longer_reclaimable;
	uint64_t kms_notify_no_longer_reclaimable;
	uint64_t kms_alloc_fail;
	uint64_t kms_constructor_fail;
	uint64_t kms_dead_slabs_freed;
	uint64_t kms_defrags;
	uint64_t kms_scan_depot_ws_reaps;
	uint64_t kms_debug_reaps;
	uint64_t kms_debug_move_scans;
} kmem_move_stats;
#endif	/* KMEM_STATS */

/* consolidator knobs */
static boolean_t kmem_move_noreap;
static boolean_t kmem_move_blocked;
static boolean_t kmem_move_fulltilt;
static boolean_t kmem_move_any_partial;

#ifdef DEBUG
/*
 * Ensure code coverage by occasionally running the consolidator even when the
 * caches are not fragmented (they may never be). These intervals are mean time
 * in cache maintenance intervals (kmem_cache_update).
 */
static int kmem_mtb_move = 60;		/* defrag 1 slab (~15min) */
static int kmem_mtb_reap = 1800;	/* defrag all slabs (~7.5hrs) */
#endif	/* DEBUG */

static kmem_cache_t *kmem_defrag_cache;
static kmem_cache_t *kmem_move_cache;
static taskq_t *kmem_move_taskq;

static void kmem_cache_scan(kmem_cache_t *);
static void kmem_cache_defrag(kmem_cache_t *);


kmem_log_header_t *kmem_transaction_log;
kmem_log_header_t *kmem_content_log;
kmem_log_header_t *kmem_failure_log;
kmem_log_header_t *kmem_slab_log;

static int kmem_lite_count;	/* # of PCs in kmem_buftag_lite_t */

#define	KMEM_BUFTAG_LITE_ENTER(bt, count, caller)			\
	if ((count) > 0) {						\
		pc_t *_s = ((kmem_buftag_lite_t *)(bt))->bt_history;	\
		pc_t *_e;						\
		/* memmove() the old entries down one notch */		\
		for (_e = &_s[(count) - 1]; _e > _s; _e--)		\
			*_e = *(_e - 1);				\
		*_s = (uintptr_t)(caller);				\
	}

#define	KMERR_MODIFIED	0	/* buffer modified while on freelist */
#define	KMERR_REDZONE	1	/* redzone violation (write past end of buf) */
#define	KMERR_DUPFREE	2	/* freed a buffer twice */
#define	KMERR_BADADDR	3	/* freed a bad (unallocated) address */
#define	KMERR_BADBUFTAG	4	/* buftag corrupted */
#define	KMERR_BADBUFCTL	5	/* bufctl corrupted */
#define	KMERR_BADCACHE	6	/* freed a buffer to the wrong cache */
#define	KMERR_BADSIZE	7	/* alloc size != free size */
#define	KMERR_BADBASE	8	/* buffer base address wrong */

struct {
	hrtime_t	kmp_timestamp;	/* timestamp of panic */
	int		kmp_error;	/* type of kmem error */
	void		*kmp_buffer;	/* buffer that induced panic */
	void		*kmp_realbuf;	/* real start address for buffer */
kmem_cache_t *kmp_cache; /* buffer's cache according to client */
1157 kmem_cache_t *kmp_realcache; /* actual cache containing buffer */
1158 kmem_slab_t *kmp_slab; /* slab according to kmem_findslab() */
1159 kmem_bufctl_t *kmp_bufctl; /* bufctl */
1160 } kmem_panic_info;
1161 
1162 
1163 static void
1164 copy_pattern(uint64_t pattern, void *buf_arg, size_t size)
1165 {
1166 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1167 uint64_t *buf = buf_arg;
1168 
1169 while (buf < bufend)
1170 *buf++ = pattern;
1171 }
1172 
1173 static void *
1174 verify_pattern(uint64_t pattern, void *buf_arg, size_t size)
1175 {
1176 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1177 uint64_t *buf;
1178 
1179 for (buf = buf_arg; buf < bufend; buf++)
1180 if (*buf != pattern)
1181 return (buf);
1182 return (NULL);
1183 }
1184 
1185 static void *
1186 verify_and_copy_pattern(uint64_t old, uint64_t new, void *buf_arg, size_t size)
1187 {
1188 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1189 uint64_t *buf;
1190 
1191 for (buf = buf_arg; buf < bufend; buf++) {
1192 if (*buf != old) {
1193 copy_pattern(old, buf_arg,
1194 (char *)buf - (char *)buf_arg);
1195 return (buf);
1196 }
1197 *buf = new;
1198 }
1199 
1200 return (NULL);
1201 }
1202 
1203 static void
1204 kmem_cache_applyall(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1205 {
1206 kmem_cache_t *cp;
1207 
1208 mutex_enter(&kmem_cache_lock);
1209 for (cp = list_head(&kmem_caches); cp != NULL;
1210 cp = list_next(&kmem_caches, cp))
1211 if (tq != NULL)
1212 (void) taskq_dispatch(tq, (task_func_t *)func, cp,
1213 tqflag);
1214 else
1215 func(cp);
1216 mutex_exit(&kmem_cache_lock);
1217 }
1218 
1219 static void
1220 kmem_cache_applyall_id(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1221 {
1222 kmem_cache_t *cp;
1223 
1224 mutex_enter(&kmem_cache_lock);
1225 for (cp = list_head(&kmem_caches); cp != NULL;
1226 cp = list_next(&kmem_caches, cp)) {
1227 if (!(cp->cache_cflags & KMC_IDENTIFIER))
1228 continue;
1229 if (tq != NULL)
1230 (void) taskq_dispatch(tq, (task_func_t *)func, cp,
1231 tqflag);
1232 else
1233 func(cp);
1234 }
1235 mutex_exit(&kmem_cache_lock);
1236 }
1237 
1238 /*
1239 * Debugging support. Given a buffer address, find its slab.
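 * This is a linear walk of both the complete and partial slab lists under
 * cache_lock, so it is suitable only for error-handling paths such as
 * kmem_error() below, never for hot paths.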
1240 */ 1241 static kmem_slab_t * 1242 kmem_findslab(kmem_cache_t *cp, void *buf) 1243 { 1244 kmem_slab_t *sp; 1245 1246 mutex_enter(&cp->cache_lock); 1247 for (sp = list_head(&cp->cache_complete_slabs); sp != NULL; 1248 sp = list_next(&cp->cache_complete_slabs, sp)) { 1249 if (KMEM_SLAB_MEMBER(sp, buf)) { 1250 mutex_exit(&cp->cache_lock); 1251 return (sp); 1252 } 1253 } 1254 for (sp = avl_first(&cp->cache_partial_slabs); sp != NULL; 1255 sp = AVL_NEXT(&cp->cache_partial_slabs, sp)) { 1256 if (KMEM_SLAB_MEMBER(sp, buf)) { 1257 mutex_exit(&cp->cache_lock); 1258 return (sp); 1259 } 1260 } 1261 mutex_exit(&cp->cache_lock); 1262 1263 return (NULL); 1264 } 1265 1266 static void 1267 kmem_error(int error, kmem_cache_t *cparg, void *bufarg) 1268 { 1269 kmem_buftag_t *btp = NULL; 1270 kmem_bufctl_t *bcp = NULL; 1271 kmem_cache_t *cp = cparg; 1272 kmem_slab_t *sp; 1273 uint64_t *off; 1274 void *buf = bufarg; 1275 1276 kmem_logging = 0; /* stop logging when a bad thing happens */ 1277 1278 kmem_panic_info.kmp_timestamp = gethrtime(); 1279 1280 sp = kmem_findslab(cp, buf); 1281 if (sp == NULL) { 1282 for (cp = list_tail(&kmem_caches); cp != NULL; 1283 cp = list_prev(&kmem_caches, cp)) { 1284 if ((sp = kmem_findslab(cp, buf)) != NULL) 1285 break; 1286 } 1287 } 1288 1289 if (sp == NULL) { 1290 cp = NULL; 1291 error = KMERR_BADADDR; 1292 } else { 1293 if (cp != cparg) 1294 error = KMERR_BADCACHE; 1295 else 1296 buf = (char *)bufarg - ((uintptr_t)bufarg - 1297 (uintptr_t)sp->slab_base) % cp->cache_chunksize; 1298 if (buf != bufarg) 1299 error = KMERR_BADBASE; 1300 if (cp->cache_flags & KMF_BUFTAG) 1301 btp = KMEM_BUFTAG(cp, buf); 1302 if (cp->cache_flags & KMF_HASH) { 1303 mutex_enter(&cp->cache_lock); 1304 for (bcp = *KMEM_HASH(cp, buf); bcp; bcp = bcp->bc_next) 1305 if (bcp->bc_addr == buf) 1306 break; 1307 mutex_exit(&cp->cache_lock); 1308 if (bcp == NULL && btp != NULL) 1309 bcp = btp->bt_bufctl; 1310 if (kmem_findslab(cp->cache_bufctl_cache, bcp) == 1311 NULL || P2PHASE((uintptr_t)bcp, KMEM_ALIGN) || 1312 bcp->bc_addr != buf) { 1313 error = KMERR_BADBUFCTL; 1314 bcp = NULL; 1315 } 1316 } 1317 } 1318 1319 kmem_panic_info.kmp_error = error; 1320 kmem_panic_info.kmp_buffer = bufarg; 1321 kmem_panic_info.kmp_realbuf = buf; 1322 kmem_panic_info.kmp_cache = cparg; 1323 kmem_panic_info.kmp_realcache = cp; 1324 kmem_panic_info.kmp_slab = sp; 1325 kmem_panic_info.kmp_bufctl = bcp; 1326 1327 printf("kernel memory allocator: "); 1328 1329 switch (error) { 1330 1331 case KMERR_MODIFIED: 1332 printf("buffer modified after being freed\n"); 1333 off = verify_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 1334 if (off == NULL) /* shouldn't happen */ 1335 off = buf; 1336 printf("modification occurred at offset 0x%lx " 1337 "(0x%llx replaced by 0x%llx)\n", 1338 (uintptr_t)off - (uintptr_t)buf, 1339 (longlong_t)KMEM_FREE_PATTERN, (longlong_t)*off); 1340 break; 1341 1342 case KMERR_REDZONE: 1343 printf("redzone violation: write past end of buffer\n"); 1344 break; 1345 1346 case KMERR_BADADDR: 1347 printf("invalid free: buffer not in cache\n"); 1348 break; 1349 1350 case KMERR_DUPFREE: 1351 printf("duplicate free: buffer freed twice\n"); 1352 break; 1353 1354 case KMERR_BADBUFTAG: 1355 printf("boundary tag corrupted\n"); 1356 printf("bcp ^ bxstat = %lx, should be %lx\n", 1357 (intptr_t)btp->bt_bufctl ^ btp->bt_bxstat, 1358 KMEM_BUFTAG_FREE); 1359 break; 1360 1361 case KMERR_BADBUFCTL: 1362 printf("bufctl corrupted\n"); 1363 break; 1364 1365 case KMERR_BADCACHE: 1366 printf("buffer freed to wrong cache\n"); 1367 
printf("buffer was allocated from %s,\n", cp->cache_name); 1368 printf("caller attempting free to %s.\n", cparg->cache_name); 1369 break; 1370 1371 case KMERR_BADSIZE: 1372 printf("bad free: free size (%u) != alloc size (%u)\n", 1373 KMEM_SIZE_DECODE(((uint32_t *)btp)[0]), 1374 KMEM_SIZE_DECODE(((uint32_t *)btp)[1])); 1375 break; 1376 1377 case KMERR_BADBASE: 1378 printf("bad free: free address (%p) != alloc address (%p)\n", 1379 bufarg, buf); 1380 break; 1381 } 1382 1383 printf("buffer=%p bufctl=%p cache: %s\n", 1384 bufarg, (void *)bcp, cparg->cache_name); 1385 1386 if (bcp != NULL && (cp->cache_flags & KMF_AUDIT) && 1387 error != KMERR_BADBUFCTL) { 1388 int d; 1389 timestruc_t ts; 1390 kmem_bufctl_audit_t *bcap = (kmem_bufctl_audit_t *)bcp; 1391 1392 hrt2ts(kmem_panic_info.kmp_timestamp - bcap->bc_timestamp, &ts); 1393 printf("previous transaction on buffer %p:\n", buf); 1394 printf("thread=%p time=T-%ld.%09ld slab=%p cache: %s\n", 1395 (void *)bcap->bc_thread, ts.tv_sec, ts.tv_nsec, 1396 (void *)sp, cp->cache_name); 1397 for (d = 0; d < MIN(bcap->bc_depth, KMEM_STACK_DEPTH); d++) { 1398 ulong_t off; 1399 char *sym = kobj_getsymname(bcap->bc_stack[d], &off); 1400 printf("%s+%lx\n", sym ? sym : "?", off); 1401 } 1402 } 1403 if (kmem_panic > 0) 1404 panic("kernel heap corruption detected"); 1405 if (kmem_panic == 0) 1406 debug_enter(NULL); 1407 kmem_logging = 1; /* resume logging */ 1408 } 1409 1410 static kmem_log_header_t * 1411 kmem_log_init(size_t logsize) 1412 { 1413 kmem_log_header_t *lhp; 1414 int nchunks = 4 * max_ncpus; 1415 size_t lhsize = (size_t)&((kmem_log_header_t *)0)->lh_cpu[max_ncpus]; 1416 int i; 1417 1418 /* 1419 * Make sure that lhp->lh_cpu[] is nicely aligned 1420 * to prevent false sharing of cache lines. 1421 */ 1422 lhsize = P2ROUNDUP(lhsize, KMEM_ALIGN); 1423 lhp = vmem_xalloc(kmem_log_arena, lhsize, 64, P2NPHASE(lhsize, 64), 0, 1424 NULL, NULL, VM_SLEEP); 1425 bzero(lhp, lhsize); 1426 1427 mutex_init(&lhp->lh_lock, NULL, MUTEX_DEFAULT, NULL); 1428 lhp->lh_nchunks = nchunks; 1429 lhp->lh_chunksize = P2ROUNDUP(logsize / nchunks + 1, PAGESIZE); 1430 lhp->lh_base = vmem_alloc(kmem_log_arena, 1431 lhp->lh_chunksize * nchunks, VM_SLEEP); 1432 lhp->lh_free = vmem_alloc(kmem_log_arena, 1433 nchunks * sizeof (int), VM_SLEEP); 1434 bzero(lhp->lh_base, lhp->lh_chunksize * nchunks); 1435 1436 for (i = 0; i < max_ncpus; i++) { 1437 kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[i]; 1438 mutex_init(&clhp->clh_lock, NULL, MUTEX_DEFAULT, NULL); 1439 clhp->clh_chunk = i; 1440 } 1441 1442 for (i = max_ncpus; i < nchunks; i++) 1443 lhp->lh_free[i] = i; 1444 1445 lhp->lh_head = max_ncpus; 1446 lhp->lh_tail = 0; 1447 1448 return (lhp); 1449 } 1450 1451 static void * 1452 kmem_log_enter(kmem_log_header_t *lhp, void *data, size_t size) 1453 { 1454 void *logspace; 1455 kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[CPU->cpu_seqid]; 1456 1457 if (lhp == NULL || kmem_logging == 0 || panicstr) 1458 return (NULL); 1459 1460 mutex_enter(&clhp->clh_lock); 1461 clhp->clh_hits++; 1462 if (size > clhp->clh_avail) { 1463 mutex_enter(&lhp->lh_lock); 1464 lhp->lh_hits++; 1465 lhp->lh_free[lhp->lh_tail] = clhp->clh_chunk; 1466 lhp->lh_tail = (lhp->lh_tail + 1) % lhp->lh_nchunks; 1467 clhp->clh_chunk = lhp->lh_free[lhp->lh_head]; 1468 lhp->lh_head = (lhp->lh_head + 1) % lhp->lh_nchunks; 1469 clhp->clh_current = lhp->lh_base + 1470 clhp->clh_chunk * lhp->lh_chunksize; 1471 clhp->clh_avail = lhp->lh_chunksize; 1472 if (size > lhp->lh_chunksize) 1473 size = lhp->lh_chunksize; 1474 mutex_exit(&lhp->lh_lock); 
1475 } 1476 logspace = clhp->clh_current; 1477 clhp->clh_current += size; 1478 clhp->clh_avail -= size; 1479 bcopy(data, logspace, size); 1480 mutex_exit(&clhp->clh_lock); 1481 return (logspace); 1482 } 1483 1484 #define KMEM_AUDIT(lp, cp, bcp) \ 1485 { \ 1486 kmem_bufctl_audit_t *_bcp = (kmem_bufctl_audit_t *)(bcp); \ 1487 _bcp->bc_timestamp = gethrtime(); \ 1488 _bcp->bc_thread = curthread; \ 1489 _bcp->bc_depth = getpcstack(_bcp->bc_stack, KMEM_STACK_DEPTH); \ 1490 _bcp->bc_lastlog = kmem_log_enter((lp), _bcp, sizeof (*_bcp)); \ 1491 } 1492 1493 static void 1494 kmem_log_event(kmem_log_header_t *lp, kmem_cache_t *cp, 1495 kmem_slab_t *sp, void *addr) 1496 { 1497 kmem_bufctl_audit_t bca; 1498 1499 bzero(&bca, sizeof (kmem_bufctl_audit_t)); 1500 bca.bc_addr = addr; 1501 bca.bc_slab = sp; 1502 bca.bc_cache = cp; 1503 KMEM_AUDIT(lp, cp, &bca); 1504 } 1505 1506 /* 1507 * Create a new slab for cache cp. 1508 */ 1509 static kmem_slab_t * 1510 kmem_slab_create(kmem_cache_t *cp, int kmflag) 1511 { 1512 size_t slabsize = cp->cache_slabsize; 1513 size_t chunksize = cp->cache_chunksize; 1514 int cache_flags = cp->cache_flags; 1515 size_t color, chunks; 1516 char *buf, *slab; 1517 kmem_slab_t *sp; 1518 kmem_bufctl_t *bcp; 1519 vmem_t *vmp = cp->cache_arena; 1520 1521 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 1522 1523 color = cp->cache_color + cp->cache_align; 1524 if (color > cp->cache_maxcolor) 1525 color = cp->cache_mincolor; 1526 cp->cache_color = color; 1527 1528 slab = vmem_alloc(vmp, slabsize, kmflag & KM_VMFLAGS); 1529 1530 if (slab == NULL) 1531 goto vmem_alloc_failure; 1532 1533 ASSERT(P2PHASE((uintptr_t)slab, vmp->vm_quantum) == 0); 1534 1535 /* 1536 * Reverify what was already checked in kmem_cache_set_move(), since the 1537 * consolidator depends (for correctness) on slabs being initialized 1538 * with the 0xbaddcafe memory pattern (setting a low order bit usable by 1539 * clients to distinguish uninitialized memory from known objects). 
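 *
 * A client's move callback can exploit this. A minimal sketch, assuming a
 * hypothetical object type with a pointer field o_next that a live, known
 * object never stores with either of its two low order bits set:
 *
 *	object_t *op = old;
 *	if ((uintptr_t)op->o_next & 3)
 *		return (KMEM_CBRC_DONT_KNOW);	(uninitialized or freed)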
1540 */ 1541 ASSERT((cp->cache_move == NULL) || !(cp->cache_cflags & KMC_NOTOUCH)); 1542 if (!(cp->cache_cflags & KMC_NOTOUCH)) 1543 copy_pattern(KMEM_UNINITIALIZED_PATTERN, slab, slabsize); 1544 1545 if (cache_flags & KMF_HASH) { 1546 if ((sp = kmem_cache_alloc(kmem_slab_cache, kmflag)) == NULL) 1547 goto slab_alloc_failure; 1548 chunks = (slabsize - color) / chunksize; 1549 } else { 1550 sp = KMEM_SLAB(cp, slab); 1551 chunks = (slabsize - sizeof (kmem_slab_t) - color) / chunksize; 1552 } 1553 1554 sp->slab_cache = cp; 1555 sp->slab_head = NULL; 1556 sp->slab_refcnt = 0; 1557 sp->slab_base = buf = slab + color; 1558 sp->slab_chunks = chunks; 1559 sp->slab_stuck_offset = (uint32_t)-1; 1560 sp->slab_later_count = 0; 1561 sp->slab_flags = 0; 1562 1563 ASSERT(chunks > 0); 1564 while (chunks-- != 0) { 1565 if (cache_flags & KMF_HASH) { 1566 bcp = kmem_cache_alloc(cp->cache_bufctl_cache, kmflag); 1567 if (bcp == NULL) 1568 goto bufctl_alloc_failure; 1569 if (cache_flags & KMF_AUDIT) { 1570 kmem_bufctl_audit_t *bcap = 1571 (kmem_bufctl_audit_t *)bcp; 1572 bzero(bcap, sizeof (kmem_bufctl_audit_t)); 1573 bcap->bc_cache = cp; 1574 } 1575 bcp->bc_addr = buf; 1576 bcp->bc_slab = sp; 1577 } else { 1578 bcp = KMEM_BUFCTL(cp, buf); 1579 } 1580 if (cache_flags & KMF_BUFTAG) { 1581 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 1582 btp->bt_redzone = KMEM_REDZONE_PATTERN; 1583 btp->bt_bufctl = bcp; 1584 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 1585 if (cache_flags & KMF_DEADBEEF) { 1586 copy_pattern(KMEM_FREE_PATTERN, buf, 1587 cp->cache_verify); 1588 } 1589 } 1590 bcp->bc_next = sp->slab_head; 1591 sp->slab_head = bcp; 1592 buf += chunksize; 1593 } 1594 1595 kmem_log_event(kmem_slab_log, cp, sp, slab); 1596 1597 return (sp); 1598 1599 bufctl_alloc_failure: 1600 1601 while ((bcp = sp->slab_head) != NULL) { 1602 sp->slab_head = bcp->bc_next; 1603 kmem_cache_free(cp->cache_bufctl_cache, bcp); 1604 } 1605 kmem_cache_free(kmem_slab_cache, sp); 1606 1607 slab_alloc_failure: 1608 1609 vmem_free(vmp, slab, slabsize); 1610 1611 vmem_alloc_failure: 1612 1613 kmem_log_event(kmem_failure_log, cp, NULL, NULL); 1614 atomic_add_64(&cp->cache_alloc_fail, 1); 1615 1616 return (NULL); 1617 } 1618 1619 /* 1620 * Destroy a slab. 1621 */ 1622 static void 1623 kmem_slab_destroy(kmem_cache_t *cp, kmem_slab_t *sp) 1624 { 1625 vmem_t *vmp = cp->cache_arena; 1626 void *slab = (void *)P2ALIGN((uintptr_t)sp->slab_base, vmp->vm_quantum); 1627 1628 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 1629 ASSERT(sp->slab_refcnt == 0); 1630 1631 if (cp->cache_flags & KMF_HASH) { 1632 kmem_bufctl_t *bcp; 1633 while ((bcp = sp->slab_head) != NULL) { 1634 sp->slab_head = bcp->bc_next; 1635 kmem_cache_free(cp->cache_bufctl_cache, bcp); 1636 } 1637 kmem_cache_free(kmem_slab_cache, sp); 1638 } 1639 vmem_free(vmp, slab, cp->cache_slabsize); 1640 } 1641 1642 static void * 1643 kmem_slab_alloc_impl(kmem_cache_t *cp, kmem_slab_t *sp) 1644 { 1645 kmem_bufctl_t *bcp, **hash_bucket; 1646 void *buf; 1647 1648 ASSERT(MUTEX_HELD(&cp->cache_lock)); 1649 /* 1650 * kmem_slab_alloc() drops cache_lock when it creates a new slab, so we 1651 * can't ASSERT(avl_is_empty(&cp->cache_partial_slabs)) here when the 1652 * slab is newly created (sp->slab_refcnt == 0). 
1653 */ 1654 ASSERT((sp->slab_refcnt == 0) || (KMEM_SLAB_IS_PARTIAL(sp) && 1655 (sp == avl_first(&cp->cache_partial_slabs)))); 1656 ASSERT(sp->slab_cache == cp); 1657 1658 cp->cache_slab_alloc++; 1659 cp->cache_bufslab--; 1660 sp->slab_refcnt++; 1661 1662 bcp = sp->slab_head; 1663 if ((sp->slab_head = bcp->bc_next) == NULL) { 1664 ASSERT(KMEM_SLAB_IS_ALL_USED(sp)); 1665 if (sp->slab_refcnt == 1) { 1666 ASSERT(sp->slab_chunks == 1); 1667 } else { 1668 ASSERT(sp->slab_chunks > 1); /* the slab was partial */ 1669 avl_remove(&cp->cache_partial_slabs, sp); 1670 sp->slab_later_count = 0; /* clear history */ 1671 sp->slab_flags &= ~KMEM_SLAB_NOMOVE; 1672 sp->slab_stuck_offset = (uint32_t)-1; 1673 } 1674 list_insert_head(&cp->cache_complete_slabs, sp); 1675 cp->cache_complete_slab_count++; 1676 } else { 1677 ASSERT(KMEM_SLAB_IS_PARTIAL(sp)); 1678 if (sp->slab_refcnt == 1) { 1679 avl_add(&cp->cache_partial_slabs, sp); 1680 } else { 1681 /* 1682 * The slab is now more allocated than it was, so the 1683 * order remains unchanged. 1684 */ 1685 ASSERT(!avl_update(&cp->cache_partial_slabs, sp)); 1686 } 1687 } 1688 1689 if (cp->cache_flags & KMF_HASH) { 1690 /* 1691 * Add buffer to allocated-address hash table. 1692 */ 1693 buf = bcp->bc_addr; 1694 hash_bucket = KMEM_HASH(cp, buf); 1695 bcp->bc_next = *hash_bucket; 1696 *hash_bucket = bcp; 1697 if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) { 1698 KMEM_AUDIT(kmem_transaction_log, cp, bcp); 1699 } 1700 } else { 1701 buf = KMEM_BUF(cp, bcp); 1702 } 1703 1704 ASSERT(KMEM_SLAB_MEMBER(sp, buf)); 1705 return (buf); 1706 } 1707 1708 /* 1709 * Allocate a raw (unconstructed) buffer from cp's slab layer. 1710 */ 1711 static void * 1712 kmem_slab_alloc(kmem_cache_t *cp, int kmflag) 1713 { 1714 kmem_slab_t *sp; 1715 void *buf; 1716 boolean_t test_destructor; 1717 1718 mutex_enter(&cp->cache_lock); 1719 test_destructor = (cp->cache_slab_alloc == 0); 1720 sp = avl_first(&cp->cache_partial_slabs); 1721 if (sp == NULL) { 1722 ASSERT(cp->cache_bufslab == 0); 1723 1724 /* 1725 * The freelist is empty. Create a new slab. 1726 */ 1727 mutex_exit(&cp->cache_lock); 1728 if ((sp = kmem_slab_create(cp, kmflag)) == NULL) { 1729 return (NULL); 1730 } 1731 mutex_enter(&cp->cache_lock); 1732 cp->cache_slab_create++; 1733 if ((cp->cache_buftotal += sp->slab_chunks) > cp->cache_bufmax) 1734 cp->cache_bufmax = cp->cache_buftotal; 1735 cp->cache_bufslab += sp->slab_chunks; 1736 } 1737 1738 buf = kmem_slab_alloc_impl(cp, sp); 1739 ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) == 1740 (cp->cache_complete_slab_count + 1741 avl_numnodes(&cp->cache_partial_slabs) + 1742 (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount))); 1743 mutex_exit(&cp->cache_lock); 1744 1745 if (test_destructor && cp->cache_destructor != NULL) { 1746 /* 1747 * On the first kmem_slab_alloc(), assert that it is valid to 1748 * call the destructor on a newly constructed object without any 1749 * client involvement. 1750 */ 1751 if ((cp->cache_constructor == NULL) || 1752 cp->cache_constructor(buf, cp->cache_private, 1753 kmflag) == 0) { 1754 cp->cache_destructor(buf, cp->cache_private); 1755 } 1756 copy_pattern(KMEM_UNINITIALIZED_PATTERN, buf, 1757 cp->cache_bufsize); 1758 if (cp->cache_flags & KMF_DEADBEEF) { 1759 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 1760 } 1761 } 1762 1763 return (buf); 1764 } 1765 1766 static void kmem_slab_move_yes(kmem_cache_t *, kmem_slab_t *, void *); 1767 1768 /* 1769 * Free a raw (unconstructed) buffer to cp's slab layer. 
1770 */ 1771 static void 1772 kmem_slab_free(kmem_cache_t *cp, void *buf) 1773 { 1774 kmem_slab_t *sp; 1775 kmem_bufctl_t *bcp, **prev_bcpp; 1776 1777 ASSERT(buf != NULL); 1778 1779 mutex_enter(&cp->cache_lock); 1780 cp->cache_slab_free++; 1781 1782 if (cp->cache_flags & KMF_HASH) { 1783 /* 1784 * Look up buffer in allocated-address hash table. 1785 */ 1786 prev_bcpp = KMEM_HASH(cp, buf); 1787 while ((bcp = *prev_bcpp) != NULL) { 1788 if (bcp->bc_addr == buf) { 1789 *prev_bcpp = bcp->bc_next; 1790 sp = bcp->bc_slab; 1791 break; 1792 } 1793 cp->cache_lookup_depth++; 1794 prev_bcpp = &bcp->bc_next; 1795 } 1796 } else { 1797 bcp = KMEM_BUFCTL(cp, buf); 1798 sp = KMEM_SLAB(cp, buf); 1799 } 1800 1801 if (bcp == NULL || sp->slab_cache != cp || !KMEM_SLAB_MEMBER(sp, buf)) { 1802 mutex_exit(&cp->cache_lock); 1803 kmem_error(KMERR_BADADDR, cp, buf); 1804 return; 1805 } 1806 1807 if (KMEM_SLAB_OFFSET(sp, buf) == sp->slab_stuck_offset) { 1808 /* 1809 * If this is the buffer that prevented the consolidator from 1810 * clearing the slab, we can reset the slab flags now that the 1811 * buffer is freed. (It makes sense to do this in 1812 * kmem_cache_free(), where the client gives up ownership of the 1813 * buffer, but on the hot path the test is too expensive.) 1814 */ 1815 kmem_slab_move_yes(cp, sp, buf); 1816 } 1817 1818 if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) { 1819 if (cp->cache_flags & KMF_CONTENTS) 1820 ((kmem_bufctl_audit_t *)bcp)->bc_contents = 1821 kmem_log_enter(kmem_content_log, buf, 1822 cp->cache_contents); 1823 KMEM_AUDIT(kmem_transaction_log, cp, bcp); 1824 } 1825 1826 bcp->bc_next = sp->slab_head; 1827 sp->slab_head = bcp; 1828 1829 cp->cache_bufslab++; 1830 ASSERT(sp->slab_refcnt >= 1); 1831 1832 if (--sp->slab_refcnt == 0) { 1833 /* 1834 * There are no outstanding allocations from this slab, 1835 * so we can reclaim the memory. 1836 */ 1837 if (sp->slab_chunks == 1) { 1838 list_remove(&cp->cache_complete_slabs, sp); 1839 cp->cache_complete_slab_count--; 1840 } else { 1841 avl_remove(&cp->cache_partial_slabs, sp); 1842 } 1843 1844 cp->cache_buftotal -= sp->slab_chunks; 1845 cp->cache_bufslab -= sp->slab_chunks; 1846 /* 1847 * Defer releasing the slab to the virtual memory subsystem 1848 * while there is a pending move callback, since we guarantee 1849 * that buffers passed to the move callback have only been 1850 * touched by kmem or by the client itself. Since the memory 1851 * patterns baddcafe (uninitialized) and deadbeef (freed) both 1852 * set at least one of the two lowest order bits, the client can 1853 * test those bits in the move callback to determine whether or 1854 * not it knows about the buffer (assuming that the client also 1855 * sets one of those low order bits whenever it frees a buffer). 1856 */ 1857 if (cp->cache_defrag == NULL || 1858 (avl_is_empty(&cp->cache_defrag->kmd_moves_pending) && 1859 !(sp->slab_flags & KMEM_SLAB_MOVE_PENDING))) { 1860 cp->cache_slab_destroy++; 1861 mutex_exit(&cp->cache_lock); 1862 kmem_slab_destroy(cp, sp); 1863 } else { 1864 list_t *deadlist = &cp->cache_defrag->kmd_deadlist; 1865 /* 1866 * Slabs are inserted at both ends of the deadlist to 1867 * distinguish between slabs freed while move callbacks 1868 * are pending (list head) and a slab freed while the 1869 * lock is dropped in kmem_move_buffers() (list tail) so 1870 * that in both cases slab_destroy() is called from the 1871 * right context. 
1872 */ 1873 if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) { 1874 list_insert_tail(deadlist, sp); 1875 } else { 1876 list_insert_head(deadlist, sp); 1877 } 1878 cp->cache_defrag->kmd_deadcount++; 1879 mutex_exit(&cp->cache_lock); 1880 } 1881 return; 1882 } 1883 1884 if (bcp->bc_next == NULL) { 1885 /* Transition the slab from completely allocated to partial. */ 1886 ASSERT(sp->slab_refcnt == (sp->slab_chunks - 1)); 1887 ASSERT(sp->slab_chunks > 1); 1888 list_remove(&cp->cache_complete_slabs, sp); 1889 cp->cache_complete_slab_count--; 1890 avl_add(&cp->cache_partial_slabs, sp); 1891 } else { 1892 #ifdef DEBUG 1893 if (avl_update_gt(&cp->cache_partial_slabs, sp)) { 1894 KMEM_STAT_ADD(kmem_move_stats.kms_avl_update); 1895 } else { 1896 KMEM_STAT_ADD(kmem_move_stats.kms_avl_noupdate); 1897 } 1898 #else 1899 (void) avl_update_gt(&cp->cache_partial_slabs, sp); 1900 #endif 1901 } 1902 1903 ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) == 1904 (cp->cache_complete_slab_count + 1905 avl_numnodes(&cp->cache_partial_slabs) + 1906 (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount))); 1907 mutex_exit(&cp->cache_lock); 1908 } 1909 1910 /* 1911 * Return -1 if kmem_error, 1 if constructor fails, 0 if successful. 1912 */ 1913 static int 1914 kmem_cache_alloc_debug(kmem_cache_t *cp, void *buf, int kmflag, int construct, 1915 caddr_t caller) 1916 { 1917 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 1918 kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl; 1919 uint32_t mtbf; 1920 1921 if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) { 1922 kmem_error(KMERR_BADBUFTAG, cp, buf); 1923 return (-1); 1924 } 1925 1926 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_ALLOC; 1927 1928 if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) { 1929 kmem_error(KMERR_BADBUFCTL, cp, buf); 1930 return (-1); 1931 } 1932 1933 if (cp->cache_flags & KMF_DEADBEEF) { 1934 if (!construct && (cp->cache_flags & KMF_LITE)) { 1935 if (*(uint64_t *)buf != KMEM_FREE_PATTERN) { 1936 kmem_error(KMERR_MODIFIED, cp, buf); 1937 return (-1); 1938 } 1939 if (cp->cache_constructor != NULL) 1940 *(uint64_t *)buf = btp->bt_redzone; 1941 else 1942 *(uint64_t *)buf = KMEM_UNINITIALIZED_PATTERN; 1943 } else { 1944 construct = 1; 1945 if (verify_and_copy_pattern(KMEM_FREE_PATTERN, 1946 KMEM_UNINITIALIZED_PATTERN, buf, 1947 cp->cache_verify)) { 1948 kmem_error(KMERR_MODIFIED, cp, buf); 1949 return (-1); 1950 } 1951 } 1952 } 1953 btp->bt_redzone = KMEM_REDZONE_PATTERN; 1954 1955 if ((mtbf = kmem_mtbf | cp->cache_mtbf) != 0 && 1956 gethrtime() % mtbf == 0 && 1957 (kmflag & (KM_NOSLEEP | KM_PANIC)) == KM_NOSLEEP) { 1958 kmem_log_event(kmem_failure_log, cp, NULL, NULL); 1959 if (!construct && cp->cache_destructor != NULL) 1960 cp->cache_destructor(buf, cp->cache_private); 1961 } else { 1962 mtbf = 0; 1963 } 1964 1965 if (mtbf || (construct && cp->cache_constructor != NULL && 1966 cp->cache_constructor(buf, cp->cache_private, kmflag) != 0)) { 1967 atomic_add_64(&cp->cache_alloc_fail, 1); 1968 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 1969 if (cp->cache_flags & KMF_DEADBEEF) 1970 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 1971 kmem_slab_free(cp, buf); 1972 return (1); 1973 } 1974 1975 if (cp->cache_flags & KMF_AUDIT) { 1976 KMEM_AUDIT(kmem_transaction_log, cp, bcp); 1977 } 1978 1979 if ((cp->cache_flags & KMF_LITE) && 1980 !(cp->cache_cflags & KMC_KMEM_ALLOC)) { 1981 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller); 1982 } 1983 1984 return (0); 1985 } 1986 1987 static int 1988 
kmem_cache_free_debug(kmem_cache_t *cp, void *buf, caddr_t caller) 1989 { 1990 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 1991 kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl; 1992 kmem_slab_t *sp; 1993 1994 if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_ALLOC)) { 1995 if (btp->bt_bxstat == ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) { 1996 kmem_error(KMERR_DUPFREE, cp, buf); 1997 return (-1); 1998 } 1999 sp = kmem_findslab(cp, buf); 2000 if (sp == NULL || sp->slab_cache != cp) 2001 kmem_error(KMERR_BADADDR, cp, buf); 2002 else 2003 kmem_error(KMERR_REDZONE, cp, buf); 2004 return (-1); 2005 } 2006 2007 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 2008 2009 if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) { 2010 kmem_error(KMERR_BADBUFCTL, cp, buf); 2011 return (-1); 2012 } 2013 2014 if (btp->bt_redzone != KMEM_REDZONE_PATTERN) { 2015 kmem_error(KMERR_REDZONE, cp, buf); 2016 return (-1); 2017 } 2018 2019 if (cp->cache_flags & KMF_AUDIT) { 2020 if (cp->cache_flags & KMF_CONTENTS) 2021 bcp->bc_contents = kmem_log_enter(kmem_content_log, 2022 buf, cp->cache_contents); 2023 KMEM_AUDIT(kmem_transaction_log, cp, bcp); 2024 } 2025 2026 if ((cp->cache_flags & KMF_LITE) && 2027 !(cp->cache_cflags & KMC_KMEM_ALLOC)) { 2028 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller); 2029 } 2030 2031 if (cp->cache_flags & KMF_DEADBEEF) { 2032 if (cp->cache_flags & KMF_LITE) 2033 btp->bt_redzone = *(uint64_t *)buf; 2034 else if (cp->cache_destructor != NULL) 2035 cp->cache_destructor(buf, cp->cache_private); 2036 2037 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 2038 } 2039 2040 return (0); 2041 } 2042 2043 /* 2044 * Free each object in magazine mp to cp's slab layer, and free mp itself. 2045 */ 2046 static void 2047 kmem_magazine_destroy(kmem_cache_t *cp, kmem_magazine_t *mp, int nrounds) 2048 { 2049 int round; 2050 2051 ASSERT(!list_link_active(&cp->cache_link) || 2052 taskq_member(kmem_taskq, curthread)); 2053 2054 for (round = 0; round < nrounds; round++) { 2055 void *buf = mp->mag_round[round]; 2056 2057 if (cp->cache_flags & KMF_DEADBEEF) { 2058 if (verify_pattern(KMEM_FREE_PATTERN, buf, 2059 cp->cache_verify) != NULL) { 2060 kmem_error(KMERR_MODIFIED, cp, buf); 2061 continue; 2062 } 2063 if ((cp->cache_flags & KMF_LITE) && 2064 cp->cache_destructor != NULL) { 2065 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2066 *(uint64_t *)buf = btp->bt_redzone; 2067 cp->cache_destructor(buf, cp->cache_private); 2068 *(uint64_t *)buf = KMEM_FREE_PATTERN; 2069 } 2070 } else if (cp->cache_destructor != NULL) { 2071 cp->cache_destructor(buf, cp->cache_private); 2072 } 2073 2074 kmem_slab_free(cp, buf); 2075 } 2076 ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 2077 kmem_cache_free(cp->cache_magtype->mt_cache, mp); 2078 } 2079 2080 /* 2081 * Allocate a magazine from the depot. 2082 */ 2083 static kmem_magazine_t * 2084 kmem_depot_alloc(kmem_cache_t *cp, kmem_maglist_t *mlp) 2085 { 2086 kmem_magazine_t *mp; 2087 2088 /* 2089 * If we can't get the depot lock without contention, 2090 * update our contention count. We use the depot 2091 * contention rate to determine whether we need to 2092 * increase the magazine size for better scalability. 
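 *
 * (The count is consumed by kmem_cache_update(), below: if the contention
 * accumulated over one maintenance interval exceeds the kmem_depot_contention
 * tunable, a kmem_cache_magazine_resize() task is dispatched.)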
2093 */ 2094 if (!mutex_tryenter(&cp->cache_depot_lock)) { 2095 mutex_enter(&cp->cache_depot_lock); 2096 cp->cache_depot_contention++; 2097 } 2098 2099 if ((mp = mlp->ml_list) != NULL) { 2100 ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 2101 mlp->ml_list = mp->mag_next; 2102 if (--mlp->ml_total < mlp->ml_min) 2103 mlp->ml_min = mlp->ml_total; 2104 mlp->ml_alloc++; 2105 } 2106 2107 mutex_exit(&cp->cache_depot_lock); 2108 2109 return (mp); 2110 } 2111 2112 /* 2113 * Free a magazine to the depot. 2114 */ 2115 static void 2116 kmem_depot_free(kmem_cache_t *cp, kmem_maglist_t *mlp, kmem_magazine_t *mp) 2117 { 2118 mutex_enter(&cp->cache_depot_lock); 2119 ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 2120 mp->mag_next = mlp->ml_list; 2121 mlp->ml_list = mp; 2122 mlp->ml_total++; 2123 mutex_exit(&cp->cache_depot_lock); 2124 } 2125 2126 /* 2127 * Update the working set statistics for cp's depot. 2128 */ 2129 static void 2130 kmem_depot_ws_update(kmem_cache_t *cp) 2131 { 2132 mutex_enter(&cp->cache_depot_lock); 2133 cp->cache_full.ml_reaplimit = cp->cache_full.ml_min; 2134 cp->cache_full.ml_min = cp->cache_full.ml_total; 2135 cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_min; 2136 cp->cache_empty.ml_min = cp->cache_empty.ml_total; 2137 mutex_exit(&cp->cache_depot_lock); 2138 } 2139 2140 /* 2141 * Reap all magazines that have fallen out of the depot's working set. 2142 */ 2143 static void 2144 kmem_depot_ws_reap(kmem_cache_t *cp) 2145 { 2146 long reap; 2147 kmem_magazine_t *mp; 2148 2149 ASSERT(!list_link_active(&cp->cache_link) || 2150 taskq_member(kmem_taskq, curthread)); 2151 2152 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min); 2153 while (reap-- && (mp = kmem_depot_alloc(cp, &cp->cache_full)) != NULL) 2154 kmem_magazine_destroy(cp, mp, cp->cache_magtype->mt_magsize); 2155 2156 reap = MIN(cp->cache_empty.ml_reaplimit, cp->cache_empty.ml_min); 2157 while (reap-- && (mp = kmem_depot_alloc(cp, &cp->cache_empty)) != NULL) 2158 kmem_magazine_destroy(cp, mp, 0); 2159 } 2160 2161 static void 2162 kmem_cpu_reload(kmem_cpu_cache_t *ccp, kmem_magazine_t *mp, int rounds) 2163 { 2164 ASSERT((ccp->cc_loaded == NULL && ccp->cc_rounds == -1) || 2165 (ccp->cc_loaded && ccp->cc_rounds + rounds == ccp->cc_magsize)); 2166 ASSERT(ccp->cc_magsize > 0); 2167 2168 ccp->cc_ploaded = ccp->cc_loaded; 2169 ccp->cc_prounds = ccp->cc_rounds; 2170 ccp->cc_loaded = mp; 2171 ccp->cc_rounds = rounds; 2172 } 2173 2174 /* 2175 * Allocate a constructed object from cache cp. 2176 */ 2177 void * 2178 kmem_cache_alloc(kmem_cache_t *cp, int kmflag) 2179 { 2180 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp); 2181 kmem_magazine_t *fmp; 2182 void *buf; 2183 2184 mutex_enter(&ccp->cc_lock); 2185 for (;;) { 2186 /* 2187 * If there's an object available in the current CPU's 2188 * loaded magazine, just take it and return. 2189 */ 2190 if (ccp->cc_rounds > 0) { 2191 buf = ccp->cc_loaded->mag_round[--ccp->cc_rounds]; 2192 ccp->cc_alloc++; 2193 mutex_exit(&ccp->cc_lock); 2194 if ((ccp->cc_flags & KMF_BUFTAG) && 2195 kmem_cache_alloc_debug(cp, buf, kmflag, 0, 2196 caller()) != 0) { 2197 if (kmflag & KM_NOSLEEP) 2198 return (NULL); 2199 mutex_enter(&ccp->cc_lock); 2200 continue; 2201 } 2202 return (buf); 2203 } 2204 2205 /* 2206 * The loaded magazine is empty. If the previously loaded 2207 * magazine was full, exchange them and try again. 2208 */ 2209 if (ccp->cc_prounds > 0) { 2210 kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds); 2211 continue; 2212 } 2213 2214 /* 2215 * If the magazine layer is disabled, break out now. 
2216 */ 2217 if (ccp->cc_magsize == 0) 2218 break; 2219 2220 /* 2221 * Try to get a full magazine from the depot. 2222 */ 2223 fmp = kmem_depot_alloc(cp, &cp->cache_full); 2224 if (fmp != NULL) { 2225 if (ccp->cc_ploaded != NULL) 2226 kmem_depot_free(cp, &cp->cache_empty, 2227 ccp->cc_ploaded); 2228 kmem_cpu_reload(ccp, fmp, ccp->cc_magsize); 2229 continue; 2230 } 2231 2232 /* 2233 * There are no full magazines in the depot, 2234 * so fall through to the slab layer. 2235 */ 2236 break; 2237 } 2238 mutex_exit(&ccp->cc_lock); 2239 2240 /* 2241 * We couldn't allocate a constructed object from the magazine layer, 2242 * so get a raw buffer from the slab layer and apply its constructor. 2243 */ 2244 buf = kmem_slab_alloc(cp, kmflag); 2245 2246 if (buf == NULL) 2247 return (NULL); 2248 2249 if (cp->cache_flags & KMF_BUFTAG) { 2250 /* 2251 * Make kmem_cache_alloc_debug() apply the constructor for us. 2252 */ 2253 int rc = kmem_cache_alloc_debug(cp, buf, kmflag, 1, caller()); 2254 if (rc != 0) { 2255 if (kmflag & KM_NOSLEEP) 2256 return (NULL); 2257 /* 2258 * kmem_cache_alloc_debug() detected corruption 2259 * but didn't panic (kmem_panic <= 0). We should not be 2260 * here because the constructor failed (indicated by a 2261 * return code of 1). Try again. 2262 */ 2263 ASSERT(rc == -1); 2264 return (kmem_cache_alloc(cp, kmflag)); 2265 } 2266 return (buf); 2267 } 2268 2269 if (cp->cache_constructor != NULL && 2270 cp->cache_constructor(buf, cp->cache_private, kmflag) != 0) { 2271 atomic_add_64(&cp->cache_alloc_fail, 1); 2272 kmem_slab_free(cp, buf); 2273 return (NULL); 2274 } 2275 2276 return (buf); 2277 } 2278 2279 /* 2280 * The freed argument tells whether or not kmem_cache_free_debug() has already 2281 * been called so that we can avoid the duplicate free error. For example, a 2282 * buffer on a magazine has already been freed by the client but is still 2283 * constructed. 2284 */ 2285 static void 2286 kmem_slab_free_constructed(kmem_cache_t *cp, void *buf, boolean_t freed) 2287 { 2288 if (!freed && (cp->cache_flags & KMF_BUFTAG)) 2289 if (kmem_cache_free_debug(cp, buf, caller()) == -1) 2290 return; 2291 2292 /* 2293 * Note that if KMF_DEADBEEF is in effect and KMF_LITE is not, 2294 * kmem_cache_free_debug() will have already applied the destructor. 2295 */ 2296 if ((cp->cache_flags & (KMF_DEADBEEF | KMF_LITE)) != KMF_DEADBEEF && 2297 cp->cache_destructor != NULL) { 2298 if (cp->cache_flags & KMF_DEADBEEF) { /* KMF_LITE implied */ 2299 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2300 *(uint64_t *)buf = btp->bt_redzone; 2301 cp->cache_destructor(buf, cp->cache_private); 2302 *(uint64_t *)buf = KMEM_FREE_PATTERN; 2303 } else { 2304 cp->cache_destructor(buf, cp->cache_private); 2305 } 2306 } 2307 2308 kmem_slab_free(cp, buf); 2309 } 2310 2311 /* 2312 * Free a constructed object to cache cp. 2313 */ 2314 void 2315 kmem_cache_free(kmem_cache_t *cp, void *buf) 2316 { 2317 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp); 2318 kmem_magazine_t *emp; 2319 kmem_magtype_t *mtp; 2320 2321 /* 2322 * The client must not free either of the buffers passed to the move 2323 * callback function. 
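 * The ASSERT that follows enforces this on DEBUG kernels: while the
 * consolidator's thread is running a move, kmd_from_buf and kmd_to_buf
 * identify the two buffers currently on loan to the client.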
2324 */ 2325 ASSERT(cp->cache_defrag == NULL || 2326 cp->cache_defrag->kmd_thread != curthread || 2327 (buf != cp->cache_defrag->kmd_from_buf && 2328 buf != cp->cache_defrag->kmd_to_buf)); 2329 2330 if (ccp->cc_flags & KMF_BUFTAG) 2331 if (kmem_cache_free_debug(cp, buf, caller()) == -1) 2332 return; 2333 2334 mutex_enter(&ccp->cc_lock); 2335 for (;;) { 2336 /* 2337 * If there's a slot available in the current CPU's 2338 * loaded magazine, just put the object there and return. 2339 */ 2340 if ((uint_t)ccp->cc_rounds < ccp->cc_magsize) { 2341 ccp->cc_loaded->mag_round[ccp->cc_rounds++] = buf; 2342 ccp->cc_free++; 2343 mutex_exit(&ccp->cc_lock); 2344 return; 2345 } 2346 2347 /* 2348 * The loaded magazine is full. If the previously loaded 2349 * magazine was empty, exchange them and try again. 2350 */ 2351 if (ccp->cc_prounds == 0) { 2352 kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds); 2353 continue; 2354 } 2355 2356 /* 2357 * If the magazine layer is disabled, break out now. 2358 */ 2359 if (ccp->cc_magsize == 0) 2360 break; 2361 2362 /* 2363 * Try to get an empty magazine from the depot. 2364 */ 2365 emp = kmem_depot_alloc(cp, &cp->cache_empty); 2366 if (emp != NULL) { 2367 if (ccp->cc_ploaded != NULL) 2368 kmem_depot_free(cp, &cp->cache_full, 2369 ccp->cc_ploaded); 2370 kmem_cpu_reload(ccp, emp, 0); 2371 continue; 2372 } 2373 2374 /* 2375 * There are no empty magazines in the depot, 2376 * so try to allocate a new one. We must drop all locks 2377 * across kmem_cache_alloc() because lower layers may 2378 * attempt to allocate from this cache. 2379 */ 2380 mtp = cp->cache_magtype; 2381 mutex_exit(&ccp->cc_lock); 2382 emp = kmem_cache_alloc(mtp->mt_cache, KM_NOSLEEP); 2383 mutex_enter(&ccp->cc_lock); 2384 2385 if (emp != NULL) { 2386 /* 2387 * We successfully allocated an empty magazine. 2388 * However, we had to drop ccp->cc_lock to do it, 2389 * so the cache's magazine size may have changed. 2390 * If so, free the magazine and try again. 2391 */ 2392 if (ccp->cc_magsize != mtp->mt_magsize) { 2393 mutex_exit(&ccp->cc_lock); 2394 kmem_cache_free(mtp->mt_cache, emp); 2395 mutex_enter(&ccp->cc_lock); 2396 continue; 2397 } 2398 2399 /* 2400 * We got a magazine of the right size. Add it to 2401 * the depot and try the whole dance again. 2402 */ 2403 kmem_depot_free(cp, &cp->cache_empty, emp); 2404 continue; 2405 } 2406 2407 /* 2408 * We couldn't allocate an empty magazine, 2409 * so fall through to the slab layer. 2410 */ 2411 break; 2412 } 2413 mutex_exit(&ccp->cc_lock); 2414 2415 /* 2416 * We couldn't free our constructed object to the magazine layer, 2417 * so apply its destructor and free it to the slab layer. 
2418 */ 2419 kmem_slab_free_constructed(cp, buf, B_TRUE); 2420 } 2421 2422 void * 2423 kmem_zalloc(size_t size, int kmflag) 2424 { 2425 size_t index; 2426 void *buf; 2427 2428 if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) { 2429 kmem_cache_t *cp = kmem_alloc_table[index]; 2430 buf = kmem_cache_alloc(cp, kmflag); 2431 if (buf != NULL) { 2432 if (cp->cache_flags & KMF_BUFTAG) { 2433 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2434 ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE; 2435 ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size); 2436 2437 if (cp->cache_flags & KMF_LITE) { 2438 KMEM_BUFTAG_LITE_ENTER(btp, 2439 kmem_lite_count, caller()); 2440 } 2441 } 2442 bzero(buf, size); 2443 } 2444 } else { 2445 buf = kmem_alloc(size, kmflag); 2446 if (buf != NULL) 2447 bzero(buf, size); 2448 } 2449 return (buf); 2450 } 2451 2452 void * 2453 kmem_alloc(size_t size, int kmflag) 2454 { 2455 size_t index; 2456 kmem_cache_t *cp; 2457 void *buf; 2458 2459 if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) { 2460 cp = kmem_alloc_table[index]; 2461 /* fall through to kmem_cache_alloc() */ 2462 2463 } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) < 2464 kmem_big_alloc_table_max) { 2465 cp = kmem_big_alloc_table[index]; 2466 /* fall through to kmem_cache_alloc() */ 2467 2468 } else { 2469 if (size == 0) 2470 return (NULL); 2471 2472 buf = vmem_alloc(kmem_oversize_arena, size, 2473 kmflag & KM_VMFLAGS); 2474 if (buf == NULL) 2475 kmem_log_event(kmem_failure_log, NULL, NULL, 2476 (void *)size); 2477 return (buf); 2478 } 2479 2480 buf = kmem_cache_alloc(cp, kmflag); 2481 if ((cp->cache_flags & KMF_BUFTAG) && buf != NULL) { 2482 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2483 ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE; 2484 ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size); 2485 2486 if (cp->cache_flags & KMF_LITE) { 2487 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller()); 2488 } 2489 } 2490 return (buf); 2491 } 2492 2493 void 2494 kmem_free(void *buf, size_t size) 2495 { 2496 size_t index; 2497 kmem_cache_t *cp; 2498 2499 if ((index = (size - 1) >> KMEM_ALIGN_SHIFT) < KMEM_ALLOC_TABLE_MAX) { 2500 cp = kmem_alloc_table[index]; 2501 /* fall through to kmem_cache_free() */ 2502 2503 } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) < 2504 kmem_big_alloc_table_max) { 2505 cp = kmem_big_alloc_table[index]; 2506 /* fall through to kmem_cache_free() */ 2507 2508 } else { 2509 if (buf == NULL && size == 0) 2510 return; 2511 vmem_free(kmem_oversize_arena, buf, size); 2512 return; 2513 } 2514 2515 if (cp->cache_flags & KMF_BUFTAG) { 2516 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2517 uint32_t *ip = (uint32_t *)btp; 2518 if (ip[1] != KMEM_SIZE_ENCODE(size)) { 2519 if (*(uint64_t *)buf == KMEM_FREE_PATTERN) { 2520 kmem_error(KMERR_DUPFREE, cp, buf); 2521 return; 2522 } 2523 if (KMEM_SIZE_VALID(ip[1])) { 2524 ip[0] = KMEM_SIZE_ENCODE(size); 2525 kmem_error(KMERR_BADSIZE, cp, buf); 2526 } else { 2527 kmem_error(KMERR_REDZONE, cp, buf); 2528 } 2529 return; 2530 } 2531 if (((uint8_t *)buf)[size] != KMEM_REDZONE_BYTE) { 2532 kmem_error(KMERR_REDZONE, cp, buf); 2533 return; 2534 } 2535 btp->bt_redzone = KMEM_REDZONE_PATTERN; 2536 if (cp->cache_flags & KMF_LITE) { 2537 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, 2538 caller()); 2539 } 2540 } 2541 kmem_cache_free(cp, buf); 2542 } 2543 2544 void * 2545 kmem_firewall_va_alloc(vmem_t *vmp, size_t size, int vmflag) 2546 { 2547 size_t realsize = size + vmp->vm_quantum; 2548 void *addr; 2549 2550 /* 2551 * Annoying edge case: if 'size' is just 
shy of ULONG_MAX, adding
2552 * vm_quantum will cause integer wraparound. Check for this, and
2553 * blow off the firewall page in this case. Note that such a
2554 * giant allocation (the entire kernel address space) can never
2555 * be satisfied, so it will either fail immediately (VM_NOSLEEP)
2556 * or sleep forever (VM_SLEEP). Thus, there is no need for a
2557 * corresponding check in kmem_firewall_va_free().
2558 */
2559 if (realsize < size)
2560 realsize = size;
2561 
2562 /*
2563 * While boot still owns resource management, make sure that this
2564 * redzone virtual address allocation is properly accounted for in
2565 * OBP's "virtual-memory" "available" lists because we're
2566 * effectively claiming those addresses for a red zone. If we don't do
2567 * this, the available lists become too fragmented and too large for the
2568 * current boot/kernel memory list interface.
2569 */
2570 addr = vmem_alloc(vmp, realsize, vmflag | VM_NEXTFIT);
2571 
2572 if (addr != NULL && kvseg.s_base == NULL && realsize != size)
2573 (void) boot_virt_alloc((char *)addr + size, vmp->vm_quantum);
2574 
2575 return (addr);
2576 }
2577 
2578 void
2579 kmem_firewall_va_free(vmem_t *vmp, void *addr, size_t size)
2580 {
2581 ASSERT((kvseg.s_base == NULL ?
2582 va_to_pfn((char *)addr + size) :
2583 hat_getpfnum(kas.a_hat, (caddr_t)addr + size)) == PFN_INVALID);
2584 
2585 vmem_free(vmp, addr, size + vmp->vm_quantum);
2586 }
2587 
2588 /*
2589 * Try to allocate at least `size' bytes of memory without sleeping or
2590 * panicking. Return actual allocated size in `asize'. If the allocation
2591 * fails, try a final allocation with sleep or panic allowed.
2592 */
2593 void *
2594 kmem_alloc_tryhard(size_t size, size_t *asize, int kmflag)
2595 {
2596 void *p;
2597 
2598 *asize = P2ROUNDUP(size, KMEM_ALIGN);
2599 do {
2600 p = kmem_alloc(*asize, (kmflag | KM_NOSLEEP) & ~KM_PANIC);
2601 if (p != NULL)
2602 return (p);
2603 *asize += KMEM_ALIGN;
2604 } while (*asize <= PAGESIZE);
2605 
2606 *asize = P2ROUNDUP(size, KMEM_ALIGN);
2607 return (kmem_alloc(*asize, kmflag));
2608 }
2609 
2610 /*
2611 * Reclaim all unused memory from a cache.
2612 */
2613 static void
2614 kmem_cache_reap(kmem_cache_t *cp)
2615 {
2616 ASSERT(taskq_member(kmem_taskq, curthread));
2617 
2618 /*
2619 * Ask the cache's owner to free some memory if possible.
2620 * The idea is to handle things like the inode cache, which
2621 * typically sits on a bunch of memory that it doesn't truly
2622 * *need*. Reclaim policy is entirely up to the owner; this
2623 * callback is just an advisory plea for help.
2624 */
2625 if (cp->cache_reclaim != NULL) {
2626 long delta;
2627 
2628 /*
2629 * Reclaimed memory should be reapable (not included in the
2630 * depot's working set).
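 * To that end, the code below measures how many full magazines the
 * reclaim callback added to the depot (delta) and credits ml_reaplimit
 * and ml_min by that amount, so the newly returned objects are eligible
 * for the kmem_depot_ws_reap() that follows instead of being counted as
 * part of the working set.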
2631 */
2632 delta = cp->cache_full.ml_total;
2633 cp->cache_reclaim(cp->cache_private);
2634 delta = cp->cache_full.ml_total - delta;
2635 if (delta > 0) {
2636 mutex_enter(&cp->cache_depot_lock);
2637 cp->cache_full.ml_reaplimit += delta;
2638 cp->cache_full.ml_min += delta;
2639 mutex_exit(&cp->cache_depot_lock);
2640 }
2641 }
2642 
2643 kmem_depot_ws_reap(cp);
2644 
2645 if (cp->cache_defrag != NULL && !kmem_move_noreap) {
2646 kmem_cache_defrag(cp);
2647 }
2648 }
2649 
2650 static void
2651 kmem_reap_timeout(void *flag_arg)
2652 {
2653 uint32_t *flag = (uint32_t *)flag_arg;
2654 
2655 ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace);
2656 *flag = 0;
2657 }
2658 
2659 static void
2660 kmem_reap_done(void *flag)
2661 {
2662 (void) timeout(kmem_reap_timeout, flag, kmem_reap_interval);
2663 }
2664 
2665 static void
2666 kmem_reap_start(void *flag)
2667 {
2668 ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace);
2669 
2670 if (flag == &kmem_reaping) {
2671 kmem_cache_applyall(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
2672 /*
2673 * If we have segkp under heap, reap the segkp cache.
2674 */
2675 if (segkp_fromheap)
2676 segkp_cache_free();
2677 }
2678 else
2679 kmem_cache_applyall_id(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
2680 
2681 /*
2682 * We use taskq_dispatch() to schedule a timeout to clear
2683 * the flag so that kmem_reap() becomes self-throttling:
2684 * we won't reap again until the current reap completes *and*
2685 * at least kmem_reap_interval ticks have elapsed.
2686 */
2687 if (!taskq_dispatch(kmem_taskq, kmem_reap_done, flag, TQ_NOSLEEP))
2688 kmem_reap_done(flag);
2689 }
2690 
2691 static void
2692 kmem_reap_common(void *flag_arg)
2693 {
2694 uint32_t *flag = (uint32_t *)flag_arg;
2695 
2696 if (MUTEX_HELD(&kmem_cache_lock) || kmem_taskq == NULL ||
2697 cas32(flag, 0, 1) != 0)
2698 return;
2699 
2700 /*
2701 * It may not be kosher to do memory allocation when a reap is called
2702 * (for example, if vmem_populate() is in the call chain).
2703 * So we start the reap going with a TQ_NOALLOC dispatch. If the
2704 * dispatch fails, we reset the flag, and the next reap will try again.
2705 */
2706 if (!taskq_dispatch(kmem_taskq, kmem_reap_start, flag, TQ_NOALLOC))
2707 *flag = 0;
2708 }
2709 
2710 /*
2711 * Reclaim all unused memory from all caches. Called from the VM system
2712 * when memory gets tight.
2713 */
2714 void
2715 kmem_reap(void)
2716 {
2717 kmem_reap_common(&kmem_reaping);
2718 }
2719 
2720 /*
2721 * Reclaim all unused memory from identifier arenas, called when a vmem
2722 * arena not backed by memory is exhausted. Since reaping memory-backed caches
2723 * cannot help with identifier exhaustion, we avoid both a large amount of
2724 * work and unwanted side-effects from reclaim callbacks.
2725 */
2726 void
2727 kmem_reap_idspace(void)
2728 {
2729 kmem_reap_common(&kmem_reaping_idspace);
2730 }
2731 
2732 /*
2733 * Purge all magazines from a cache and set its magazine limit to zero.
2734 * All calls are serialized by the kmem_taskq lock, except for the final
2735 * call from kmem_cache_destroy.
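 *
 * The purge relies on calling kmem_depot_ws_update() twice in a row
 * (see below). Tracing the full-magazine list shows why (illustrative):
 *
 *	first call:	ml_reaplimit = ml_min; ml_min = ml_total;
 *	second call:	ml_reaplimit = ml_min (now ml_total); ml_min = ml_total;
 *
 * so the subsequent kmem_depot_ws_reap() computes
 * MIN(ml_reaplimit, ml_min) == ml_total and may reap every magazine.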
2736 */ 2737 static void 2738 kmem_cache_magazine_purge(kmem_cache_t *cp) 2739 { 2740 kmem_cpu_cache_t *ccp; 2741 kmem_magazine_t *mp, *pmp; 2742 int rounds, prounds, cpu_seqid; 2743 2744 ASSERT(!list_link_active(&cp->cache_link) || 2745 taskq_member(kmem_taskq, curthread)); 2746 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 2747 2748 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 2749 ccp = &cp->cache_cpu[cpu_seqid]; 2750 2751 mutex_enter(&ccp->cc_lock); 2752 mp = ccp->cc_loaded; 2753 pmp = ccp->cc_ploaded; 2754 rounds = ccp->cc_rounds; 2755 prounds = ccp->cc_prounds; 2756 ccp->cc_loaded = NULL; 2757 ccp->cc_ploaded = NULL; 2758 ccp->cc_rounds = -1; 2759 ccp->cc_prounds = -1; 2760 ccp->cc_magsize = 0; 2761 mutex_exit(&ccp->cc_lock); 2762 2763 if (mp) 2764 kmem_magazine_destroy(cp, mp, rounds); 2765 if (pmp) 2766 kmem_magazine_destroy(cp, pmp, prounds); 2767 } 2768 2769 /* 2770 * Updating the working set statistics twice in a row has the 2771 * effect of setting the working set size to zero, so everything 2772 * is eligible for reaping. 2773 */ 2774 kmem_depot_ws_update(cp); 2775 kmem_depot_ws_update(cp); 2776 2777 kmem_depot_ws_reap(cp); 2778 } 2779 2780 /* 2781 * Enable per-cpu magazines on a cache. 2782 */ 2783 static void 2784 kmem_cache_magazine_enable(kmem_cache_t *cp) 2785 { 2786 int cpu_seqid; 2787 2788 if (cp->cache_flags & KMF_NOMAGAZINE) 2789 return; 2790 2791 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 2792 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid]; 2793 mutex_enter(&ccp->cc_lock); 2794 ccp->cc_magsize = cp->cache_magtype->mt_magsize; 2795 mutex_exit(&ccp->cc_lock); 2796 } 2797 2798 } 2799 2800 /* 2801 * Reap (almost) everything right now. See kmem_cache_magazine_purge() 2802 * for explanation of the back-to-back kmem_depot_ws_update() calls. 2803 */ 2804 void 2805 kmem_cache_reap_now(kmem_cache_t *cp) 2806 { 2807 ASSERT(list_link_active(&cp->cache_link)); 2808 2809 kmem_depot_ws_update(cp); 2810 kmem_depot_ws_update(cp); 2811 2812 (void) taskq_dispatch(kmem_taskq, 2813 (task_func_t *)kmem_depot_ws_reap, cp, TQ_SLEEP); 2814 taskq_wait(kmem_taskq); 2815 } 2816 2817 /* 2818 * Recompute a cache's magazine size. The trade-off is that larger magazines 2819 * provide a higher transfer rate with the depot, while smaller magazines 2820 * reduce memory consumption. Magazine resizing is an expensive operation; 2821 * it should not be done frequently. 2822 * 2823 * Changes to the magazine size are serialized by the kmem_taskq lock. 2824 * 2825 * Note: at present this only grows the magazine size. It might be useful 2826 * to allow shrinkage too. 2827 */ 2828 static void 2829 kmem_cache_magazine_resize(kmem_cache_t *cp) 2830 { 2831 kmem_magtype_t *mtp = cp->cache_magtype; 2832 2833 ASSERT(taskq_member(kmem_taskq, curthread)); 2834 2835 if (cp->cache_chunksize < mtp->mt_maxbuf) { 2836 kmem_cache_magazine_purge(cp); 2837 mutex_enter(&cp->cache_depot_lock); 2838 cp->cache_magtype = ++mtp; 2839 cp->cache_depot_contention_prev = 2840 cp->cache_depot_contention + INT_MAX; 2841 mutex_exit(&cp->cache_depot_lock); 2842 kmem_cache_magazine_enable(cp); 2843 } 2844 } 2845 2846 /* 2847 * Rescale a cache's hash table, so that the table size is roughly the 2848 * cache size. We want the average lookup time to be extremely small. 
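 *
 * Worked example (illustrative): with cache_buftotal == 3000,
 *
 *	new_size = 1 << (highbit(3 * 3000 + 4) - 2)
 *		 = 1 << (highbit(9004) - 2)
 *		 = 1 << (14 - 2) = 4096
 *
 * roughly one hash bucket per allocated buffer. The code below leaves the
 * table alone unless the preferred size differs from the current size by
 * more than a factor of two.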
2849 */ 2850 static void 2851 kmem_hash_rescale(kmem_cache_t *cp) 2852 { 2853 kmem_bufctl_t **old_table, **new_table, *bcp; 2854 size_t old_size, new_size, h; 2855 2856 ASSERT(taskq_member(kmem_taskq, curthread)); 2857 2858 new_size = MAX(KMEM_HASH_INITIAL, 2859 1 << (highbit(3 * cp->cache_buftotal + 4) - 2)); 2860 old_size = cp->cache_hash_mask + 1; 2861 2862 if ((old_size >> 1) <= new_size && new_size <= (old_size << 1)) 2863 return; 2864 2865 new_table = vmem_alloc(kmem_hash_arena, new_size * sizeof (void *), 2866 VM_NOSLEEP); 2867 if (new_table == NULL) 2868 return; 2869 bzero(new_table, new_size * sizeof (void *)); 2870 2871 mutex_enter(&cp->cache_lock); 2872 2873 old_size = cp->cache_hash_mask + 1; 2874 old_table = cp->cache_hash_table; 2875 2876 cp->cache_hash_mask = new_size - 1; 2877 cp->cache_hash_table = new_table; 2878 cp->cache_rescale++; 2879 2880 for (h = 0; h < old_size; h++) { 2881 bcp = old_table[h]; 2882 while (bcp != NULL) { 2883 void *addr = bcp->bc_addr; 2884 kmem_bufctl_t *next_bcp = bcp->bc_next; 2885 kmem_bufctl_t **hash_bucket = KMEM_HASH(cp, addr); 2886 bcp->bc_next = *hash_bucket; 2887 *hash_bucket = bcp; 2888 bcp = next_bcp; 2889 } 2890 } 2891 2892 mutex_exit(&cp->cache_lock); 2893 2894 vmem_free(kmem_hash_arena, old_table, old_size * sizeof (void *)); 2895 } 2896 2897 /* 2898 * Perform periodic maintenance on a cache: hash rescaling, depot working-set 2899 * update, magazine resizing, and slab consolidation. 2900 */ 2901 static void 2902 kmem_cache_update(kmem_cache_t *cp) 2903 { 2904 int need_hash_rescale = 0; 2905 int need_magazine_resize = 0; 2906 2907 ASSERT(MUTEX_HELD(&kmem_cache_lock)); 2908 2909 /* 2910 * If the cache has become much larger or smaller than its hash table, 2911 * fire off a request to rescale the hash table. 2912 */ 2913 mutex_enter(&cp->cache_lock); 2914 2915 if ((cp->cache_flags & KMF_HASH) && 2916 (cp->cache_buftotal > (cp->cache_hash_mask << 1) || 2917 (cp->cache_buftotal < (cp->cache_hash_mask >> 1) && 2918 cp->cache_hash_mask > KMEM_HASH_INITIAL))) 2919 need_hash_rescale = 1; 2920 2921 mutex_exit(&cp->cache_lock); 2922 2923 /* 2924 * Update the depot working set statistics. 2925 */ 2926 kmem_depot_ws_update(cp); 2927 2928 /* 2929 * If there's a lot of contention in the depot, 2930 * increase the magazine size. 
2931 */ 2932 mutex_enter(&cp->cache_depot_lock); 2933 2934 if (cp->cache_chunksize < cp->cache_magtype->mt_maxbuf && 2935 (int)(cp->cache_depot_contention - 2936 cp->cache_depot_contention_prev) > kmem_depot_contention) 2937 need_magazine_resize = 1; 2938 2939 cp->cache_depot_contention_prev = cp->cache_depot_contention; 2940 2941 mutex_exit(&cp->cache_depot_lock); 2942 2943 if (need_hash_rescale) 2944 (void) taskq_dispatch(kmem_taskq, 2945 (task_func_t *)kmem_hash_rescale, cp, TQ_NOSLEEP); 2946 2947 if (need_magazine_resize) 2948 (void) taskq_dispatch(kmem_taskq, 2949 (task_func_t *)kmem_cache_magazine_resize, cp, TQ_NOSLEEP); 2950 2951 if (cp->cache_defrag != NULL) 2952 (void) taskq_dispatch(kmem_taskq, 2953 (task_func_t *)kmem_cache_scan, cp, TQ_NOSLEEP); 2954 } 2955 2956 static void 2957 kmem_update_timeout(void *dummy) 2958 { 2959 static void kmem_update(void *); 2960 2961 (void) timeout(kmem_update, dummy, kmem_reap_interval); 2962 } 2963 2964 static void 2965 kmem_update(void *dummy) 2966 { 2967 kmem_cache_applyall(kmem_cache_update, NULL, TQ_NOSLEEP); 2968 2969 /* 2970 * We use taskq_dispatch() to reschedule the timeout so that 2971 * kmem_update() becomes self-throttling: it won't schedule 2972 * new tasks until all previous tasks have completed. 2973 */ 2974 if (!taskq_dispatch(kmem_taskq, kmem_update_timeout, dummy, TQ_NOSLEEP)) 2975 kmem_update_timeout(NULL); 2976 } 2977 2978 static int 2979 kmem_cache_kstat_update(kstat_t *ksp, int rw) 2980 { 2981 struct kmem_cache_kstat *kmcp = &kmem_cache_kstat; 2982 kmem_cache_t *cp = ksp->ks_private; 2983 uint64_t cpu_buf_avail; 2984 uint64_t buf_avail = 0; 2985 int cpu_seqid; 2986 2987 ASSERT(MUTEX_HELD(&kmem_cache_kstat_lock)); 2988 2989 if (rw == KSTAT_WRITE) 2990 return (EACCES); 2991 2992 mutex_enter(&cp->cache_lock); 2993 2994 kmcp->kmc_alloc_fail.value.ui64 = cp->cache_alloc_fail; 2995 kmcp->kmc_alloc.value.ui64 = cp->cache_slab_alloc; 2996 kmcp->kmc_free.value.ui64 = cp->cache_slab_free; 2997 kmcp->kmc_slab_alloc.value.ui64 = cp->cache_slab_alloc; 2998 kmcp->kmc_slab_free.value.ui64 = cp->cache_slab_free; 2999 3000 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 3001 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid]; 3002 3003 mutex_enter(&ccp->cc_lock); 3004 3005 cpu_buf_avail = 0; 3006 if (ccp->cc_rounds > 0) 3007 cpu_buf_avail += ccp->cc_rounds; 3008 if (ccp->cc_prounds > 0) 3009 cpu_buf_avail += ccp->cc_prounds; 3010 3011 kmcp->kmc_alloc.value.ui64 += ccp->cc_alloc; 3012 kmcp->kmc_free.value.ui64 += ccp->cc_free; 3013 buf_avail += cpu_buf_avail; 3014 3015 mutex_exit(&ccp->cc_lock); 3016 } 3017 3018 mutex_enter(&cp->cache_depot_lock); 3019 3020 kmcp->kmc_depot_alloc.value.ui64 = cp->cache_full.ml_alloc; 3021 kmcp->kmc_depot_free.value.ui64 = cp->cache_empty.ml_alloc; 3022 kmcp->kmc_depot_contention.value.ui64 = cp->cache_depot_contention; 3023 kmcp->kmc_full_magazines.value.ui64 = cp->cache_full.ml_total; 3024 kmcp->kmc_empty_magazines.value.ui64 = cp->cache_empty.ml_total; 3025 kmcp->kmc_magazine_size.value.ui64 = 3026 (cp->cache_flags & KMF_NOMAGAZINE) ? 
3027 0 : cp->cache_magtype->mt_magsize;
3028 
3029 kmcp->kmc_alloc.value.ui64 += cp->cache_full.ml_alloc;
3030 kmcp->kmc_free.value.ui64 += cp->cache_empty.ml_alloc;
3031 buf_avail += cp->cache_full.ml_total * cp->cache_magtype->mt_magsize;
3032 
3033 mutex_exit(&cp->cache_depot_lock);
3034 
3035 kmcp->kmc_buf_size.value.ui64 = cp->cache_bufsize;
3036 kmcp->kmc_align.value.ui64 = cp->cache_align;
3037 kmcp->kmc_chunk_size.value.ui64 = cp->cache_chunksize;
3038 kmcp->kmc_slab_size.value.ui64 = cp->cache_slabsize;
3039 kmcp->kmc_buf_constructed.value.ui64 = buf_avail;
3040 buf_avail += cp->cache_bufslab;
3041 kmcp->kmc_buf_avail.value.ui64 = buf_avail;
3042 kmcp->kmc_buf_inuse.value.ui64 = cp->cache_buftotal - buf_avail;
3043 kmcp->kmc_buf_total.value.ui64 = cp->cache_buftotal;
3044 kmcp->kmc_buf_max.value.ui64 = cp->cache_bufmax;
3045 kmcp->kmc_slab_create.value.ui64 = cp->cache_slab_create;
3046 kmcp->kmc_slab_destroy.value.ui64 = cp->cache_slab_destroy;
3047 kmcp->kmc_hash_size.value.ui64 = (cp->cache_flags & KMF_HASH) ?
3048 cp->cache_hash_mask + 1 : 0;
3049 kmcp->kmc_hash_lookup_depth.value.ui64 = cp->cache_lookup_depth;
3050 kmcp->kmc_hash_rescale.value.ui64 = cp->cache_rescale;
3051 kmcp->kmc_vmem_source.value.ui64 = cp->cache_arena->vm_id;
3052 
3053 if (cp->cache_defrag == NULL) {
3054 kmcp->kmc_move_callbacks.value.ui64 = 0;
3055 kmcp->kmc_move_yes.value.ui64 = 0;
3056 kmcp->kmc_move_no.value.ui64 = 0;
3057 kmcp->kmc_move_later.value.ui64 = 0;
3058 kmcp->kmc_move_dont_need.value.ui64 = 0;
3059 kmcp->kmc_move_dont_know.value.ui64 = 0;
3060 kmcp->kmc_move_hunt_found.value.ui64 = 0;
3061 } else {
3062 kmem_defrag_t *kd = cp->cache_defrag;
3063 kmcp->kmc_move_callbacks.value.ui64 = kd->kmd_callbacks;
3064 kmcp->kmc_move_yes.value.ui64 = kd->kmd_yes;
3065 kmcp->kmc_move_no.value.ui64 = kd->kmd_no;
3066 kmcp->kmc_move_later.value.ui64 = kd->kmd_later;
3067 kmcp->kmc_move_dont_need.value.ui64 = kd->kmd_dont_need;
3068 kmcp->kmc_move_dont_know.value.ui64 = kd->kmd_dont_know;
3069 kmcp->kmc_move_hunt_found.value.ui64 = kd->kmd_hunt_found;
3070 }
3071 
3072 mutex_exit(&cp->cache_lock);
3073 return (0);
3074 }
3075 
3076 /*
3077 * Return a named statistic about a particular cache.
3078 * This shouldn't be called very often, so it's currently designed for
3079 * simplicity (leverages existing kstat support) rather than efficiency.
3080 */
3081 uint64_t
3082 kmem_cache_stat(kmem_cache_t *cp, char *name)
3083 {
3084 int i;
3085 kstat_t *ksp = cp->cache_kstat;
3086 kstat_named_t *knp = (kstat_named_t *)&kmem_cache_kstat;
3087 uint64_t value = 0;
3088 
3089 if (ksp != NULL) {
3090 mutex_enter(&kmem_cache_kstat_lock);
3091 (void) kmem_cache_kstat_update(ksp, KSTAT_READ);
3092 for (i = 0; i < ksp->ks_ndata; i++) {
3093 if (strcmp(knp[i].name, name) == 0) {
3094 value = knp[i].value.ui64;
3095 break;
3096 }
3097 }
3098 mutex_exit(&kmem_cache_kstat_lock);
3099 }
3100 return (value);
3101 }
3102 
3103 /*
3104 * Return an estimate of currently available kernel heap memory.
3105 * On 32-bit systems, physical memory may exceed virtual memory, so
3106 * we just truncate the result at 1GB.
3107 */
3108 size_t
3109 kmem_avail(void)
3110 {
3111 spgcnt_t rmem = availrmem - tune.t_minarmem;
3112 spgcnt_t fmem = freemem - minfree;
3113 
3114 return ((size_t)ptob(MIN(MAX(MIN(rmem, fmem), 0),
3115 1 << (30 - PAGESHIFT))));
3116 }
3117 
3118 /*
3119 * Return the maximum amount of memory that is (in theory) allocatable
3120 * from the heap.
This may be used as an estimate only since there 3121 * is no guarantee this space will still be available when an allocation 3122 * request is made, nor that the space can be allocated in one big request 3123 * due to kernel heap fragmentation. 3124 */ 3125 size_t 3126 kmem_maxavail(void) 3127 { 3128 spgcnt_t pmem = availrmem - tune.t_minarmem; 3129 spgcnt_t vmem = btop(vmem_size(heap_arena, VMEM_FREE)); 3130 3131 return ((size_t)ptob(MAX(MIN(pmem, vmem), 0))); 3132 } 3133 3134 /* 3135 * Indicate whether memory-intensive kmem debugging is enabled. 3136 */ 3137 int 3138 kmem_debugging(void) 3139 { 3140 return (kmem_flags & (KMF_AUDIT | KMF_REDZONE)); 3141 } 3142 3143 /* binning function, sorts finely at the two extremes */ 3144 #define KMEM_PARTIAL_SLAB_WEIGHT(sp, binshift) \ 3145 ((((sp)->slab_refcnt <= (binshift)) || \ 3146 (((sp)->slab_chunks - (sp)->slab_refcnt) <= (binshift))) \ 3147 ? -(sp)->slab_refcnt \ 3148 : -((binshift) + ((sp)->slab_refcnt >> (binshift)))) 3149 3150 /* 3151 * Minimizing the number of partial slabs on the freelist minimizes 3152 * fragmentation (the ratio of unused buffers held by the slab layer). There are 3153 * two ways to get a slab off of the freelist: 1) free all the buffers on the 3154 * slab, and 2) allocate all the buffers on the slab. It follows that we want 3155 * the most-used slabs at the front of the list where they have the best chance 3156 * of being completely allocated, and the least-used slabs at a safe distance 3157 * from the front to improve the odds that the few remaining buffers will all be 3158 * freed before another allocation can tie up the slab. For that reason a slab 3159 * with a higher slab_refcnt sorts less than a slab with a lower 3160 * slab_refcnt. 3161 * 3162 * However, if a slab has at least one buffer that is deemed unfreeable, we 3163 * would rather have that slab at the front of the list regardless of 3164 * slab_refcnt, since even one unfreeable buffer makes the entire slab 3165 * unfreeable. If the client returns KMEM_CBRC_NO in response to a cache_move() 3166 * callback, the slab is marked unfreeable for as long as it remains on the 3167 * freelist. 3168 */ 3169 static int 3170 kmem_partial_slab_cmp(const void *p0, const void *p1) 3171 { 3172 const kmem_cache_t *cp; 3173 const kmem_slab_t *s0 = p0; 3174 const kmem_slab_t *s1 = p1; 3175 int w0, w1; 3176 size_t binshift; 3177 3178 ASSERT(KMEM_SLAB_IS_PARTIAL(s0)); 3179 ASSERT(KMEM_SLAB_IS_PARTIAL(s1)); 3180 ASSERT(s0->slab_cache == s1->slab_cache); 3181 cp = s1->slab_cache; 3182 ASSERT(MUTEX_HELD(&cp->cache_lock)); 3183 binshift = cp->cache_partial_binshift; 3184 3185 /* weight of first slab */ 3186 w0 = KMEM_PARTIAL_SLAB_WEIGHT(s0, binshift); 3187 if (s0->slab_flags & KMEM_SLAB_NOMOVE) { 3188 w0 -= cp->cache_maxchunks; 3189 } 3190 3191 /* weight of second slab */ 3192 w1 = KMEM_PARTIAL_SLAB_WEIGHT(s1, binshift); 3193 if (s1->slab_flags & KMEM_SLAB_NOMOVE) { 3194 w1 -= cp->cache_maxchunks; 3195 } 3196 3197 if (w0 < w1) 3198 return (-1); 3199 if (w0 > w1) 3200 return (1); 3201 3202 /* compare pointer values */ 3203 if ((uintptr_t)s0 < (uintptr_t)s1) 3204 return (-1); 3205 if ((uintptr_t)s0 > (uintptr_t)s1) 3206 return (1); 3207 3208 return (0); 3209 } 3210 3211 /* 3212 * It must be valid to call the destructor (if any) on a newly created object. 3213 * That is, the constructor (if any) must leave the object in a valid state for 3214 * the destructor.
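 *
 * For example (an illustrative client, not part of kmem itself), a
 * constructor/destructor pair obeying this rule might look like this,
 * where foo_t is a hypothetical object type:
 *
 *	static int
 *	foo_construct(void *buf, void *arg, int kmflags)
 *	{
 *		foo_t *foo = buf;
 *		mutex_init(&foo->foo_lock, NULL, MUTEX_DEFAULT, NULL);
 *		return (0);
 *	}
 *
 *	static void
 *	foo_destruct(void *buf, void *arg)
 *	{
 *		foo_t *foo = buf;
 *		mutex_destroy(&foo->foo_lock);
 *	}
 *
 * Even a never-allocated object can be destructed safely, because the
 * destructor touches only state established by the constructor.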
3215 */ 3216 kmem_cache_t * 3217 kmem_cache_create( 3218 char *name, /* descriptive name for this cache */ 3219 size_t bufsize, /* size of the objects it manages */ 3220 size_t align, /* required object alignment */ 3221 int (*constructor)(void *, void *, int), /* object constructor */ 3222 void (*destructor)(void *, void *), /* object destructor */ 3223 void (*reclaim)(void *), /* memory reclaim callback */ 3224 void *private, /* pass-thru arg for constr/destr/reclaim */ 3225 vmem_t *vmp, /* vmem source for slab allocation */ 3226 int cflags) /* cache creation flags */ 3227 { 3228 int cpu_seqid; 3229 size_t chunksize; 3230 kmem_cache_t *cp; 3231 kmem_magtype_t *mtp; 3232 size_t csize = KMEM_CACHE_SIZE(max_ncpus); 3233 3234 #ifdef DEBUG 3235 /* 3236 * Cache names should conform to the rules for valid C identifiers 3237 */ 3238 if (!strident_valid(name)) { 3239 cmn_err(CE_CONT, 3240 "kmem_cache_create: '%s' is an invalid cache name\n" 3241 "cache names must conform to the rules for " 3242 "C identifiers\n", name); 3243 } 3244 #endif /* DEBUG */ 3245 3246 if (vmp == NULL) 3247 vmp = kmem_default_arena; 3248 3249 /* 3250 * If this kmem cache has an identifier vmem arena as its source, mark 3251 * it such to allow kmem_reap_idspace(). 3252 */ 3253 ASSERT(!(cflags & KMC_IDENTIFIER)); /* consumer should not set this */ 3254 if (vmp->vm_cflags & VMC_IDENTIFIER) 3255 cflags |= KMC_IDENTIFIER; 3256 3257 /* 3258 * Get a kmem_cache structure. We arrange that cp->cache_cpu[] 3259 * is aligned on a KMEM_CPU_CACHE_SIZE boundary to prevent 3260 * false sharing of per-CPU data. 3261 */ 3262 cp = vmem_xalloc(kmem_cache_arena, csize, KMEM_CPU_CACHE_SIZE, 3263 P2NPHASE(csize, KMEM_CPU_CACHE_SIZE), 0, NULL, NULL, VM_SLEEP); 3264 bzero(cp, csize); 3265 list_link_init(&cp->cache_link); 3266 3267 if (align == 0) 3268 align = KMEM_ALIGN; 3269 3270 /* 3271 * If we're not at least KMEM_ALIGN aligned, we can't use free 3272 * memory to hold bufctl information (because we can't safely 3273 * perform word loads and stores on it). 3274 */ 3275 if (align < KMEM_ALIGN) 3276 cflags |= KMC_NOTOUCH; 3277 3278 if ((align & (align - 1)) != 0 || align > vmp->vm_quantum) 3279 panic("kmem_cache_create: bad alignment %lu", align); 3280 3281 mutex_enter(&kmem_flags_lock); 3282 if (kmem_flags & KMF_RANDOMIZE) 3283 kmem_flags = (((kmem_flags | ~KMF_RANDOM) + 1) & KMF_RANDOM) | 3284 KMF_RANDOMIZE; 3285 cp->cache_flags = (kmem_flags | cflags) & KMF_DEBUG; 3286 mutex_exit(&kmem_flags_lock); 3287 3288 /* 3289 * Make sure all the various flags are reasonable. 
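 * For example, KMC_NOHASH and KMC_NOTOUCH are mutually exclusive (the
 * first ASSERT below): a NOTOUCH cache cannot keep bufctl information
 * in its free buffers, so it needs exactly the external, hashed bufctls
 * that NOHASH forbids.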
3290 */ 3291 ASSERT(!(cflags & KMC_NOHASH) || !(cflags & KMC_NOTOUCH)); 3292 3293 if (cp->cache_flags & KMF_LITE) { 3294 if (bufsize >= kmem_lite_minsize && 3295 align <= kmem_lite_maxalign && 3296 P2PHASE(bufsize, kmem_lite_maxalign) != 0) { 3297 cp->cache_flags |= KMF_BUFTAG; 3298 cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL); 3299 } else { 3300 cp->cache_flags &= ~KMF_DEBUG; 3301 } 3302 } 3303 3304 if (cp->cache_flags & KMF_DEADBEEF) 3305 cp->cache_flags |= KMF_REDZONE; 3306 3307 if ((cflags & KMC_QCACHE) && (cp->cache_flags & KMF_AUDIT)) 3308 cp->cache_flags |= KMF_NOMAGAZINE; 3309 3310 if (cflags & KMC_NODEBUG) 3311 cp->cache_flags &= ~KMF_DEBUG; 3312 3313 if (cflags & KMC_NOTOUCH) 3314 cp->cache_flags &= ~KMF_TOUCH; 3315 3316 if (cflags & KMC_NOHASH) 3317 cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL); 3318 3319 if (cflags & KMC_NOMAGAZINE) 3320 cp->cache_flags |= KMF_NOMAGAZINE; 3321 3322 if ((cp->cache_flags & KMF_AUDIT) && !(cflags & KMC_NOTOUCH)) 3323 cp->cache_flags |= KMF_REDZONE; 3324 3325 if (!(cp->cache_flags & KMF_AUDIT)) 3326 cp->cache_flags &= ~KMF_CONTENTS; 3327 3328 if ((cp->cache_flags & KMF_BUFTAG) && bufsize >= kmem_minfirewall && 3329 !(cp->cache_flags & KMF_LITE) && !(cflags & KMC_NOHASH)) 3330 cp->cache_flags |= KMF_FIREWALL; 3331 3332 if (vmp != kmem_default_arena || kmem_firewall_arena == NULL) 3333 cp->cache_flags &= ~KMF_FIREWALL; 3334 3335 if (cp->cache_flags & KMF_FIREWALL) { 3336 cp->cache_flags &= ~KMF_BUFTAG; 3337 cp->cache_flags |= KMF_NOMAGAZINE; 3338 ASSERT(vmp == kmem_default_arena); 3339 vmp = kmem_firewall_arena; 3340 } 3341 3342 /* 3343 * Set cache properties. 3344 */ 3345 (void) strncpy(cp->cache_name, name, KMEM_CACHE_NAMELEN); 3346 strident_canon(cp->cache_name, KMEM_CACHE_NAMELEN + 1); 3347 cp->cache_bufsize = bufsize; 3348 cp->cache_align = align; 3349 cp->cache_constructor = constructor; 3350 cp->cache_destructor = destructor; 3351 cp->cache_reclaim = reclaim; 3352 cp->cache_private = private; 3353 cp->cache_arena = vmp; 3354 cp->cache_cflags = cflags; 3355 3356 /* 3357 * Determine the chunk size. 3358 */ 3359 chunksize = bufsize; 3360 3361 if (align >= KMEM_ALIGN) { 3362 chunksize = P2ROUNDUP(chunksize, KMEM_ALIGN); 3363 cp->cache_bufctl = chunksize - KMEM_ALIGN; 3364 } 3365 3366 if (cp->cache_flags & KMF_BUFTAG) { 3367 cp->cache_bufctl = chunksize; 3368 cp->cache_buftag = chunksize; 3369 if (cp->cache_flags & KMF_LITE) 3370 chunksize += KMEM_BUFTAG_LITE_SIZE(kmem_lite_count); 3371 else 3372 chunksize += sizeof (kmem_buftag_t); 3373 } 3374 3375 if (cp->cache_flags & KMF_DEADBEEF) { 3376 cp->cache_verify = MIN(cp->cache_buftag, kmem_maxverify); 3377 if (cp->cache_flags & KMF_LITE) 3378 cp->cache_verify = sizeof (uint64_t); 3379 } 3380 3381 cp->cache_contents = MIN(cp->cache_bufctl, kmem_content_maxsave); 3382 3383 cp->cache_chunksize = chunksize = P2ROUNDUP(chunksize, align); 3384 3385 /* 3386 * Now that we know the chunk size, determine the optimal slab size. 
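 * In the best-fit case below, for example (hypothetical numbers), a
 * 320-byte chunk in an arena with a 4096-byte quantum yields a one-page
 * slab holding 12 chunks with 256 bytes left over; the leftover becomes
 * the coloring range (cache_maxcolor) used to stagger buffer addresses
 * across slabs.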
3387 */ 3388 if (vmp == kmem_firewall_arena) { 3389 cp->cache_slabsize = P2ROUNDUP(chunksize, vmp->vm_quantum); 3390 cp->cache_mincolor = cp->cache_slabsize - chunksize; 3391 cp->cache_maxcolor = cp->cache_mincolor; 3392 cp->cache_flags |= KMF_HASH; 3393 ASSERT(!(cp->cache_flags & KMF_BUFTAG)); 3394 } else if ((cflags & KMC_NOHASH) || (!(cflags & KMC_NOTOUCH) && 3395 !(cp->cache_flags & KMF_AUDIT) && 3396 chunksize < vmp->vm_quantum / KMEM_VOID_FRACTION)) { 3397 cp->cache_slabsize = vmp->vm_quantum; 3398 cp->cache_mincolor = 0; 3399 cp->cache_maxcolor = 3400 (cp->cache_slabsize - sizeof (kmem_slab_t)) % chunksize; 3401 ASSERT(chunksize + sizeof (kmem_slab_t) <= cp->cache_slabsize); 3402 ASSERT(!(cp->cache_flags & KMF_AUDIT)); 3403 } else { 3404 size_t chunks, bestfit, waste, slabsize; 3405 size_t minwaste = LONG_MAX; 3406 3407 for (chunks = 1; chunks <= KMEM_VOID_FRACTION; chunks++) { 3408 slabsize = P2ROUNDUP(chunksize * chunks, 3409 vmp->vm_quantum); 3410 chunks = slabsize / chunksize; 3411 waste = (slabsize % chunksize) / chunks; 3412 if (waste < minwaste) { 3413 minwaste = waste; 3414 bestfit = slabsize; 3415 } 3416 } 3417 if (cflags & KMC_QCACHE) 3418 bestfit = VMEM_QCACHE_SLABSIZE(vmp->vm_qcache_max); 3419 cp->cache_slabsize = bestfit; 3420 cp->cache_mincolor = 0; 3421 cp->cache_maxcolor = bestfit % chunksize; 3422 cp->cache_flags |= KMF_HASH; 3423 } 3424 3425 cp->cache_maxchunks = (cp->cache_slabsize / cp->cache_chunksize); 3426 cp->cache_partial_binshift = highbit(cp->cache_maxchunks / 16) + 1; 3427 3428 if (cp->cache_flags & KMF_HASH) { 3429 ASSERT(!(cflags & KMC_NOHASH)); 3430 cp->cache_bufctl_cache = (cp->cache_flags & KMF_AUDIT) ? 3431 kmem_bufctl_audit_cache : kmem_bufctl_cache; 3432 } 3433 3434 if (cp->cache_maxcolor >= vmp->vm_quantum) 3435 cp->cache_maxcolor = vmp->vm_quantum - 1; 3436 3437 cp->cache_color = cp->cache_mincolor; 3438 3439 /* 3440 * Initialize the rest of the slab layer. 3441 */ 3442 mutex_init(&cp->cache_lock, NULL, MUTEX_DEFAULT, NULL); 3443 3444 avl_create(&cp->cache_partial_slabs, kmem_partial_slab_cmp, 3445 sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link)); 3446 /* LINTED: E_TRUE_LOGICAL_EXPR */ 3447 ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t)); 3448 /* reuse partial slab AVL linkage for complete slab list linkage */ 3449 list_create(&cp->cache_complete_slabs, 3450 sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link)); 3451 3452 if (cp->cache_flags & KMF_HASH) { 3453 cp->cache_hash_table = vmem_alloc(kmem_hash_arena, 3454 KMEM_HASH_INITIAL * sizeof (void *), VM_SLEEP); 3455 bzero(cp->cache_hash_table, 3456 KMEM_HASH_INITIAL * sizeof (void *)); 3457 cp->cache_hash_mask = KMEM_HASH_INITIAL - 1; 3458 cp->cache_hash_shift = highbit((ulong_t)chunksize) - 1; 3459 } 3460 3461 /* 3462 * Initialize the depot. 3463 */ 3464 mutex_init(&cp->cache_depot_lock, NULL, MUTEX_DEFAULT, NULL); 3465 3466 for (mtp = kmem_magtype; chunksize <= mtp->mt_minbuf; mtp++) 3467 continue; 3468 3469 cp->cache_magtype = mtp; 3470 3471 /* 3472 * Initialize the CPU layer. 3473 */ 3474 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 3475 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid]; 3476 mutex_init(&ccp->cc_lock, NULL, MUTEX_DEFAULT, NULL); 3477 ccp->cc_flags = cp->cache_flags; 3478 ccp->cc_rounds = -1; 3479 ccp->cc_prounds = -1; 3480 } 3481 3482 /* 3483 * Create the cache's kstats. 
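 * Once installed, these statistics are visible from userland; for
 * example, 'kstat -c kmem_cache' lists every cache created here
 * (illustrative usage, not part of this file).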
3484 */ 3485 if ((cp->cache_kstat = kstat_create("unix", 0, cp->cache_name, 3486 "kmem_cache", KSTAT_TYPE_NAMED, 3487 sizeof (kmem_cache_kstat) / sizeof (kstat_named_t), 3488 KSTAT_FLAG_VIRTUAL)) != NULL) { 3489 cp->cache_kstat->ks_data = &kmem_cache_kstat; 3490 cp->cache_kstat->ks_update = kmem_cache_kstat_update; 3491 cp->cache_kstat->ks_private = cp; 3492 cp->cache_kstat->ks_lock = &kmem_cache_kstat_lock; 3493 kstat_install(cp->cache_kstat); 3494 } 3495 3496 /* 3497 * Add the cache to the global list. This makes it visible 3498 * to kmem_update(), so the cache must be ready for business. 3499 */ 3500 mutex_enter(&kmem_cache_lock); 3501 list_insert_tail(&kmem_caches, cp); 3502 mutex_exit(&kmem_cache_lock); 3503 3504 if (kmem_ready) 3505 kmem_cache_magazine_enable(cp); 3506 3507 return (cp); 3508 } 3509 3510 static int 3511 kmem_move_cmp(const void *buf, const void *p) 3512 { 3513 const kmem_move_t *kmm = p; 3514 uintptr_t v1 = (uintptr_t)buf; 3515 uintptr_t v2 = (uintptr_t)kmm->kmm_from_buf; 3516 return (v1 < v2 ? -1 : (v1 > v2 ? 1 : 0)); 3517 } 3518 3519 static void 3520 kmem_reset_reclaim_threshold(kmem_defrag_t *kmd) 3521 { 3522 kmd->kmd_reclaim_numer = 1; 3523 } 3524 3525 /* 3526 * Initially, when choosing candidate slabs for buffers to move, we want to be 3527 * very selective and take only slabs that are less than 3528 * (1 / KMEM_VOID_FRACTION) allocated. If we have difficulty finding candidate 3529 * slabs, then we raise the allocation ceiling incrementally. The reclaim 3530 * threshold is reset to (1 / KMEM_VOID_FRACTION) as soon as the cache is no 3531 * longer fragmented. 3532 */ 3533 static void 3534 kmem_adjust_reclaim_threshold(kmem_defrag_t *kmd, int direction) 3535 { 3536 if (direction > 0) { 3537 /* make it easier to find a candidate slab */ 3538 if (kmd->kmd_reclaim_numer < (KMEM_VOID_FRACTION - 1)) { 3539 kmd->kmd_reclaim_numer++; 3540 } 3541 } else { 3542 /* be more selective */ 3543 if (kmd->kmd_reclaim_numer > 1) { 3544 kmd->kmd_reclaim_numer--; 3545 } 3546 } 3547 } 3548 3549 void 3550 kmem_cache_set_move(kmem_cache_t *cp, 3551 kmem_cbrc_t (*move)(void *, void *, size_t, void *)) 3552 { 3553 kmem_defrag_t *defrag; 3554 3555 ASSERT(move != NULL); 3556 /* 3557 * The consolidator does not support NOTOUCH caches because kmem cannot 3558 * initialize their slabs with the 0xbaddcafe memory pattern, which sets 3559 * a low order bit usable by clients to distinguish uninitialized memory 3560 * from known objects (see kmem_slab_create). 3561 */ 3562 ASSERT(!(cp->cache_cflags & KMC_NOTOUCH)); 3563 ASSERT(!(cp->cache_cflags & KMC_IDENTIFIER)); 3564 3565 /* 3566 * We should not be holding anyone's cache lock when calling 3567 * kmem_cache_alloc(), so allocate in all cases before acquiring the 3568 * lock. 
3569 */ 3570 defrag = kmem_cache_alloc(kmem_defrag_cache, KM_SLEEP); 3571 3572 mutex_enter(&cp->cache_lock); 3573 3574 if (KMEM_IS_MOVABLE(cp)) { 3575 if (cp->cache_move == NULL) { 3576 ASSERT(cp->cache_slab_alloc == 0); 3577 3578 cp->cache_defrag = defrag; 3579 defrag = NULL; /* nothing to free */ 3580 bzero(cp->cache_defrag, sizeof (kmem_defrag_t)); 3581 avl_create(&cp->cache_defrag->kmd_moves_pending, 3582 kmem_move_cmp, sizeof (kmem_move_t), 3583 offsetof(kmem_move_t, kmm_entry)); 3584 /* LINTED: E_TRUE_LOGICAL_EXPR */ 3585 ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t)); 3586 /* reuse the slab's AVL linkage for deadlist linkage */ 3587 list_create(&cp->cache_defrag->kmd_deadlist, 3588 sizeof (kmem_slab_t), 3589 offsetof(kmem_slab_t, slab_link)); 3590 kmem_reset_reclaim_threshold(cp->cache_defrag); 3591 } 3592 cp->cache_move = move; 3593 } 3594 3595 mutex_exit(&cp->cache_lock); 3596 3597 if (defrag != NULL) { 3598 kmem_cache_free(kmem_defrag_cache, defrag); /* unused */ 3599 } 3600 } 3601 3602 void 3603 kmem_cache_destroy(kmem_cache_t *cp) 3604 { 3605 int cpu_seqid; 3606 3607 /* 3608 * Remove the cache from the global cache list so that no one else 3609 * can schedule tasks on its behalf, wait for any pending tasks to 3610 * complete, purge the cache, and then destroy it. 3611 */ 3612 mutex_enter(&kmem_cache_lock); 3613 list_remove(&kmem_caches, cp); 3614 mutex_exit(&kmem_cache_lock); 3615 3616 if (kmem_taskq != NULL) 3617 taskq_wait(kmem_taskq); 3618 if (kmem_move_taskq != NULL) 3619 taskq_wait(kmem_move_taskq); 3620 3621 kmem_cache_magazine_purge(cp); 3622 3623 mutex_enter(&cp->cache_lock); 3624 if (cp->cache_buftotal != 0) 3625 cmn_err(CE_WARN, "kmem_cache_destroy: '%s' (%p) not empty", 3626 cp->cache_name, (void *)cp); 3627 if (cp->cache_defrag != NULL) { 3628 avl_destroy(&cp->cache_defrag->kmd_moves_pending); 3629 list_destroy(&cp->cache_defrag->kmd_deadlist); 3630 kmem_cache_free(kmem_defrag_cache, cp->cache_defrag); 3631 cp->cache_defrag = NULL; 3632 } 3633 /* 3634 * The cache is now dead. There should be no further activity. We 3635 * enforce this by setting land mines in the constructor, destructor, 3636 * reclaim, and move routines that induce a kernel text fault if 3637 * invoked. 
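 * For example, a late constructor call through the destroyed cache
 * would jump to text address 1 and panic immediately, rather than
 * silently corrupting memory.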
3638 */ 3639 cp->cache_constructor = (int (*)(void *, void *, int))1; 3640 cp->cache_destructor = (void (*)(void *, void *))2; 3641 cp->cache_reclaim = (void (*)(void *))3; 3642 cp->cache_move = (kmem_cbrc_t (*)(void *, void *, size_t, void *))4; 3643 mutex_exit(&cp->cache_lock); 3644 3645 kstat_delete(cp->cache_kstat); 3646 3647 if (cp->cache_hash_table != NULL) 3648 vmem_free(kmem_hash_arena, cp->cache_hash_table, 3649 (cp->cache_hash_mask + 1) * sizeof (void *)); 3650 3651 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) 3652 mutex_destroy(&cp->cache_cpu[cpu_seqid].cc_lock); 3653 3654 mutex_destroy(&cp->cache_depot_lock); 3655 mutex_destroy(&cp->cache_lock); 3656 3657 vmem_free(kmem_cache_arena, cp, KMEM_CACHE_SIZE(max_ncpus)); 3658 } 3659 3660 /*ARGSUSED*/ 3661 static int 3662 kmem_cpu_setup(cpu_setup_t what, int id, void *arg) 3663 { 3664 ASSERT(MUTEX_HELD(&cpu_lock)); 3665 if (what == CPU_UNCONFIG) { 3666 kmem_cache_applyall(kmem_cache_magazine_purge, 3667 kmem_taskq, TQ_SLEEP); 3668 kmem_cache_applyall(kmem_cache_magazine_enable, 3669 kmem_taskq, TQ_SLEEP); 3670 } 3671 return (0); 3672 } 3673 3674 static void 3675 kmem_alloc_caches_create(const int *array, size_t count, 3676 kmem_cache_t **alloc_table, size_t maxbuf, uint_t shift) 3677 { 3678 char name[KMEM_CACHE_NAMELEN + 1]; 3679 size_t table_unit = (1 << shift); /* range of one alloc_table entry */ 3680 size_t size = table_unit; 3681 int i; 3682 3683 for (i = 0; i < count; i++) { 3684 size_t cache_size = array[i]; 3685 size_t align = KMEM_ALIGN; 3686 kmem_cache_t *cp; 3687 3688 /* if the table has an entry for maxbuf, we're done */ 3689 if (size > maxbuf) 3690 break; 3691 3692 /* cache size must be a multiple of the table unit */ 3693 ASSERT(P2PHASE(cache_size, table_unit) == 0); 3694 3695 /* 3696 * If they allocate a multiple of the coherency granularity, 3697 * they get a coherency-granularity-aligned address. 3698 */ 3699 if (IS_P2ALIGNED(cache_size, 64)) 3700 align = 64; 3701 if (IS_P2ALIGNED(cache_size, PAGESIZE)) 3702 align = PAGESIZE; 3703 (void) snprintf(name, sizeof (name), 3704 "kmem_alloc_%lu", cache_size); 3705 cp = kmem_cache_create(name, cache_size, align, 3706 NULL, NULL, NULL, NULL, NULL, KMC_KMEM_ALLOC); 3707 3708 while (size <= cache_size) { 3709 alloc_table[(size - 1) >> shift] = cp; 3710 size += table_unit; 3711 } 3712 } 3713 3714 ASSERT(size > maxbuf); /* i.e. 
maxbuf <= max(cache_size) */ 3715 } 3716 3717 static void 3718 kmem_cache_init(int pass, int use_large_pages) 3719 { 3720 int i; 3721 size_t maxbuf; 3722 kmem_magtype_t *mtp; 3723 3724 for (i = 0; i < sizeof (kmem_magtype) / sizeof (*mtp); i++) { 3725 char name[KMEM_CACHE_NAMELEN + 1]; 3726 3727 mtp = &kmem_magtype[i]; 3728 (void) sprintf(name, "kmem_magazine_%d", mtp->mt_magsize); 3729 mtp->mt_cache = kmem_cache_create(name, 3730 (mtp->mt_magsize + 1) * sizeof (void *), 3731 mtp->mt_align, NULL, NULL, NULL, NULL, 3732 kmem_msb_arena, KMC_NOHASH); 3733 } 3734 3735 kmem_slab_cache = kmem_cache_create("kmem_slab_cache", 3736 sizeof (kmem_slab_t), 0, NULL, NULL, NULL, NULL, 3737 kmem_msb_arena, KMC_NOHASH); 3738 3739 kmem_bufctl_cache = kmem_cache_create("kmem_bufctl_cache", 3740 sizeof (kmem_bufctl_t), 0, NULL, NULL, NULL, NULL, 3741 kmem_msb_arena, KMC_NOHASH); 3742 3743 kmem_bufctl_audit_cache = kmem_cache_create("kmem_bufctl_audit_cache", 3744 sizeof (kmem_bufctl_audit_t), 0, NULL, NULL, NULL, NULL, 3745 kmem_msb_arena, KMC_NOHASH); 3746 3747 if (pass == 2) { 3748 kmem_va_arena = vmem_create("kmem_va", 3749 NULL, 0, PAGESIZE, 3750 vmem_alloc, vmem_free, heap_arena, 3751 8 * PAGESIZE, VM_SLEEP); 3752 3753 if (use_large_pages) { 3754 kmem_default_arena = vmem_xcreate("kmem_default", 3755 NULL, 0, PAGESIZE, 3756 segkmem_alloc_lp, segkmem_free_lp, kmem_va_arena, 3757 0, VM_SLEEP); 3758 } else { 3759 kmem_default_arena = vmem_create("kmem_default", 3760 NULL, 0, PAGESIZE, 3761 segkmem_alloc, segkmem_free, kmem_va_arena, 3762 0, VM_SLEEP); 3763 } 3764 3765 /* Figure out what our maximum cache size is */ 3766 maxbuf = kmem_max_cached; 3767 if (maxbuf <= KMEM_MAXBUF) { 3768 maxbuf = 0; 3769 kmem_max_cached = KMEM_MAXBUF; 3770 } else { 3771 size_t size = 0; 3772 size_t max = 3773 sizeof (kmem_big_alloc_sizes) / sizeof (int); 3774 /* 3775 * Round maxbuf up to an existing cache size. If maxbuf 3776 * is larger than the largest cache, we truncate it to 3777 * the largest cache's size. 3778 */ 3779 for (i = 0; i < max; i++) { 3780 size = kmem_big_alloc_sizes[i]; 3781 if (maxbuf <= size) 3782 break; 3783 } 3784 kmem_max_cached = maxbuf = size; 3785 } 3786 3787 /* 3788 * The big alloc table may not be completely overwritten, so 3789 * we clear out any stale cache pointers from the first pass. 3790 */ 3791 bzero(kmem_big_alloc_table, sizeof (kmem_big_alloc_table)); 3792 } else { 3793 /* 3794 * During the first pass, the kmem_alloc_* caches 3795 * are treated as metadata. 3796 */ 3797 kmem_default_arena = kmem_msb_arena; 3798 maxbuf = KMEM_BIG_MAXBUF_32BIT; 3799 } 3800 3801 /* 3802 * Set up the default caches to back kmem_alloc() 3803 */ 3804 kmem_alloc_caches_create( 3805 kmem_alloc_sizes, sizeof (kmem_alloc_sizes) / sizeof (int), 3806 kmem_alloc_table, KMEM_MAXBUF, KMEM_ALIGN_SHIFT); 3807 3808 kmem_alloc_caches_create( 3809 kmem_big_alloc_sizes, sizeof (kmem_big_alloc_sizes) / sizeof (int), 3810 kmem_big_alloc_table, maxbuf, KMEM_BIG_SHIFT); 3811 3812 kmem_big_alloc_table_max = maxbuf >> KMEM_BIG_SHIFT; 3813 } 3814 3815 void 3816 kmem_init(void) 3817 { 3818 kmem_cache_t *cp; 3819 int old_kmem_flags = kmem_flags; 3820 int use_large_pages = 0; 3821 size_t maxverify, minfirewall; 3822 3823 kstat_init(); 3824 3825 /* 3826 * Small-memory systems (< 24 MB) can't handle kmem_flags overhead. 3827 */ 3828 if (physmem < btop(24 << 20) && !(old_kmem_flags & KMF_STICKY)) 3829 kmem_flags = 0; 3830 3831 /* 3832 * Don't do firewalled allocations if the heap is less than 1TB 3833 * (i.e. 
on a 32-bit kernel) 3834 * The resulting VM_NEXTFIT allocations would create too much 3835 * fragmentation in a small heap. 3836 */ 3837 #if defined(_LP64) 3838 maxverify = minfirewall = PAGESIZE / 2; 3839 #else 3840 maxverify = minfirewall = ULONG_MAX; 3841 #endif 3842 3843 /* LINTED */ 3844 ASSERT(sizeof (kmem_cpu_cache_t) == KMEM_CPU_CACHE_SIZE); 3845 3846 list_create(&kmem_caches, sizeof (kmem_cache_t), 3847 offsetof(kmem_cache_t, cache_link)); 3848 3849 kmem_metadata_arena = vmem_create("kmem_metadata", NULL, 0, PAGESIZE, 3850 vmem_alloc, vmem_free, heap_arena, 8 * PAGESIZE, 3851 VM_SLEEP | VMC_NO_QCACHE); 3852 3853 kmem_msb_arena = vmem_create("kmem_msb", NULL, 0, 3854 PAGESIZE, segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, 3855 VM_SLEEP); 3856 3857 kmem_cache_arena = vmem_create("kmem_cache", NULL, 0, KMEM_ALIGN, 3858 segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP); 3859 3860 kmem_hash_arena = vmem_create("kmem_hash", NULL, 0, KMEM_ALIGN, 3861 segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP); 3862 3863 kmem_log_arena = vmem_create("kmem_log", NULL, 0, KMEM_ALIGN, 3864 segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP); 3865 3866 kmem_firewall_va_arena = vmem_create("kmem_firewall_va", 3867 NULL, 0, PAGESIZE, 3868 kmem_firewall_va_alloc, kmem_firewall_va_free, heap_arena, 3869 0, VM_SLEEP); 3870 3871 kmem_firewall_arena = vmem_create("kmem_firewall", NULL, 0, PAGESIZE, 3872 segkmem_alloc, segkmem_free, kmem_firewall_va_arena, 0, VM_SLEEP); 3873 3874 /* temporary oversize arena for mod_read_system_file */ 3875 kmem_oversize_arena = vmem_create("kmem_oversize", NULL, 0, PAGESIZE, 3876 segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP); 3877 3878 kmem_reap_interval = 15 * hz; 3879 3880 /* 3881 * Read /etc/system. This is a chicken-and-egg problem because 3882 * kmem_flags may be set in /etc/system, but mod_read_system_file() 3883 * needs to use the allocator. The simplest solution is to create 3884 * all the standard kmem caches, read /etc/system, destroy all the 3885 * caches we just created, and then create them all again in light 3886 * of the (possibly) new kmem_flags and other kmem tunables. 3887 */ 3888 kmem_cache_init(1, 0); 3889 3890 mod_read_system_file(boothowto & RB_ASKNAME); 3891 3892 while ((cp = list_tail(&kmem_caches)) != NULL) 3893 kmem_cache_destroy(cp); 3894 3895 vmem_destroy(kmem_oversize_arena); 3896 3897 if (old_kmem_flags & KMF_STICKY) 3898 kmem_flags = old_kmem_flags; 3899 3900 if (!(kmem_flags & KMF_AUDIT)) 3901 vmem_seg_size = offsetof(vmem_seg_t, vs_thread); 3902 3903 if (kmem_maxverify == 0) 3904 kmem_maxverify = maxverify; 3905 3906 if (kmem_minfirewall == 0) 3907 kmem_minfirewall = minfirewall; 3908 3909 /* 3910 * give segkmem a chance to figure out if we are using large pages 3911 * for the kernel heap 3912 */ 3913 use_large_pages = segkmem_lpsetup(); 3914 3915 /* 3916 * To protect against corruption, we keep the actual number of callers 3917 * that KMF_LITE records separate from the tunable. We arbitrarily clamp 3918 * to 16, since the overhead for small buffers quickly gets out of 3919 * hand. 3920 * 3921 * The real limit would depend on the needs of the largest KMC_NOHASH 3922 * cache.
3923 */ 3924 kmem_lite_count = MIN(MAX(0, kmem_lite_pcs), 16); 3925 kmem_lite_pcs = kmem_lite_count; 3926 3927 /* 3928 * Normally, we firewall oversized allocations when possible, but 3929 * if we are using large pages for kernel memory, and we don't have 3930 * any non-LITE debugging flags set, we want to allocate oversized 3931 * buffers from large pages, and so skip the firewalling. 3932 */ 3933 if (use_large_pages && 3934 ((kmem_flags & KMF_LITE) || !(kmem_flags & KMF_DEBUG))) { 3935 kmem_oversize_arena = vmem_xcreate("kmem_oversize", NULL, 0, 3936 PAGESIZE, segkmem_alloc_lp, segkmem_free_lp, heap_arena, 3937 0, VM_SLEEP); 3938 } else { 3939 kmem_oversize_arena = vmem_create("kmem_oversize", 3940 NULL, 0, PAGESIZE, 3941 segkmem_alloc, segkmem_free, kmem_minfirewall < ULONG_MAX? 3942 kmem_firewall_va_arena : heap_arena, 0, VM_SLEEP); 3943 } 3944 3945 kmem_cache_init(2, use_large_pages); 3946 3947 if (kmem_flags & (KMF_AUDIT | KMF_RANDOMIZE)) { 3948 if (kmem_transaction_log_size == 0) 3949 kmem_transaction_log_size = kmem_maxavail() / 50; 3950 kmem_transaction_log = kmem_log_init(kmem_transaction_log_size); 3951 } 3952 3953 if (kmem_flags & (KMF_CONTENTS | KMF_RANDOMIZE)) { 3954 if (kmem_content_log_size == 0) 3955 kmem_content_log_size = kmem_maxavail() / 50; 3956 kmem_content_log = kmem_log_init(kmem_content_log_size); 3957 } 3958 3959 kmem_failure_log = kmem_log_init(kmem_failure_log_size); 3960 3961 kmem_slab_log = kmem_log_init(kmem_slab_log_size); 3962 3963 /* 3964 * Initialize STREAMS message caches so allocb() is available. 3965 * This allows us to initialize the logging framework (cmn_err(9F), 3966 * strlog(9F), etc) so we can start recording messages. 3967 */ 3968 streams_msg_init(); 3969 3970 /* 3971 * Initialize the ZSD framework in Zones so modules loaded henceforth 3972 * can register their callbacks. 3973 */ 3974 zone_zsd_init(); 3975 3976 log_init(); 3977 taskq_init(); 3978 3979 /* 3980 * Warn about invalid or dangerous values of kmem_flags. 3981 * Always warn about unsupported values. 3982 */ 3983 if (((kmem_flags & ~(KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | 3984 KMF_CONTENTS | KMF_LITE)) != 0) || 3985 ((kmem_flags & KMF_LITE) && kmem_flags != KMF_LITE)) 3986 cmn_err(CE_WARN, "kmem_flags set to unsupported value 0x%x. " 3987 "See the Solaris Tunable Parameters Reference Manual.", 3988 kmem_flags); 3989 3990 #ifdef DEBUG 3991 if ((kmem_flags & KMF_DEBUG) == 0) 3992 cmn_err(CE_NOTE, "kmem debugging disabled."); 3993 #else 3994 /* 3995 * For non-debug kernels, the only "normal" flags are 0, KMF_LITE, 3996 * KMF_REDZONE, and KMF_CONTENTS (the last because it is only enabled 3997 * if KMF_AUDIT is set). We should warn the user about the performance 3998 * penalty of KMF_AUDIT or KMF_DEADBEEF if they are set and KMF_LITE 3999 * isn't set (since that disables AUDIT). 4000 */ 4001 if (!(kmem_flags & KMF_LITE) && 4002 (kmem_flags & (KMF_AUDIT | KMF_DEADBEEF)) != 0) 4003 cmn_err(CE_WARN, "High-overhead kmem debugging features " 4004 "enabled (kmem_flags = 0x%x). Performance degradation " 4005 "and large memory overhead possible. See the Solaris " 4006 "Tunable Parameters Reference Manual.", kmem_flags); 4007 #endif /* not DEBUG */ 4008 4009 kmem_cache_applyall(kmem_cache_magazine_enable, NULL, TQ_SLEEP); 4010 4011 kmem_ready = 1; 4012 4013 /* 4014 * Initialize the platform-specific aligned/DMA memory allocator. 4015 */ 4016 ka_init(); 4017 4018 /* 4019 * Initialize 32-bit ID cache. 
4020 */ 4021 id32_init(); 4022 4023 /* 4024 * Initialize the networking stack so modules loaded can 4025 * register their callbacks. 4026 */ 4027 netstack_init(); 4028 } 4029 4030 static void 4031 kmem_move_init(void) 4032 { 4033 kmem_defrag_cache = kmem_cache_create("kmem_defrag_cache", 4034 sizeof (kmem_defrag_t), 0, NULL, NULL, NULL, NULL, 4035 kmem_msb_arena, KMC_NOHASH); 4036 kmem_move_cache = kmem_cache_create("kmem_move_cache", 4037 sizeof (kmem_move_t), 0, NULL, NULL, NULL, NULL, 4038 kmem_msb_arena, KMC_NOHASH); 4039 4040 /* 4041 * kmem guarantees that move callbacks are sequential and that even 4042 * across multiple caches no two moves ever execute simultaneously. 4043 * Move callbacks are processed on a separate taskq so that client code 4044 * does not interfere with internal maintenance tasks. 4045 */ 4046 kmem_move_taskq = taskq_create_instance("kmem_move_taskq", 0, 1, 4047 minclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE); 4048 } 4049 4050 void 4051 kmem_thread_init(void) 4052 { 4053 kmem_move_init(); 4054 kmem_taskq = taskq_create_instance("kmem_taskq", 0, 1, minclsyspri, 4055 300, INT_MAX, TASKQ_PREPOPULATE); 4056 } 4057 4058 void 4059 kmem_mp_init(void) 4060 { 4061 mutex_enter(&cpu_lock); 4062 register_cpu_setup_func(kmem_cpu_setup, NULL); 4063 mutex_exit(&cpu_lock); 4064 4065 kmem_update_timeout(NULL); 4066 4067 taskq_mp_init(); 4068 } 4069 4070 /* 4071 * Return the slab of the allocated buffer, or NULL if the buffer is not 4072 * allocated. This function may be called with a known slab address to determine 4073 * whether or not the buffer is allocated, or with a NULL slab address to obtain 4074 * an allocated buffer's slab. 4075 */ 4076 static kmem_slab_t * 4077 kmem_slab_allocated(kmem_cache_t *cp, kmem_slab_t *sp, void *buf) 4078 { 4079 kmem_bufctl_t *bcp, *bufbcp; 4080 4081 ASSERT(MUTEX_HELD(&cp->cache_lock)); 4082 ASSERT(sp == NULL || KMEM_SLAB_MEMBER(sp, buf)); 4083 4084 if (cp->cache_flags & KMF_HASH) { 4085 for (bcp = *KMEM_HASH(cp, buf); 4086 (bcp != NULL) && (bcp->bc_addr != buf); 4087 bcp = bcp->bc_next) { 4088 continue; 4089 } 4090 ASSERT(sp != NULL && bcp != NULL ? sp == bcp->bc_slab : 1); 4091 return (bcp == NULL ? NULL : bcp->bc_slab); 4092 } 4093 4094 if (sp == NULL) { 4095 sp = KMEM_SLAB(cp, buf); 4096 } 4097 bufbcp = KMEM_BUFCTL(cp, buf); 4098 for (bcp = sp->slab_head; 4099 (bcp != NULL) && (bcp != bufbcp); 4100 bcp = bcp->bc_next) { 4101 continue; 4102 } 4103 return (bcp == NULL ? sp : NULL); 4104 } 4105 4106 static boolean_t 4107 kmem_slab_is_reclaimable(kmem_cache_t *cp, kmem_slab_t *sp, int flags) 4108 { 4109 long refcnt; 4110 4111 ASSERT(cp->cache_defrag != NULL); 4112 4113 /* If we're desperate, we don't care if the client said NO. */ 4114 refcnt = sp->slab_refcnt; 4115 if (flags & KMM_DESPERATE) { 4116 return (refcnt < sp->slab_chunks); /* any partial */ 4117 } 4118 4119 if (sp->slab_flags & KMEM_SLAB_NOMOVE) { 4120 return (B_FALSE); 4121 } 4122 4123 if (kmem_move_any_partial) { 4124 return (refcnt < sp->slab_chunks); 4125 } 4126 4127 if ((refcnt == 1) && (refcnt < sp->slab_chunks)) { 4128 return (B_TRUE); 4129 } 4130 4131 /* 4132 * The reclaim threshold is adjusted at each kmem_cache_scan() so that 4133 * slabs with a progressively higher percentage of used buffers can be 4134 * reclaimed until the cache as a whole is no longer fragmented. 
4135 * 4136 * sp->slab_refcnt kmd_reclaim_numer 4137 * --------------- < ------------------ 4138 * sp->slab_chunks KMEM_VOID_FRACTION 4139 */ 4140 return ((refcnt * KMEM_VOID_FRACTION) < 4141 (sp->slab_chunks * cp->cache_defrag->kmd_reclaim_numer)); 4142 } 4143 4144 static void * 4145 kmem_hunt_mag(kmem_cache_t *cp, kmem_magazine_t *m, int n, void *buf, 4146 void *tbuf) 4147 { 4148 int i; /* magazine round index */ 4149 4150 for (i = 0; i < n; i++) { 4151 if (buf == m->mag_round[i]) { 4152 if (cp->cache_flags & KMF_BUFTAG) { 4153 (void) kmem_cache_free_debug(cp, tbuf, 4154 caller()); 4155 } 4156 m->mag_round[i] = tbuf; 4157 return (buf); 4158 } 4159 } 4160 4161 return (NULL); 4162 } 4163 4164 /* 4165 * Hunt the magazine layer for the given buffer. If found, the buffer is 4166 * removed from the magazine layer and returned, otherwise NULL is returned. 4167 * The returned buffer is in the free, constructed state. 4168 */ 4169 static void * 4170 kmem_hunt_mags(kmem_cache_t *cp, void *buf) 4171 { 4172 kmem_cpu_cache_t *ccp; 4173 kmem_magazine_t *m; 4174 int cpu_seqid; 4175 int n; /* magazine rounds */ 4176 void *tbuf; /* temporary swap buffer */ 4177 4178 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 4179 4180 /* 4181 * Allocate a buffer to swap with the one we hope to pull out of a 4182 * magazine when found. 4183 */ 4184 tbuf = kmem_cache_alloc(cp, KM_NOSLEEP); 4185 if (tbuf == NULL) { 4186 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_alloc_fail); 4187 return (NULL); 4188 } 4189 if (tbuf == buf) { 4190 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_lucky); 4191 if (cp->cache_flags & KMF_BUFTAG) { 4192 (void) kmem_cache_free_debug(cp, buf, caller()); 4193 } 4194 return (buf); 4195 } 4196 4197 /* Hunt the depot. */ 4198 mutex_enter(&cp->cache_depot_lock); 4199 n = cp->cache_magtype->mt_magsize; 4200 for (m = cp->cache_full.ml_list; m != NULL; m = m->mag_next) { 4201 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) { 4202 mutex_exit(&cp->cache_depot_lock); 4203 return (buf); 4204 } 4205 } 4206 mutex_exit(&cp->cache_depot_lock); 4207 4208 /* Hunt the per-CPU magazines. */ 4209 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 4210 ccp = &cp->cache_cpu[cpu_seqid]; 4211 4212 mutex_enter(&ccp->cc_lock); 4213 m = ccp->cc_loaded; 4214 n = ccp->cc_rounds; 4215 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) { 4216 mutex_exit(&ccp->cc_lock); 4217 return (buf); 4218 } 4219 m = ccp->cc_ploaded; 4220 n = ccp->cc_prounds; 4221 if (kmem_hunt_mag(cp, m, n, buf, tbuf) != NULL) { 4222 mutex_exit(&ccp->cc_lock); 4223 return (buf); 4224 } 4225 mutex_exit(&ccp->cc_lock); 4226 } 4227 4228 kmem_cache_free(cp, tbuf); 4229 return (NULL); 4230 } 4231 4232 /* 4233 * May be called from the kmem_move_taskq, from kmem_cache_move_notify_task(), 4234 * or when the buffer is freed.
4235 */ 4236 static void 4237 kmem_slab_move_yes(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf) 4238 { 4239 ASSERT(MUTEX_HELD(&cp->cache_lock)); 4240 ASSERT(KMEM_SLAB_MEMBER(sp, from_buf)); 4241 4242 if (!KMEM_SLAB_IS_PARTIAL(sp)) { 4243 return; 4244 } 4245 4246 if (sp->slab_flags & KMEM_SLAB_NOMOVE) { 4247 if (KMEM_SLAB_OFFSET(sp, from_buf) == sp->slab_stuck_offset) { 4248 avl_remove(&cp->cache_partial_slabs, sp); 4249 sp->slab_flags &= ~KMEM_SLAB_NOMOVE; 4250 sp->slab_stuck_offset = (uint32_t)-1; 4251 avl_add(&cp->cache_partial_slabs, sp); 4252 } 4253 } else { 4254 sp->slab_later_count = 0; 4255 sp->slab_stuck_offset = (uint32_t)-1; 4256 } 4257 } 4258 4259 static void 4260 kmem_slab_move_no(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf) 4261 { 4262 ASSERT(taskq_member(kmem_move_taskq, curthread)); 4263 ASSERT(MUTEX_HELD(&cp->cache_lock)); 4264 ASSERT(KMEM_SLAB_MEMBER(sp, from_buf)); 4265 4266 if (!KMEM_SLAB_IS_PARTIAL(sp)) { 4267 return; 4268 } 4269 4270 avl_remove(&cp->cache_partial_slabs, sp); 4271 sp->slab_later_count = 0; 4272 sp->slab_flags |= KMEM_SLAB_NOMOVE; 4273 sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp, from_buf); 4274 avl_add(&cp->cache_partial_slabs, sp); 4275 } 4276 4277 static void kmem_move_end(kmem_cache_t *, kmem_move_t *); 4278 4279 /* 4280 * The move callback takes two buffer addresses, the buffer to be moved, and a 4281 * newly allocated and constructed buffer selected by kmem as the destination. 4282 * It also takes the size of the buffer and an optional user argument specified 4283 * at cache creation time. kmem guarantees that the buffer to be moved has not 4284 * been unmapped by the virtual memory subsystem. Beyond that, it cannot 4285 * guarantee the present whereabouts of the buffer to be moved, so it is up to 4286 * the client to safely determine whether or not it is still using the buffer. 4287 * The client must not free either of the buffers passed to the move callback, 4288 * since kmem wants to free them directly to the slab layer. The client response 4289 * tells kmem which of the two buffers to free: 4290 * 4291 * YES kmem frees the old buffer (the move was successful) 4292 * NO kmem frees the new buffer, marks the slab of the old buffer 4293 * non-reclaimable to avoid bothering the client again 4294 * LATER kmem frees the new buffer, increments slab_later_count 4295 * DONT_KNOW kmem frees the new buffer, searches mags for the old buffer 4296 * DONT_NEED kmem frees both the old buffer and the new buffer 4297 * 4298 * The pending callback argument now being processed contains both of the 4299 * buffers (old and new) passed to the move callback function, the slab of the 4300 * old buffer, and flags related to the move request, such as whether or not the 4301 * system was desperate for memory. 4302 */ 4303 static void 4304 kmem_move_buffer(kmem_move_t *callback) 4305 { 4306 kmem_cbrc_t response; 4307 kmem_slab_t *sp = callback->kmm_from_slab; 4308 kmem_cache_t *cp = sp->slab_cache; 4309 boolean_t free_on_slab; 4310 4311 ASSERT(taskq_member(kmem_move_taskq, curthread)); 4312 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 4313 ASSERT(KMEM_SLAB_MEMBER(sp, callback->kmm_from_buf)); 4314 4315 /* 4316 * The number of allocated buffers on the slab may have changed since we 4317 * last checked the slab's reclaimability (when the pending move was 4318 * enqueued), or the client may have responded NO when asked to move 4319 * another buffer on the same slab. 
4320 */ 4321 if (!kmem_slab_is_reclaimable(cp, sp, callback->kmm_flags)) { 4322 KMEM_STAT_ADD(kmem_move_stats.kms_no_longer_reclaimable); 4323 KMEM_STAT_COND_ADD((callback->kmm_flags & KMM_NOTIFY), 4324 kmem_move_stats.kms_notify_no_longer_reclaimable); 4325 kmem_slab_free(cp, callback->kmm_to_buf); 4326 kmem_move_end(cp, callback); 4327 return; 4328 } 4329 4330 /* 4331 * Hunting magazines is expensive, so we'll wait to do that until the 4332 * client responds KMEM_CBRC_DONT_KNOW. However, checking the slab layer 4333 * is cheap, so we might as well do that here in case we can avoid 4334 * bothering the client. 4335 */ 4336 mutex_enter(&cp->cache_lock); 4337 free_on_slab = (kmem_slab_allocated(cp, sp, 4338 callback->kmm_from_buf) == NULL); 4339 mutex_exit(&cp->cache_lock); 4340 4341 if (free_on_slab) { 4342 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_found_slab); 4343 kmem_slab_free(cp, callback->kmm_to_buf); 4344 kmem_move_end(cp, callback); 4345 return; 4346 } 4347 4348 if (cp->cache_flags & KMF_BUFTAG) { 4349 /* 4350 * Make kmem_cache_alloc_debug() apply the constructor for us. 4351 */ 4352 if (kmem_cache_alloc_debug(cp, callback->kmm_to_buf, 4353 KM_NOSLEEP, 1, caller()) != 0) { 4354 KMEM_STAT_ADD(kmem_move_stats.kms_alloc_fail); 4355 kmem_move_end(cp, callback); 4356 return; 4357 } 4358 } else if (cp->cache_constructor != NULL && 4359 cp->cache_constructor(callback->kmm_to_buf, cp->cache_private, 4360 KM_NOSLEEP) != 0) { 4361 atomic_add_64(&cp->cache_alloc_fail, 1); 4362 KMEM_STAT_ADD(kmem_move_stats.kms_constructor_fail); 4363 kmem_slab_free(cp, callback->kmm_to_buf); 4364 kmem_move_end(cp, callback); 4365 return; 4366 } 4367 4368 KMEM_STAT_ADD(kmem_move_stats.kms_callbacks); 4369 KMEM_STAT_COND_ADD((callback->kmm_flags & KMM_NOTIFY), 4370 kmem_move_stats.kms_notify_callbacks); 4371 cp->cache_defrag->kmd_callbacks++; 4372 cp->cache_defrag->kmd_thread = curthread; 4373 cp->cache_defrag->kmd_from_buf = callback->kmm_from_buf; 4374 cp->cache_defrag->kmd_to_buf = callback->kmm_to_buf; 4375 DTRACE_PROBE2(kmem__move__start, kmem_cache_t *, cp, kmem_move_t *, 4376 callback); 4377 4378 response = cp->cache_move(callback->kmm_from_buf, 4379 callback->kmm_to_buf, cp->cache_bufsize, cp->cache_private); 4380 4381 DTRACE_PROBE3(kmem__move__end, kmem_cache_t *, cp, kmem_move_t *, 4382 callback, kmem_cbrc_t, response); 4383 cp->cache_defrag->kmd_thread = NULL; 4384 cp->cache_defrag->kmd_from_buf = NULL; 4385 cp->cache_defrag->kmd_to_buf = NULL; 4386 4387 if (response == KMEM_CBRC_YES) { 4388 KMEM_STAT_ADD(kmem_move_stats.kms_yes); 4389 cp->cache_defrag->kmd_yes++; 4390 kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE); 4391 mutex_enter(&cp->cache_lock); 4392 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf); 4393 mutex_exit(&cp->cache_lock); 4394 kmem_move_end(cp, callback); 4395 return; 4396 } 4397 4398 switch (response) { 4399 case KMEM_CBRC_NO: 4400 KMEM_STAT_ADD(kmem_move_stats.kms_no); 4401 cp->cache_defrag->kmd_no++; 4402 mutex_enter(&cp->cache_lock); 4403 kmem_slab_move_no(cp, sp, callback->kmm_from_buf); 4404 mutex_exit(&cp->cache_lock); 4405 break; 4406 case KMEM_CBRC_LATER: 4407 KMEM_STAT_ADD(kmem_move_stats.kms_later); 4408 cp->cache_defrag->kmd_later++; 4409 mutex_enter(&cp->cache_lock); 4410 if (!KMEM_SLAB_IS_PARTIAL(sp)) { 4411 mutex_exit(&cp->cache_lock); 4412 break; 4413 } 4414 4415 if (++sp->slab_later_count >= KMEM_DISBELIEF) { 4416 KMEM_STAT_ADD(kmem_move_stats.kms_disbelief); 4417 kmem_slab_move_no(cp, sp, callback->kmm_from_buf); 4418 } else if (!(sp->slab_flags & 
KMEM_SLAB_NOMOVE)) { 4419 sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp, 4420 callback->kmm_from_buf); 4421 } 4422 mutex_exit(&cp->cache_lock); 4423 break; 4424 case KMEM_CBRC_DONT_NEED: 4425 KMEM_STAT_ADD(kmem_move_stats.kms_dont_need); 4426 cp->cache_defrag->kmd_dont_need++; 4427 kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE); 4428 mutex_enter(&cp->cache_lock); 4429 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf); 4430 mutex_exit(&cp->cache_lock); 4431 break; 4432 case KMEM_CBRC_DONT_KNOW: 4433 KMEM_STAT_ADD(kmem_move_stats.kms_dont_know); 4434 cp->cache_defrag->kmd_dont_know++; 4435 if (kmem_hunt_mags(cp, callback->kmm_from_buf) != NULL) { 4436 KMEM_STAT_ADD(kmem_move_stats.kms_hunt_found_mag); 4437 cp->cache_defrag->kmd_hunt_found++; 4438 kmem_slab_free_constructed(cp, callback->kmm_from_buf, 4439 B_TRUE); 4440 mutex_enter(&cp->cache_lock); 4441 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf); 4442 mutex_exit(&cp->cache_lock); 4443 } 4444 break; 4445 default: 4446 panic("'%s' (%p) unexpected move callback response %d\n", 4447 cp->cache_name, (void *)cp, response); 4448 } 4449 4450 kmem_slab_free_constructed(cp, callback->kmm_to_buf, B_FALSE); 4451 kmem_move_end(cp, callback); 4452 } 4453 4454 /* Return B_FALSE if there is insufficient memory for the move request. */ 4455 static boolean_t 4456 kmem_move_begin(kmem_cache_t *cp, kmem_slab_t *sp, void *buf, int flags) 4457 { 4458 void *to_buf; 4459 avl_index_t index; 4460 kmem_move_t *callback, *pending; 4461 4462 ASSERT(taskq_member(kmem_taskq, curthread)); 4463 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 4464 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING); 4465 4466 callback = kmem_cache_alloc(kmem_move_cache, KM_NOSLEEP); 4467 if (callback == NULL) { 4468 KMEM_STAT_ADD(kmem_move_stats.kms_callback_alloc_fail); 4469 return (B_FALSE); 4470 } 4471 4472 callback->kmm_from_slab = sp; 4473 callback->kmm_from_buf = buf; 4474 callback->kmm_flags = flags; 4475 4476 mutex_enter(&cp->cache_lock); 4477 4478 if (avl_numnodes(&cp->cache_partial_slabs) <= 1) { 4479 mutex_exit(&cp->cache_lock); 4480 kmem_cache_free(kmem_move_cache, callback); 4481 return (B_TRUE); /* there is no need for the move request */ 4482 } 4483 4484 pending = avl_find(&cp->cache_defrag->kmd_moves_pending, buf, &index); 4485 if (pending != NULL) { 4486 /* 4487 * If the move is already pending and we're desperate now, 4488 * update the move flags. 
4489 */ 4490 if (flags & KMM_DESPERATE) { 4491 pending->kmm_flags |= KMM_DESPERATE; 4492 } 4493 mutex_exit(&cp->cache_lock); 4494 KMEM_STAT_ADD(kmem_move_stats.kms_already_pending); 4495 kmem_cache_free(kmem_move_cache, callback); 4496 return (B_TRUE); 4497 } 4498 4499 to_buf = kmem_slab_alloc_impl(cp, avl_first(&cp->cache_partial_slabs)); 4500 callback->kmm_to_buf = to_buf; 4501 avl_insert(&cp->cache_defrag->kmd_moves_pending, callback, index); 4502 4503 mutex_exit(&cp->cache_lock); 4504 4505 if (!taskq_dispatch(kmem_move_taskq, (task_func_t *)kmem_move_buffer, 4506 callback, TQ_NOSLEEP)) { 4507 KMEM_STAT_ADD(kmem_move_stats.kms_callback_taskq_fail); 4508 mutex_enter(&cp->cache_lock); 4509 avl_remove(&cp->cache_defrag->kmd_moves_pending, callback); 4510 mutex_exit(&cp->cache_lock); 4511 kmem_slab_free(cp, to_buf); 4512 kmem_cache_free(kmem_move_cache, callback); 4513 return (B_FALSE); 4514 } 4515 4516 return (B_TRUE); 4517 } 4518 4519 static void 4520 kmem_move_end(kmem_cache_t *cp, kmem_move_t *callback) 4521 { 4522 avl_index_t index; 4523 4524 ASSERT(cp->cache_defrag != NULL); 4525 ASSERT(taskq_member(kmem_move_taskq, curthread)); 4526 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 4527 4528 mutex_enter(&cp->cache_lock); 4529 VERIFY(avl_find(&cp->cache_defrag->kmd_moves_pending, 4530 callback->kmm_from_buf, &index) != NULL); 4531 avl_remove(&cp->cache_defrag->kmd_moves_pending, callback); 4532 if (avl_is_empty(&cp->cache_defrag->kmd_moves_pending)) { 4533 list_t *deadlist = &cp->cache_defrag->kmd_deadlist; 4534 kmem_slab_t *sp; 4535 4536 /* 4537 * The last pending move completed. Release all slabs from the 4538 * front of the dead list except for any slab at the tail that 4539 * needs to be released from the context of kmem_move_buffers(). 4540 * kmem deferred unmapping the buffers on these slabs in order 4541 * to guarantee that buffers passed to the move callback have 4542 * been touched only by kmem or by the client itself. 4543 */ 4544 while ((sp = list_remove_head(deadlist)) != NULL) { 4545 if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) { 4546 list_insert_tail(deadlist, sp); 4547 break; 4548 } 4549 cp->cache_defrag->kmd_deadcount--; 4550 cp->cache_slab_destroy++; 4551 mutex_exit(&cp->cache_lock); 4552 kmem_slab_destroy(cp, sp); 4553 KMEM_STAT_ADD(kmem_move_stats.kms_dead_slabs_freed); 4554 mutex_enter(&cp->cache_lock); 4555 } 4556 } 4557 mutex_exit(&cp->cache_lock); 4558 kmem_cache_free(kmem_move_cache, callback); 4559 } 4560 4561 /* 4562 * Move buffers from least used slabs first by scanning backwards from the end 4563 * of the partial slab list. Scan at most max_scan candidate slabs and move 4564 * buffers from at most max_slabs slabs (0 for all partial slabs in both cases). 4565 * If desperate to reclaim memory, move buffers from any partial slab, otherwise 4566 * skip slabs with a ratio of allocated buffers at or above the current 4567 * threshold. Return the number of unskipped slabs (at most max_slabs, -1 if the 4568 * scan is aborted) so that the caller can adjust the reclaimability threshold 4569 * depending on how many reclaimable slabs it finds. 4570 * 4571 * kmem_move_buffers() drops and reacquires cache_lock every time it issues a 4572 * move request, since it is not valid for kmem_move_begin() to call 4573 * kmem_cache_alloc() or taskq_dispatch() with cache_lock held. 
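 *
 * For example (hypothetical numbers): with kmd_reclaim_numer at its
 * initial value of 1 and KMEM_VOID_FRACTION at 8, a slab with 2 of its
 * 24 chunks allocated (1/12 used) is a candidate, while a slab with 4
 * of its 24 chunks allocated (1/6 used) is skipped until repeated
 * failures to find candidates raise the threshold.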
4574 */ 4575 static int 4576 kmem_move_buffers(kmem_cache_t *cp, size_t max_scan, size_t max_slabs, 4577 int flags) 4578 { 4579 kmem_slab_t *sp; 4580 void *buf; 4581 int i, j; /* slab index, buffer index */ 4582 int s; /* reclaimable slabs */ 4583 int b; /* allocated (movable) buffers on reclaimable slab */ 4584 boolean_t success; 4585 int refcnt; 4586 int nomove; 4587 4588 ASSERT(taskq_member(kmem_taskq, curthread)); 4589 ASSERT(MUTEX_HELD(&cp->cache_lock)); 4590 ASSERT(kmem_move_cache != NULL); 4591 ASSERT(cp->cache_move != NULL && cp->cache_defrag != NULL); 4592 ASSERT(avl_numnodes(&cp->cache_partial_slabs) > 1); 4593 4594 if (kmem_move_blocked) { 4595 return (0); 4596 } 4597 4598 if (kmem_move_fulltilt) { 4599 max_slabs = 0; 4600 flags |= KMM_DESPERATE; 4601 } 4602 4603 if (max_scan == 0 || (flags & KMM_DESPERATE)) { 4604 /* 4605 * Scan as many slabs as needed to find the desired number of 4606 * candidate slabs. 4607 */ 4608 max_scan = (size_t)-1; 4609 } 4610 4611 if (max_slabs == 0 || (flags & KMM_DESPERATE)) { 4612 /* Find as many candidate slabs as possible. */ 4613 max_slabs = (size_t)-1; 4614 } 4615 4616 sp = avl_last(&cp->cache_partial_slabs); 4617 ASSERT(sp != NULL && KMEM_SLAB_IS_PARTIAL(sp)); 4618 for (i = 0, s = 0; (i < max_scan) && (s < max_slabs) && 4619 (sp != avl_first(&cp->cache_partial_slabs)); 4620 sp = AVL_PREV(&cp->cache_partial_slabs, sp), i++) { 4621 4622 if (!kmem_slab_is_reclaimable(cp, sp, flags)) { 4623 continue; 4624 } 4625 s++; 4626 4627 /* Look for allocated buffers to move. */ 4628 for (j = 0, b = 0, buf = sp->slab_base; 4629 (j < sp->slab_chunks) && (b < sp->slab_refcnt); 4630 buf = (((char *)buf) + cp->cache_chunksize), j++) { 4631 4632 if (kmem_slab_allocated(cp, sp, buf) == NULL) { 4633 continue; 4634 } 4635 4636 b++; 4637 4638 /* 4639 * Prevent the slab from being destroyed while we drop 4640 * cache_lock and while the pending move is not yet 4641 * registered. Flag the pending move while 4642 * kmd_moves_pending may still be empty, since we can't 4643 * yet rely on a non-zero pending move count to prevent 4644 * the slab from being destroyed. 4645 */ 4646 ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING)); 4647 sp->slab_flags |= KMEM_SLAB_MOVE_PENDING; 4648 /* 4649 * Recheck refcnt and nomove after reacquiring the lock, 4650 * since these control the order of partial slabs, and 4651 * we want to know if we can pick up the scan where we 4652 * left off. 4653 */ 4654 refcnt = sp->slab_refcnt; 4655 nomove = (sp->slab_flags & KMEM_SLAB_NOMOVE); 4656 mutex_exit(&cp->cache_lock); 4657 4658 success = kmem_move_begin(cp, sp, buf, flags); 4659 4660 /* 4661 * Now, before the lock is reacquired, kmem could 4662 * process all pending move requests and purge the 4663 * deadlist, so that upon reacquiring the lock, sp has 4664 * been remapped. Therefore, the KMEM_SLAB_MOVE_PENDING 4665 * flag causes the slab to be put at the end of the 4666 * deadlist and prevents it from being purged, since we 4667 * plan to destroy it here after reacquiring the lock. 4668 */ 4669 mutex_enter(&cp->cache_lock); 4670 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING); 4671 sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING; 4672 4673 /* 4674 * Destroy the slab now if it was completely freed while 4675 * we dropped cache_lock. 
4676 */ 4677 if (sp->slab_refcnt == 0) { 4678 list_t *deadlist = 4679 &cp->cache_defrag->kmd_deadlist; 4680 4681 ASSERT(!list_is_empty(deadlist)); 4682 ASSERT(list_link_active((list_node_t *) 4683 &sp->slab_link)); 4684 4685 list_remove(deadlist, sp); 4686 cp->cache_defrag->kmd_deadcount--; 4687 cp->cache_slab_destroy++; 4688 mutex_exit(&cp->cache_lock); 4689 kmem_slab_destroy(cp, sp); 4690 KMEM_STAT_ADD(kmem_move_stats. 4691 kms_dead_slabs_freed); 4692 KMEM_STAT_ADD(kmem_move_stats. 4693 kms_endscan_slab_destroyed); 4694 mutex_enter(&cp->cache_lock); 4695 /* 4696 * Since we can't pick up the scan where we left 4697 * off, abort the scan and say nothing about the 4698 * number of reclaimable slabs. 4699 */ 4700 return (-1); 4701 } 4702 4703 if (!success) { 4704 /* 4705 * Abort the scan if there is not enough memory 4706 * for the request and say nothing about the 4707 * number of reclaimable slabs. 4708 */ 4709 KMEM_STAT_ADD( 4710 kmem_move_stats.kms_endscan_nomem); 4711 return (-1); 4712 } 4713 4714 /* 4715 * The slab may have been completely allocated while the 4716 * lock was dropped. 4717 */ 4718 if (KMEM_SLAB_IS_ALL_USED(sp)) { 4719 KMEM_STAT_ADD( 4720 kmem_move_stats.kms_endscan_slab_all_used); 4721 return (-1); 4722 } 4723 4724 /* 4725 * The slab's position changed while the lock was 4726 * dropped, so we don't know where we are in the 4727 * sequence any more. 4728 */ 4729 if (sp->slab_refcnt != refcnt) { 4730 KMEM_STAT_ADD( 4731 kmem_move_stats.kms_endscan_refcnt_changed); 4732 return (-1); 4733 } 4734 if ((sp->slab_flags & KMEM_SLAB_NOMOVE) != nomove) { 4735 KMEM_STAT_ADD( 4736 kmem_move_stats.kms_endscan_nomove_changed); 4737 return (-1); 4738 } 4739 4740 /* 4741 * Generating a move request allocates a destination 4742 * buffer from the slab layer, bumping the first slab if 4743 * it is completely allocated. 4744 */ 4745 ASSERT(!avl_is_empty(&cp->cache_partial_slabs)); 4746 if (sp == avl_first(&cp->cache_partial_slabs)) { 4747 goto end_scan; 4748 } 4749 } 4750 } 4751 end_scan: 4752 4753 KMEM_STAT_COND_ADD(sp == avl_first(&cp->cache_partial_slabs), 4754 kmem_move_stats.kms_endscan_freelist); 4755 4756 return (s); 4757 } 4758 4759 typedef struct kmem_move_notify_args { 4760 kmem_cache_t *kmna_cache; 4761 void *kmna_buf; 4762 } kmem_move_notify_args_t; 4763 4764 static void 4765 kmem_cache_move_notify_task(void *arg) 4766 { 4767 kmem_move_notify_args_t *args = arg; 4768 kmem_cache_t *cp = args->kmna_cache; 4769 void *buf = args->kmna_buf; 4770 kmem_slab_t *sp; 4771 4772 ASSERT(taskq_member(kmem_taskq, curthread)); 4773 ASSERT(list_link_active(&cp->cache_link)); 4774 4775 kmem_free(args, sizeof (kmem_move_notify_args_t)); 4776 mutex_enter(&cp->cache_lock); 4777 sp = kmem_slab_allocated(cp, NULL, buf); 4778 4779 /* Ignore the notification if the buffer is no longer allocated. */ 4780 if (sp == NULL) { 4781 mutex_exit(&cp->cache_lock); 4782 return; 4783 } 4784 4785 /* Ignore the notification if there's no reason to move the buffer. */ 4786 if (avl_numnodes(&cp->cache_partial_slabs) > 1) { 4787 /* 4788 * So far the notification is not ignored. Ignore the 4789 * notification if the slab is not marked by an earlier refusal 4790 * to move a buffer. 
4791 */ 4792 if (!(sp->slab_flags & KMEM_SLAB_NOMOVE) && 4793 (sp->slab_later_count == 0)) { 4794 mutex_exit(&cp->cache_lock); 4795 return; 4796 } 4797 4798 kmem_slab_move_yes(cp, sp, buf); 4799 ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING)); 4800 sp->slab_flags |= KMEM_SLAB_MOVE_PENDING; 4801 mutex_exit(&cp->cache_lock); 4802 /* see kmem_move_buffers() about dropping the lock */ 4803 (void) kmem_move_begin(cp, sp, buf, KMM_NOTIFY); 4804 mutex_enter(&cp->cache_lock); 4805 ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING); 4806 sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING; 4807 if (sp->slab_refcnt == 0) { 4808 list_t *deadlist = &cp->cache_defrag->kmd_deadlist; 4809 4810 ASSERT(!list_is_empty(deadlist)); 4811 ASSERT(list_link_active((list_node_t *) 4812 &sp->slab_link)); 4813 4814 list_remove(deadlist, sp); 4815 cp->cache_defrag->kmd_deadcount--; 4816 cp->cache_slab_destroy++; 4817 mutex_exit(&cp->cache_lock); 4818 kmem_slab_destroy(cp, sp); 4819 KMEM_STAT_ADD(kmem_move_stats.kms_dead_slabs_freed); 4820 return; 4821 } 4822 } else { 4823 kmem_slab_move_yes(cp, sp, buf); 4824 } 4825 mutex_exit(&cp->cache_lock); 4826 } 4827 4828 void 4829 kmem_cache_move_notify(kmem_cache_t *cp, void *buf) 4830 { 4831 kmem_move_notify_args_t *args; 4832 4833 KMEM_STAT_ADD(kmem_move_stats.kms_notify); 4834 args = kmem_alloc(sizeof (kmem_move_notify_args_t), KM_NOSLEEP); 4835 if (args != NULL) { 4836 args->kmna_cache = cp; 4837 args->kmna_buf = buf; 4838 if (!taskq_dispatch(kmem_taskq, 4839 (task_func_t *)kmem_cache_move_notify_task, args, 4840 TQ_NOSLEEP)) 4841 kmem_free(args, sizeof (kmem_move_notify_args_t)); 4842 } 4843 } 4844 4845 static void 4846 kmem_cache_defrag(kmem_cache_t *cp) 4847 { 4848 size_t n; 4849 4850 ASSERT(cp->cache_defrag != NULL); 4851 4852 mutex_enter(&cp->cache_lock); 4853 n = avl_numnodes(&cp->cache_partial_slabs); 4854 if (n > 1) { 4855 /* kmem_move_buffers() drops and reacquires cache_lock */ 4856 (void) kmem_move_buffers(cp, n, 0, KMM_DESPERATE); 4857 KMEM_STAT_ADD(kmem_move_stats.kms_defrags); 4858 } 4859 mutex_exit(&cp->cache_lock); 4860 } 4861 4862 /* Is this cache above the fragmentation threshold? */ 4863 static boolean_t 4864 kmem_cache_frag_threshold(kmem_cache_t *cp, uint64_t nfree) 4865 { 4866 if (avl_numnodes(&cp->cache_partial_slabs) <= 1) 4867 return (B_FALSE); 4868 4869 /* 4870 * nfree kmem_frag_numer 4871 * ------------------ > --------------- 4872 * cp->cache_buftotal kmem_frag_denom 4873 */ 4874 return ((nfree * kmem_frag_denom) > 4875 (cp->cache_buftotal * kmem_frag_numer)); 4876 } 4877 4878 static boolean_t 4879 kmem_cache_is_fragmented(kmem_cache_t *cp, boolean_t *doreap) 4880 { 4881 boolean_t fragmented; 4882 uint64_t nfree; 4883 4884 ASSERT(MUTEX_HELD(&cp->cache_lock)); 4885 *doreap = B_FALSE; 4886 4887 if (!kmem_move_fulltilt && ((cp->cache_complete_slab_count + 4888 avl_numnodes(&cp->cache_partial_slabs)) < kmem_frag_minslabs)) 4889 return (B_FALSE); 4890 4891 nfree = cp->cache_bufslab; 4892 fragmented = kmem_cache_frag_threshold(cp, nfree); 4893 /* 4894 * Free buffers in the magazine layer appear allocated from the point of 4895 * view of the slab layer. We want to know if the slab layer would 4896 * appear fragmented if we included free buffers from magazines that 4897 * have fallen out of the working set. 
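 * For example (hypothetical numbers): a cache just below the threshold
 * in the slab layer that also holds three reapable full magazines of 15
 * rounds each has those 45 buffers added to nfree before the threshold
 * is retested; if the retest crosses it, *doreap tells the caller to
 * reap the depot working set.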
4898 */ 4899 if (!fragmented) { 4900 long reap; 4901 4902 mutex_enter(&cp->cache_depot_lock); 4903 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min); 4904 reap = MIN(reap, cp->cache_full.ml_total); 4905 mutex_exit(&cp->cache_depot_lock); 4906 4907 nfree += ((uint64_t)reap * cp->cache_magtype->mt_magsize); 4908 if (kmem_cache_frag_threshold(cp, nfree)) { 4909 *doreap = B_TRUE; 4910 } 4911 } 4912 4913 return (fragmented); 4914 } 4915 4916 /* Called periodically from kmem_taskq */ 4917 static void 4918 kmem_cache_scan(kmem_cache_t *cp) 4919 { 4920 boolean_t reap = B_FALSE; 4921 4922 ASSERT(taskq_member(kmem_taskq, curthread)); 4923 ASSERT(cp->cache_defrag != NULL); 4924 4925 mutex_enter(&cp->cache_lock); 4926 4927 if (kmem_cache_is_fragmented(cp, &reap)) { 4928 kmem_defrag_t *kmd = cp->cache_defrag; 4929 int slabs_found; /* signed: kmem_move_buffers() may return -1 */ 4930 4931 /* 4932 * Consolidate reclaimable slabs from the end of the partial 4933 * slab list (scan at most kmem_reclaim_scan_range slabs to find 4934 * reclaimable slabs). Keep track of how many candidate slabs we 4935 * looked for and how many we actually found so we can adjust 4936 * the definition of a candidate slab if we're having trouble 4937 * finding them. 4938 * 4939 * kmem_move_buffers() drops and reacquires cache_lock. 4940 */ 4941 slabs_found = kmem_move_buffers(cp, kmem_reclaim_scan_range, 4942 kmem_reclaim_max_slabs, 0); 4943 if (slabs_found >= 0) { 4944 kmd->kmd_slabs_sought += kmem_reclaim_max_slabs; 4945 kmd->kmd_slabs_found += slabs_found; 4946 } 4947 4948 if (++kmd->kmd_scans >= kmem_reclaim_scan_range) { 4949 kmd->kmd_scans = 0; 4950 4951 /* 4952 * If we had difficulty finding candidate slabs in 4953 * previous scans, adjust the threshold so that 4954 * candidates are easier to find. 4955 */ 4956 if (kmd->kmd_slabs_found == kmd->kmd_slabs_sought) { 4957 kmem_adjust_reclaim_threshold(kmd, -1); 4958 } else if ((kmd->kmd_slabs_found * 2) < 4959 kmd->kmd_slabs_sought) { 4960 kmem_adjust_reclaim_threshold(kmd, 1); 4961 } 4962 kmd->kmd_slabs_sought = 0; 4963 kmd->kmd_slabs_found = 0; 4964 } 4965 } else { 4966 kmem_reset_reclaim_threshold(cp->cache_defrag); 4967 #ifdef DEBUG 4968 if (avl_numnodes(&cp->cache_partial_slabs) > 1) { 4969 /* 4970 * In a debug kernel we want the consolidator to 4971 * run occasionally even when there is plenty of 4972 * memory. 4973 */ 4974 uint32_t debug_rand; 4975 4976 (void) random_get_bytes((uint8_t *)&debug_rand, 4); 4977 if (!kmem_move_noreap && 4978 ((debug_rand % kmem_mtb_reap) == 0)) { 4979 mutex_exit(&cp->cache_lock); 4980 kmem_cache_reap(cp); 4981 KMEM_STAT_ADD(kmem_move_stats.kms_debug_reaps); 4982 return; 4983 } else if ((debug_rand % kmem_mtb_move) == 0) { 4984 (void) kmem_move_buffers(cp, 4985 kmem_reclaim_scan_range, 1, 0); 4986 KMEM_STAT_ADD(kmem_move_stats. 4987 kms_debug_move_scans); 4988 } 4989 } 4990 #endif /* DEBUG */ 4991 } 4992 4993 mutex_exit(&cp->cache_lock); 4994 4995 if (reap) { 4996 KMEM_STAT_ADD(kmem_move_stats.kms_scan_depot_ws_reaps); 4997 kmem_depot_ws_reap(cp); 4998 } 4999 } 5000