/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 1994, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2012, 2017 by Delphix. All rights reserved.
 * Copyright 2015 Nexenta Systems, Inc. All rights reserved.
 */

/*
 * Kernel memory allocator, as described in the following two papers and a
 * statement about the consolidator:
 *
 * Jeff Bonwick,
 * The Slab Allocator: An Object-Caching Kernel Memory Allocator.
 * Proceedings of the Summer 1994 Usenix Conference.
 * Available as /shared/sac/PSARC/1994/028/materials/kmem.pdf.
 *
 * Jeff Bonwick and Jonathan Adams,
 * Magazines and vmem: Extending the Slab Allocator to Many CPUs and
 * Arbitrary Resources.
 * Proceedings of the 2001 Usenix Conference.
 * Available as /shared/sac/PSARC/2000/550/materials/vmem.pdf.
 *
 * kmem Slab Consolidator Big Theory Statement:
 *
 * 1. Motivation
 *
 * As stated in Bonwick94, slabs provide the following advantages over other
 * allocation structures in terms of memory fragmentation:
 *
 *  - Internal fragmentation (per-buffer wasted space) is minimal.
 *  - Severe external fragmentation (unused buffers on the free list) is
 *    unlikely.
 *
 * Segregating objects by size eliminates one source of external
 * fragmentation, and according to Bonwick:
 *
 *   The other reason that slabs reduce external fragmentation is that all
 *   objects in a slab are of the same type, so they have the same lifetime
 *   distribution. The resulting segregation of short-lived and long-lived
 *   objects at slab granularity reduces the likelihood of an entire page
 *   being held hostage due to a single long-lived allocation [Barrett93,
 *   Hanson90].
 *
 * While unlikely, severe external fragmentation remains possible. Clients
 * that allocate both short- and long-lived objects from the same cache cannot
 * anticipate the distribution of long-lived objects within the allocator's
 * slab implementation. Even a small percentage of long-lived objects
 * distributed randomly across many slabs can lead to a worst case scenario
 * where the client frees the majority of its objects and the system gets back
 * almost none of the slabs. Despite the client doing what it reasonably can
 * to help the system reclaim memory, the allocator cannot shake free enough
 * slabs because of lonely allocations stubbornly hanging on. Although the
 * allocator is in a position to diagnose the fragmentation, there is nothing
 * that the allocator by itself can do about it.
 * It only takes a single allocated object to prevent an entire slab from
 * being reclaimed, and any object handed out by kmem_cache_alloc() is by
 * definition in the client's control. Conversely, although the client is in a
 * position to move a long-lived object, it has no way of knowing if the
 * object is causing fragmentation, and if so, where to move it. A solution
 * necessarily requires further cooperation between the allocator and the
 * client.
 *
 * 2. Move Callback
 *
 * The kmem slab consolidator therefore adds a move callback to the
 * allocator/client interface, improving worst-case external fragmentation in
 * kmem caches that supply a function to move objects from one memory location
 * to another. In a situation of low memory, kmem attempts to consolidate all
 * of a cache's slabs at once; otherwise it works slowly to bring external
 * fragmentation within the 1/8 limit guaranteed for internal fragmentation,
 * thereby helping to avoid a low memory situation in the future.
 *
 * The callback has the following signature:
 *
 *	kmem_cbrc_t move(void *old, void *new, size_t size, void *user_arg)
 *
 * It supplies the kmem client with two addresses: the allocated object that
 * kmem wants to move and a buffer selected by kmem for the client to use as
 * the copy destination. The callback is kmem's way of saying "Please get off
 * of this buffer and use this one instead." kmem knows where it wants to move
 * the object in order to best reduce fragmentation. All the client needs to
 * know about the second argument (void *new) is that it is an allocated,
 * constructed object ready to take the contents of the old object. When the
 * move function is called, the system is likely to be low on memory, and the
 * new object spares the client from having to worry about allocating memory
 * for the requested move. The third argument supplies the size of the object,
 * in case a single move function handles multiple caches whose objects differ
 * only in size (such as zio_buf_512, zio_buf_1024, etc). Finally, the same
 * optional user argument passed to the constructor, destructor, and reclaim
 * functions is also passed to the move callback.
 *
 * 2.1 Setting the Move Callback
 *
 * The client sets the move callback after creating the cache and before
 * allocating from it:
 *
 *	object_cache = kmem_cache_create(...);
 *	kmem_cache_set_move(object_cache, object_move);
 *
 * 2.2 Move Callback Return Values
 *
 * Only the client knows about its own data and when is a good time to move
 * it. The client is cooperating with kmem to return unused memory to the
 * system, and kmem respectfully accepts this help at the client's
 * convenience. When asked to move an object, the client can respond with any
 * of the following:
 *
 *	typedef enum kmem_cbrc {
 *		KMEM_CBRC_YES,
 *		KMEM_CBRC_NO,
 *		KMEM_CBRC_LATER,
 *		KMEM_CBRC_DONT_NEED,
 *		KMEM_CBRC_DONT_KNOW
 *	} kmem_cbrc_t;
 *
 * The client must not explicitly kmem_cache_free() either of the objects
 * passed to the callback, since kmem wants to free them directly to the slab
 * layer (bypassing the per-CPU magazine layer). The response tells kmem which
 * of the objects to free:
 *
 *	YES:		(Did it) The client moved the object, so kmem frees
 *			the old one.
 *	NO:		(Never) The client refused, so kmem frees the new
 *			object (the
 *			unused copy destination). kmem also marks the slab of
 *			the old object so as not to bother the client with
 *			further callbacks for that object as long as the slab
 *			remains on the partial slab list. (The system won't be
 *			getting the slab back as long as the immovable object
 *			holds it hostage, so there's no point in moving any of
 *			its objects.)
 *	LATER:		The client is using the object and cannot move it
 *			now, so kmem frees the new object (the unused copy
 *			destination). kmem still attempts to move other
 *			objects off the slab, since it expects to succeed in
 *			clearing the slab in a later callback. The client
 *			should use LATER instead of NO if the object is likely
 *			to become movable very soon.
 *	DONT_NEED:	The client no longer needs the object, so kmem frees
 *			the old along with the new object (the unused copy
 *			destination). This response is the client's
 *			opportunity to be a model citizen and give back as
 *			much as it can.
 *	DONT_KNOW:	The client does not know about the object because
 *			a) the client has just allocated the object and not
 *			yet put it wherever it expects to find known objects,
 *			b) the client has removed the object from wherever it
 *			expects to find known objects and is about to free
 *			it, or
 *			c) the client has freed the object.
 *			In all these cases (a, b, and c) kmem frees the new
 *			object (the unused copy destination). In the first
 *			case, the object is in use and the correct action is
 *			that for LATER; in the latter two cases, we know that
 *			the object is either freed or about to be freed, in
 *			which case it is either already in a magazine or about
 *			to be in one. In these cases, we know that the object
 *			will either be reallocated and reused, or it will end
 *			up in a full magazine that will be reaped (thereby
 *			liberating the slab). Because it is prohibitively
 *			expensive to differentiate these cases, and because
 *			the defrag code is executed when we're low on memory
 *			(thereby biasing the system to reclaim full magazines)
 *			we treat all DONT_KNOW cases as LATER and rely on
 *			cache reaping to generally clean up full magazines.
 *			While we take the same action for these cases, we
 *			maintain their semantic distinction: if
 *			defragmentation is not occurring, it is useful to know
 *			if this is due to objects in use (LATER) or objects in
 *			an unknown state of transition (DONT_KNOW).
 *
 * 2.3 Object States
 *
 * Neither kmem nor the client can be assumed to know the object's whereabouts
 * at the time of the callback. An object belonging to a kmem cache may be in
 * any of the following states:
 *
 *	1. Uninitialized on the slab
 *	2. Allocated from the slab but not constructed (still uninitialized)
 *	3. Allocated from the slab, constructed, but not yet ready for
 *	   business (not in a valid state for the move callback)
 *	4. In use (valid and known to the client)
 *	5. About to be freed (no longer in a valid state for the move
 *	   callback)
 *	6. Freed to a magazine (still constructed)
 *	7. Allocated from a magazine, not yet ready for business (not in a
 *	   valid state for the move callback), and about to return to
 *	   state #4
 *	8. Deconstructed on a magazine that is about to be freed
 *	9. Freed to the slab
 *
 * Since the move callback may be called at any time while the object is in
 * any of the above states (except state #1), the client needs a safe way to
 * determine whether or not it knows about the object. Specifically, the
 * client needs to know whether or not the object is in state #4, the only
 * state in which a move is valid. If the object is in any other state, the
 * client should immediately return KMEM_CBRC_DONT_KNOW, since it is unsafe to
 * access any of the object's fields.
 *
 * Note that although an object may be in state #4 when kmem initiates the
 * move request, the object may no longer be in that state by the time kmem
 * actually calls the move function. Not only does the client free objects
 * asynchronously, kmem itself puts move requests on a queue where they are
 * pending until kmem processes them from another context. Also, objects freed
 * to a magazine appear allocated from the point of view of the slab layer, so
 * kmem may even initiate requests for objects in a state other than state #4.
 *
 * 2.3.1 Magazine Layer
 *
 * An important insight revealed by the states listed above is that the
 * magazine layer is populated only by kmem_cache_free(). Magazines of
 * constructed objects are never populated directly from the slab layer (which
 * contains raw, unconstructed objects). Whenever an allocation request cannot
 * be satisfied from the magazine layer, the magazines are bypassed and the
 * request is satisfied from the slab layer (creating a new slab if
 * necessary). kmem calls the object constructor only when allocating from the
 * slab layer, and only in response to kmem_cache_alloc() or to prepare the
 * destination buffer passed in the move callback. kmem does not preconstruct
 * objects in anticipation of kmem_cache_alloc().
 *
 * 2.3.2 Object Constructor and Destructor
 *
 * If the client supplies a destructor, it must be valid to call the
 * destructor on a newly created object (immediately after the constructor).
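 *
 * For illustration, a sketch of a constructor/destructor pair satisfying that
 * requirement (object_t and its fields belong to this comment's running
 * example, not to the kmem API): the destructor tears down only what the
 * constructor set up, so running it immediately after the constructor is
 * always valid.
 *
 *	static int
 *	object_constructor(void *buf, void *user_arg, int kmflags)
 *	{
 *		object_t *op = buf;
 *
 *		mutex_init(&op->o_lock, NULL, MUTEX_DEFAULT, NULL);
 *		return (0);
 *	}
 *
 *	static void
 *	object_destructor(void *buf, void *user_arg)
 *	{
 *		object_t *op = buf;
 *
 *		mutex_destroy(&op->o_lock);
 *	}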
 *
 * 2.4 Recognizing Known Objects
 *
 * There is a simple test to determine safely whether or not the client knows
 * about a given object in the move callback. It relies on the fact that kmem
 * guarantees that the object of the move callback has only been touched by
 * the client itself or else by kmem. kmem does this by ensuring that none of
 * the cache's slabs are freed to the virtual memory (VM) subsystem while a
 * move callback is pending. When the last object on a slab is freed, if there
 * is a pending move, kmem puts the slab on a per-cache dead list and defers
 * freeing slabs on that list until all pending callbacks are completed. That
 * way, clients can be certain that the object of a move callback is in one of
 * the states listed above, making it possible to distinguish known objects
 * (in state #4) using the two low order bits of any pointer member (with the
 * exception of 'char *' or 'short *' which may not be 4-byte aligned on some
 * platforms).
 *
 * The test works as long as the client always transitions objects from state
 * #4 (known, in use) to state #5 (about to be freed, invalid) by setting the
 * low order bit of the client-designated pointer member. Since kmem only
 * writes invalid memory patterns, such as 0xbaddcafe to uninitialized memory
 * and 0xdeadbeef to freed memory, any scribbling on the object done by kmem
 * is guaranteed to set at least one of the two low order bits. Therefore,
 * given an object with a back pointer to a 'container_t *o_container', the
 * client can test
 *
 *	container_t *container = object->o_container;
 *	if ((uintptr_t)container & 0x3) {
 *		return (KMEM_CBRC_DONT_KNOW);
 *	}
 *
 * Typically, an object will have a pointer to some structure with a list or
 * hash where objects from the cache are kept while in use. Assuming that the
 * client has some way of knowing that the container structure is valid and
 * will not go away during the move, and assuming that the structure includes
 * a lock to protect whatever collection is used, then the client would
 * continue as follows:
 *
 *	// Ensure that the container structure does not go away.
 *	if (container_hold(container) == 0) {
 *		return (KMEM_CBRC_DONT_KNOW);
 *	}
 *	mutex_enter(&container->c_objects_lock);
 *	if (container != object->o_container) {
 *		mutex_exit(&container->c_objects_lock);
 *		container_rele(container);
 *		return (KMEM_CBRC_DONT_KNOW);
 *	}
 *
 * At this point the client knows that the object cannot be freed as long as
 * c_objects_lock is held. Note that after acquiring the lock, the client must
 * recheck the o_container pointer in case the object was removed just before
 * acquiring the lock.
 *
 * When the client is about to free an object, it must first remove that
 * object from the list, hash, or other structure where it is kept. At that
 * time, to mark the object so it can be distinguished from the remaining,
 * known objects, the client sets the designated low order bit:
 *
 *	mutex_enter(&container->c_objects_lock);
 *	object->o_container = (void *)((uintptr_t)object->o_container | 0x1);
 *	list_remove(&container->c_objects, object);
 *	mutex_exit(&container->c_objects_lock);
 *
 * In the common case, the object is freed to the magazine layer, where it may
 * be reused on a subsequent allocation without the overhead of calling the
 * constructor. While in the magazine it appears allocated from the point of
 * view of the slab layer, making it a candidate for the move callback. Most
 * objects unrecognized by the client in the move callback fall into this
 * category and are cheaply distinguished from known objects by the test
 * described earlier. Because searching magazines is prohibitively expensive
 * for kmem, clients that do not mark freed objects (and therefore return
 * KMEM_CBRC_DONT_KNOW for large numbers of objects) may find defragmentation
 * efficacy reduced.
 *
 * Invalidating the designated pointer member before freeing the object marks
 * the object to be avoided in the callback, and conversely, assigning a valid
 * value to the designated pointer member after allocating the object makes
 * the object fair game for the callback:
 *
 *	... allocate object ...
 *	... set any initial state not set by the constructor ...
 *
 *	mutex_enter(&container->c_objects_lock);
 *	list_insert_tail(&container->c_objects, object);
 *	membar_producer();
 *	object->o_container = container;
 *	mutex_exit(&container->c_objects_lock);
 *
 * Note that everything else must be valid before setting o_container makes
 * the object fair game for the move callback. The membar_producer() call
 * ensures that all the object's state is written to memory before setting the
 * pointer that transitions the object from state #3 or #7 (allocated,
 * constructed, not yet in use) to state #4 (in use, valid). That's important
 * because the move function has to check the validity of the pointer before
 * it can safely acquire the lock protecting the collection where it expects
 * to find known objects.
 *
 * This method of distinguishing known objects observes the usual symmetry:
 * invalidating the designated pointer is the first thing the client does
 * before freeing the object, and setting the designated pointer is the last
 * thing the client does after allocating the object. Of course, the client is
 * not required to use this method. Fundamentally, how the client recognizes
 * known objects is completely up to the client, but this method is
 * recommended as an efficient and safe way to take advantage of the
 * guarantees made by kmem. If the entire object is arbitrary data without any
 * markable bits from a suitable pointer member, then the client must find
 * some other method, such as searching a hash table of known objects.
 *
 * 2.5 Preventing Objects From Moving
 *
 * Besides a way to distinguish known objects, the other thing that the client
 * needs is a strategy to ensure that an object will not move while the client
 * is actively using it. The details of satisfying this requirement tend to be
 * highly cache-specific. It might seem that the same rules that let a client
 * remove an object safely should also decide when an object can be moved
 * safely. However, any object state that makes a removal attempt invalid is
 * likely to be long-lasting for objects that the client does not expect to
 * remove. kmem knows nothing about the object state and is equally likely
 * (from the client's point of view) to request a move for any object in the
 * cache, whether prepared for removal or not. Even a low percentage of
 * objects stuck in place by unremovability will defeat the consolidator if
 * the stuck objects are the same long-lived allocations likely to hold slabs
 * hostage. Fundamentally, the consolidator is not aimed at common cases.
 * Severe external fragmentation is a worst case scenario manifested as
 * sparsely allocated slabs, by definition a low percentage of the cache's
 * objects. When deciding what makes an object movable, keep in mind the goal
 * of the consolidator: to bring worst-case external fragmentation within the
 * limits guaranteed for internal fragmentation. Removability is a poor
 * criterion if it is likely to exclude more than an insignificant percentage
 * of objects for long periods of time.
 *
 * A tricky general solution exists, and it has the advantage of letting you
 * move any object at almost any moment, practically eliminating the
 * likelihood that an object can hold a slab hostage.
 * However, if there is a cache-specific way to ensure that an object is not
 * actively in use in the vast majority of cases, a simpler solution that
 * leverages this cache-specific knowledge is preferred.
 *
 * 2.5.1 Cache-Specific Solution
 *
 * As an example of a cache-specific solution, the ZFS znode cache takes
 * advantage of the fact that the vast majority of znodes are only being
 * referenced from the DNLC. (A typical case might be a few hundred in active
 * use and a hundred thousand in the DNLC.) In the move callback, after the
 * ZFS client has established that it recognizes the znode and can access its
 * fields safely (using the method described earlier), it then tests whether
 * the znode is referenced by anything other than the DNLC. If so, it assumes
 * that the znode may be in active use and is unsafe to move, so it drops its
 * locks and returns KMEM_CBRC_LATER. The advantage of this strategy is that
 * everywhere else znodes are used, no change is needed to protect against the
 * possibility of the znode moving. The disadvantage is that it remains
 * possible for an application to hold a znode slab hostage with an open file
 * descriptor. However, this case ought to be rare and the consolidator has a
 * way to deal with it: If the client responds KMEM_CBRC_LATER repeatedly for
 * the same object, kmem eventually stops believing it and treats the slab as
 * if the client had responded KMEM_CBRC_NO. Having marked the hostage slab,
 * kmem can then focus on getting it off of the partial slab list by
 * allocating rather than freeing all of its objects. (Either way of getting a
 * slab off the free list reduces fragmentation.)
 *
 * 2.5.2 General Solution
 *
 * The general solution, on the other hand, requires an explicit hold
 * everywhere the object is used to prevent it from moving. To keep the client
 * locking strategy as uncomplicated as possible, kmem guarantees the
 * simplifying assumption that move callbacks are sequential, even across
 * multiple caches. Internally, a global queue processed by a single thread
 * supports all caches implementing the callback function. No matter how many
 * caches supply a move function, the consolidator never moves more than one
 * object at a time, so the client does not have to worry about tricky lock
 * ordering involving several related objects from different kmem caches.
 *
 * The general solution implements the explicit hold as a read-write lock,
 * which allows multiple readers to access an object from the cache
 * simultaneously while a single writer is excluded from moving it. A single
 * rwlock for the entire cache would lock out all threads from using any of
 * the cache's objects even though only a single object is being moved, so to
 * reduce contention, the client can fan out the single rwlock into an array
 * of rwlocks hashed by the object address, making it probable that moving one
 * object will not prevent other threads from using a different object. The
 * rwlock cannot be a member of the object itself, because the possibility of
 * the object moving makes it unsafe to access any of the object's fields
 * until the lock is acquired.
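 *
 * For illustration, a minimal sketch of such a fanned-out lock array (the
 * table size and hash shift are arbitrary assumptions; OBJECT_RWLOCK is the
 * macro used by the examples that follow):
 *
 *	#define	OBJECT_HASH_SIZE	64	// assumed; any power of two
 *	static krwlock_t object_rwlock[OBJECT_HASH_SIZE];
 *
 *	// The shift discards low-order address bits common to all objects
 *	// so that neighboring objects tend to hash to different locks.
 *	#define	OBJECT_RWLOCK(op)	(&object_rwlock[ \
 *	    ((uintptr_t)(op) >> 6) & (OBJECT_HASH_SIZE - 1)])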
 *
 * Assuming a small, fixed number of locks, it's possible that multiple
 * objects will hash to the same lock. A thread that needs to use multiple
 * objects in the same function may acquire the same lock multiple times.
 * Since rwlocks are reentrant for readers, and since there is never more than
 * a single writer at a time (assuming that the client acquires the lock as a
 * writer only when moving an object inside the callback), there would seem to
 * be no problem. However, a client locking multiple objects in the same
 * function must handle one case of potential deadlock: Assume that thread A
 * needs to prevent both object 1 and object 2 from moving, and thread B, the
 * callback, meanwhile tries to move object 3. It's possible, if objects 1, 2,
 * and 3 all hash to the same lock, that thread A will acquire the lock for
 * object 1 as a reader before thread B sets the lock's write-wanted bit,
 * preventing thread A from reacquiring the lock for object 2 as a reader.
 * Unable to make forward progress, thread A will never release the lock for
 * object 1, resulting in deadlock.
 *
 * There are two ways of avoiding the deadlock just described. The first is to
 * use rw_tryenter() rather than rw_enter() in the callback function when
 * attempting to acquire the lock as a writer. If tryenter discovers that the
 * same object (or another object hashed to the same lock) is already in use,
 * it aborts the callback and returns KMEM_CBRC_LATER. The second way is to
 * use rprwlock_t (declared in common/fs/zfs/sys/rprwlock.h) instead of
 * rwlock_t, since it allows a thread to acquire the lock as a reader in spite
 * of a waiting writer. This second approach insists on moving the object now,
 * no matter how many readers the move function must wait for in order to do
 * so, and could delay the completion of the callback indefinitely (blocking
 * callbacks to other clients). In practice, a less insistent callback using
 * rw_tryenter() returns KMEM_CBRC_LATER infrequently enough that there seems
 * little reason to use anything else.
 *
 * Avoiding deadlock is not the only problem that an implementation using an
 * explicit hold needs to solve. Locking the object in the first place (to
 * prevent it from moving) remains a problem, since the object could move
 * between the time you obtain a pointer to the object and the time you
 * acquire the rwlock hashed to that pointer value. Therefore the client needs
 * to recheck the value of the pointer after acquiring the lock, drop the lock
 * if the value has changed, and try again. This requires a level of
 * indirection: something that points to the object rather than the object
 * itself, that the client can access safely while attempting to acquire the
 * lock. (The object itself cannot be referenced safely because it can move at
 * any time.) The following lock-acquisition function takes whatever is safe
 * to reference (arg), follows its pointer to the object (using function f),
 * and tries as often as necessary to acquire the hashed lock and verify that
 * the object still has not moved:
 *
 *	object_t *
 *	object_hold(object_f f, void *arg)
 *	{
 *		object_t *op;
 *
 *		op = f(arg);
 *		if (op == NULL) {
 *			return (NULL);
 *		}
 *
 *		rw_enter(OBJECT_RWLOCK(op), RW_READER);
 *		while (op != f(arg)) {
 *			rw_exit(OBJECT_RWLOCK(op));
 *			op = f(arg);
 *			if (op == NULL) {
 *				break;
 *			}
 *			rw_enter(OBJECT_RWLOCK(op), RW_READER);
 *		}
 *
 *		return (op);
 *	}
 *
 * The OBJECT_RWLOCK macro hashes the object address to obtain the rwlock.
 * The lock reacquisition loop, while necessary, almost never executes. The
 * function pointer f (used to obtain the object pointer from arg) has the
 * following type definition:
 *
 *	typedef object_t *(*object_f)(void *arg);
 *
 * An object_f implementation is likely to be as simple as accessing a
 * structure member:
 *
 *	object_t *
 *	s_object(void *arg)
 *	{
 *		something_t *sp = arg;
 *		return (sp->s_object);
 *	}
 *
 * The flexibility of a function pointer allows the path to the object to be
 * arbitrarily complex and also supports the notion that depending on where
 * you are using the object, you may need to get it from someplace different.
 *
 * The function that releases the explicit hold is simpler because it does not
 * have to worry about the object moving:
 *
 *	void
 *	object_rele(object_t *op)
 *	{
 *		rw_exit(OBJECT_RWLOCK(op));
 *	}
 *
 * The caller is spared these details so that obtaining and releasing an
 * explicit hold feels like a simple mutex_enter()/mutex_exit() pair. The
 * caller of object_hold() only needs to know that the returned object pointer
 * is valid if not NULL and that the object will not move until released.
 *
 * Although object_hold() prevents an object from moving, it does not prevent
 * it from being freed. The caller must take measures before calling
 * object_hold() (afterwards is too late) to ensure that the held object
 * cannot be freed. The caller must do so without accessing the unsafe object
 * reference, so any lock or reference count used to ensure the continued
 * existence of the object must live outside the object itself.
 *
 * Obtaining a new object is a special case where an explicit hold is
 * impossible for the caller. Any function that returns a newly allocated
 * object (either as a return value, or as an in-out parameter) must return it
 * already held; after the caller gets it is too late, since the object cannot
 * be safely accessed without the level of indirection described earlier. The
 * following object_alloc() example uses the same code shown earlier to
 * transition a new object into the state of being recognized (by the client)
 * as a known object. The function must acquire the hold (rw_enter) before
 * that state transition makes the object movable:
 *
 *	static object_t *
 *	object_alloc(container_t *container)
 *	{
 *		object_t *object = kmem_cache_alloc(object_cache, 0);
 *		... set any initial state not set by the constructor ...
 *		rw_enter(OBJECT_RWLOCK(object), RW_READER);
 *		mutex_enter(&container->c_objects_lock);
 *		list_insert_tail(&container->c_objects, object);
 *		membar_producer();
 *		object->o_container = container;
 *		mutex_exit(&container->c_objects_lock);
 *		return (object);
 *	}
 *
 * Functions that implicitly acquire an object hold (any function that calls
 * object_alloc() to supply an object for the caller) need to be carefully
 * noted so that the matching object_rele() is not neglected. Otherwise,
 * leaked holds prevent all objects hashed to the affected rwlocks from ever
 * being moved.
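 *
 * For illustration, the resulting usage discipline (a hedged sketch using the
 * example functions defined in this comment):
 *
 *	object_t *op = object_alloc(container);	// hold implicitly acquired
 *	if (op != NULL) {
 *		... use op; it cannot move while held ...
 *		object_rele(op);		// the matching release
 *	}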
 *
 * The pointer to a held object can be hashed to the holding rwlock even after
 * the object has been freed. Although it is possible to release the hold
 * after freeing the object, you may decide to release the hold implicitly in
 * whatever function frees the object, so as to release the hold as soon as
 * possible, and for the sake of symmetry with the function that implicitly
 * acquires the hold when it allocates the object. Here, object_free()
 * releases the hold acquired by object_alloc(). Its implicit object_rele()
 * forms a matching pair with object_hold():
 *
 *	void
 *	object_free(object_t *object)
 *	{
 *		container_t *container;
 *
 *		ASSERT(object_held(object));
 *		container = object->o_container;
 *		mutex_enter(&container->c_objects_lock);
 *		object->o_container =
 *		    (void *)((uintptr_t)object->o_container | 0x1);
 *		list_remove(&container->c_objects, object);
 *		mutex_exit(&container->c_objects_lock);
 *		object_rele(object);
 *		kmem_cache_free(object_cache, object);
 *	}
 *
 * Note that object_free() cannot safely accept an object pointer as an
 * argument unless the object is already held. Any function that calls
 * object_free() needs to be carefully noted since it similarly forms a
 * matching pair with object_hold().
 *
 * To complete the picture, the following callback function implements the
 * general solution by moving objects only if they are currently unheld:
 *
 *	static kmem_cbrc_t
 *	object_move(void *buf, void *newbuf, size_t size, void *arg)
 *	{
 *		object_t *op = buf, *np = newbuf;
 *		container_t *container;
 *
 *		container = op->o_container;
 *		if ((uintptr_t)container & 0x3) {
 *			return (KMEM_CBRC_DONT_KNOW);
 *		}
 *
 *		// Ensure that the container structure does not go away.
 *		if (container_hold(container) == 0) {
 *			return (KMEM_CBRC_DONT_KNOW);
 *		}
 *
 *		mutex_enter(&container->c_objects_lock);
 *		if (container != op->o_container) {
 *			mutex_exit(&container->c_objects_lock);
 *			container_rele(container);
 *			return (KMEM_CBRC_DONT_KNOW);
 *		}
 *
 *		if (rw_tryenter(OBJECT_RWLOCK(op), RW_WRITER) == 0) {
 *			mutex_exit(&container->c_objects_lock);
 *			container_rele(container);
 *			return (KMEM_CBRC_LATER);
 *		}
 *
 *		object_move_impl(op, np);	// critical section
 *		rw_exit(OBJECT_RWLOCK(op));
 *
 *		op->o_container = (void *)((uintptr_t)op->o_container | 0x1);
 *		list_link_replace(&op->o_link_node, &np->o_link_node);
 *		mutex_exit(&container->c_objects_lock);
 *		container_rele(container);
 *		return (KMEM_CBRC_YES);
 *	}
 *
 * Note that object_move() must invalidate the designated o_container pointer
 * of the old object in the same way that object_free() does, since kmem will
 * free the object in response to the KMEM_CBRC_YES return value.
 *
 * The lock order in object_move() differs from object_alloc(), which locks
 * OBJECT_RWLOCK first and &container->c_objects_lock second, but as long as
 * the callback uses rw_tryenter() (preventing the deadlock described
 * earlier), it's not a problem. Holding the lock on the object list in the
 * example above through the entire callback not only prevents the object from
 * going away, it also allows you to lock the list elsewhere and know that
 * none of its elements will move during iteration.
 *
 * Adding an explicit hold everywhere an object from the cache is used is
 * tricky and involves much more change to client code than a cache-specific
 * solution that leverages existing state to decide whether or not an object
 * is movable. However, this approach has the advantage that no object remains
 * immovable for any significant length of time, making it extremely unlikely
 * that long-lived allocations can continue holding slabs hostage; and it
 * works for any cache.
 *
 * 3. Consolidator Implementation
 *
 * Once the client supplies a move function that a) recognizes known objects
 * and b) avoids moving objects that are actively in use, the remaining work
 * is up to the consolidator to decide which objects to move and when to issue
 * callbacks.
 *
 * The consolidator relies on the fact that a cache's slabs are ordered by
 * usage. Each slab has a fixed number of objects. Depending on the slab's
 * "color" (the offset of the first object from the beginning of the slab;
 * offsets are staggered to mitigate false sharing of cache lines) it is
 * either the maximum number of objects per slab determined at cache creation
 * time or else the number closest to the maximum that fits within the space
 * remaining after the initial offset. A completely allocated slab may
 * contribute some internal fragmentation (per-slab overhead) but no external
 * fragmentation, so it is of no interest to the consolidator. At the other
 * extreme, slabs whose objects have all been freed to the slab are released
 * to the virtual memory (VM) subsystem (objects freed to magazines are still
 * allocated as far as the slab is concerned). External fragmentation exists
 * when there are slabs somewhere between these extremes. A partial slab has
 * at least one but not all of its objects allocated. The more partial slabs,
 * and the fewer allocated objects on each of them, the higher the
 * fragmentation. Hence the consolidator's overall strategy is to reduce the
 * number of partial slabs by moving allocated objects from the least
 * allocated slabs to the most allocated slabs.
 *
 * Partial slabs are kept in an AVL tree ordered by usage. Completely
 * allocated slabs are kept separately in an unordered list. Since the
 * majority of slabs tend to be completely allocated (a typical unfragmented
 * cache may have thousands of complete slabs and only a single partial slab),
 * separating complete slabs improves the efficiency of partial slab ordering,
 * since the complete slabs do not affect the depth or balance of the AVL
 * tree. This ordered sequence of partial slabs acts as a "free list"
 * supplying objects for allocation requests.
 *
 * Objects are always allocated from the first partial slab in the free list,
 * where the allocation is most likely to eliminate a partial slab (by
 * completely allocating it). Conversely, when a single object from a
 * completely allocated slab is freed to the slab, that slab is added to the
 * front of the free list. Since most free list activity involves highly
 * allocated slabs coming and going at the front of the list, slabs tend
 * naturally toward the ideal order: highly allocated at the front, sparsely
 * allocated at the back. Slabs with few allocated objects are likely to
 * become completely free if they keep a safe distance away from the front of
 * the free list.
 * Slab misorders interfere with the natural tendency of slabs to become
 * completely free or completely allocated. For example, a slab with a single
 * allocated object needs only a single free to escape the cache; its natural
 * desire is frustrated when it finds itself at the front of the list where a
 * second allocation happens just before the free could have released it.
 * Another slab with all but one object allocated might have supplied the
 * buffer instead, so that both (as opposed to neither) of the slabs would
 * have been taken off the free list.
 *
 * Although slabs tend naturally toward the ideal order, misorders allowed by
 * a simple list implementation defeat the consolidator's strategy of merging
 * least- and most-allocated slabs. Without an AVL tree to guarantee order,
 * kmem needs another way to fix misorders to optimize its callback strategy.
 * One approach is to periodically scan a limited number of slabs, advancing a
 * marker to hold the current scan position, and to move extreme misorders to
 * the front or back of the free list and to the front or back of the current
 * scan range. By making consecutive scan ranges overlap by one slab, the
 * least allocated slab in the current range can be carried along from the end
 * of one scan to the start of the next.
 *
 * Maintaining partial slabs in an AVL tree relieves kmem of this additional
 * task, however. Since most of the cache's activity is in the magazine layer,
 * and allocations from the slab layer represent only a startup cost, the
 * overhead of maintaining a balanced tree is not a significant concern
 * compared to the opportunity of reducing complexity by eliminating the
 * partial slab scanner just described. The overhead of an AVL tree is
 * minimized by maintaining only partial slabs in the tree and keeping
 * completely allocated slabs separately in a list. To avoid increasing the
 * size of the slab structure the AVL linkage pointers are reused for the
 * slab's list linkage, since the slab will always be either partial or
 * complete, never stored both ways at the same time. To further minimize the
 * overhead of the AVL tree the compare function that orders partial slabs by
 * usage divides the range of allocated object counts into bins such that
 * counts within the same bin are considered equal. Binning partial slabs
 * makes it less likely that allocating or freeing a single object will change
 * the slab's order, requiring a tree reinsertion (an avl_remove() followed by
 * an avl_add(), both potentially requiring some rebalancing of the tree).
 * Allocation counts closest to completely free and completely allocated are
 * left unbinned (finely sorted) to better support the consolidator's strategy
 * of merging slabs at either extreme.
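 *
 * For illustration, a hedged sketch of such a binned compare function (kmem's
 * actual comparator is more refined; slab_refcnt and slab_chunks name the
 * slab's allocated and total object counts, and the 1/8 bin width is an
 * assumption):
 *
 *	static int
 *	slab_usage_cmp(const void *l, const void *r)
 *	{
 *		const kmem_slab_t *s0 = l;
 *		const kmem_slab_t *s1 = r;
 *		size_t bin = s0->slab_chunks >> 3;	// assumed bin width
 *		size_t w0 = s0->slab_refcnt;		// allocated objects
 *		size_t w1 = s1->slab_refcnt;
 *
 *		// Bin counts away from the nearly-free/nearly-full extremes,
 *		// which stay finely sorted.
 *		if (bin != 0) {
 *			if (w0 > bin && w0 < s0->slab_chunks - bin)
 *				w0 -= w0 % bin;
 *			if (w1 > bin && w1 < s1->slab_chunks - bin)
 *				w1 -= w1 % bin;
 *		}
 *		if (w0 != w1)			// more allocated sorts first
 *			return (w0 > w1 ? -1 : 1);
 *		// AVL requires a total order: break ties by address.
 *		return (s0 < s1 ? -1 : (s0 > s1 ? 1 : 0));
 *	}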
 *
 * 3.1 Assessing Fragmentation and Selecting Candidate Slabs
 *
 * The consolidator piggybacks on the kmem maintenance thread and is called on
 * the same interval as kmem_cache_update(), once per cache every fifteen
 * seconds. kmem maintains a running count of unallocated objects in the slab
 * layer (cache_bufslab). The consolidator checks whether that number exceeds
 * 12.5% (1/8) of the total objects in the cache (cache_buftotal), and whether
 * there is a significant number of slabs in the cache (arbitrarily a minimum
 * 101 total slabs). Unused objects that have fallen out of the magazine
 * layer's working set are included in the assessment, and magazines in the
 * depot are reaped if those objects would lift cache_bufslab above the
 * fragmentation threshold. Once the consolidator decides that a cache is
 * fragmented, it looks for a candidate slab to reclaim, starting at the end
 * of the partial slab free list and scanning backwards. At first the
 * consolidator is choosy: only a slab with fewer than 12.5% (1/8) of its
 * objects allocated qualifies (or else a single allocated object, regardless
 * of percentage). If there is difficulty finding a candidate slab, kmem
 * raises the allocation threshold incrementally, up to a maximum 87.5% (7/8),
 * so that eventually the consolidator will reduce external fragmentation
 * (unused objects on the free list) below 12.5% (1/8), even in the worst case
 * of every slab in the cache being almost 7/8 allocated. The threshold can
 * also be lowered incrementally when candidate slabs are easy to find, and
 * the threshold is reset to the minimum 1/8 as soon as the cache is no longer
 * fragmented.
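 *
 * For illustration, the basic threshold test reduces to a comparison against
 * the consolidator tunables defined later in this file (a hedged sketch; the
 * helper name is illustrative, and the slab count is taken from the cache's
 * complete-slab list plus its partial-slab tree):
 *
 *	static boolean_t
 *	cache_seems_fragmented(kmem_cache_t *cp)
 *	{
 *		uint64_t nfree = cp->cache_bufslab;	// unused, slab layer
 *		size_t nslabs = cp->cache_complete_slab_count +
 *		    avl_numnodes(&cp->cache_partial_slabs);
 *
 *		// free/total > numer/denom (1/8), avoiding division
 *		return (nslabs >= kmem_frag_minslabs &&
 *		    nfree * kmem_frag_denom >
 *		    cp->cache_buftotal * kmem_frag_numer);
 *	}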
 *
 * 3.2 Generating Callbacks
 *
 * Once an eligible slab is chosen, a callback is generated for every
 * allocated object on the slab, in the hope that the client will move
 * everything off the slab and make it reclaimable. Objects selected as move
 * destinations are chosen from slabs at the front of the free list. Assuming
 * slabs in the ideal order (most allocated at the front, least allocated at
 * the back) and a cooperative client, the consolidator will succeed in
 * removing slabs from both ends of the free list, completely allocating on
 * the one hand and completely freeing on the other. Objects selected as move
 * destinations are allocated in the kmem maintenance thread where move
 * requests are enqueued. A separate callback thread removes pending callbacks
 * from the queue and calls the client. The separate thread ensures that
 * client code (the move function) does not interfere with internal kmem
 * maintenance tasks. A map of pending callbacks keyed by object address (the
 * object to be moved) is checked to ensure that duplicate callbacks are not
 * generated for the same object. Allocating the move destination (the object
 * to move to) prevents subsequent callbacks from selecting the same
 * destination as an earlier pending callback.
 *
 * Move requests can also be generated by kmem_cache_reap() when the system is
 * desperate for memory and by kmem_cache_move_notify(), called by the client
 * to notify kmem that a move refused earlier with KMEM_CBRC_LATER is now
 * possible. The map of pending callbacks is protected by the same lock that
 * protects the slab layer.
 *
 * When the system is desperate for memory, kmem does not bother to determine
 * whether or not the cache exceeds the fragmentation threshold, but tries to
 * consolidate as many slabs as possible. Normally, the consolidator chews
 * slowly, one sparsely allocated slab at a time during each maintenance
 * interval that the cache is fragmented. When desperate, the consolidator
 * starts at the last partial slab and enqueues callbacks for every allocated
 * object on every partial slab, working backwards until it reaches the first
 * partial slab. The first partial slab, meanwhile, advances in pace with the
 * consolidator as allocations to supply move destinations for the enqueued
 * callbacks use up the highly allocated slabs at the front of the free list.
 * Ideally, the overgrown free list collapses like an accordion, starting at
 * both ends and ending at the center with a single partial slab.
 *
 * 3.3 Client Responses
 *
 * When the client returns KMEM_CBRC_NO in response to the move callback, kmem
 * marks the slab that supplied the stuck object non-reclaimable and moves it
 * to the front of the free list. The slab remains marked as long as it
 * remains on the free list, and it appears more allocated to the partial slab
 * compare function than any unmarked slab, no matter how many of its objects
 * are allocated. Since even one immovable object ties up the entire slab, the
 * goal is to completely allocate any slab that cannot be completely freed.
 * kmem does not bother generating callbacks to move objects from a marked
 * slab unless the system is desperate.
 *
 * When the client responds KMEM_CBRC_LATER, kmem increments a count for the
 * slab. If the client responds LATER too many times, kmem disbelieves and
 * treats the response as a NO. The count is cleared when the slab is taken
 * off the partial slab list or when the client moves one of the slab's
 * objects.
 *
 * 4. Observability
 *
 * A kmem cache's external fragmentation is best observed with 'mdb -k' using
 * the ::kmem_slabs dcmd. For a complete description of the command, enter
 * '::help kmem_slabs' at the mdb prompt.
 */

#include <sys/kmem_impl.h>
#include <sys/vmem_impl.h>
#include <sys/param.h>
#include <sys/sysmacros.h>
#include <sys/vm.h>
#include <sys/proc.h>
#include <sys/tuneable.h>
#include <sys/systm.h>
#include <sys/cmn_err.h>
#include <sys/debug.h>
#include <sys/sdt.h>
#include <sys/mutex.h>
#include <sys/bitmap.h>
#include <sys/atomic.h>
#include <sys/kobj.h>
#include <sys/disp.h>
#include <vm/seg_kmem.h>
#include <sys/log.h>
#include <sys/callb.h>
#include <sys/taskq.h>
#include <sys/modctl.h>
#include <sys/reboot.h>
#include <sys/id32.h>
#include <sys/zone.h>
#include <sys/netstack.h>
#ifdef DEBUG
#include <sys/random.h>
#endif

extern void streams_msg_init(void);
extern int segkp_fromheap;
extern void segkp_cache_free(void);
extern int callout_init_done;

struct kmem_cache_kstat {
	kstat_named_t	kmc_buf_size;
	kstat_named_t	kmc_align;
	kstat_named_t	kmc_chunk_size;
	kstat_named_t	kmc_slab_size;
	kstat_named_t	kmc_alloc;
	kstat_named_t	kmc_alloc_fail;
	kstat_named_t	kmc_free;
	kstat_named_t	kmc_depot_alloc;
	kstat_named_t	kmc_depot_free;
	kstat_named_t	kmc_depot_contention;
	kstat_named_t	kmc_slab_alloc;
	kstat_named_t	kmc_slab_free;
	kstat_named_t	kmc_buf_constructed;
	kstat_named_t	kmc_buf_avail;
	kstat_named_t	kmc_buf_inuse;
	kstat_named_t	kmc_buf_total;
	kstat_named_t	kmc_buf_max;
	kstat_named_t	kmc_slab_create;
	kstat_named_t	kmc_slab_destroy;
	kstat_named_t	kmc_vmem_source;
	kstat_named_t	kmc_hash_size;
	kstat_named_t	kmc_hash_lookup_depth;
	kstat_named_t	kmc_hash_rescale;
	kstat_named_t	kmc_full_magazines;
	kstat_named_t	kmc_empty_magazines;
	kstat_named_t	kmc_magazine_size;
	kstat_named_t	kmc_reap;	/* number of kmem_cache_reap() calls */
	kstat_named_t	kmc_defrag;	/* attempts to defrag all partial slabs */
	kstat_named_t	kmc_scan;	/* attempts to defrag one partial slab */
	kstat_named_t	kmc_move_callbacks; /* sum of yes, no, later, dn, dk */
	kstat_named_t	kmc_move_yes;
	kstat_named_t	kmc_move_no;
	kstat_named_t	kmc_move_later;
	kstat_named_t	kmc_move_dont_need;
	kstat_named_t	kmc_move_dont_know; /* obj unrecognized by client ... */
	kstat_named_t	kmc_move_hunt_found; /* ... but found in mag layer */
	kstat_named_t	kmc_move_slabs_freed; /* slabs freed by consolidator */
	kstat_named_t	kmc_move_reclaimable; /* buffers, if consolidator ran */
} kmem_cache_kstat = {
	{ "buf_size",		KSTAT_DATA_UINT64 },
	{ "align",		KSTAT_DATA_UINT64 },
	{ "chunk_size",		KSTAT_DATA_UINT64 },
	{ "slab_size",		KSTAT_DATA_UINT64 },
	{ "alloc",		KSTAT_DATA_UINT64 },
	{ "alloc_fail",		KSTAT_DATA_UINT64 },
	{ "free",		KSTAT_DATA_UINT64 },
	{ "depot_alloc",	KSTAT_DATA_UINT64 },
	{ "depot_free",		KSTAT_DATA_UINT64 },
	{ "depot_contention",	KSTAT_DATA_UINT64 },
	{ "slab_alloc",		KSTAT_DATA_UINT64 },
	{ "slab_free",		KSTAT_DATA_UINT64 },
	{ "buf_constructed",	KSTAT_DATA_UINT64 },
	{ "buf_avail",		KSTAT_DATA_UINT64 },
	{ "buf_inuse",		KSTAT_DATA_UINT64 },
	{ "buf_total",		KSTAT_DATA_UINT64 },
	{ "buf_max",		KSTAT_DATA_UINT64 },
	{ "slab_create",	KSTAT_DATA_UINT64 },
	{ "slab_destroy",	KSTAT_DATA_UINT64 },
	{ "vmem_source",	KSTAT_DATA_UINT64 },
	{ "hash_size",		KSTAT_DATA_UINT64 },
	{ "hash_lookup_depth",	KSTAT_DATA_UINT64 },
	{ "hash_rescale",	KSTAT_DATA_UINT64 },
	{ "full_magazines",	KSTAT_DATA_UINT64 },
	{ "empty_magazines",	KSTAT_DATA_UINT64 },
	{ "magazine_size",	KSTAT_DATA_UINT64 },
	{ "reap",		KSTAT_DATA_UINT64 },
	{ "defrag",		KSTAT_DATA_UINT64 },
	{ "scan",		KSTAT_DATA_UINT64 },
	{ "move_callbacks",	KSTAT_DATA_UINT64 },
	{ "move_yes",		KSTAT_DATA_UINT64 },
	{ "move_no",		KSTAT_DATA_UINT64 },
	{ "move_later",		KSTAT_DATA_UINT64 },
	{ "move_dont_need",	KSTAT_DATA_UINT64 },
	{ "move_dont_know",	KSTAT_DATA_UINT64 },
	{ "move_hunt_found",	KSTAT_DATA_UINT64 },
	{ "move_slabs_freed",	KSTAT_DATA_UINT64 },
	{ "move_reclaimable",	KSTAT_DATA_UINT64 },
};

static kmutex_t kmem_cache_kstat_lock;

/*
 * The default set of caches to back kmem_alloc().
 * These sizes should be reevaluated periodically.
 *
 * We want allocations that are multiples of the coherency granularity
 * (64 bytes) to be satisfied from a cache which is a multiple of 64
 * bytes, so that it will be 64-byte aligned. For all multiples of 64,
 * the next kmem_cache_size greater than or equal to it must be a
 * multiple of 64.
 *
 * We split the table into two sections: size <= 4k and size > 4k. This
 * saves a lot of space and cache footprint in our cache tables.
 */
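/*
 * For example, P2ALIGN(8192 / 7, 64) below rounds 1170 down to 1152, the
 * largest multiple of 64 not exceeding 8192/7, so seven such buffers (8064
 * bytes) fit in an 8K slab while every buffer stays 64-byte aligned.
 */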
static const int kmem_alloc_sizes[] = {
	1 * 8,
	2 * 8,
	3 * 8,
	4 * 8,		5 * 8,		6 * 8,		7 * 8,
	4 * 16,		5 * 16,		6 * 16,		7 * 16,
	4 * 32,		5 * 32,		6 * 32,		7 * 32,
	4 * 64,		5 * 64,		6 * 64,		7 * 64,
	4 * 128,	5 * 128,	6 * 128,	7 * 128,
	P2ALIGN(8192 / 7, 64),
	P2ALIGN(8192 / 6, 64),
	P2ALIGN(8192 / 5, 64),
	P2ALIGN(8192 / 4, 64),
	P2ALIGN(8192 / 3, 64),
	P2ALIGN(8192 / 2, 64),
};

static const int kmem_big_alloc_sizes[] = {
	2 * 4096,	3 * 4096,
	2 * 8192,	3 * 8192,
	4 * 8192,	5 * 8192,	6 * 8192,	7 * 8192,
	8 * 8192,	9 * 8192,	10 * 8192,	11 * 8192,
	12 * 8192,	13 * 8192,	14 * 8192,	15 * 8192,
	16 * 8192
};

#define	KMEM_MAXBUF		4096
#define	KMEM_BIG_MAXBUF_32BIT	32768
#define	KMEM_BIG_MAXBUF		131072

#define	KMEM_BIG_MULTIPLE	4096	/* big_alloc_sizes must be a multiple */
#define	KMEM_BIG_SHIFT		12	/* lg(KMEM_BIG_MULTIPLE) */

static kmem_cache_t *kmem_alloc_table[KMEM_MAXBUF >> KMEM_ALIGN_SHIFT];
static kmem_cache_t *kmem_big_alloc_table[KMEM_BIG_MAXBUF >> KMEM_BIG_SHIFT];

#define	KMEM_ALLOC_TABLE_MAX	(KMEM_MAXBUF >> KMEM_ALIGN_SHIFT)
static size_t kmem_big_alloc_table_max = 0;	/* # of filled elements */
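/*
 * The rows of the magazine-type table below are read per kmem_magtype_t
 * (kmem_impl.h): { magazine size in rounds, magazine alignment, minimum
 * buffer size, maximum buffer size }. Treat this mapping as an annotation
 * for orientation rather than part of this file's contract.
 */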
static kmem_magtype_t kmem_magtype[] = {
	{ 1,	8,	3200,	65536	},
	{ 3,	16,	256,	32768	},
	{ 7,	32,	64,	16384	},
	{ 15,	64,	0,	8192	},
	{ 31,	64,	0,	4096	},
	{ 47,	64,	0,	2048	},
	{ 63,	64,	0,	1024	},
	{ 95,	64,	0,	512	},
	{ 143,	64,	0,	0	},
};

static uint32_t kmem_reaping;
static uint32_t kmem_reaping_idspace;

/*
 * kmem tunables
 */
clock_t kmem_reap_interval;	/* cache reaping rate [15 * HZ ticks] */
int kmem_depot_contention = 3;	/* max failed tryenters per real interval */
pgcnt_t kmem_reapahead = 0;	/* start reaping N pages before pageout */
int kmem_panic = 1;		/* whether to panic on error */
int kmem_logging = 1;		/* kmem_log_enter() override */
uint32_t kmem_mtbf = 0;		/* mean time between failures [default: off] */
size_t kmem_transaction_log_size; /* transaction log size [2% of memory] */
size_t kmem_content_log_size;	/* content log size [2% of memory] */
size_t kmem_failure_log_size;	/* failure log [4 pages per CPU] */
size_t kmem_slab_log_size;	/* slab create log [4 pages per CPU] */
size_t kmem_content_maxsave = 256; /* KMF_CONTENTS max bytes to log */
size_t kmem_lite_minsize = 0;	/* minimum buffer size for KMF_LITE */
size_t kmem_lite_maxalign = 1024; /* maximum buffer alignment for KMF_LITE */
int kmem_lite_pcs = 4;		/* number of PCs to store in KMF_LITE mode */
size_t kmem_maxverify;		/* maximum bytes to inspect in debug routines */
size_t kmem_minfirewall;	/* hardware-enforced redzone threshold */

#ifdef _LP64
size_t kmem_max_cached = KMEM_BIG_MAXBUF;	/* maximum kmem_alloc cache */
#else
size_t kmem_max_cached = KMEM_BIG_MAXBUF_32BIT;	/* maximum kmem_alloc cache */
#endif

#ifdef DEBUG
int kmem_flags = KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | KMF_CONTENTS;
#else
int kmem_flags = 0;
#endif
int kmem_ready;

static kmem_cache_t	*kmem_slab_cache;
static kmem_cache_t	*kmem_bufctl_cache;
static kmem_cache_t	*kmem_bufctl_audit_cache;

static kmutex_t		kmem_cache_lock;	/* inter-cache linkage only */
static list_t		kmem_caches;

static taskq_t		*kmem_taskq;
static kmutex_t		kmem_flags_lock;
static vmem_t		*kmem_metadata_arena;
static vmem_t		*kmem_msb_arena;	/* arena for metadata caches */
static vmem_t		*kmem_cache_arena;
static vmem_t		*kmem_hash_arena;
static vmem_t		*kmem_log_arena;
static vmem_t		*kmem_oversize_arena;
static vmem_t		*kmem_va_arena;
static vmem_t		*kmem_default_arena;
static vmem_t		*kmem_firewall_va_arena;
static vmem_t		*kmem_firewall_arena;

/*
 * kmem slab consolidator thresholds (tunables)
 */
size_t kmem_frag_minslabs = 101;	/* minimum total slabs */
size_t kmem_frag_numer = 1;		/* free buffers (numerator) */
size_t kmem_frag_denom = KMEM_VOID_FRACTION; /* buffers (denominator) */
/*
 * Maximum number of slabs from which to move buffers during a single
 * maintenance interval while the system is not low on memory.
 */
size_t kmem_reclaim_max_slabs = 1;
/*
 * Number of slabs to scan backwards from the end of the partial slab list
 * when searching for buffers to relocate.
 */
size_t kmem_reclaim_scan_range = 12;

/* consolidator knobs */
boolean_t kmem_move_noreap;
boolean_t kmem_move_blocked;
boolean_t kmem_move_fulltilt;
boolean_t kmem_move_any_partial;

#ifdef DEBUG
/*
 * kmem consolidator debug tunables:
 * Ensure code coverage by occasionally running the consolidator even when the
 * caches are not fragmented (they may never be). These intervals are mean time
 * in cache maintenance intervals (kmem_cache_update).
 */
uint32_t kmem_mtb_move = 60;	/* defrag 1 slab (~15min) */
uint32_t kmem_mtb_reap = 1800;	/* defrag all slabs (~7.5hrs) */
#endif	/* DEBUG */

static kmem_cache_t *kmem_defrag_cache;
static kmem_cache_t *kmem_move_cache;
static taskq_t *kmem_move_taskq;

static void kmem_cache_scan(kmem_cache_t *);
static void kmem_cache_defrag(kmem_cache_t *);
static void kmem_slab_prefill(kmem_cache_t *, kmem_slab_t *);


kmem_log_header_t *kmem_transaction_log;
kmem_log_header_t *kmem_content_log;
kmem_log_header_t *kmem_failure_log;
kmem_log_header_t *kmem_slab_log;

static int kmem_lite_count;	/* # of PCs in kmem_buftag_lite_t */

#define	KMEM_BUFTAG_LITE_ENTER(bt, count, caller)			\
	if ((count) > 0) {						\
		pc_t *_s = ((kmem_buftag_lite_t *)(bt))->bt_history;	\
		pc_t *_e;						\
		/* memmove() the old entries down one notch */		\
		for (_e = &_s[(count) - 1]; _e > _s; _e--)		\
			*_e = *(_e - 1);				\
		*_s = (uintptr_t)(caller);				\
	}

#define	KMERR_MODIFIED	0	/* buffer modified while on freelist */
#define	KMERR_REDZONE	1	/* redzone violation (write past end of buf) */
#define	KMERR_DUPFREE	2	/* freed a buffer twice */
#define	KMERR_BADADDR	3	/* freed a bad (unallocated) address */
#define	KMERR_BADBUFTAG	4	/* buftag corrupted */
#define	KMERR_BADBUFCTL	5	/* bufctl corrupted */
#define	KMERR_BADCACHE	6	/* freed a buffer to the wrong cache */
#define	KMERR_BADSIZE	7	/* alloc size != free size */
#define	KMERR_BADBASE	8	/* buffer base address wrong */

struct {
	hrtime_t	kmp_timestamp;	/* timestamp of panic */
	int		kmp_error;	/* type of kmem error */
	void		*kmp_buffer;	/* buffer that induced panic */
	void		*kmp_realbuf;	/* real start address for buffer */
	kmem_cache_t	*kmp_cache;	/* buffer's cache according to client */
	kmem_cache_t	*kmp_realcache;	/* actual cache containing buffer */
kmem_slab_t *kmp_slab; /* slab according to kmem_findslab() */
1131 kmem_bufctl_t *kmp_bufctl; /* bufctl corresponding to buffer */
1132 } kmem_panic_info;
1133
1134
1135 static void
1136 copy_pattern(uint64_t pattern, void *buf_arg, size_t size)
1137 {
1138 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1139 uint64_t *buf = buf_arg;
1140
1141 while (buf < bufend)
1142 *buf++ = pattern;
1143 }
1144
1145 static void *
1146 verify_pattern(uint64_t pattern, void *buf_arg, size_t size)
1147 {
1148 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1149 uint64_t *buf;
1150
1151 for (buf = buf_arg; buf < bufend; buf++)
1152 if (*buf != pattern)
1153 return (buf);
1154 return (NULL);
1155 }
1156
1157 static void *
1158 verify_and_copy_pattern(uint64_t old, uint64_t new, void *buf_arg, size_t size)
1159 {
1160 uint64_t *bufend = (uint64_t *)((char *)buf_arg + size);
1161 uint64_t *buf;
1162
1163 for (buf = buf_arg; buf < bufend; buf++) {
1164 if (*buf != old) {
1165 copy_pattern(old, buf_arg,
1166 (char *)buf - (char *)buf_arg);
1167 return (buf);
1168 }
1169 *buf = new;
1170 }
1171
1172 return (NULL);
1173 }
1174
1175 static void
1176 kmem_cache_applyall(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1177 {
1178 kmem_cache_t *cp;
1179
1180 mutex_enter(&kmem_cache_lock);
1181 for (cp = list_head(&kmem_caches); cp != NULL;
1182 cp = list_next(&kmem_caches, cp))
1183 if (tq != NULL)
1184 (void) taskq_dispatch(tq, (task_func_t *)func, cp,
1185 tqflag);
1186 else
1187 func(cp);
1188 mutex_exit(&kmem_cache_lock);
1189 }
1190
1191 static void
1192 kmem_cache_applyall_id(void (*func)(kmem_cache_t *), taskq_t *tq, int tqflag)
1193 {
1194 kmem_cache_t *cp;
1195
1196 mutex_enter(&kmem_cache_lock);
1197 for (cp = list_head(&kmem_caches); cp != NULL;
1198 cp = list_next(&kmem_caches, cp)) {
1199 if (!(cp->cache_cflags & KMC_IDENTIFIER))
1200 continue;
1201 if (tq != NULL)
1202 (void) taskq_dispatch(tq, (task_func_t *)func, cp,
1203 tqflag);
1204 else
1205 func(cp);
1206 }
1207 mutex_exit(&kmem_cache_lock);
1208 }
1209
1210 /*
1211 * Debugging support. Given a buffer address, find its slab.
1212 */ 1213 static kmem_slab_t * 1214 kmem_findslab(kmem_cache_t *cp, void *buf) 1215 { 1216 kmem_slab_t *sp; 1217 1218 mutex_enter(&cp->cache_lock); 1219 for (sp = list_head(&cp->cache_complete_slabs); sp != NULL; 1220 sp = list_next(&cp->cache_complete_slabs, sp)) { 1221 if (KMEM_SLAB_MEMBER(sp, buf)) { 1222 mutex_exit(&cp->cache_lock); 1223 return (sp); 1224 } 1225 } 1226 for (sp = avl_first(&cp->cache_partial_slabs); sp != NULL; 1227 sp = AVL_NEXT(&cp->cache_partial_slabs, sp)) { 1228 if (KMEM_SLAB_MEMBER(sp, buf)) { 1229 mutex_exit(&cp->cache_lock); 1230 return (sp); 1231 } 1232 } 1233 mutex_exit(&cp->cache_lock); 1234 1235 return (NULL); 1236 } 1237 1238 static void 1239 kmem_error(int error, kmem_cache_t *cparg, void *bufarg) 1240 { 1241 kmem_buftag_t *btp = NULL; 1242 kmem_bufctl_t *bcp = NULL; 1243 kmem_cache_t *cp = cparg; 1244 kmem_slab_t *sp; 1245 uint64_t *off; 1246 void *buf = bufarg; 1247 1248 kmem_logging = 0; /* stop logging when a bad thing happens */ 1249 1250 kmem_panic_info.kmp_timestamp = gethrtime(); 1251 1252 sp = kmem_findslab(cp, buf); 1253 if (sp == NULL) { 1254 for (cp = list_tail(&kmem_caches); cp != NULL; 1255 cp = list_prev(&kmem_caches, cp)) { 1256 if ((sp = kmem_findslab(cp, buf)) != NULL) 1257 break; 1258 } 1259 } 1260 1261 if (sp == NULL) { 1262 cp = NULL; 1263 error = KMERR_BADADDR; 1264 } else { 1265 if (cp != cparg) 1266 error = KMERR_BADCACHE; 1267 else 1268 buf = (char *)bufarg - ((uintptr_t)bufarg - 1269 (uintptr_t)sp->slab_base) % cp->cache_chunksize; 1270 if (buf != bufarg) 1271 error = KMERR_BADBASE; 1272 if (cp->cache_flags & KMF_BUFTAG) 1273 btp = KMEM_BUFTAG(cp, buf); 1274 if (cp->cache_flags & KMF_HASH) { 1275 mutex_enter(&cp->cache_lock); 1276 for (bcp = *KMEM_HASH(cp, buf); bcp; bcp = bcp->bc_next) 1277 if (bcp->bc_addr == buf) 1278 break; 1279 mutex_exit(&cp->cache_lock); 1280 if (bcp == NULL && btp != NULL) 1281 bcp = btp->bt_bufctl; 1282 if (kmem_findslab(cp->cache_bufctl_cache, bcp) == 1283 NULL || P2PHASE((uintptr_t)bcp, KMEM_ALIGN) || 1284 bcp->bc_addr != buf) { 1285 error = KMERR_BADBUFCTL; 1286 bcp = NULL; 1287 } 1288 } 1289 } 1290 1291 kmem_panic_info.kmp_error = error; 1292 kmem_panic_info.kmp_buffer = bufarg; 1293 kmem_panic_info.kmp_realbuf = buf; 1294 kmem_panic_info.kmp_cache = cparg; 1295 kmem_panic_info.kmp_realcache = cp; 1296 kmem_panic_info.kmp_slab = sp; 1297 kmem_panic_info.kmp_bufctl = bcp; 1298 1299 printf("kernel memory allocator: "); 1300 1301 switch (error) { 1302 1303 case KMERR_MODIFIED: 1304 printf("buffer modified after being freed\n"); 1305 off = verify_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 1306 if (off == NULL) /* shouldn't happen */ 1307 off = buf; 1308 printf("modification occurred at offset 0x%lx " 1309 "(0x%llx replaced by 0x%llx)\n", 1310 (uintptr_t)off - (uintptr_t)buf, 1311 (longlong_t)KMEM_FREE_PATTERN, (longlong_t)*off); 1312 break; 1313 1314 case KMERR_REDZONE: 1315 printf("redzone violation: write past end of buffer\n"); 1316 break; 1317 1318 case KMERR_BADADDR: 1319 printf("invalid free: buffer not in cache\n"); 1320 break; 1321 1322 case KMERR_DUPFREE: 1323 printf("duplicate free: buffer freed twice\n"); 1324 break; 1325 1326 case KMERR_BADBUFTAG: 1327 printf("boundary tag corrupted\n"); 1328 printf("bcp ^ bxstat = %lx, should be %lx\n", 1329 (intptr_t)btp->bt_bufctl ^ btp->bt_bxstat, 1330 KMEM_BUFTAG_FREE); 1331 break; 1332 1333 case KMERR_BADBUFCTL: 1334 printf("bufctl corrupted\n"); 1335 break; 1336 1337 case KMERR_BADCACHE: 1338 printf("buffer freed to wrong cache\n"); 1339 
printf("buffer was allocated from %s,\n", cp->cache_name); 1340 printf("caller attempting free to %s.\n", cparg->cache_name); 1341 break; 1342 1343 case KMERR_BADSIZE: 1344 printf("bad free: free size (%u) != alloc size (%u)\n", 1345 KMEM_SIZE_DECODE(((uint32_t *)btp)[0]), 1346 KMEM_SIZE_DECODE(((uint32_t *)btp)[1])); 1347 break; 1348 1349 case KMERR_BADBASE: 1350 printf("bad free: free address (%p) != alloc address (%p)\n", 1351 bufarg, buf); 1352 break; 1353 } 1354 1355 printf("buffer=%p bufctl=%p cache: %s\n", 1356 bufarg, (void *)bcp, cparg->cache_name); 1357 1358 if (bcp != NULL && (cp->cache_flags & KMF_AUDIT) && 1359 error != KMERR_BADBUFCTL) { 1360 int d; 1361 timestruc_t ts; 1362 kmem_bufctl_audit_t *bcap = (kmem_bufctl_audit_t *)bcp; 1363 1364 hrt2ts(kmem_panic_info.kmp_timestamp - bcap->bc_timestamp, &ts); 1365 printf("previous transaction on buffer %p:\n", buf); 1366 printf("thread=%p time=T-%ld.%09ld slab=%p cache: %s\n", 1367 (void *)bcap->bc_thread, ts.tv_sec, ts.tv_nsec, 1368 (void *)sp, cp->cache_name); 1369 for (d = 0; d < MIN(bcap->bc_depth, KMEM_STACK_DEPTH); d++) { 1370 ulong_t off; 1371 char *sym = kobj_getsymname(bcap->bc_stack[d], &off); 1372 printf("%s+%lx\n", sym ? sym : "?", off); 1373 } 1374 } 1375 if (kmem_panic > 0) 1376 panic("kernel heap corruption detected"); 1377 if (kmem_panic == 0) 1378 debug_enter(NULL); 1379 kmem_logging = 1; /* resume logging */ 1380 } 1381 1382 static kmem_log_header_t * 1383 kmem_log_init(size_t logsize) 1384 { 1385 kmem_log_header_t *lhp; 1386 int nchunks = 4 * max_ncpus; 1387 size_t lhsize = (size_t)&((kmem_log_header_t *)0)->lh_cpu[max_ncpus]; 1388 int i; 1389 1390 /* 1391 * Make sure that lhp->lh_cpu[] is nicely aligned 1392 * to prevent false sharing of cache lines. 1393 */ 1394 lhsize = P2ROUNDUP(lhsize, KMEM_ALIGN); 1395 lhp = vmem_xalloc(kmem_log_arena, lhsize, 64, P2NPHASE(lhsize, 64), 0, 1396 NULL, NULL, VM_SLEEP); 1397 bzero(lhp, lhsize); 1398 1399 mutex_init(&lhp->lh_lock, NULL, MUTEX_DEFAULT, NULL); 1400 lhp->lh_nchunks = nchunks; 1401 lhp->lh_chunksize = P2ROUNDUP(logsize / nchunks + 1, PAGESIZE); 1402 lhp->lh_base = vmem_alloc(kmem_log_arena, 1403 lhp->lh_chunksize * nchunks, VM_SLEEP); 1404 lhp->lh_free = vmem_alloc(kmem_log_arena, 1405 nchunks * sizeof (int), VM_SLEEP); 1406 bzero(lhp->lh_base, lhp->lh_chunksize * nchunks); 1407 1408 for (i = 0; i < max_ncpus; i++) { 1409 kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[i]; 1410 mutex_init(&clhp->clh_lock, NULL, MUTEX_DEFAULT, NULL); 1411 clhp->clh_chunk = i; 1412 } 1413 1414 for (i = max_ncpus; i < nchunks; i++) 1415 lhp->lh_free[i] = i; 1416 1417 lhp->lh_head = max_ncpus; 1418 lhp->lh_tail = 0; 1419 1420 return (lhp); 1421 } 1422 1423 static void * 1424 kmem_log_enter(kmem_log_header_t *lhp, void *data, size_t size) 1425 { 1426 void *logspace; 1427 kmem_cpu_log_header_t *clhp = &lhp->lh_cpu[CPU->cpu_seqid]; 1428 1429 if (lhp == NULL || kmem_logging == 0 || panicstr) 1430 return (NULL); 1431 1432 mutex_enter(&clhp->clh_lock); 1433 clhp->clh_hits++; 1434 if (size > clhp->clh_avail) { 1435 mutex_enter(&lhp->lh_lock); 1436 lhp->lh_hits++; 1437 lhp->lh_free[lhp->lh_tail] = clhp->clh_chunk; 1438 lhp->lh_tail = (lhp->lh_tail + 1) % lhp->lh_nchunks; 1439 clhp->clh_chunk = lhp->lh_free[lhp->lh_head]; 1440 lhp->lh_head = (lhp->lh_head + 1) % lhp->lh_nchunks; 1441 clhp->clh_current = lhp->lh_base + 1442 clhp->clh_chunk * lhp->lh_chunksize; 1443 clhp->clh_avail = lhp->lh_chunksize; 1444 if (size > lhp->lh_chunksize) 1445 size = lhp->lh_chunksize; 1446 mutex_exit(&lhp->lh_lock); 
1447 } 1448 logspace = clhp->clh_current; 1449 clhp->clh_current += size; 1450 clhp->clh_avail -= size; 1451 bcopy(data, logspace, size); 1452 mutex_exit(&clhp->clh_lock); 1453 return (logspace); 1454 } 1455 1456 #define KMEM_AUDIT(lp, cp, bcp) \ 1457 { \ 1458 kmem_bufctl_audit_t *_bcp = (kmem_bufctl_audit_t *)(bcp); \ 1459 _bcp->bc_timestamp = gethrtime(); \ 1460 _bcp->bc_thread = curthread; \ 1461 _bcp->bc_depth = getpcstack(_bcp->bc_stack, KMEM_STACK_DEPTH); \ 1462 _bcp->bc_lastlog = kmem_log_enter((lp), _bcp, sizeof (*_bcp)); \ 1463 } 1464 1465 static void 1466 kmem_log_event(kmem_log_header_t *lp, kmem_cache_t *cp, 1467 kmem_slab_t *sp, void *addr) 1468 { 1469 kmem_bufctl_audit_t bca; 1470 1471 bzero(&bca, sizeof (kmem_bufctl_audit_t)); 1472 bca.bc_addr = addr; 1473 bca.bc_slab = sp; 1474 bca.bc_cache = cp; 1475 KMEM_AUDIT(lp, cp, &bca); 1476 } 1477 1478 /* 1479 * Create a new slab for cache cp. 1480 */ 1481 static kmem_slab_t * 1482 kmem_slab_create(kmem_cache_t *cp, int kmflag) 1483 { 1484 size_t slabsize = cp->cache_slabsize; 1485 size_t chunksize = cp->cache_chunksize; 1486 int cache_flags = cp->cache_flags; 1487 size_t color, chunks; 1488 char *buf, *slab; 1489 kmem_slab_t *sp; 1490 kmem_bufctl_t *bcp; 1491 vmem_t *vmp = cp->cache_arena; 1492 1493 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 1494 1495 color = cp->cache_color + cp->cache_align; 1496 if (color > cp->cache_maxcolor) 1497 color = cp->cache_mincolor; 1498 cp->cache_color = color; 1499 1500 slab = vmem_alloc(vmp, slabsize, kmflag & KM_VMFLAGS); 1501 1502 if (slab == NULL) 1503 goto vmem_alloc_failure; 1504 1505 ASSERT(P2PHASE((uintptr_t)slab, vmp->vm_quantum) == 0); 1506 1507 /* 1508 * Reverify what was already checked in kmem_cache_set_move(), since the 1509 * consolidator depends (for correctness) on slabs being initialized 1510 * with the 0xbaddcafe memory pattern (setting a low order bit usable by 1511 * clients to distinguish uninitialized memory from known objects). 
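*
* For illustration, a client's move callback can test that bit to
* recognize memory it has never touched. A hedged sketch (hypothetical
* client type and field names, not part of this interface):
*
*    object_t *op = old;
*    if ((uintptr_t)op->o_container & 1)
*        return (KMEM_CBRC_DONT_KNOW);    /* baddcafe or deadbeef */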
1512 */ 1513 ASSERT((cp->cache_move == NULL) || !(cp->cache_cflags & KMC_NOTOUCH)); 1514 if (!(cp->cache_cflags & KMC_NOTOUCH)) 1515 copy_pattern(KMEM_UNINITIALIZED_PATTERN, slab, slabsize); 1516 1517 if (cache_flags & KMF_HASH) { 1518 if ((sp = kmem_cache_alloc(kmem_slab_cache, kmflag)) == NULL) 1519 goto slab_alloc_failure; 1520 chunks = (slabsize - color) / chunksize; 1521 } else { 1522 sp = KMEM_SLAB(cp, slab); 1523 chunks = (slabsize - sizeof (kmem_slab_t) - color) / chunksize; 1524 } 1525 1526 sp->slab_cache = cp; 1527 sp->slab_head = NULL; 1528 sp->slab_refcnt = 0; 1529 sp->slab_base = buf = slab + color; 1530 sp->slab_chunks = chunks; 1531 sp->slab_stuck_offset = (uint32_t)-1; 1532 sp->slab_later_count = 0; 1533 sp->slab_flags = 0; 1534 1535 ASSERT(chunks > 0); 1536 while (chunks-- != 0) { 1537 if (cache_flags & KMF_HASH) { 1538 bcp = kmem_cache_alloc(cp->cache_bufctl_cache, kmflag); 1539 if (bcp == NULL) 1540 goto bufctl_alloc_failure; 1541 if (cache_flags & KMF_AUDIT) { 1542 kmem_bufctl_audit_t *bcap = 1543 (kmem_bufctl_audit_t *)bcp; 1544 bzero(bcap, sizeof (kmem_bufctl_audit_t)); 1545 bcap->bc_cache = cp; 1546 } 1547 bcp->bc_addr = buf; 1548 bcp->bc_slab = sp; 1549 } else { 1550 bcp = KMEM_BUFCTL(cp, buf); 1551 } 1552 if (cache_flags & KMF_BUFTAG) { 1553 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 1554 btp->bt_redzone = KMEM_REDZONE_PATTERN; 1555 btp->bt_bufctl = bcp; 1556 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 1557 if (cache_flags & KMF_DEADBEEF) { 1558 copy_pattern(KMEM_FREE_PATTERN, buf, 1559 cp->cache_verify); 1560 } 1561 } 1562 bcp->bc_next = sp->slab_head; 1563 sp->slab_head = bcp; 1564 buf += chunksize; 1565 } 1566 1567 kmem_log_event(kmem_slab_log, cp, sp, slab); 1568 1569 return (sp); 1570 1571 bufctl_alloc_failure: 1572 1573 while ((bcp = sp->slab_head) != NULL) { 1574 sp->slab_head = bcp->bc_next; 1575 kmem_cache_free(cp->cache_bufctl_cache, bcp); 1576 } 1577 kmem_cache_free(kmem_slab_cache, sp); 1578 1579 slab_alloc_failure: 1580 1581 vmem_free(vmp, slab, slabsize); 1582 1583 vmem_alloc_failure: 1584 1585 kmem_log_event(kmem_failure_log, cp, NULL, NULL); 1586 atomic_inc_64(&cp->cache_alloc_fail); 1587 1588 return (NULL); 1589 } 1590 1591 /* 1592 * Destroy a slab. 1593 */ 1594 static void 1595 kmem_slab_destroy(kmem_cache_t *cp, kmem_slab_t *sp) 1596 { 1597 vmem_t *vmp = cp->cache_arena; 1598 void *slab = (void *)P2ALIGN((uintptr_t)sp->slab_base, vmp->vm_quantum); 1599 1600 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 1601 ASSERT(sp->slab_refcnt == 0); 1602 1603 if (cp->cache_flags & KMF_HASH) { 1604 kmem_bufctl_t *bcp; 1605 while ((bcp = sp->slab_head) != NULL) { 1606 sp->slab_head = bcp->bc_next; 1607 kmem_cache_free(cp->cache_bufctl_cache, bcp); 1608 } 1609 kmem_cache_free(kmem_slab_cache, sp); 1610 } 1611 vmem_free(vmp, slab, cp->cache_slabsize); 1612 } 1613 1614 static void * 1615 kmem_slab_alloc_impl(kmem_cache_t *cp, kmem_slab_t *sp, boolean_t prefill) 1616 { 1617 kmem_bufctl_t *bcp, **hash_bucket; 1618 void *buf; 1619 boolean_t new_slab = (sp->slab_refcnt == 0); 1620 1621 ASSERT(MUTEX_HELD(&cp->cache_lock)); 1622 /* 1623 * kmem_slab_alloc() drops cache_lock when it creates a new slab, so we 1624 * can't ASSERT(avl_is_empty(&cp->cache_partial_slabs)) here when the 1625 * slab is newly created. 
1626 */ 1627 ASSERT(new_slab || (KMEM_SLAB_IS_PARTIAL(sp) && 1628 (sp == avl_first(&cp->cache_partial_slabs)))); 1629 ASSERT(sp->slab_cache == cp); 1630 1631 cp->cache_slab_alloc++; 1632 cp->cache_bufslab--; 1633 sp->slab_refcnt++; 1634 1635 bcp = sp->slab_head; 1636 sp->slab_head = bcp->bc_next; 1637 1638 if (cp->cache_flags & KMF_HASH) { 1639 /* 1640 * Add buffer to allocated-address hash table. 1641 */ 1642 buf = bcp->bc_addr; 1643 hash_bucket = KMEM_HASH(cp, buf); 1644 bcp->bc_next = *hash_bucket; 1645 *hash_bucket = bcp; 1646 if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) { 1647 KMEM_AUDIT(kmem_transaction_log, cp, bcp); 1648 } 1649 } else { 1650 buf = KMEM_BUF(cp, bcp); 1651 } 1652 1653 ASSERT(KMEM_SLAB_MEMBER(sp, buf)); 1654 1655 if (sp->slab_head == NULL) { 1656 ASSERT(KMEM_SLAB_IS_ALL_USED(sp)); 1657 if (new_slab) { 1658 ASSERT(sp->slab_chunks == 1); 1659 } else { 1660 ASSERT(sp->slab_chunks > 1); /* the slab was partial */ 1661 avl_remove(&cp->cache_partial_slabs, sp); 1662 sp->slab_later_count = 0; /* clear history */ 1663 sp->slab_flags &= ~KMEM_SLAB_NOMOVE; 1664 sp->slab_stuck_offset = (uint32_t)-1; 1665 } 1666 list_insert_head(&cp->cache_complete_slabs, sp); 1667 cp->cache_complete_slab_count++; 1668 return (buf); 1669 } 1670 1671 ASSERT(KMEM_SLAB_IS_PARTIAL(sp)); 1672 /* 1673 * Peek to see if the magazine layer is enabled before 1674 * we prefill. We're not holding the cpu cache lock, 1675 * so the peek could be wrong, but there's no harm in it. 1676 */ 1677 if (new_slab && prefill && (cp->cache_flags & KMF_PREFILL) && 1678 (KMEM_CPU_CACHE(cp)->cc_magsize != 0)) { 1679 kmem_slab_prefill(cp, sp); 1680 return (buf); 1681 } 1682 1683 if (new_slab) { 1684 avl_add(&cp->cache_partial_slabs, sp); 1685 return (buf); 1686 } 1687 1688 /* 1689 * The slab is now more allocated than it was, so the 1690 * order remains unchanged. 1691 */ 1692 ASSERT(!avl_update(&cp->cache_partial_slabs, sp)); 1693 return (buf); 1694 } 1695 1696 /* 1697 * Allocate a raw (unconstructed) buffer from cp's slab layer. 1698 */ 1699 static void * 1700 kmem_slab_alloc(kmem_cache_t *cp, int kmflag) 1701 { 1702 kmem_slab_t *sp; 1703 void *buf; 1704 boolean_t test_destructor; 1705 1706 mutex_enter(&cp->cache_lock); 1707 test_destructor = (cp->cache_slab_alloc == 0); 1708 sp = avl_first(&cp->cache_partial_slabs); 1709 if (sp == NULL) { 1710 ASSERT(cp->cache_bufslab == 0); 1711 1712 /* 1713 * The freelist is empty. Create a new slab. 1714 */ 1715 mutex_exit(&cp->cache_lock); 1716 if ((sp = kmem_slab_create(cp, kmflag)) == NULL) { 1717 return (NULL); 1718 } 1719 mutex_enter(&cp->cache_lock); 1720 cp->cache_slab_create++; 1721 if ((cp->cache_buftotal += sp->slab_chunks) > cp->cache_bufmax) 1722 cp->cache_bufmax = cp->cache_buftotal; 1723 cp->cache_bufslab += sp->slab_chunks; 1724 } 1725 1726 buf = kmem_slab_alloc_impl(cp, sp, B_TRUE); 1727 ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) == 1728 (cp->cache_complete_slab_count + 1729 avl_numnodes(&cp->cache_partial_slabs) + 1730 (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount))); 1731 mutex_exit(&cp->cache_lock); 1732 1733 if (test_destructor && cp->cache_destructor != NULL) { 1734 /* 1735 * On the first kmem_slab_alloc(), assert that it is valid to 1736 * call the destructor on a newly constructed object without any 1737 * client involvement. 
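*
* In other words, the constructor and destructor must be self-contained
* inverses. A minimal conforming pair might look like this (hypothetical
* names, shown only as a sketch):
*
*    int
*    obj_constructor(void *buf, void *arg, int kmflag)
*    {
*        mutex_init(&((obj_t *)buf)->o_lock, NULL, MUTEX_DEFAULT, NULL);
*        return (0);
*    }
*
*    void
*    obj_destructor(void *buf, void *arg)
*    {
*        mutex_destroy(&((obj_t *)buf)->o_lock);
*    }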
1738 */ 1739 if ((cp->cache_constructor == NULL) || 1740 cp->cache_constructor(buf, cp->cache_private, 1741 kmflag) == 0) { 1742 cp->cache_destructor(buf, cp->cache_private); 1743 } 1744 copy_pattern(KMEM_UNINITIALIZED_PATTERN, buf, 1745 cp->cache_bufsize); 1746 if (cp->cache_flags & KMF_DEADBEEF) { 1747 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 1748 } 1749 } 1750 1751 return (buf); 1752 } 1753 1754 static void kmem_slab_move_yes(kmem_cache_t *, kmem_slab_t *, void *); 1755 1756 /* 1757 * Free a raw (unconstructed) buffer to cp's slab layer. 1758 */ 1759 static void 1760 kmem_slab_free(kmem_cache_t *cp, void *buf) 1761 { 1762 kmem_slab_t *sp; 1763 kmem_bufctl_t *bcp, **prev_bcpp; 1764 1765 ASSERT(buf != NULL); 1766 1767 mutex_enter(&cp->cache_lock); 1768 cp->cache_slab_free++; 1769 1770 if (cp->cache_flags & KMF_HASH) { 1771 /* 1772 * Look up buffer in allocated-address hash table. 1773 */ 1774 prev_bcpp = KMEM_HASH(cp, buf); 1775 while ((bcp = *prev_bcpp) != NULL) { 1776 if (bcp->bc_addr == buf) { 1777 *prev_bcpp = bcp->bc_next; 1778 sp = bcp->bc_slab; 1779 break; 1780 } 1781 cp->cache_lookup_depth++; 1782 prev_bcpp = &bcp->bc_next; 1783 } 1784 } else { 1785 bcp = KMEM_BUFCTL(cp, buf); 1786 sp = KMEM_SLAB(cp, buf); 1787 } 1788 1789 if (bcp == NULL || sp->slab_cache != cp || !KMEM_SLAB_MEMBER(sp, buf)) { 1790 mutex_exit(&cp->cache_lock); 1791 kmem_error(KMERR_BADADDR, cp, buf); 1792 return; 1793 } 1794 1795 if (KMEM_SLAB_OFFSET(sp, buf) == sp->slab_stuck_offset) { 1796 /* 1797 * If this is the buffer that prevented the consolidator from 1798 * clearing the slab, we can reset the slab flags now that the 1799 * buffer is freed. (It makes sense to do this in 1800 * kmem_cache_free(), where the client gives up ownership of the 1801 * buffer, but on the hot path the test is too expensive.) 1802 */ 1803 kmem_slab_move_yes(cp, sp, buf); 1804 } 1805 1806 if ((cp->cache_flags & (KMF_AUDIT | KMF_BUFTAG)) == KMF_AUDIT) { 1807 if (cp->cache_flags & KMF_CONTENTS) 1808 ((kmem_bufctl_audit_t *)bcp)->bc_contents = 1809 kmem_log_enter(kmem_content_log, buf, 1810 cp->cache_contents); 1811 KMEM_AUDIT(kmem_transaction_log, cp, bcp); 1812 } 1813 1814 bcp->bc_next = sp->slab_head; 1815 sp->slab_head = bcp; 1816 1817 cp->cache_bufslab++; 1818 ASSERT(sp->slab_refcnt >= 1); 1819 1820 if (--sp->slab_refcnt == 0) { 1821 /* 1822 * There are no outstanding allocations from this slab, 1823 * so we can reclaim the memory. 1824 */ 1825 if (sp->slab_chunks == 1) { 1826 list_remove(&cp->cache_complete_slabs, sp); 1827 cp->cache_complete_slab_count--; 1828 } else { 1829 avl_remove(&cp->cache_partial_slabs, sp); 1830 } 1831 1832 cp->cache_buftotal -= sp->slab_chunks; 1833 cp->cache_bufslab -= sp->slab_chunks; 1834 /* 1835 * Defer releasing the slab to the virtual memory subsystem 1836 * while there is a pending move callback, since we guarantee 1837 * that buffers passed to the move callback have only been 1838 * touched by kmem or by the client itself. Since the memory 1839 * patterns baddcafe (uninitialized) and deadbeef (freed) both 1840 * set at least one of the two lowest order bits, the client can 1841 * test those bits in the move callback to determine whether or 1842 * not it knows about the buffer (assuming that the client also 1843 * sets one of those low order bits whenever it frees a buffer). 
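*
* A sketch of the client-side half of that convention (hypothetical
* field names; the client poisons a pointer field before freeing):
*
*    op->o_container = (container_t *)((uintptr_t)op->o_container | 1);
*    kmem_cache_free(object_cache, op);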
1844 */ 1845 if (cp->cache_defrag == NULL || 1846 (avl_is_empty(&cp->cache_defrag->kmd_moves_pending) && 1847 !(sp->slab_flags & KMEM_SLAB_MOVE_PENDING))) { 1848 cp->cache_slab_destroy++; 1849 mutex_exit(&cp->cache_lock); 1850 kmem_slab_destroy(cp, sp); 1851 } else { 1852 list_t *deadlist = &cp->cache_defrag->kmd_deadlist; 1853 /* 1854 * Slabs are inserted at both ends of the deadlist to 1855 * distinguish between slabs freed while move callbacks 1856 * are pending (list head) and a slab freed while the 1857 * lock is dropped in kmem_move_buffers() (list tail) so 1858 * that in both cases slab_destroy() is called from the 1859 * right context. 1860 */ 1861 if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) { 1862 list_insert_tail(deadlist, sp); 1863 } else { 1864 list_insert_head(deadlist, sp); 1865 } 1866 cp->cache_defrag->kmd_deadcount++; 1867 mutex_exit(&cp->cache_lock); 1868 } 1869 return; 1870 } 1871 1872 if (bcp->bc_next == NULL) { 1873 /* Transition the slab from completely allocated to partial. */ 1874 ASSERT(sp->slab_refcnt == (sp->slab_chunks - 1)); 1875 ASSERT(sp->slab_chunks > 1); 1876 list_remove(&cp->cache_complete_slabs, sp); 1877 cp->cache_complete_slab_count--; 1878 avl_add(&cp->cache_partial_slabs, sp); 1879 } else { 1880 (void) avl_update_gt(&cp->cache_partial_slabs, sp); 1881 } 1882 1883 ASSERT((cp->cache_slab_create - cp->cache_slab_destroy) == 1884 (cp->cache_complete_slab_count + 1885 avl_numnodes(&cp->cache_partial_slabs) + 1886 (cp->cache_defrag == NULL ? 0 : cp->cache_defrag->kmd_deadcount))); 1887 mutex_exit(&cp->cache_lock); 1888 } 1889 1890 /* 1891 * Return -1 if kmem_error, 1 if constructor fails, 0 if successful. 1892 */ 1893 static int 1894 kmem_cache_alloc_debug(kmem_cache_t *cp, void *buf, int kmflag, int construct, 1895 caddr_t caller) 1896 { 1897 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 1898 kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl; 1899 uint32_t mtbf; 1900 1901 if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) { 1902 kmem_error(KMERR_BADBUFTAG, cp, buf); 1903 return (-1); 1904 } 1905 1906 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_ALLOC; 1907 1908 if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) { 1909 kmem_error(KMERR_BADBUFCTL, cp, buf); 1910 return (-1); 1911 } 1912 1913 if (cp->cache_flags & KMF_DEADBEEF) { 1914 if (!construct && (cp->cache_flags & KMF_LITE)) { 1915 if (*(uint64_t *)buf != KMEM_FREE_PATTERN) { 1916 kmem_error(KMERR_MODIFIED, cp, buf); 1917 return (-1); 1918 } 1919 if (cp->cache_constructor != NULL) 1920 *(uint64_t *)buf = btp->bt_redzone; 1921 else 1922 *(uint64_t *)buf = KMEM_UNINITIALIZED_PATTERN; 1923 } else { 1924 construct = 1; 1925 if (verify_and_copy_pattern(KMEM_FREE_PATTERN, 1926 KMEM_UNINITIALIZED_PATTERN, buf, 1927 cp->cache_verify)) { 1928 kmem_error(KMERR_MODIFIED, cp, buf); 1929 return (-1); 1930 } 1931 } 1932 } 1933 btp->bt_redzone = KMEM_REDZONE_PATTERN; 1934 1935 if ((mtbf = kmem_mtbf | cp->cache_mtbf) != 0 && 1936 gethrtime() % mtbf == 0 && 1937 (kmflag & (KM_NOSLEEP | KM_PANIC)) == KM_NOSLEEP) { 1938 kmem_log_event(kmem_failure_log, cp, NULL, NULL); 1939 if (!construct && cp->cache_destructor != NULL) 1940 cp->cache_destructor(buf, cp->cache_private); 1941 } else { 1942 mtbf = 0; 1943 } 1944 1945 if (mtbf || (construct && cp->cache_constructor != NULL && 1946 cp->cache_constructor(buf, cp->cache_private, kmflag) != 0)) { 1947 atomic_inc_64(&cp->cache_alloc_fail); 1948 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 1949 if (cp->cache_flags & KMF_DEADBEEF) 1950 
copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 1951 kmem_slab_free(cp, buf); 1952 return (1); 1953 } 1954 1955 if (cp->cache_flags & KMF_AUDIT) { 1956 KMEM_AUDIT(kmem_transaction_log, cp, bcp); 1957 } 1958 1959 if ((cp->cache_flags & KMF_LITE) && 1960 !(cp->cache_cflags & KMC_KMEM_ALLOC)) { 1961 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller); 1962 } 1963 1964 return (0); 1965 } 1966 1967 static int 1968 kmem_cache_free_debug(kmem_cache_t *cp, void *buf, caddr_t caller) 1969 { 1970 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 1971 kmem_bufctl_audit_t *bcp = (kmem_bufctl_audit_t *)btp->bt_bufctl; 1972 kmem_slab_t *sp; 1973 1974 if (btp->bt_bxstat != ((intptr_t)bcp ^ KMEM_BUFTAG_ALLOC)) { 1975 if (btp->bt_bxstat == ((intptr_t)bcp ^ KMEM_BUFTAG_FREE)) { 1976 kmem_error(KMERR_DUPFREE, cp, buf); 1977 return (-1); 1978 } 1979 sp = kmem_findslab(cp, buf); 1980 if (sp == NULL || sp->slab_cache != cp) 1981 kmem_error(KMERR_BADADDR, cp, buf); 1982 else 1983 kmem_error(KMERR_REDZONE, cp, buf); 1984 return (-1); 1985 } 1986 1987 btp->bt_bxstat = (intptr_t)bcp ^ KMEM_BUFTAG_FREE; 1988 1989 if ((cp->cache_flags & KMF_HASH) && bcp->bc_addr != buf) { 1990 kmem_error(KMERR_BADBUFCTL, cp, buf); 1991 return (-1); 1992 } 1993 1994 if (btp->bt_redzone != KMEM_REDZONE_PATTERN) { 1995 kmem_error(KMERR_REDZONE, cp, buf); 1996 return (-1); 1997 } 1998 1999 if (cp->cache_flags & KMF_AUDIT) { 2000 if (cp->cache_flags & KMF_CONTENTS) 2001 bcp->bc_contents = kmem_log_enter(kmem_content_log, 2002 buf, cp->cache_contents); 2003 KMEM_AUDIT(kmem_transaction_log, cp, bcp); 2004 } 2005 2006 if ((cp->cache_flags & KMF_LITE) && 2007 !(cp->cache_cflags & KMC_KMEM_ALLOC)) { 2008 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller); 2009 } 2010 2011 if (cp->cache_flags & KMF_DEADBEEF) { 2012 if (cp->cache_flags & KMF_LITE) 2013 btp->bt_redzone = *(uint64_t *)buf; 2014 else if (cp->cache_destructor != NULL) 2015 cp->cache_destructor(buf, cp->cache_private); 2016 2017 copy_pattern(KMEM_FREE_PATTERN, buf, cp->cache_verify); 2018 } 2019 2020 return (0); 2021 } 2022 2023 /* 2024 * Free each object in magazine mp to cp's slab layer, and free mp itself. 2025 */ 2026 static void 2027 kmem_magazine_destroy(kmem_cache_t *cp, kmem_magazine_t *mp, int nrounds) 2028 { 2029 int round; 2030 2031 ASSERT(!list_link_active(&cp->cache_link) || 2032 taskq_member(kmem_taskq, curthread)); 2033 2034 for (round = 0; round < nrounds; round++) { 2035 void *buf = mp->mag_round[round]; 2036 2037 if (cp->cache_flags & KMF_DEADBEEF) { 2038 if (verify_pattern(KMEM_FREE_PATTERN, buf, 2039 cp->cache_verify) != NULL) { 2040 kmem_error(KMERR_MODIFIED, cp, buf); 2041 continue; 2042 } 2043 if ((cp->cache_flags & KMF_LITE) && 2044 cp->cache_destructor != NULL) { 2045 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2046 *(uint64_t *)buf = btp->bt_redzone; 2047 cp->cache_destructor(buf, cp->cache_private); 2048 *(uint64_t *)buf = KMEM_FREE_PATTERN; 2049 } 2050 } else if (cp->cache_destructor != NULL) { 2051 cp->cache_destructor(buf, cp->cache_private); 2052 } 2053 2054 kmem_slab_free(cp, buf); 2055 } 2056 ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 2057 kmem_cache_free(cp->cache_magtype->mt_cache, mp); 2058 } 2059 2060 /* 2061 * Allocate a magazine from the depot. 2062 */ 2063 static kmem_magazine_t * 2064 kmem_depot_alloc(kmem_cache_t *cp, kmem_maglist_t *mlp) 2065 { 2066 kmem_magazine_t *mp; 2067 2068 /* 2069 * If we can't get the depot lock without contention, 2070 * update our contention count. 
We use the depot 2071 * contention rate to determine whether we need to 2072 * increase the magazine size for better scalability. 2073 */ 2074 if (!mutex_tryenter(&cp->cache_depot_lock)) { 2075 mutex_enter(&cp->cache_depot_lock); 2076 cp->cache_depot_contention++; 2077 } 2078 2079 if ((mp = mlp->ml_list) != NULL) { 2080 ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 2081 mlp->ml_list = mp->mag_next; 2082 if (--mlp->ml_total < mlp->ml_min) 2083 mlp->ml_min = mlp->ml_total; 2084 mlp->ml_alloc++; 2085 } 2086 2087 mutex_exit(&cp->cache_depot_lock); 2088 2089 return (mp); 2090 } 2091 2092 /* 2093 * Free a magazine to the depot. 2094 */ 2095 static void 2096 kmem_depot_free(kmem_cache_t *cp, kmem_maglist_t *mlp, kmem_magazine_t *mp) 2097 { 2098 mutex_enter(&cp->cache_depot_lock); 2099 ASSERT(KMEM_MAGAZINE_VALID(cp, mp)); 2100 mp->mag_next = mlp->ml_list; 2101 mlp->ml_list = mp; 2102 mlp->ml_total++; 2103 mutex_exit(&cp->cache_depot_lock); 2104 } 2105 2106 /* 2107 * Update the working set statistics for cp's depot. 2108 */ 2109 static void 2110 kmem_depot_ws_update(kmem_cache_t *cp) 2111 { 2112 mutex_enter(&cp->cache_depot_lock); 2113 cp->cache_full.ml_reaplimit = cp->cache_full.ml_min; 2114 cp->cache_full.ml_min = cp->cache_full.ml_total; 2115 cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_min; 2116 cp->cache_empty.ml_min = cp->cache_empty.ml_total; 2117 mutex_exit(&cp->cache_depot_lock); 2118 } 2119 2120 /* 2121 * Set the working set statistics for cp's depot to zero. (Everything is 2122 * eligible for reaping.) 2123 */ 2124 static void 2125 kmem_depot_ws_zero(kmem_cache_t *cp) 2126 { 2127 mutex_enter(&cp->cache_depot_lock); 2128 cp->cache_full.ml_reaplimit = cp->cache_full.ml_total; 2129 cp->cache_full.ml_min = cp->cache_full.ml_total; 2130 cp->cache_empty.ml_reaplimit = cp->cache_empty.ml_total; 2131 cp->cache_empty.ml_min = cp->cache_empty.ml_total; 2132 mutex_exit(&cp->cache_depot_lock); 2133 } 2134 2135 /* 2136 * The number of bytes to reap before we call kpreempt(). The default (1MB) 2137 * causes us to preempt reaping up to hundreds of times per second. Using a 2138 * larger value (1GB) causes this to have virtually no effect. 2139 */ 2140 size_t kmem_reap_preempt_bytes = 1024 * 1024; 2141 2142 /* 2143 * Reap all magazines that have fallen out of the depot's working set. 
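*
* For example, if cache_full.ml_total never dropped below 3 since the
* last kmem_depot_ws_update() (so ml_min == 3), then 3 full magazines
* went unused over that interval and may be destroyed; ml_reaplimit
* bounds how many we take in a single pass.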
2144 */ 2145 static void 2146 kmem_depot_ws_reap(kmem_cache_t *cp) 2147 { 2148 size_t bytes = 0; 2149 long reap; 2150 kmem_magazine_t *mp; 2151 2152 ASSERT(!list_link_active(&cp->cache_link) || 2153 taskq_member(kmem_taskq, curthread)); 2154 2155 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min); 2156 while (reap-- && 2157 (mp = kmem_depot_alloc(cp, &cp->cache_full)) != NULL) { 2158 kmem_magazine_destroy(cp, mp, cp->cache_magtype->mt_magsize); 2159 bytes += cp->cache_magtype->mt_magsize * cp->cache_bufsize; 2160 if (bytes > kmem_reap_preempt_bytes) { 2161 kpreempt(KPREEMPT_SYNC); 2162 bytes = 0; 2163 } 2164 } 2165 2166 reap = MIN(cp->cache_empty.ml_reaplimit, cp->cache_empty.ml_min); 2167 while (reap-- && 2168 (mp = kmem_depot_alloc(cp, &cp->cache_empty)) != NULL) { 2169 kmem_magazine_destroy(cp, mp, 0); 2170 bytes += cp->cache_magtype->mt_magsize * cp->cache_bufsize; 2171 if (bytes > kmem_reap_preempt_bytes) { 2172 kpreempt(KPREEMPT_SYNC); 2173 bytes = 0; 2174 } 2175 } 2176 } 2177 2178 static void 2179 kmem_cpu_reload(kmem_cpu_cache_t *ccp, kmem_magazine_t *mp, int rounds) 2180 { 2181 ASSERT((ccp->cc_loaded == NULL && ccp->cc_rounds == -1) || 2182 (ccp->cc_loaded && ccp->cc_rounds + rounds == ccp->cc_magsize)); 2183 ASSERT(ccp->cc_magsize > 0); 2184 2185 ccp->cc_ploaded = ccp->cc_loaded; 2186 ccp->cc_prounds = ccp->cc_rounds; 2187 ccp->cc_loaded = mp; 2188 ccp->cc_rounds = rounds; 2189 } 2190 2191 /* 2192 * Intercept kmem alloc/free calls during crash dump in order to avoid 2193 * changing kmem state while memory is being saved to the dump device. 2194 * Otherwise, ::kmem_verify will report "corrupt buffers". Note that 2195 * there are no locks because only one CPU calls kmem during a crash 2196 * dump. To enable this feature, first create the associated vmem 2197 * arena with VMC_DUMPSAFE. 2198 */ 2199 static void *kmem_dump_start; /* start of pre-reserved heap */ 2200 static void *kmem_dump_end; /* end of heap area */ 2201 static void *kmem_dump_curr; /* current free heap pointer */ 2202 static size_t kmem_dump_size; /* size of heap area */ 2203 2204 /* append to each buf created in the pre-reserved heap */ 2205 typedef struct kmem_dumpctl { 2206 void *kdc_next; /* cache dump free list linkage */ 2207 } kmem_dumpctl_t; 2208 2209 #define KMEM_DUMPCTL(cp, buf) \ 2210 ((kmem_dumpctl_t *)P2ROUNDUP((uintptr_t)(buf) + (cp)->cache_bufsize, \ 2211 sizeof (void *))) 2212 2213 /* Keep some simple stats. 
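* (One kmem_dump_log_t entry below is claimed per cache that allocates
* or frees during the dump, up to KMEM_DUMP_LOGS entries; see KDI_LOG().)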
*/
2214 #define KMEM_DUMP_LOGS (100)
2215
2216 typedef struct kmem_dump_log {
2217 kmem_cache_t *kdl_cache;
2218 uint_t kdl_allocs; /* # of dump allocations */
2219 uint_t kdl_frees; /* # of dump frees */
2220 uint_t kdl_alloc_fails; /* # of allocation failures */
2221 uint_t kdl_free_nondump; /* # of non-dump frees */
2222 uint_t kdl_unsafe; /* cache was used, but unsafe */
2223 } kmem_dump_log_t;
2224
2225 static kmem_dump_log_t *kmem_dump_log;
2226 static int kmem_dump_log_idx;
2227
2228 #define KDI_LOG(cp, stat) { \
2229 kmem_dump_log_t *kdl; \
2230 if ((kdl = (kmem_dump_log_t *)((cp)->cache_dumplog)) != NULL) { \
2231 kdl->stat++; \
2232 } else if (kmem_dump_log_idx < KMEM_DUMP_LOGS) { \
2233 kdl = &kmem_dump_log[kmem_dump_log_idx++]; \
2234 kdl->stat++; \
2235 kdl->kdl_cache = (cp); \
2236 (cp)->cache_dumplog = kdl; \
2237 } \
2238 }
2239
2240 /* set nonzero for a full report */
2241 uint_t kmem_dump_verbose = 0;
2242
2243 /* stats for oversize heap */
2244 uint_t kmem_dump_oversize_allocs = 0;
2245 uint_t kmem_dump_oversize_max = 0;
2246
2247 static void
2248 kmem_dumppr(char **pp, char *e, const char *format, ...)
2249 {
2250 char *p = *pp;
2251
2252 if (p < e) {
2253 int n;
2254 va_list ap;
2255
2256 va_start(ap, format);
2257 n = vsnprintf(p, e - p, format, ap);
2258 va_end(ap);
2259 *pp = p + n;
2260 }
2261 }
2262
2263 /*
2264 * Called when dumpadm(1M) configures dump parameters.
2265 */
2266 void
2267 kmem_dump_init(size_t size)
2268 {
2269 if (kmem_dump_start != NULL)
2270 kmem_free(kmem_dump_start, kmem_dump_size);
2271
2272 if (kmem_dump_log == NULL)
2273 kmem_dump_log = (kmem_dump_log_t *)kmem_zalloc(KMEM_DUMP_LOGS *
2274 sizeof (kmem_dump_log_t), KM_SLEEP);
2275
2276 kmem_dump_start = kmem_alloc(size, KM_SLEEP);
2277
2278 if (kmem_dump_start != NULL) {
2279 kmem_dump_size = size;
2280 kmem_dump_curr = kmem_dump_start;
2281 kmem_dump_end = (void *)((char *)kmem_dump_start + size);
2282 copy_pattern(KMEM_UNINITIALIZED_PATTERN, kmem_dump_start, size);
2283 } else {
2284 kmem_dump_size = 0;
2285 kmem_dump_curr = NULL;
2286 kmem_dump_end = NULL;
2287 }
2288 }
2289
2290 /*
2291 * Set a flag for each kmem_cache_t if it is safe to use alternate dump
2292 * memory. Called just before the panic crash dump starts. Set the flag
2293 * for the calling CPU.
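*
* Only caches whose arena was created with VMC_DUMPSAFE are diverted.
* A hedged sketch of creating such an arena (hypothetical name and
* backing functions, not taken from this file):
*
*    arena = vmem_create("mydump_arena", NULL, 0, PAGESIZE,
*        segkmem_alloc, segkmem_free, heap_arena, 0,
*        VM_SLEEP | VMC_DUMPSAFE);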
2294 */ 2295 void 2296 kmem_dump_begin(void) 2297 { 2298 ASSERT(panicstr != NULL); 2299 if (kmem_dump_start != NULL) { 2300 kmem_cache_t *cp; 2301 2302 for (cp = list_head(&kmem_caches); cp != NULL; 2303 cp = list_next(&kmem_caches, cp)) { 2304 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp); 2305 2306 if (cp->cache_arena->vm_cflags & VMC_DUMPSAFE) { 2307 cp->cache_flags |= KMF_DUMPDIVERT; 2308 ccp->cc_flags |= KMF_DUMPDIVERT; 2309 ccp->cc_dump_rounds = ccp->cc_rounds; 2310 ccp->cc_dump_prounds = ccp->cc_prounds; 2311 ccp->cc_rounds = ccp->cc_prounds = -1; 2312 } else { 2313 cp->cache_flags |= KMF_DUMPUNSAFE; 2314 ccp->cc_flags |= KMF_DUMPUNSAFE; 2315 } 2316 } 2317 } 2318 } 2319 2320 /* 2321 * finished dump intercept 2322 * print any warnings on the console 2323 * return verbose information to dumpsys() in the given buffer 2324 */ 2325 size_t 2326 kmem_dump_finish(char *buf, size_t size) 2327 { 2328 int kdi_idx; 2329 int kdi_end = kmem_dump_log_idx; 2330 int percent = 0; 2331 int header = 0; 2332 int warn = 0; 2333 size_t used; 2334 kmem_cache_t *cp; 2335 kmem_dump_log_t *kdl; 2336 char *e = buf + size; 2337 char *p = buf; 2338 2339 if (kmem_dump_size == 0 || kmem_dump_verbose == 0) 2340 return (0); 2341 2342 used = (char *)kmem_dump_curr - (char *)kmem_dump_start; 2343 percent = (used * 100) / kmem_dump_size; 2344 2345 kmem_dumppr(&p, e, "%% heap used,%d\n", percent); 2346 kmem_dumppr(&p, e, "used bytes,%ld\n", used); 2347 kmem_dumppr(&p, e, "heap size,%ld\n", kmem_dump_size); 2348 kmem_dumppr(&p, e, "Oversize allocs,%d\n", 2349 kmem_dump_oversize_allocs); 2350 kmem_dumppr(&p, e, "Oversize max size,%ld\n", 2351 kmem_dump_oversize_max); 2352 2353 for (kdi_idx = 0; kdi_idx < kdi_end; kdi_idx++) { 2354 kdl = &kmem_dump_log[kdi_idx]; 2355 cp = kdl->kdl_cache; 2356 if (cp == NULL) 2357 break; 2358 if (kdl->kdl_alloc_fails) 2359 ++warn; 2360 if (header == 0) { 2361 kmem_dumppr(&p, e, 2362 "Cache Name,Allocs,Frees,Alloc Fails," 2363 "Nondump Frees,Unsafe Allocs/Frees\n"); 2364 header = 1; 2365 } 2366 kmem_dumppr(&p, e, "%s,%d,%d,%d,%d,%d\n", 2367 cp->cache_name, kdl->kdl_allocs, kdl->kdl_frees, 2368 kdl->kdl_alloc_fails, kdl->kdl_free_nondump, 2369 kdl->kdl_unsafe); 2370 } 2371 2372 /* return buffer size used */ 2373 if (p < e) 2374 bzero(p, e - p); 2375 return (p - buf); 2376 } 2377 2378 /* 2379 * Allocate a constructed object from alternate dump memory. 2380 */ 2381 void * 2382 kmem_cache_alloc_dump(kmem_cache_t *cp, int kmflag) 2383 { 2384 void *buf; 2385 void *curr; 2386 char *bufend; 2387 2388 /* return a constructed object */ 2389 if ((buf = cp->cache_dumpfreelist) != NULL) { 2390 cp->cache_dumpfreelist = KMEM_DUMPCTL(cp, buf)->kdc_next; 2391 KDI_LOG(cp, kdl_allocs); 2392 return (buf); 2393 } 2394 2395 /* create a new constructed object */ 2396 curr = kmem_dump_curr; 2397 buf = (void *)P2ROUNDUP((uintptr_t)curr, cp->cache_align); 2398 bufend = (char *)KMEM_DUMPCTL(cp, buf) + sizeof (kmem_dumpctl_t); 2399 2400 /* hat layer objects cannot cross a page boundary */ 2401 if (cp->cache_align < PAGESIZE) { 2402 char *page = (char *)P2ROUNDUP((uintptr_t)buf, PAGESIZE); 2403 if (bufend > page) { 2404 bufend += page - (char *)buf; 2405 buf = (void *)page; 2406 } 2407 } 2408 2409 /* fall back to normal alloc if reserved area is used up */ 2410 if (bufend > (char *)kmem_dump_end) { 2411 kmem_dump_curr = kmem_dump_end; 2412 KDI_LOG(cp, kdl_alloc_fails); 2413 return (NULL); 2414 } 2415 2416 /* 2417 * Must advance curr pointer before calling a constructor that 2418 * may also allocate memory. 
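* (A constructor that itself calls into kmem re-enters this path and
* advances kmem_dump_curr again; that is why the reset below happens
* only if no nested allocation moved the pointer.)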
2419 */ 2420 kmem_dump_curr = bufend; 2421 2422 /* run constructor */ 2423 if (cp->cache_constructor != NULL && 2424 cp->cache_constructor(buf, cp->cache_private, kmflag) 2425 != 0) { 2426 #ifdef DEBUG 2427 printf("name='%s' cache=0x%p: kmem cache constructor failed\n", 2428 cp->cache_name, (void *)cp); 2429 #endif 2430 /* reset curr pointer iff no allocs were done */ 2431 if (kmem_dump_curr == bufend) 2432 kmem_dump_curr = curr; 2433 2434 /* fall back to normal alloc if the constructor fails */ 2435 KDI_LOG(cp, kdl_alloc_fails); 2436 return (NULL); 2437 } 2438 2439 KDI_LOG(cp, kdl_allocs); 2440 return (buf); 2441 } 2442 2443 /* 2444 * Free a constructed object in alternate dump memory. 2445 */ 2446 int 2447 kmem_cache_free_dump(kmem_cache_t *cp, void *buf) 2448 { 2449 /* save constructed buffers for next time */ 2450 if ((char *)buf >= (char *)kmem_dump_start && 2451 (char *)buf < (char *)kmem_dump_end) { 2452 KMEM_DUMPCTL(cp, buf)->kdc_next = cp->cache_dumpfreelist; 2453 cp->cache_dumpfreelist = buf; 2454 KDI_LOG(cp, kdl_frees); 2455 return (0); 2456 } 2457 2458 /* count all non-dump buf frees */ 2459 KDI_LOG(cp, kdl_free_nondump); 2460 2461 /* just drop buffers that were allocated before dump started */ 2462 if (kmem_dump_curr < kmem_dump_end) 2463 return (0); 2464 2465 /* fall back to normal free if reserved area is used up */ 2466 return (1); 2467 } 2468 2469 /* 2470 * Allocate a constructed object from cache cp. 2471 */ 2472 void * 2473 kmem_cache_alloc(kmem_cache_t *cp, int kmflag) 2474 { 2475 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp); 2476 kmem_magazine_t *fmp; 2477 void *buf; 2478 2479 mutex_enter(&ccp->cc_lock); 2480 for (;;) { 2481 /* 2482 * If there's an object available in the current CPU's 2483 * loaded magazine, just take it and return. 2484 */ 2485 if (ccp->cc_rounds > 0) { 2486 buf = ccp->cc_loaded->mag_round[--ccp->cc_rounds]; 2487 ccp->cc_alloc++; 2488 mutex_exit(&ccp->cc_lock); 2489 if (ccp->cc_flags & (KMF_BUFTAG | KMF_DUMPUNSAFE)) { 2490 if (ccp->cc_flags & KMF_DUMPUNSAFE) { 2491 ASSERT(!(ccp->cc_flags & 2492 KMF_DUMPDIVERT)); 2493 KDI_LOG(cp, kdl_unsafe); 2494 } 2495 if ((ccp->cc_flags & KMF_BUFTAG) && 2496 kmem_cache_alloc_debug(cp, buf, kmflag, 0, 2497 caller()) != 0) { 2498 if (kmflag & KM_NOSLEEP) 2499 return (NULL); 2500 mutex_enter(&ccp->cc_lock); 2501 continue; 2502 } 2503 } 2504 return (buf); 2505 } 2506 2507 /* 2508 * The loaded magazine is empty. If the previously loaded 2509 * magazine was full, exchange them and try again. 2510 */ 2511 if (ccp->cc_prounds > 0) { 2512 kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds); 2513 continue; 2514 } 2515 2516 /* 2517 * Return an alternate buffer at dump time to preserve 2518 * the heap. 2519 */ 2520 if (ccp->cc_flags & (KMF_DUMPDIVERT | KMF_DUMPUNSAFE)) { 2521 if (ccp->cc_flags & KMF_DUMPUNSAFE) { 2522 ASSERT(!(ccp->cc_flags & KMF_DUMPDIVERT)); 2523 /* log it so that we can warn about it */ 2524 KDI_LOG(cp, kdl_unsafe); 2525 } else { 2526 if ((buf = kmem_cache_alloc_dump(cp, kmflag)) != 2527 NULL) { 2528 mutex_exit(&ccp->cc_lock); 2529 return (buf); 2530 } 2531 break; /* fall back to slab layer */ 2532 } 2533 } 2534 2535 /* 2536 * If the magazine layer is disabled, break out now. 2537 */ 2538 if (ccp->cc_magsize == 0) 2539 break; 2540 2541 /* 2542 * Try to get a full magazine from the depot. 
2543 */ 2544 fmp = kmem_depot_alloc(cp, &cp->cache_full); 2545 if (fmp != NULL) { 2546 if (ccp->cc_ploaded != NULL) 2547 kmem_depot_free(cp, &cp->cache_empty, 2548 ccp->cc_ploaded); 2549 kmem_cpu_reload(ccp, fmp, ccp->cc_magsize); 2550 continue; 2551 } 2552 2553 /* 2554 * There are no full magazines in the depot, 2555 * so fall through to the slab layer. 2556 */ 2557 break; 2558 } 2559 mutex_exit(&ccp->cc_lock); 2560 2561 /* 2562 * We couldn't allocate a constructed object from the magazine layer, 2563 * so get a raw buffer from the slab layer and apply its constructor. 2564 */ 2565 buf = kmem_slab_alloc(cp, kmflag); 2566 2567 if (buf == NULL) 2568 return (NULL); 2569 2570 if (cp->cache_flags & KMF_BUFTAG) { 2571 /* 2572 * Make kmem_cache_alloc_debug() apply the constructor for us. 2573 */ 2574 int rc = kmem_cache_alloc_debug(cp, buf, kmflag, 1, caller()); 2575 if (rc != 0) { 2576 if (kmflag & KM_NOSLEEP) 2577 return (NULL); 2578 /* 2579 * kmem_cache_alloc_debug() detected corruption 2580 * but didn't panic (kmem_panic <= 0). We should not be 2581 * here because the constructor failed (indicated by a 2582 * return code of 1). Try again. 2583 */ 2584 ASSERT(rc == -1); 2585 return (kmem_cache_alloc(cp, kmflag)); 2586 } 2587 return (buf); 2588 } 2589 2590 if (cp->cache_constructor != NULL && 2591 cp->cache_constructor(buf, cp->cache_private, kmflag) != 0) { 2592 atomic_inc_64(&cp->cache_alloc_fail); 2593 kmem_slab_free(cp, buf); 2594 return (NULL); 2595 } 2596 2597 return (buf); 2598 } 2599 2600 /* 2601 * The freed argument tells whether or not kmem_cache_free_debug() has already 2602 * been called so that we can avoid the duplicate free error. For example, a 2603 * buffer on a magazine has already been freed by the client but is still 2604 * constructed. 2605 */ 2606 static void 2607 kmem_slab_free_constructed(kmem_cache_t *cp, void *buf, boolean_t freed) 2608 { 2609 if (!freed && (cp->cache_flags & KMF_BUFTAG)) 2610 if (kmem_cache_free_debug(cp, buf, caller()) == -1) 2611 return; 2612 2613 /* 2614 * Note that if KMF_DEADBEEF is in effect and KMF_LITE is not, 2615 * kmem_cache_free_debug() will have already applied the destructor. 2616 */ 2617 if ((cp->cache_flags & (KMF_DEADBEEF | KMF_LITE)) != KMF_DEADBEEF && 2618 cp->cache_destructor != NULL) { 2619 if (cp->cache_flags & KMF_DEADBEEF) { /* KMF_LITE implied */ 2620 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2621 *(uint64_t *)buf = btp->bt_redzone; 2622 cp->cache_destructor(buf, cp->cache_private); 2623 *(uint64_t *)buf = KMEM_FREE_PATTERN; 2624 } else { 2625 cp->cache_destructor(buf, cp->cache_private); 2626 } 2627 } 2628 2629 kmem_slab_free(cp, buf); 2630 } 2631 2632 /* 2633 * Used when there's no room to free a buffer to the per-CPU cache. 2634 * Drops and re-acquires &ccp->cc_lock, and returns non-zero if the 2635 * caller should try freeing to the per-CPU cache again. 2636 * Note that we don't directly install the magazine in the cpu cache, 2637 * since its state may have changed wildly while the lock was dropped. 
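* For example, the cache's magazine size may have been changed by
* kmem_cache_magazine_resize() in the interim, which is why a freshly
* allocated magazine is checked against mt_magsize below and freed on
* mismatch rather than installed.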
2638 */ 2639 static int 2640 kmem_cpucache_magazine_alloc(kmem_cpu_cache_t *ccp, kmem_cache_t *cp) 2641 { 2642 kmem_magazine_t *emp; 2643 kmem_magtype_t *mtp; 2644 2645 ASSERT(MUTEX_HELD(&ccp->cc_lock)); 2646 ASSERT(((uint_t)ccp->cc_rounds == ccp->cc_magsize || 2647 ((uint_t)ccp->cc_rounds == -1)) && 2648 ((uint_t)ccp->cc_prounds == ccp->cc_magsize || 2649 ((uint_t)ccp->cc_prounds == -1))); 2650 2651 emp = kmem_depot_alloc(cp, &cp->cache_empty); 2652 if (emp != NULL) { 2653 if (ccp->cc_ploaded != NULL) 2654 kmem_depot_free(cp, &cp->cache_full, 2655 ccp->cc_ploaded); 2656 kmem_cpu_reload(ccp, emp, 0); 2657 return (1); 2658 } 2659 /* 2660 * There are no empty magazines in the depot, 2661 * so try to allocate a new one. We must drop all locks 2662 * across kmem_cache_alloc() because lower layers may 2663 * attempt to allocate from this cache. 2664 */ 2665 mtp = cp->cache_magtype; 2666 mutex_exit(&ccp->cc_lock); 2667 emp = kmem_cache_alloc(mtp->mt_cache, KM_NOSLEEP); 2668 mutex_enter(&ccp->cc_lock); 2669 2670 if (emp != NULL) { 2671 /* 2672 * We successfully allocated an empty magazine. 2673 * However, we had to drop ccp->cc_lock to do it, 2674 * so the cache's magazine size may have changed. 2675 * If so, free the magazine and try again. 2676 */ 2677 if (ccp->cc_magsize != mtp->mt_magsize) { 2678 mutex_exit(&ccp->cc_lock); 2679 kmem_cache_free(mtp->mt_cache, emp); 2680 mutex_enter(&ccp->cc_lock); 2681 return (1); 2682 } 2683 2684 /* 2685 * We got a magazine of the right size. Add it to 2686 * the depot and try the whole dance again. 2687 */ 2688 kmem_depot_free(cp, &cp->cache_empty, emp); 2689 return (1); 2690 } 2691 2692 /* 2693 * We couldn't allocate an empty magazine, 2694 * so fall through to the slab layer. 2695 */ 2696 return (0); 2697 } 2698 2699 /* 2700 * Free a constructed object to cache cp. 2701 */ 2702 void 2703 kmem_cache_free(kmem_cache_t *cp, void *buf) 2704 { 2705 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp); 2706 2707 /* 2708 * The client must not free either of the buffers passed to the move 2709 * callback function. 2710 */ 2711 ASSERT(cp->cache_defrag == NULL || 2712 cp->cache_defrag->kmd_thread != curthread || 2713 (buf != cp->cache_defrag->kmd_from_buf && 2714 buf != cp->cache_defrag->kmd_to_buf)); 2715 2716 if (ccp->cc_flags & (KMF_BUFTAG | KMF_DUMPDIVERT | KMF_DUMPUNSAFE)) { 2717 if (ccp->cc_flags & KMF_DUMPUNSAFE) { 2718 ASSERT(!(ccp->cc_flags & KMF_DUMPDIVERT)); 2719 /* log it so that we can warn about it */ 2720 KDI_LOG(cp, kdl_unsafe); 2721 } else if (KMEM_DUMPCC(ccp) && !kmem_cache_free_dump(cp, buf)) { 2722 return; 2723 } 2724 if (ccp->cc_flags & KMF_BUFTAG) { 2725 if (kmem_cache_free_debug(cp, buf, caller()) == -1) 2726 return; 2727 } 2728 } 2729 2730 mutex_enter(&ccp->cc_lock); 2731 /* 2732 * Any changes to this logic should be reflected in kmem_slab_prefill() 2733 */ 2734 for (;;) { 2735 /* 2736 * If there's a slot available in the current CPU's 2737 * loaded magazine, just put the object there and return. 2738 */ 2739 if ((uint_t)ccp->cc_rounds < ccp->cc_magsize) { 2740 ccp->cc_loaded->mag_round[ccp->cc_rounds++] = buf; 2741 ccp->cc_free++; 2742 mutex_exit(&ccp->cc_lock); 2743 return; 2744 } 2745 2746 /* 2747 * The loaded magazine is full. If the previously loaded 2748 * magazine was empty, exchange them and try again. 2749 */ 2750 if (ccp->cc_prounds == 0) { 2751 kmem_cpu_reload(ccp, ccp->cc_ploaded, ccp->cc_prounds); 2752 continue; 2753 } 2754 2755 /* 2756 * If the magazine layer is disabled, break out now. 
2757 */ 2758 if (ccp->cc_magsize == 0) 2759 break; 2760 2761 if (!kmem_cpucache_magazine_alloc(ccp, cp)) { 2762 /* 2763 * We couldn't free our constructed object to the 2764 * magazine layer, so apply its destructor and free it 2765 * to the slab layer. 2766 */ 2767 break; 2768 } 2769 } 2770 mutex_exit(&ccp->cc_lock); 2771 kmem_slab_free_constructed(cp, buf, B_TRUE); 2772 } 2773 2774 static void 2775 kmem_slab_prefill(kmem_cache_t *cp, kmem_slab_t *sp) 2776 { 2777 kmem_cpu_cache_t *ccp = KMEM_CPU_CACHE(cp); 2778 int cache_flags = cp->cache_flags; 2779 2780 kmem_bufctl_t *next, *head; 2781 size_t nbufs; 2782 2783 /* 2784 * Completely allocate the newly created slab and put the pre-allocated 2785 * buffers in magazines. Any of the buffers that cannot be put in 2786 * magazines must be returned to the slab. 2787 */ 2788 ASSERT(MUTEX_HELD(&cp->cache_lock)); 2789 ASSERT((cache_flags & (KMF_PREFILL|KMF_BUFTAG)) == KMF_PREFILL); 2790 ASSERT(cp->cache_constructor == NULL); 2791 ASSERT(sp->slab_cache == cp); 2792 ASSERT(sp->slab_refcnt == 1); 2793 ASSERT(sp->slab_head != NULL && sp->slab_chunks > sp->slab_refcnt); 2794 ASSERT(avl_find(&cp->cache_partial_slabs, sp, NULL) == NULL); 2795 2796 head = sp->slab_head; 2797 nbufs = (sp->slab_chunks - sp->slab_refcnt); 2798 sp->slab_head = NULL; 2799 sp->slab_refcnt += nbufs; 2800 cp->cache_bufslab -= nbufs; 2801 cp->cache_slab_alloc += nbufs; 2802 list_insert_head(&cp->cache_complete_slabs, sp); 2803 cp->cache_complete_slab_count++; 2804 mutex_exit(&cp->cache_lock); 2805 mutex_enter(&ccp->cc_lock); 2806 2807 while (head != NULL) { 2808 void *buf = KMEM_BUF(cp, head); 2809 /* 2810 * If there's a slot available in the current CPU's 2811 * loaded magazine, just put the object there and 2812 * continue. 2813 */ 2814 if ((uint_t)ccp->cc_rounds < ccp->cc_magsize) { 2815 ccp->cc_loaded->mag_round[ccp->cc_rounds++] = 2816 buf; 2817 ccp->cc_free++; 2818 nbufs--; 2819 head = head->bc_next; 2820 continue; 2821 } 2822 2823 /* 2824 * The loaded magazine is full. If the previously 2825 * loaded magazine was empty, exchange them and try 2826 * again. 2827 */ 2828 if (ccp->cc_prounds == 0) { 2829 kmem_cpu_reload(ccp, ccp->cc_ploaded, 2830 ccp->cc_prounds); 2831 continue; 2832 } 2833 2834 /* 2835 * If the magazine layer is disabled, break out now. 
2836 */ 2837 2838 if (ccp->cc_magsize == 0) { 2839 break; 2840 } 2841 2842 if (!kmem_cpucache_magazine_alloc(ccp, cp)) 2843 break; 2844 } 2845 mutex_exit(&ccp->cc_lock); 2846 if (nbufs != 0) { 2847 ASSERT(head != NULL); 2848 2849 /* 2850 * If there was a failure, return remaining objects to 2851 * the slab 2852 */ 2853 while (head != NULL) { 2854 ASSERT(nbufs != 0); 2855 next = head->bc_next; 2856 head->bc_next = NULL; 2857 kmem_slab_free(cp, KMEM_BUF(cp, head)); 2858 head = next; 2859 nbufs--; 2860 } 2861 } 2862 ASSERT(head == NULL); 2863 ASSERT(nbufs == 0); 2864 mutex_enter(&cp->cache_lock); 2865 } 2866 2867 void * 2868 kmem_zalloc(size_t size, int kmflag) 2869 { 2870 size_t index; 2871 void *buf; 2872 2873 if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) { 2874 kmem_cache_t *cp = kmem_alloc_table[index]; 2875 buf = kmem_cache_alloc(cp, kmflag); 2876 if (buf != NULL) { 2877 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp)) { 2878 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2879 ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE; 2880 ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size); 2881 2882 if (cp->cache_flags & KMF_LITE) { 2883 KMEM_BUFTAG_LITE_ENTER(btp, 2884 kmem_lite_count, caller()); 2885 } 2886 } 2887 bzero(buf, size); 2888 } 2889 } else { 2890 buf = kmem_alloc(size, kmflag); 2891 if (buf != NULL) 2892 bzero(buf, size); 2893 } 2894 return (buf); 2895 } 2896 2897 void * 2898 kmem_alloc(size_t size, int kmflag) 2899 { 2900 size_t index; 2901 kmem_cache_t *cp; 2902 void *buf; 2903 2904 if ((index = ((size - 1) >> KMEM_ALIGN_SHIFT)) < KMEM_ALLOC_TABLE_MAX) { 2905 cp = kmem_alloc_table[index]; 2906 /* fall through to kmem_cache_alloc() */ 2907 2908 } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) < 2909 kmem_big_alloc_table_max) { 2910 cp = kmem_big_alloc_table[index]; 2911 /* fall through to kmem_cache_alloc() */ 2912 2913 } else { 2914 if (size == 0) 2915 return (NULL); 2916 2917 buf = vmem_alloc(kmem_oversize_arena, size, 2918 kmflag & KM_VMFLAGS); 2919 if (buf == NULL) 2920 kmem_log_event(kmem_failure_log, NULL, NULL, 2921 (void *)size); 2922 else if (KMEM_DUMP(kmem_slab_cache)) { 2923 /* stats for dump intercept */ 2924 kmem_dump_oversize_allocs++; 2925 if (size > kmem_dump_oversize_max) 2926 kmem_dump_oversize_max = size; 2927 } 2928 return (buf); 2929 } 2930 2931 buf = kmem_cache_alloc(cp, kmflag); 2932 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp) && buf != NULL) { 2933 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2934 ((uint8_t *)buf)[size] = KMEM_REDZONE_BYTE; 2935 ((uint32_t *)btp)[1] = KMEM_SIZE_ENCODE(size); 2936 2937 if (cp->cache_flags & KMF_LITE) { 2938 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, caller()); 2939 } 2940 } 2941 return (buf); 2942 } 2943 2944 void 2945 kmem_free(void *buf, size_t size) 2946 { 2947 size_t index; 2948 kmem_cache_t *cp; 2949 2950 if ((index = (size - 1) >> KMEM_ALIGN_SHIFT) < KMEM_ALLOC_TABLE_MAX) { 2951 cp = kmem_alloc_table[index]; 2952 /* fall through to kmem_cache_free() */ 2953 2954 } else if ((index = ((size - 1) >> KMEM_BIG_SHIFT)) < 2955 kmem_big_alloc_table_max) { 2956 cp = kmem_big_alloc_table[index]; 2957 /* fall through to kmem_cache_free() */ 2958 2959 } else { 2960 EQUIV(buf == NULL, size == 0); 2961 if (buf == NULL && size == 0) 2962 return; 2963 vmem_free(kmem_oversize_arena, buf, size); 2964 return; 2965 } 2966 2967 if ((cp->cache_flags & KMF_BUFTAG) && !KMEM_DUMP(cp)) { 2968 kmem_buftag_t *btp = KMEM_BUFTAG(cp, buf); 2969 uint32_t *ip = (uint32_t *)btp; 2970 if (ip[1] != KMEM_SIZE_ENCODE(size)) { 2971 
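/*
* The encoded size doesn't match the size being freed: either the
* buffer was already freed (its first word holds the free pattern),
* the caller passed the wrong size, or the size encoding itself was
* overwritten.
*/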
if (*(uint64_t *)buf == KMEM_FREE_PATTERN) { 2972 kmem_error(KMERR_DUPFREE, cp, buf); 2973 return; 2974 } 2975 if (KMEM_SIZE_VALID(ip[1])) { 2976 ip[0] = KMEM_SIZE_ENCODE(size); 2977 kmem_error(KMERR_BADSIZE, cp, buf); 2978 } else { 2979 kmem_error(KMERR_REDZONE, cp, buf); 2980 } 2981 return; 2982 } 2983 if (((uint8_t *)buf)[size] != KMEM_REDZONE_BYTE) { 2984 kmem_error(KMERR_REDZONE, cp, buf); 2985 return; 2986 } 2987 btp->bt_redzone = KMEM_REDZONE_PATTERN; 2988 if (cp->cache_flags & KMF_LITE) { 2989 KMEM_BUFTAG_LITE_ENTER(btp, kmem_lite_count, 2990 caller()); 2991 } 2992 } 2993 kmem_cache_free(cp, buf); 2994 } 2995 2996 void * 2997 kmem_firewall_va_alloc(vmem_t *vmp, size_t size, int vmflag) 2998 { 2999 size_t realsize = size + vmp->vm_quantum; 3000 void *addr; 3001 3002 /* 3003 * Annoying edge case: if 'size' is just shy of ULONG_MAX, adding 3004 * vm_quantum will cause integer wraparound. Check for this, and 3005 * blow off the firewall page in this case. Note that such a 3006 * giant allocation (the entire kernel address space) can never 3007 * be satisfied, so it will either fail immediately (VM_NOSLEEP) 3008 * or sleep forever (VM_SLEEP). Thus, there is no need for a 3009 * corresponding check in kmem_firewall_va_free(). 3010 */ 3011 if (realsize < size) 3012 realsize = size; 3013 3014 /* 3015 * While boot still owns resource management, make sure that this 3016 * redzone virtual address allocation is properly accounted for in 3017 * OBPs "virtual-memory" "available" lists because we're 3018 * effectively claiming them for a red zone. If we don't do this, 3019 * the available lists become too fragmented and too large for the 3020 * current boot/kernel memory list interface. 3021 */ 3022 addr = vmem_alloc(vmp, realsize, vmflag | VM_NEXTFIT); 3023 3024 if (addr != NULL && kvseg.s_base == NULL && realsize != size) 3025 (void) boot_virt_alloc((char *)addr + size, vmp->vm_quantum); 3026 3027 return (addr); 3028 } 3029 3030 void 3031 kmem_firewall_va_free(vmem_t *vmp, void *addr, size_t size) 3032 { 3033 ASSERT((kvseg.s_base == NULL ? 3034 va_to_pfn((char *)addr + size) : 3035 hat_getpfnum(kas.a_hat, (caddr_t)addr + size)) == PFN_INVALID); 3036 3037 vmem_free(vmp, addr, size + vmp->vm_quantum); 3038 } 3039 3040 /* 3041 * Try to allocate at least `size' bytes of memory without sleeping or 3042 * panicking. Return actual allocated size in `asize'. If allocation failed, 3043 * try final allocation with sleep or panic allowed. 3044 */ 3045 void * 3046 kmem_alloc_tryhard(size_t size, size_t *asize, int kmflag) 3047 { 3048 void *p; 3049 3050 *asize = P2ROUNDUP(size, KMEM_ALIGN); 3051 do { 3052 p = kmem_alloc(*asize, (kmflag | KM_NOSLEEP) & ~KM_PANIC); 3053 if (p != NULL) 3054 return (p); 3055 *asize += KMEM_ALIGN; 3056 } while (*asize <= PAGESIZE); 3057 3058 *asize = P2ROUNDUP(size, KMEM_ALIGN); 3059 return (kmem_alloc(*asize, kmflag)); 3060 } 3061 3062 /* 3063 * Reclaim all unused memory from a cache. 3064 */ 3065 static void 3066 kmem_cache_reap(kmem_cache_t *cp) 3067 { 3068 ASSERT(taskq_member(kmem_taskq, curthread)); 3069 cp->cache_reap++; 3070 3071 /* 3072 * Ask the cache's owner to free some memory if possible. 3073 * The idea is to handle things like the inode cache, which 3074 * typically sits on a bunch of memory that it doesn't truly 3075 * *need*. Reclaim policy is entirely up to the owner; this 3076 * callback is just an advisory plea for help. 
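*
* The callback invoked here is the reclaim function the client passed
* to kmem_cache_create(); e.g. (hypothetical client names, real
* kmem_cache_create() signature):
*
*    cache = kmem_cache_create("inode_cache", sizeof (inode_t), 0,
*        inode_cons, inode_dest, inode_reclaim, NULL, NULL, 0);
*
* where inode_reclaim() releases whatever cached objects the client
* can spare.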
3077 	 */
3078 	if (cp->cache_reclaim != NULL) {
3079 		long delta;
3080
3081 		/*
3082 		 * Reclaimed memory should be reapable (not included in the
3083 		 * depot's working set).
3084 		 */
3085 		delta = cp->cache_full.ml_total;
3086 		cp->cache_reclaim(cp->cache_private);
3087 		delta = cp->cache_full.ml_total - delta;
3088 		if (delta > 0) {
3089 			mutex_enter(&cp->cache_depot_lock);
3090 			cp->cache_full.ml_reaplimit += delta;
3091 			cp->cache_full.ml_min += delta;
3092 			mutex_exit(&cp->cache_depot_lock);
3093 		}
3094 	}
3095
3096 	kmem_depot_ws_reap(cp);
3097
3098 	if (cp->cache_defrag != NULL && !kmem_move_noreap) {
3099 		kmem_cache_defrag(cp);
3100 	}
3101 }
3102
3103 static void
3104 kmem_reap_timeout(void *flag_arg)
3105 {
3106 	uint32_t *flag = (uint32_t *)flag_arg;
3107
3108 	ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace);
3109 	*flag = 0;
3110 }
3111
3112 static void
3113 kmem_reap_done(void *flag)
3114 {
3115 	if (!callout_init_done) {
3116 		/* can't schedule a timeout at this point */
3117 		kmem_reap_timeout(flag);
3118 	} else {
3119 		(void) timeout(kmem_reap_timeout, flag, kmem_reap_interval);
3120 	}
3121 }
3122
3123 static void
3124 kmem_reap_start(void *flag)
3125 {
3126 	ASSERT(flag == &kmem_reaping || flag == &kmem_reaping_idspace);
3127
3128 	if (flag == &kmem_reaping) {
3129 		kmem_cache_applyall(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
3130 		/*
3131 		 * If we have segkp under heap, reap the segkp cache.
3132 		 */
3133 		if (segkp_fromheap)
3134 			segkp_cache_free();
3135 	} else {
3136 		kmem_cache_applyall_id(kmem_cache_reap, kmem_taskq, TQ_NOSLEEP);
3137 	}
3138
3139 	/*
3140 	 * We use taskq_dispatch() to schedule a timeout to clear
3141 	 * the flag so that kmem_reap() becomes self-throttling:
3142 	 * we won't reap again until the current reap completes *and*
3143 	 * at least kmem_reap_interval ticks have elapsed.
3144 	 */
3145 	if (!taskq_dispatch(kmem_taskq, kmem_reap_done, flag, TQ_NOSLEEP))
3146 		kmem_reap_done(flag);
3147 }
3148
3149 static void
3150 kmem_reap_common(void *flag_arg)
3151 {
3152 	uint32_t *flag = (uint32_t *)flag_arg;
3153
3154 	if (MUTEX_HELD(&kmem_cache_lock) || kmem_taskq == NULL ||
3155 	    atomic_cas_32(flag, 0, 1) != 0)
3156 		return;
3157
3158 	/*
3159 	 * It may not be kosher to do memory allocation when a reap is called
3160 	 * (for example, if vmem_populate() is in the call chain).  So we
3161 	 * start the reap going with a TQ_NOALLOC dispatch.  If the dispatch
3162 	 * fails, we reset the flag, and the next reap will try again.
3163 	 */
3164 	if (!taskq_dispatch(kmem_taskq, kmem_reap_start, flag, TQ_NOALLOC))
3165 		*flag = 0;
3166 }
3167
3168 /*
3169  * Reclaim all unused memory from all caches.  Called from the VM system
3170  * when memory gets tight.
3171  */
3172 void
3173 kmem_reap(void)
3174 {
3175 	kmem_reap_common(&kmem_reaping);
3176 }
3177
3178 /*
3179  * Reclaim all unused memory from identifier arenas, called when a vmem
3180  * arena not backed by memory is exhausted.  Since reaping memory-backed
3181  * caches cannot help with identifier exhaustion, we avoid both a large
3182  * amount of work and unwanted side-effects from reclaim callbacks.
3183  */
3184 void
3185 kmem_reap_idspace(void)
3186 {
3187 	kmem_reap_common(&kmem_reaping_idspace);
3188 }
3189
3190 /*
3191  * Purge all magazines from a cache and set its magazine limit to zero.
3192  * All calls are serialized by the kmem_taskq lock, except for the final
3193  * call from kmem_cache_destroy().
3194 */ 3195 static void 3196 kmem_cache_magazine_purge(kmem_cache_t *cp) 3197 { 3198 kmem_cpu_cache_t *ccp; 3199 kmem_magazine_t *mp, *pmp; 3200 int rounds, prounds, cpu_seqid; 3201 3202 ASSERT(!list_link_active(&cp->cache_link) || 3203 taskq_member(kmem_taskq, curthread)); 3204 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 3205 3206 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 3207 ccp = &cp->cache_cpu[cpu_seqid]; 3208 3209 mutex_enter(&ccp->cc_lock); 3210 mp = ccp->cc_loaded; 3211 pmp = ccp->cc_ploaded; 3212 rounds = ccp->cc_rounds; 3213 prounds = ccp->cc_prounds; 3214 ccp->cc_loaded = NULL; 3215 ccp->cc_ploaded = NULL; 3216 ccp->cc_rounds = -1; 3217 ccp->cc_prounds = -1; 3218 ccp->cc_magsize = 0; 3219 mutex_exit(&ccp->cc_lock); 3220 3221 if (mp) 3222 kmem_magazine_destroy(cp, mp, rounds); 3223 if (pmp) 3224 kmem_magazine_destroy(cp, pmp, prounds); 3225 } 3226 3227 kmem_depot_ws_zero(cp); 3228 kmem_depot_ws_reap(cp); 3229 } 3230 3231 /* 3232 * Enable per-cpu magazines on a cache. 3233 */ 3234 static void 3235 kmem_cache_magazine_enable(kmem_cache_t *cp) 3236 { 3237 int cpu_seqid; 3238 3239 if (cp->cache_flags & KMF_NOMAGAZINE) 3240 return; 3241 3242 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 3243 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid]; 3244 mutex_enter(&ccp->cc_lock); 3245 ccp->cc_magsize = cp->cache_magtype->mt_magsize; 3246 mutex_exit(&ccp->cc_lock); 3247 } 3248 3249 } 3250 3251 /* 3252 * Reap (almost) everything right now. 3253 */ 3254 void 3255 kmem_cache_reap_now(kmem_cache_t *cp) 3256 { 3257 ASSERT(list_link_active(&cp->cache_link)); 3258 3259 kmem_depot_ws_zero(cp); 3260 3261 (void) taskq_dispatch(kmem_taskq, 3262 (task_func_t *)kmem_depot_ws_reap, cp, TQ_SLEEP); 3263 taskq_wait(kmem_taskq); 3264 } 3265 3266 /* 3267 * Recompute a cache's magazine size. The trade-off is that larger magazines 3268 * provide a higher transfer rate with the depot, while smaller magazines 3269 * reduce memory consumption. Magazine resizing is an expensive operation; 3270 * it should not be done frequently. 3271 * 3272 * Changes to the magazine size are serialized by the kmem_taskq lock. 3273 * 3274 * Note: at present this only grows the magazine size. It might be useful 3275 * to allow shrinkage too. 3276 */ 3277 static void 3278 kmem_cache_magazine_resize(kmem_cache_t *cp) 3279 { 3280 kmem_magtype_t *mtp = cp->cache_magtype; 3281 3282 ASSERT(taskq_member(kmem_taskq, curthread)); 3283 3284 if (cp->cache_chunksize < mtp->mt_maxbuf) { 3285 kmem_cache_magazine_purge(cp); 3286 mutex_enter(&cp->cache_depot_lock); 3287 cp->cache_magtype = ++mtp; 3288 cp->cache_depot_contention_prev = 3289 cp->cache_depot_contention + INT_MAX; 3290 mutex_exit(&cp->cache_depot_lock); 3291 kmem_cache_magazine_enable(cp); 3292 } 3293 } 3294 3295 /* 3296 * Rescale a cache's hash table, so that the table size is roughly the 3297 * cache size. We want the average lookup time to be extremely small. 
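 *
 * For example (illustrative numbers), a cache with cache_buftotal == 3000
 * yields highbit(3 * 3000 + 4) == 14, so the new table size is
 * 1 << (14 - 2) == 4096 buckets -- slightly more than one bucket per
 * buffer. The rescale is skipped unless the new size is more than double
 * or less than half the old size.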
3298 */ 3299 static void 3300 kmem_hash_rescale(kmem_cache_t *cp) 3301 { 3302 kmem_bufctl_t **old_table, **new_table, *bcp; 3303 size_t old_size, new_size, h; 3304 3305 ASSERT(taskq_member(kmem_taskq, curthread)); 3306 3307 new_size = MAX(KMEM_HASH_INITIAL, 3308 1 << (highbit(3 * cp->cache_buftotal + 4) - 2)); 3309 old_size = cp->cache_hash_mask + 1; 3310 3311 if ((old_size >> 1) <= new_size && new_size <= (old_size << 1)) 3312 return; 3313 3314 new_table = vmem_alloc(kmem_hash_arena, new_size * sizeof (void *), 3315 VM_NOSLEEP); 3316 if (new_table == NULL) 3317 return; 3318 bzero(new_table, new_size * sizeof (void *)); 3319 3320 mutex_enter(&cp->cache_lock); 3321 3322 old_size = cp->cache_hash_mask + 1; 3323 old_table = cp->cache_hash_table; 3324 3325 cp->cache_hash_mask = new_size - 1; 3326 cp->cache_hash_table = new_table; 3327 cp->cache_rescale++; 3328 3329 for (h = 0; h < old_size; h++) { 3330 bcp = old_table[h]; 3331 while (bcp != NULL) { 3332 void *addr = bcp->bc_addr; 3333 kmem_bufctl_t *next_bcp = bcp->bc_next; 3334 kmem_bufctl_t **hash_bucket = KMEM_HASH(cp, addr); 3335 bcp->bc_next = *hash_bucket; 3336 *hash_bucket = bcp; 3337 bcp = next_bcp; 3338 } 3339 } 3340 3341 mutex_exit(&cp->cache_lock); 3342 3343 vmem_free(kmem_hash_arena, old_table, old_size * sizeof (void *)); 3344 } 3345 3346 /* 3347 * Perform periodic maintenance on a cache: hash rescaling, depot working-set 3348 * update, magazine resizing, and slab consolidation. 3349 */ 3350 static void 3351 kmem_cache_update(kmem_cache_t *cp) 3352 { 3353 int need_hash_rescale = 0; 3354 int need_magazine_resize = 0; 3355 3356 ASSERT(MUTEX_HELD(&kmem_cache_lock)); 3357 3358 /* 3359 * If the cache has become much larger or smaller than its hash table, 3360 * fire off a request to rescale the hash table. 3361 */ 3362 mutex_enter(&cp->cache_lock); 3363 3364 if ((cp->cache_flags & KMF_HASH) && 3365 (cp->cache_buftotal > (cp->cache_hash_mask << 1) || 3366 (cp->cache_buftotal < (cp->cache_hash_mask >> 1) && 3367 cp->cache_hash_mask > KMEM_HASH_INITIAL))) 3368 need_hash_rescale = 1; 3369 3370 mutex_exit(&cp->cache_lock); 3371 3372 /* 3373 * Update the depot working set statistics. 3374 */ 3375 kmem_depot_ws_update(cp); 3376 3377 /* 3378 * If there's a lot of contention in the depot, 3379 * increase the magazine size. 
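	 * Contention is judged by the growth of cache_depot_contention
	 * since the last update; if that delta exceeds the
	 * kmem_depot_contention threshold, the resize moves the cache to
	 * the next larger magazine type so that more of the traffic is
	 * absorbed by the per-CPU layer.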
3380 */ 3381 mutex_enter(&cp->cache_depot_lock); 3382 3383 if (cp->cache_chunksize < cp->cache_magtype->mt_maxbuf && 3384 (int)(cp->cache_depot_contention - 3385 cp->cache_depot_contention_prev) > kmem_depot_contention) 3386 need_magazine_resize = 1; 3387 3388 cp->cache_depot_contention_prev = cp->cache_depot_contention; 3389 3390 mutex_exit(&cp->cache_depot_lock); 3391 3392 if (need_hash_rescale) 3393 (void) taskq_dispatch(kmem_taskq, 3394 (task_func_t *)kmem_hash_rescale, cp, TQ_NOSLEEP); 3395 3396 if (need_magazine_resize) 3397 (void) taskq_dispatch(kmem_taskq, 3398 (task_func_t *)kmem_cache_magazine_resize, cp, TQ_NOSLEEP); 3399 3400 if (cp->cache_defrag != NULL) 3401 (void) taskq_dispatch(kmem_taskq, 3402 (task_func_t *)kmem_cache_scan, cp, TQ_NOSLEEP); 3403 } 3404 3405 static void kmem_update(void *); 3406 3407 static void 3408 kmem_update_timeout(void *dummy) 3409 { 3410 (void) timeout(kmem_update, dummy, kmem_reap_interval); 3411 } 3412 3413 static void 3414 kmem_update(void *dummy) 3415 { 3416 kmem_cache_applyall(kmem_cache_update, NULL, TQ_NOSLEEP); 3417 3418 /* 3419 * We use taskq_dispatch() to reschedule the timeout so that 3420 * kmem_update() becomes self-throttling: it won't schedule 3421 * new tasks until all previous tasks have completed. 3422 */ 3423 if (!taskq_dispatch(kmem_taskq, kmem_update_timeout, dummy, TQ_NOSLEEP)) 3424 kmem_update_timeout(NULL); 3425 } 3426 3427 static int 3428 kmem_cache_kstat_update(kstat_t *ksp, int rw) 3429 { 3430 struct kmem_cache_kstat *kmcp = &kmem_cache_kstat; 3431 kmem_cache_t *cp = ksp->ks_private; 3432 uint64_t cpu_buf_avail; 3433 uint64_t buf_avail = 0; 3434 int cpu_seqid; 3435 long reap; 3436 3437 ASSERT(MUTEX_HELD(&kmem_cache_kstat_lock)); 3438 3439 if (rw == KSTAT_WRITE) 3440 return (EACCES); 3441 3442 mutex_enter(&cp->cache_lock); 3443 3444 kmcp->kmc_alloc_fail.value.ui64 = cp->cache_alloc_fail; 3445 kmcp->kmc_alloc.value.ui64 = cp->cache_slab_alloc; 3446 kmcp->kmc_free.value.ui64 = cp->cache_slab_free; 3447 kmcp->kmc_slab_alloc.value.ui64 = cp->cache_slab_alloc; 3448 kmcp->kmc_slab_free.value.ui64 = cp->cache_slab_free; 3449 3450 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 3451 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid]; 3452 3453 mutex_enter(&ccp->cc_lock); 3454 3455 cpu_buf_avail = 0; 3456 if (ccp->cc_rounds > 0) 3457 cpu_buf_avail += ccp->cc_rounds; 3458 if (ccp->cc_prounds > 0) 3459 cpu_buf_avail += ccp->cc_prounds; 3460 3461 kmcp->kmc_alloc.value.ui64 += ccp->cc_alloc; 3462 kmcp->kmc_free.value.ui64 += ccp->cc_free; 3463 buf_avail += cpu_buf_avail; 3464 3465 mutex_exit(&ccp->cc_lock); 3466 } 3467 3468 mutex_enter(&cp->cache_depot_lock); 3469 3470 kmcp->kmc_depot_alloc.value.ui64 = cp->cache_full.ml_alloc; 3471 kmcp->kmc_depot_free.value.ui64 = cp->cache_empty.ml_alloc; 3472 kmcp->kmc_depot_contention.value.ui64 = cp->cache_depot_contention; 3473 kmcp->kmc_full_magazines.value.ui64 = cp->cache_full.ml_total; 3474 kmcp->kmc_empty_magazines.value.ui64 = cp->cache_empty.ml_total; 3475 kmcp->kmc_magazine_size.value.ui64 = 3476 (cp->cache_flags & KMF_NOMAGAZINE) ? 
3477 0 : cp->cache_magtype->mt_magsize; 3478 3479 kmcp->kmc_alloc.value.ui64 += cp->cache_full.ml_alloc; 3480 kmcp->kmc_free.value.ui64 += cp->cache_empty.ml_alloc; 3481 buf_avail += cp->cache_full.ml_total * cp->cache_magtype->mt_magsize; 3482 3483 reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min); 3484 reap = MIN(reap, cp->cache_full.ml_total); 3485 3486 mutex_exit(&cp->cache_depot_lock); 3487 3488 kmcp->kmc_buf_size.value.ui64 = cp->cache_bufsize; 3489 kmcp->kmc_align.value.ui64 = cp->cache_align; 3490 kmcp->kmc_chunk_size.value.ui64 = cp->cache_chunksize; 3491 kmcp->kmc_slab_size.value.ui64 = cp->cache_slabsize; 3492 kmcp->kmc_buf_constructed.value.ui64 = buf_avail; 3493 buf_avail += cp->cache_bufslab; 3494 kmcp->kmc_buf_avail.value.ui64 = buf_avail; 3495 kmcp->kmc_buf_inuse.value.ui64 = cp->cache_buftotal - buf_avail; 3496 kmcp->kmc_buf_total.value.ui64 = cp->cache_buftotal; 3497 kmcp->kmc_buf_max.value.ui64 = cp->cache_bufmax; 3498 kmcp->kmc_slab_create.value.ui64 = cp->cache_slab_create; 3499 kmcp->kmc_slab_destroy.value.ui64 = cp->cache_slab_destroy; 3500 kmcp->kmc_hash_size.value.ui64 = (cp->cache_flags & KMF_HASH) ? 3501 cp->cache_hash_mask + 1 : 0; 3502 kmcp->kmc_hash_lookup_depth.value.ui64 = cp->cache_lookup_depth; 3503 kmcp->kmc_hash_rescale.value.ui64 = cp->cache_rescale; 3504 kmcp->kmc_vmem_source.value.ui64 = cp->cache_arena->vm_id; 3505 kmcp->kmc_reap.value.ui64 = cp->cache_reap; 3506 3507 if (cp->cache_defrag == NULL) { 3508 kmcp->kmc_move_callbacks.value.ui64 = 0; 3509 kmcp->kmc_move_yes.value.ui64 = 0; 3510 kmcp->kmc_move_no.value.ui64 = 0; 3511 kmcp->kmc_move_later.value.ui64 = 0; 3512 kmcp->kmc_move_dont_need.value.ui64 = 0; 3513 kmcp->kmc_move_dont_know.value.ui64 = 0; 3514 kmcp->kmc_move_hunt_found.value.ui64 = 0; 3515 kmcp->kmc_move_slabs_freed.value.ui64 = 0; 3516 kmcp->kmc_defrag.value.ui64 = 0; 3517 kmcp->kmc_scan.value.ui64 = 0; 3518 kmcp->kmc_move_reclaimable.value.ui64 = 0; 3519 } else { 3520 int64_t reclaimable; 3521 3522 kmem_defrag_t *kd = cp->cache_defrag; 3523 kmcp->kmc_move_callbacks.value.ui64 = kd->kmd_callbacks; 3524 kmcp->kmc_move_yes.value.ui64 = kd->kmd_yes; 3525 kmcp->kmc_move_no.value.ui64 = kd->kmd_no; 3526 kmcp->kmc_move_later.value.ui64 = kd->kmd_later; 3527 kmcp->kmc_move_dont_need.value.ui64 = kd->kmd_dont_need; 3528 kmcp->kmc_move_dont_know.value.ui64 = kd->kmd_dont_know; 3529 kmcp->kmc_move_hunt_found.value.ui64 = 0; 3530 kmcp->kmc_move_slabs_freed.value.ui64 = kd->kmd_slabs_freed; 3531 kmcp->kmc_defrag.value.ui64 = kd->kmd_defrags; 3532 kmcp->kmc_scan.value.ui64 = kd->kmd_scans; 3533 3534 reclaimable = cp->cache_bufslab - (cp->cache_maxchunks - 1); 3535 reclaimable = MAX(reclaimable, 0); 3536 reclaimable += ((uint64_t)reap * cp->cache_magtype->mt_magsize); 3537 kmcp->kmc_move_reclaimable.value.ui64 = reclaimable; 3538 } 3539 3540 mutex_exit(&cp->cache_lock); 3541 return (0); 3542 } 3543 3544 /* 3545 * Return a named statistic about a particular cache. 3546 * This shouldn't be called very often, so it's currently designed for 3547 * simplicity (leverages existing kstat support) rather than efficiency. 
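 *
 * For example, to read a cache's cumulative count of slab allocations
 * (any statistic name from the cache's "kmem_cache" kstat works the
 * same way):
 *
 *	uint64_t nslabs = kmem_cache_stat(cp, "slab_alloc");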
3548  */
3549 uint64_t
3550 kmem_cache_stat(kmem_cache_t *cp, char *name)
3551 {
3552 	int i;
3553 	kstat_t *ksp = cp->cache_kstat;
3554 	kstat_named_t *knp = (kstat_named_t *)&kmem_cache_kstat;
3555 	uint64_t value = 0;
3556
3557 	if (ksp != NULL) {
3558 		mutex_enter(&kmem_cache_kstat_lock);
3559 		(void) kmem_cache_kstat_update(ksp, KSTAT_READ);
3560 		for (i = 0; i < ksp->ks_ndata; i++) {
3561 			if (strcmp(knp[i].name, name) == 0) {
3562 				value = knp[i].value.ui64;
3563 				break;
3564 			}
3565 		}
3566 		mutex_exit(&kmem_cache_kstat_lock);
3567 	}
3568 	return (value);
3569 }
3570
3571 /*
3572  * Return an estimate of currently available kernel heap memory.
3573  * On 32-bit systems, physical memory may exceed virtual memory,
3574  * so we just truncate the result at 1GB.
3575  */
3576 size_t
3577 kmem_avail(void)
3578 {
3579 	spgcnt_t rmem = availrmem - tune.t_minarmem;
3580 	spgcnt_t fmem = freemem - minfree;
3581
3582 	return ((size_t)ptob(MIN(MAX(MIN(rmem, fmem), 0),
3583 	    1 << (30 - PAGESHIFT))));
3584 }
3585
3586 /*
3587  * Return the maximum amount of memory that is (in theory) allocatable
3588  * from the heap. This may be used as an estimate only since there
3589  * is no guarantee this space will still be available when an allocation
3590  * request is made, nor that the space can be allocated in one big request
3591  * due to kernel heap fragmentation.
3592  */
3593 size_t
3594 kmem_maxavail(void)
3595 {
3596 	spgcnt_t pmem = availrmem - tune.t_minarmem;
3597 	spgcnt_t vmem = btop(vmem_size(heap_arena, VMEM_FREE));
3598
3599 	return ((size_t)ptob(MAX(MIN(pmem, vmem), 0)));
3600 }
3601
3602 /*
3603  * Indicate whether memory-intensive kmem debugging is enabled.
3604  */
3605 int
3606 kmem_debugging(void)
3607 {
3608 	return (kmem_flags & (KMF_AUDIT | KMF_REDZONE));
3609 }
3610
3611 /* binning function, sorts finely at the two extremes */
3612 #define	KMEM_PARTIAL_SLAB_WEIGHT(sp, binshift)				\
3613 	((((sp)->slab_refcnt <= (binshift)) ||				\
3614 	(((sp)->slab_chunks - (sp)->slab_refcnt) <= (binshift)))	\
3615 	? -(sp)->slab_refcnt						\
3616 	: -((binshift) + ((sp)->slab_refcnt >> (binshift))))
3617
3618 /*
3619  * Minimizing the number of partial slabs on the freelist minimizes
3620  * fragmentation (the ratio of unused buffers held by the slab layer). There
3621  * are two ways to get a slab off of the freelist: 1) free all the buffers on
3622  * the slab, and 2) allocate all the buffers on the slab. It follows that we
3623  * want the most-used slabs at the front of the list where they have the best
3624  * chance of being completely allocated, and the least-used slabs at a safe
3625  * distance from the front to improve the odds that the few remaining buffers
3626  * will all be freed before another allocation can tie up the slab. For that
3627  * reason a slab with a higher slab_refcnt sorts less than a slab with a
3628  * lower slab_refcnt.
3629  *
3630  * However, if a slab has at least one buffer that is deemed unfreeable, we
3631  * would rather have that slab at the front of the list regardless of
3632  * slab_refcnt, since even one unfreeable buffer makes the entire slab
3633  * unfreeable. If the client returns KMEM_CBRC_NO in response to a cache_move()
3634  * callback, the slab is marked unfreeable for as long as it remains on the
3635  * freelist.
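 *
 * To illustrate the binning (numbers are illustrative only): with
 * cache_maxchunks == 64 the binshift is highbit(64 / 16) + 1 == 4, so a
 * slab with slab_refcnt == 62 weighs -62 and one with slab_refcnt == 2
 * weighs -2 (both sorted finely, near the full and empty extremes), while
 * slabs with refcnts of 30 and 40 fall into coarse middle bins with
 * weights -5 and -6 respectively.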
3636 */ 3637 static int 3638 kmem_partial_slab_cmp(const void *p0, const void *p1) 3639 { 3640 const kmem_cache_t *cp; 3641 const kmem_slab_t *s0 = p0; 3642 const kmem_slab_t *s1 = p1; 3643 int w0, w1; 3644 size_t binshift; 3645 3646 ASSERT(KMEM_SLAB_IS_PARTIAL(s0)); 3647 ASSERT(KMEM_SLAB_IS_PARTIAL(s1)); 3648 ASSERT(s0->slab_cache == s1->slab_cache); 3649 cp = s1->slab_cache; 3650 ASSERT(MUTEX_HELD(&cp->cache_lock)); 3651 binshift = cp->cache_partial_binshift; 3652 3653 /* weight of first slab */ 3654 w0 = KMEM_PARTIAL_SLAB_WEIGHT(s0, binshift); 3655 if (s0->slab_flags & KMEM_SLAB_NOMOVE) { 3656 w0 -= cp->cache_maxchunks; 3657 } 3658 3659 /* weight of second slab */ 3660 w1 = KMEM_PARTIAL_SLAB_WEIGHT(s1, binshift); 3661 if (s1->slab_flags & KMEM_SLAB_NOMOVE) { 3662 w1 -= cp->cache_maxchunks; 3663 } 3664 3665 if (w0 < w1) 3666 return (-1); 3667 if (w0 > w1) 3668 return (1); 3669 3670 /* compare pointer values */ 3671 if ((uintptr_t)s0 < (uintptr_t)s1) 3672 return (-1); 3673 if ((uintptr_t)s0 > (uintptr_t)s1) 3674 return (1); 3675 3676 return (0); 3677 } 3678 3679 /* 3680 * It must be valid to call the destructor (if any) on a newly created object. 3681 * That is, the constructor (if any) must leave the object in a valid state for 3682 * the destructor. 3683 */ 3684 kmem_cache_t * 3685 kmem_cache_create( 3686 char *name, /* descriptive name for this cache */ 3687 size_t bufsize, /* size of the objects it manages */ 3688 size_t align, /* required object alignment */ 3689 int (*constructor)(void *, void *, int), /* object constructor */ 3690 void (*destructor)(void *, void *), /* object destructor */ 3691 void (*reclaim)(void *), /* memory reclaim callback */ 3692 void *private, /* pass-thru arg for constr/destr/reclaim */ 3693 vmem_t *vmp, /* vmem source for slab allocation */ 3694 int cflags) /* cache creation flags */ 3695 { 3696 int cpu_seqid; 3697 size_t chunksize; 3698 kmem_cache_t *cp; 3699 kmem_magtype_t *mtp; 3700 size_t csize = KMEM_CACHE_SIZE(max_ncpus); 3701 3702 #ifdef DEBUG 3703 /* 3704 * Cache names should conform to the rules for valid C identifiers 3705 */ 3706 if (!strident_valid(name)) { 3707 cmn_err(CE_CONT, 3708 "kmem_cache_create: '%s' is an invalid cache name\n" 3709 "cache names must conform to the rules for " 3710 "C identifiers\n", name); 3711 } 3712 #endif /* DEBUG */ 3713 3714 if (vmp == NULL) 3715 vmp = kmem_default_arena; 3716 3717 /* 3718 * If this kmem cache has an identifier vmem arena as its source, mark 3719 * it such to allow kmem_reap_idspace(). 3720 */ 3721 ASSERT(!(cflags & KMC_IDENTIFIER)); /* consumer should not set this */ 3722 if (vmp->vm_cflags & VMC_IDENTIFIER) 3723 cflags |= KMC_IDENTIFIER; 3724 3725 /* 3726 * Get a kmem_cache structure. We arrange that cp->cache_cpu[] 3727 * is aligned on a KMEM_CPU_CACHE_SIZE boundary to prevent 3728 * false sharing of per-CPU data. 3729 */ 3730 cp = vmem_xalloc(kmem_cache_arena, csize, KMEM_CPU_CACHE_SIZE, 3731 P2NPHASE(csize, KMEM_CPU_CACHE_SIZE), 0, NULL, NULL, VM_SLEEP); 3732 bzero(cp, csize); 3733 list_link_init(&cp->cache_link); 3734 3735 if (align == 0) 3736 align = KMEM_ALIGN; 3737 3738 /* 3739 * If we're not at least KMEM_ALIGN aligned, we can't use free 3740 * memory to hold bufctl information (because we can't safely 3741 * perform word loads and stores on it). 
3742 */ 3743 if (align < KMEM_ALIGN) 3744 cflags |= KMC_NOTOUCH; 3745 3746 if (!ISP2(align) || align > vmp->vm_quantum) 3747 panic("kmem_cache_create: bad alignment %lu", align); 3748 3749 mutex_enter(&kmem_flags_lock); 3750 if (kmem_flags & KMF_RANDOMIZE) 3751 kmem_flags = (((kmem_flags | ~KMF_RANDOM) + 1) & KMF_RANDOM) | 3752 KMF_RANDOMIZE; 3753 cp->cache_flags = (kmem_flags | cflags) & KMF_DEBUG; 3754 mutex_exit(&kmem_flags_lock); 3755 3756 /* 3757 * Make sure all the various flags are reasonable. 3758 */ 3759 ASSERT(!(cflags & KMC_NOHASH) || !(cflags & KMC_NOTOUCH)); 3760 3761 if (cp->cache_flags & KMF_LITE) { 3762 if (bufsize >= kmem_lite_minsize && 3763 align <= kmem_lite_maxalign && 3764 P2PHASE(bufsize, kmem_lite_maxalign) != 0) { 3765 cp->cache_flags |= KMF_BUFTAG; 3766 cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL); 3767 } else { 3768 cp->cache_flags &= ~KMF_DEBUG; 3769 } 3770 } 3771 3772 if (cp->cache_flags & KMF_DEADBEEF) 3773 cp->cache_flags |= KMF_REDZONE; 3774 3775 if ((cflags & KMC_QCACHE) && (cp->cache_flags & KMF_AUDIT)) 3776 cp->cache_flags |= KMF_NOMAGAZINE; 3777 3778 if (cflags & KMC_NODEBUG) 3779 cp->cache_flags &= ~KMF_DEBUG; 3780 3781 if (cflags & KMC_NOTOUCH) 3782 cp->cache_flags &= ~KMF_TOUCH; 3783 3784 if (cflags & KMC_PREFILL) 3785 cp->cache_flags |= KMF_PREFILL; 3786 3787 if (cflags & KMC_NOHASH) 3788 cp->cache_flags &= ~(KMF_AUDIT | KMF_FIREWALL); 3789 3790 if (cflags & KMC_NOMAGAZINE) 3791 cp->cache_flags |= KMF_NOMAGAZINE; 3792 3793 if ((cp->cache_flags & KMF_AUDIT) && !(cflags & KMC_NOTOUCH)) 3794 cp->cache_flags |= KMF_REDZONE; 3795 3796 if (!(cp->cache_flags & KMF_AUDIT)) 3797 cp->cache_flags &= ~KMF_CONTENTS; 3798 3799 if ((cp->cache_flags & KMF_BUFTAG) && bufsize >= kmem_minfirewall && 3800 !(cp->cache_flags & KMF_LITE) && !(cflags & KMC_NOHASH)) 3801 cp->cache_flags |= KMF_FIREWALL; 3802 3803 if (vmp != kmem_default_arena || kmem_firewall_arena == NULL) 3804 cp->cache_flags &= ~KMF_FIREWALL; 3805 3806 if (cp->cache_flags & KMF_FIREWALL) { 3807 cp->cache_flags &= ~KMF_BUFTAG; 3808 cp->cache_flags |= KMF_NOMAGAZINE; 3809 ASSERT(vmp == kmem_default_arena); 3810 vmp = kmem_firewall_arena; 3811 } 3812 3813 /* 3814 * Set cache properties. 3815 */ 3816 (void) strncpy(cp->cache_name, name, KMEM_CACHE_NAMELEN); 3817 strident_canon(cp->cache_name, KMEM_CACHE_NAMELEN + 1); 3818 cp->cache_bufsize = bufsize; 3819 cp->cache_align = align; 3820 cp->cache_constructor = constructor; 3821 cp->cache_destructor = destructor; 3822 cp->cache_reclaim = reclaim; 3823 cp->cache_private = private; 3824 cp->cache_arena = vmp; 3825 cp->cache_cflags = cflags; 3826 3827 /* 3828 * Determine the chunk size. 3829 */ 3830 chunksize = bufsize; 3831 3832 if (align >= KMEM_ALIGN) { 3833 chunksize = P2ROUNDUP(chunksize, KMEM_ALIGN); 3834 cp->cache_bufctl = chunksize - KMEM_ALIGN; 3835 } 3836 3837 if (cp->cache_flags & KMF_BUFTAG) { 3838 cp->cache_bufctl = chunksize; 3839 cp->cache_buftag = chunksize; 3840 if (cp->cache_flags & KMF_LITE) 3841 chunksize += KMEM_BUFTAG_LITE_SIZE(kmem_lite_count); 3842 else 3843 chunksize += sizeof (kmem_buftag_t); 3844 } 3845 3846 if (cp->cache_flags & KMF_DEADBEEF) { 3847 cp->cache_verify = MIN(cp->cache_buftag, kmem_maxverify); 3848 if (cp->cache_flags & KMF_LITE) 3849 cp->cache_verify = sizeof (uint64_t); 3850 } 3851 3852 cp->cache_contents = MIN(cp->cache_bufctl, kmem_content_maxsave); 3853 3854 cp->cache_chunksize = chunksize = P2ROUNDUP(chunksize, align); 3855 3856 /* 3857 * Now that we know the chunk size, determine the optimal slab size. 
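	 *
	 * For example (illustrative numbers): with an 832-byte chunk on a
	 * 4K-quantum arena, a one-page slab holds 4 chunks and wastes 768
	 * bytes (192 per chunk), while a two-page slab holds 9 chunks and
	 * wastes 704 bytes (~78 per chunk), so the loop below settles on
	 * the two-page slab.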
3858 */ 3859 if (vmp == kmem_firewall_arena) { 3860 cp->cache_slabsize = P2ROUNDUP(chunksize, vmp->vm_quantum); 3861 cp->cache_mincolor = cp->cache_slabsize - chunksize; 3862 cp->cache_maxcolor = cp->cache_mincolor; 3863 cp->cache_flags |= KMF_HASH; 3864 ASSERT(!(cp->cache_flags & KMF_BUFTAG)); 3865 } else if ((cflags & KMC_NOHASH) || (!(cflags & KMC_NOTOUCH) && 3866 !(cp->cache_flags & KMF_AUDIT) && 3867 chunksize < vmp->vm_quantum / KMEM_VOID_FRACTION)) { 3868 cp->cache_slabsize = vmp->vm_quantum; 3869 cp->cache_mincolor = 0; 3870 cp->cache_maxcolor = 3871 (cp->cache_slabsize - sizeof (kmem_slab_t)) % chunksize; 3872 ASSERT(chunksize + sizeof (kmem_slab_t) <= cp->cache_slabsize); 3873 ASSERT(!(cp->cache_flags & KMF_AUDIT)); 3874 } else { 3875 size_t chunks, bestfit, waste, slabsize; 3876 size_t minwaste = LONG_MAX; 3877 3878 for (chunks = 1; chunks <= KMEM_VOID_FRACTION; chunks++) { 3879 slabsize = P2ROUNDUP(chunksize * chunks, 3880 vmp->vm_quantum); 3881 chunks = slabsize / chunksize; 3882 waste = (slabsize % chunksize) / chunks; 3883 if (waste < minwaste) { 3884 minwaste = waste; 3885 bestfit = slabsize; 3886 } 3887 } 3888 if (cflags & KMC_QCACHE) 3889 bestfit = VMEM_QCACHE_SLABSIZE(vmp->vm_qcache_max); 3890 cp->cache_slabsize = bestfit; 3891 cp->cache_mincolor = 0; 3892 cp->cache_maxcolor = bestfit % chunksize; 3893 cp->cache_flags |= KMF_HASH; 3894 } 3895 3896 cp->cache_maxchunks = (cp->cache_slabsize / cp->cache_chunksize); 3897 cp->cache_partial_binshift = highbit(cp->cache_maxchunks / 16) + 1; 3898 3899 /* 3900 * Disallowing prefill when either the DEBUG or HASH flag is set or when 3901 * there is a constructor avoids some tricky issues with debug setup 3902 * that may be revisited later. We cannot allow prefill in a 3903 * metadata cache because of potential recursion. 3904 */ 3905 if (vmp == kmem_msb_arena || 3906 cp->cache_flags & (KMF_HASH | KMF_BUFTAG) || 3907 cp->cache_constructor != NULL) 3908 cp->cache_flags &= ~KMF_PREFILL; 3909 3910 if (cp->cache_flags & KMF_HASH) { 3911 ASSERT(!(cflags & KMC_NOHASH)); 3912 cp->cache_bufctl_cache = (cp->cache_flags & KMF_AUDIT) ? 3913 kmem_bufctl_audit_cache : kmem_bufctl_cache; 3914 } 3915 3916 if (cp->cache_maxcolor >= vmp->vm_quantum) 3917 cp->cache_maxcolor = vmp->vm_quantum - 1; 3918 3919 cp->cache_color = cp->cache_mincolor; 3920 3921 /* 3922 * Initialize the rest of the slab layer. 3923 */ 3924 mutex_init(&cp->cache_lock, NULL, MUTEX_DEFAULT, NULL); 3925 3926 avl_create(&cp->cache_partial_slabs, kmem_partial_slab_cmp, 3927 sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link)); 3928 /* LINTED: E_TRUE_LOGICAL_EXPR */ 3929 ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t)); 3930 /* reuse partial slab AVL linkage for complete slab list linkage */ 3931 list_create(&cp->cache_complete_slabs, 3932 sizeof (kmem_slab_t), offsetof(kmem_slab_t, slab_link)); 3933 3934 if (cp->cache_flags & KMF_HASH) { 3935 cp->cache_hash_table = vmem_alloc(kmem_hash_arena, 3936 KMEM_HASH_INITIAL * sizeof (void *), VM_SLEEP); 3937 bzero(cp->cache_hash_table, 3938 KMEM_HASH_INITIAL * sizeof (void *)); 3939 cp->cache_hash_mask = KMEM_HASH_INITIAL - 1; 3940 cp->cache_hash_shift = highbit((ulong_t)chunksize) - 1; 3941 } 3942 3943 /* 3944 * Initialize the depot. 3945 */ 3946 mutex_init(&cp->cache_depot_lock, NULL, MUTEX_DEFAULT, NULL); 3947 3948 for (mtp = kmem_magtype; chunksize <= mtp->mt_minbuf; mtp++) 3949 continue; 3950 3951 cp->cache_magtype = mtp; 3952 3953 /* 3954 * Initialize the CPU layer. 
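	 * cc_rounds and cc_prounds start at -1 to indicate that no loaded
	 * or previously loaded magazine exists, and cc_magsize stays zero
	 * until kmem_cache_magazine_enable() turns the magazine layer on.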
3955 */ 3956 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) { 3957 kmem_cpu_cache_t *ccp = &cp->cache_cpu[cpu_seqid]; 3958 mutex_init(&ccp->cc_lock, NULL, MUTEX_DEFAULT, NULL); 3959 ccp->cc_flags = cp->cache_flags; 3960 ccp->cc_rounds = -1; 3961 ccp->cc_prounds = -1; 3962 } 3963 3964 /* 3965 * Create the cache's kstats. 3966 */ 3967 if ((cp->cache_kstat = kstat_create("unix", 0, cp->cache_name, 3968 "kmem_cache", KSTAT_TYPE_NAMED, 3969 sizeof (kmem_cache_kstat) / sizeof (kstat_named_t), 3970 KSTAT_FLAG_VIRTUAL)) != NULL) { 3971 cp->cache_kstat->ks_data = &kmem_cache_kstat; 3972 cp->cache_kstat->ks_update = kmem_cache_kstat_update; 3973 cp->cache_kstat->ks_private = cp; 3974 cp->cache_kstat->ks_lock = &kmem_cache_kstat_lock; 3975 kstat_install(cp->cache_kstat); 3976 } 3977 3978 /* 3979 * Add the cache to the global list. This makes it visible 3980 * to kmem_update(), so the cache must be ready for business. 3981 */ 3982 mutex_enter(&kmem_cache_lock); 3983 list_insert_tail(&kmem_caches, cp); 3984 mutex_exit(&kmem_cache_lock); 3985 3986 if (kmem_ready) 3987 kmem_cache_magazine_enable(cp); 3988 3989 return (cp); 3990 } 3991 3992 static int 3993 kmem_move_cmp(const void *buf, const void *p) 3994 { 3995 const kmem_move_t *kmm = p; 3996 uintptr_t v1 = (uintptr_t)buf; 3997 uintptr_t v2 = (uintptr_t)kmm->kmm_from_buf; 3998 return (v1 < v2 ? -1 : (v1 > v2 ? 1 : 0)); 3999 } 4000 4001 static void 4002 kmem_reset_reclaim_threshold(kmem_defrag_t *kmd) 4003 { 4004 kmd->kmd_reclaim_numer = 1; 4005 } 4006 4007 /* 4008 * Initially, when choosing candidate slabs for buffers to move, we want to be 4009 * very selective and take only slabs that are less than 4010 * (1 / KMEM_VOID_FRACTION) allocated. If we have difficulty finding candidate 4011 * slabs, then we raise the allocation ceiling incrementally. The reclaim 4012 * threshold is reset to (1 / KMEM_VOID_FRACTION) as soon as the cache is no 4013 * longer fragmented. 4014 */ 4015 static void 4016 kmem_adjust_reclaim_threshold(kmem_defrag_t *kmd, int direction) 4017 { 4018 if (direction > 0) { 4019 /* make it easier to find a candidate slab */ 4020 if (kmd->kmd_reclaim_numer < (KMEM_VOID_FRACTION - 1)) { 4021 kmd->kmd_reclaim_numer++; 4022 } 4023 } else { 4024 /* be more selective */ 4025 if (kmd->kmd_reclaim_numer > 1) { 4026 kmd->kmd_reclaim_numer--; 4027 } 4028 } 4029 } 4030 4031 void 4032 kmem_cache_set_move(kmem_cache_t *cp, 4033 kmem_cbrc_t (*move)(void *, void *, size_t, void *)) 4034 { 4035 kmem_defrag_t *defrag; 4036 4037 ASSERT(move != NULL); 4038 /* 4039 * The consolidator does not support NOTOUCH caches because kmem cannot 4040 * initialize their slabs with the 0xbaddcafe memory pattern, which sets 4041 * a low order bit usable by clients to distinguish uninitialized memory 4042 * from known objects (see kmem_slab_create). 4043 */ 4044 ASSERT(!(cp->cache_cflags & KMC_NOTOUCH)); 4045 ASSERT(!(cp->cache_cflags & KMC_IDENTIFIER)); 4046 4047 /* 4048 * We should not be holding anyone's cache lock when calling 4049 * kmem_cache_alloc(), so allocate in all cases before acquiring the 4050 * lock. 
4051 */ 4052 defrag = kmem_cache_alloc(kmem_defrag_cache, KM_SLEEP); 4053 4054 mutex_enter(&cp->cache_lock); 4055 4056 if (KMEM_IS_MOVABLE(cp)) { 4057 if (cp->cache_move == NULL) { 4058 ASSERT(cp->cache_slab_alloc == 0); 4059 4060 cp->cache_defrag = defrag; 4061 defrag = NULL; /* nothing to free */ 4062 bzero(cp->cache_defrag, sizeof (kmem_defrag_t)); 4063 avl_create(&cp->cache_defrag->kmd_moves_pending, 4064 kmem_move_cmp, sizeof (kmem_move_t), 4065 offsetof(kmem_move_t, kmm_entry)); 4066 /* LINTED: E_TRUE_LOGICAL_EXPR */ 4067 ASSERT(sizeof (list_node_t) <= sizeof (avl_node_t)); 4068 /* reuse the slab's AVL linkage for deadlist linkage */ 4069 list_create(&cp->cache_defrag->kmd_deadlist, 4070 sizeof (kmem_slab_t), 4071 offsetof(kmem_slab_t, slab_link)); 4072 kmem_reset_reclaim_threshold(cp->cache_defrag); 4073 } 4074 cp->cache_move = move; 4075 } 4076 4077 mutex_exit(&cp->cache_lock); 4078 4079 if (defrag != NULL) { 4080 kmem_cache_free(kmem_defrag_cache, defrag); /* unused */ 4081 } 4082 } 4083 4084 void 4085 kmem_cache_destroy(kmem_cache_t *cp) 4086 { 4087 int cpu_seqid; 4088 4089 /* 4090 * Remove the cache from the global cache list so that no one else 4091 * can schedule tasks on its behalf, wait for any pending tasks to 4092 * complete, purge the cache, and then destroy it. 4093 */ 4094 mutex_enter(&kmem_cache_lock); 4095 list_remove(&kmem_caches, cp); 4096 mutex_exit(&kmem_cache_lock); 4097 4098 if (kmem_taskq != NULL) 4099 taskq_wait(kmem_taskq); 4100 4101 if (kmem_move_taskq != NULL && cp->cache_defrag != NULL) 4102 taskq_wait(kmem_move_taskq); 4103 4104 kmem_cache_magazine_purge(cp); 4105 4106 mutex_enter(&cp->cache_lock); 4107 if (cp->cache_buftotal != 0) 4108 cmn_err(CE_WARN, "kmem_cache_destroy: '%s' (%p) not empty", 4109 cp->cache_name, (void *)cp); 4110 if (cp->cache_defrag != NULL) { 4111 avl_destroy(&cp->cache_defrag->kmd_moves_pending); 4112 list_destroy(&cp->cache_defrag->kmd_deadlist); 4113 kmem_cache_free(kmem_defrag_cache, cp->cache_defrag); 4114 cp->cache_defrag = NULL; 4115 } 4116 /* 4117 * The cache is now dead. There should be no further activity. We 4118 * enforce this by setting land mines in the constructor, destructor, 4119 * reclaim, and move routines that induce a kernel text fault if 4120 * invoked. 
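	 * The values 1 through 4 are not valid instruction addresses, so a
	 * stale call through any of these pointers faults at once instead
	 * of silently corrupting memory.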
4121 */ 4122 cp->cache_constructor = (int (*)(void *, void *, int))1; 4123 cp->cache_destructor = (void (*)(void *, void *))2; 4124 cp->cache_reclaim = (void (*)(void *))3; 4125 cp->cache_move = (kmem_cbrc_t (*)(void *, void *, size_t, void *))4; 4126 mutex_exit(&cp->cache_lock); 4127 4128 kstat_delete(cp->cache_kstat); 4129 4130 if (cp->cache_hash_table != NULL) 4131 vmem_free(kmem_hash_arena, cp->cache_hash_table, 4132 (cp->cache_hash_mask + 1) * sizeof (void *)); 4133 4134 for (cpu_seqid = 0; cpu_seqid < max_ncpus; cpu_seqid++) 4135 mutex_destroy(&cp->cache_cpu[cpu_seqid].cc_lock); 4136 4137 mutex_destroy(&cp->cache_depot_lock); 4138 mutex_destroy(&cp->cache_lock); 4139 4140 vmem_free(kmem_cache_arena, cp, KMEM_CACHE_SIZE(max_ncpus)); 4141 } 4142 4143 /*ARGSUSED*/ 4144 static int 4145 kmem_cpu_setup(cpu_setup_t what, int id, void *arg) 4146 { 4147 ASSERT(MUTEX_HELD(&cpu_lock)); 4148 if (what == CPU_UNCONFIG) { 4149 kmem_cache_applyall(kmem_cache_magazine_purge, 4150 kmem_taskq, TQ_SLEEP); 4151 kmem_cache_applyall(kmem_cache_magazine_enable, 4152 kmem_taskq, TQ_SLEEP); 4153 } 4154 return (0); 4155 } 4156 4157 static void 4158 kmem_alloc_caches_create(const int *array, size_t count, 4159 kmem_cache_t **alloc_table, size_t maxbuf, uint_t shift) 4160 { 4161 char name[KMEM_CACHE_NAMELEN + 1]; 4162 size_t table_unit = (1 << shift); /* range of one alloc_table entry */ 4163 size_t size = table_unit; 4164 int i; 4165 4166 for (i = 0; i < count; i++) { 4167 size_t cache_size = array[i]; 4168 size_t align = KMEM_ALIGN; 4169 kmem_cache_t *cp; 4170 4171 /* if the table has an entry for maxbuf, we're done */ 4172 if (size > maxbuf) 4173 break; 4174 4175 /* cache size must be a multiple of the table unit */ 4176 ASSERT(P2PHASE(cache_size, table_unit) == 0); 4177 4178 /* 4179 * If they allocate a multiple of the coherency granularity, 4180 * they get a coherency-granularity-aligned address. 4181 */ 4182 if (IS_P2ALIGNED(cache_size, 64)) 4183 align = 64; 4184 if (IS_P2ALIGNED(cache_size, PAGESIZE)) 4185 align = PAGESIZE; 4186 (void) snprintf(name, sizeof (name), 4187 "kmem_alloc_%lu", cache_size); 4188 cp = kmem_cache_create(name, cache_size, align, 4189 NULL, NULL, NULL, NULL, NULL, KMC_KMEM_ALLOC); 4190 4191 while (size <= cache_size) { 4192 alloc_table[(size - 1) >> shift] = cp; 4193 size += table_unit; 4194 } 4195 } 4196 4197 ASSERT(size > maxbuf); /* i.e. 
maxbuf <= max(cache_size) */ 4198 } 4199 4200 static void 4201 kmem_cache_init(int pass, int use_large_pages) 4202 { 4203 int i; 4204 size_t maxbuf; 4205 kmem_magtype_t *mtp; 4206 4207 for (i = 0; i < sizeof (kmem_magtype) / sizeof (*mtp); i++) { 4208 char name[KMEM_CACHE_NAMELEN + 1]; 4209 4210 mtp = &kmem_magtype[i]; 4211 (void) sprintf(name, "kmem_magazine_%d", mtp->mt_magsize); 4212 mtp->mt_cache = kmem_cache_create(name, 4213 (mtp->mt_magsize + 1) * sizeof (void *), 4214 mtp->mt_align, NULL, NULL, NULL, NULL, 4215 kmem_msb_arena, KMC_NOHASH); 4216 } 4217 4218 kmem_slab_cache = kmem_cache_create("kmem_slab_cache", 4219 sizeof (kmem_slab_t), 0, NULL, NULL, NULL, NULL, 4220 kmem_msb_arena, KMC_NOHASH); 4221 4222 kmem_bufctl_cache = kmem_cache_create("kmem_bufctl_cache", 4223 sizeof (kmem_bufctl_t), 0, NULL, NULL, NULL, NULL, 4224 kmem_msb_arena, KMC_NOHASH); 4225 4226 kmem_bufctl_audit_cache = kmem_cache_create("kmem_bufctl_audit_cache", 4227 sizeof (kmem_bufctl_audit_t), 0, NULL, NULL, NULL, NULL, 4228 kmem_msb_arena, KMC_NOHASH); 4229 4230 if (pass == 2) { 4231 kmem_va_arena = vmem_create("kmem_va", 4232 NULL, 0, PAGESIZE, 4233 vmem_alloc, vmem_free, heap_arena, 4234 8 * PAGESIZE, VM_SLEEP); 4235 4236 if (use_large_pages) { 4237 kmem_default_arena = vmem_xcreate("kmem_default", 4238 NULL, 0, PAGESIZE, 4239 segkmem_alloc_lp, segkmem_free_lp, kmem_va_arena, 4240 0, VMC_DUMPSAFE | VM_SLEEP); 4241 } else { 4242 kmem_default_arena = vmem_create("kmem_default", 4243 NULL, 0, PAGESIZE, 4244 segkmem_alloc, segkmem_free, kmem_va_arena, 4245 0, VMC_DUMPSAFE | VM_SLEEP); 4246 } 4247 4248 /* Figure out what our maximum cache size is */ 4249 maxbuf = kmem_max_cached; 4250 if (maxbuf <= KMEM_MAXBUF) { 4251 maxbuf = 0; 4252 kmem_max_cached = KMEM_MAXBUF; 4253 } else { 4254 size_t size = 0; 4255 size_t max = 4256 sizeof (kmem_big_alloc_sizes) / sizeof (int); 4257 /* 4258 * Round maxbuf up to an existing cache size. If maxbuf 4259 * is larger than the largest cache, we truncate it to 4260 * the largest cache's size. 4261 */ 4262 for (i = 0; i < max; i++) { 4263 size = kmem_big_alloc_sizes[i]; 4264 if (maxbuf <= size) 4265 break; 4266 } 4267 kmem_max_cached = maxbuf = size; 4268 } 4269 4270 /* 4271 * The big alloc table may not be completely overwritten, so 4272 * we clear out any stale cache pointers from the first pass. 4273 */ 4274 bzero(kmem_big_alloc_table, sizeof (kmem_big_alloc_table)); 4275 } else { 4276 /* 4277 * During the first pass, the kmem_alloc_* caches 4278 * are treated as metadata. 4279 */ 4280 kmem_default_arena = kmem_msb_arena; 4281 maxbuf = KMEM_BIG_MAXBUF_32BIT; 4282 } 4283 4284 /* 4285 * Set up the default caches to back kmem_alloc() 4286 */ 4287 kmem_alloc_caches_create( 4288 kmem_alloc_sizes, sizeof (kmem_alloc_sizes) / sizeof (int), 4289 kmem_alloc_table, KMEM_MAXBUF, KMEM_ALIGN_SHIFT); 4290 4291 kmem_alloc_caches_create( 4292 kmem_big_alloc_sizes, sizeof (kmem_big_alloc_sizes) / sizeof (int), 4293 kmem_big_alloc_table, maxbuf, KMEM_BIG_SHIFT); 4294 4295 kmem_big_alloc_table_max = maxbuf >> KMEM_BIG_SHIFT; 4296 } 4297 4298 void 4299 kmem_init(void) 4300 { 4301 kmem_cache_t *cp; 4302 int old_kmem_flags = kmem_flags; 4303 int use_large_pages = 0; 4304 size_t maxverify, minfirewall; 4305 4306 kstat_init(); 4307 4308 /* 4309 * Don't do firewalled allocations if the heap is less than 1TB 4310 * (i.e. on a 32-bit kernel) 4311 * The resulting VM_NEXTFIT allocations would create too much 4312 * fragmentation in a small heap. 
4313  */
4314 #if defined(_LP64)
4315 	maxverify = minfirewall = PAGESIZE / 2;
4316 #else
4317 	maxverify = minfirewall = ULONG_MAX;
4318 #endif
4319
4320 	/* LINTED */
4321 	ASSERT(sizeof (kmem_cpu_cache_t) == KMEM_CPU_CACHE_SIZE);
4322
4323 	list_create(&kmem_caches, sizeof (kmem_cache_t),
4324 	    offsetof(kmem_cache_t, cache_link));
4325
4326 	kmem_metadata_arena = vmem_create("kmem_metadata", NULL, 0, PAGESIZE,
4327 	    vmem_alloc, vmem_free, heap_arena, 8 * PAGESIZE,
4328 	    VM_SLEEP | VMC_NO_QCACHE);
4329
4330 	kmem_msb_arena = vmem_create("kmem_msb", NULL, 0,
4331 	    PAGESIZE, segkmem_alloc, segkmem_free, kmem_metadata_arena, 0,
4332 	    VMC_DUMPSAFE | VM_SLEEP);
4333
4334 	kmem_cache_arena = vmem_create("kmem_cache", NULL, 0, KMEM_ALIGN,
4335 	    segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP);
4336
4337 	kmem_hash_arena = vmem_create("kmem_hash", NULL, 0, KMEM_ALIGN,
4338 	    segkmem_alloc, segkmem_free, kmem_metadata_arena, 0, VM_SLEEP);
4339
4340 	kmem_log_arena = vmem_create("kmem_log", NULL, 0, KMEM_ALIGN,
4341 	    segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP);
4342
4343 	kmem_firewall_va_arena = vmem_create("kmem_firewall_va",
4344 	    NULL, 0, PAGESIZE,
4345 	    kmem_firewall_va_alloc, kmem_firewall_va_free, heap_arena,
4346 	    0, VM_SLEEP);
4347
4348 	kmem_firewall_arena = vmem_create("kmem_firewall", NULL, 0, PAGESIZE,
4349 	    segkmem_alloc, segkmem_free, kmem_firewall_va_arena, 0,
4350 	    VMC_DUMPSAFE | VM_SLEEP);
4351
4352 	/* temporary oversize arena for mod_read_system_file */
4353 	kmem_oversize_arena = vmem_create("kmem_oversize", NULL, 0, PAGESIZE,
4354 	    segkmem_alloc, segkmem_free, heap_arena, 0, VM_SLEEP);
4355
4356 	kmem_reap_interval = 15 * hz;
4357
4358 	/*
4359 	 * Read /etc/system.  This is a chicken-and-egg problem because
4360 	 * kmem_flags may be set in /etc/system, but mod_read_system_file()
4361 	 * needs to use the allocator.  The simplest solution is to create
4362 	 * all the standard kmem caches, read /etc/system, destroy all the
4363 	 * caches we just created, and then create them all again in light
4364 	 * of the (possibly) new kmem_flags and other kmem tunables.
4365 	 */
4366 	kmem_cache_init(1, 0);
4367
4368 	mod_read_system_file(boothowto & RB_ASKNAME);
4369
4370 	while ((cp = list_tail(&kmem_caches)) != NULL)
4371 		kmem_cache_destroy(cp);
4372
4373 	vmem_destroy(kmem_oversize_arena);
4374
4375 	if (old_kmem_flags & KMF_STICKY)
4376 		kmem_flags = old_kmem_flags;
4377
4378 	if (!(kmem_flags & KMF_AUDIT))
4379 		vmem_seg_size = offsetof(vmem_seg_t, vs_thread);
4380
4381 	if (kmem_maxverify == 0)
4382 		kmem_maxverify = maxverify;
4383
4384 	if (kmem_minfirewall == 0)
4385 		kmem_minfirewall = minfirewall;
4386
4387 	/*
4388 	 * Give segkmem a chance to figure out if we are using large pages
4389 	 * for the kernel heap.
4390 	 */
4391 	use_large_pages = segkmem_lpsetup();
4392
4393 	/*
4394 	 * To protect against corruption, we keep the actual number of callers
4395 	 * KMF_LITE records separate from the tunable.  We arbitrarily clamp
4396 	 * to 16, since the overhead for small buffers quickly gets out of
4397 	 * hand.
4398 	 *
4399 	 * The real limit would depend on the needs of the largest KMC_NOHASH
4400 	 * cache.
4401 	 */
4402 	kmem_lite_count = MIN(MAX(0, kmem_lite_pcs), 16);
4403 	kmem_lite_pcs = kmem_lite_count;
4404
4405 	/*
4406 	 * Normally, we firewall oversized allocations when possible, but
4407 	 * if we are using large pages for kernel memory, and we don't have
4408 	 * any non-LITE debugging flags set, we want to allocate oversized
4409 	 * buffers from large pages, and so skip the firewalling.
4410 */ 4411 if (use_large_pages && 4412 ((kmem_flags & KMF_LITE) || !(kmem_flags & KMF_DEBUG))) { 4413 kmem_oversize_arena = vmem_xcreate("kmem_oversize", NULL, 0, 4414 PAGESIZE, segkmem_alloc_lp, segkmem_free_lp, heap_arena, 4415 0, VMC_DUMPSAFE | VM_SLEEP); 4416 } else { 4417 kmem_oversize_arena = vmem_create("kmem_oversize", 4418 NULL, 0, PAGESIZE, 4419 segkmem_alloc, segkmem_free, kmem_minfirewall < ULONG_MAX? 4420 kmem_firewall_va_arena : heap_arena, 0, VMC_DUMPSAFE | 4421 VM_SLEEP); 4422 } 4423 4424 kmem_cache_init(2, use_large_pages); 4425 4426 if (kmem_flags & (KMF_AUDIT | KMF_RANDOMIZE)) { 4427 if (kmem_transaction_log_size == 0) 4428 kmem_transaction_log_size = kmem_maxavail() / 50; 4429 kmem_transaction_log = kmem_log_init(kmem_transaction_log_size); 4430 } 4431 4432 if (kmem_flags & (KMF_CONTENTS | KMF_RANDOMIZE)) { 4433 if (kmem_content_log_size == 0) 4434 kmem_content_log_size = kmem_maxavail() / 50; 4435 kmem_content_log = kmem_log_init(kmem_content_log_size); 4436 } 4437 4438 kmem_failure_log = kmem_log_init(kmem_failure_log_size); 4439 4440 kmem_slab_log = kmem_log_init(kmem_slab_log_size); 4441 4442 /* 4443 * Initialize STREAMS message caches so allocb() is available. 4444 * This allows us to initialize the logging framework (cmn_err(9F), 4445 * strlog(9F), etc) so we can start recording messages. 4446 */ 4447 streams_msg_init(); 4448 4449 /* 4450 * Initialize the ZSD framework in Zones so modules loaded henceforth 4451 * can register their callbacks. 4452 */ 4453 zone_zsd_init(); 4454 4455 log_init(); 4456 taskq_init(); 4457 4458 /* 4459 * Warn about invalid or dangerous values of kmem_flags. 4460 * Always warn about unsupported values. 4461 */ 4462 if (((kmem_flags & ~(KMF_AUDIT | KMF_DEADBEEF | KMF_REDZONE | 4463 KMF_CONTENTS | KMF_LITE)) != 0) || 4464 ((kmem_flags & KMF_LITE) && kmem_flags != KMF_LITE)) 4465 cmn_err(CE_WARN, "kmem_flags set to unsupported value 0x%x. " 4466 "See the Solaris Tunable Parameters Reference Manual.", 4467 kmem_flags); 4468 4469 #ifdef DEBUG 4470 if ((kmem_flags & KMF_DEBUG) == 0) 4471 cmn_err(CE_NOTE, "kmem debugging disabled."); 4472 #else 4473 /* 4474 * For non-debug kernels, the only "normal" flags are 0, KMF_LITE, 4475 * KMF_REDZONE, and KMF_CONTENTS (the last because it is only enabled 4476 * if KMF_AUDIT is set). We should warn the user about the performance 4477 * penalty of KMF_AUDIT or KMF_DEADBEEF if they are set and KMF_LITE 4478 * isn't set (since that disables AUDIT). 4479 */ 4480 if (!(kmem_flags & KMF_LITE) && 4481 (kmem_flags & (KMF_AUDIT | KMF_DEADBEEF)) != 0) 4482 cmn_err(CE_WARN, "High-overhead kmem debugging features " 4483 "enabled (kmem_flags = 0x%x). Performance degradation " 4484 "and large memory overhead possible. See the Solaris " 4485 "Tunable Parameters Reference Manual.", kmem_flags); 4486 #endif /* not DEBUG */ 4487 4488 kmem_cache_applyall(kmem_cache_magazine_enable, NULL, TQ_SLEEP); 4489 4490 kmem_ready = 1; 4491 4492 /* 4493 * Initialize the platform-specific aligned/DMA memory allocator. 4494 */ 4495 ka_init(); 4496 4497 /* 4498 * Initialize 32-bit ID cache. 4499 */ 4500 id32_init(); 4501 4502 /* 4503 * Initialize the networking stack so modules loaded can 4504 * register their callbacks. 
4505 */ 4506 netstack_init(); 4507 } 4508 4509 static void 4510 kmem_move_init(void) 4511 { 4512 kmem_defrag_cache = kmem_cache_create("kmem_defrag_cache", 4513 sizeof (kmem_defrag_t), 0, NULL, NULL, NULL, NULL, 4514 kmem_msb_arena, KMC_NOHASH); 4515 kmem_move_cache = kmem_cache_create("kmem_move_cache", 4516 sizeof (kmem_move_t), 0, NULL, NULL, NULL, NULL, 4517 kmem_msb_arena, KMC_NOHASH); 4518 4519 /* 4520 * kmem guarantees that move callbacks are sequential and that even 4521 * across multiple caches no two moves ever execute simultaneously. 4522 * Move callbacks are processed on a separate taskq so that client code 4523 * does not interfere with internal maintenance tasks. 4524 */ 4525 kmem_move_taskq = taskq_create_instance("kmem_move_taskq", 0, 1, 4526 minclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE); 4527 } 4528 4529 void 4530 kmem_thread_init(void) 4531 { 4532 kmem_move_init(); 4533 kmem_taskq = taskq_create_instance("kmem_taskq", 0, 1, minclsyspri, 4534 300, INT_MAX, TASKQ_PREPOPULATE); 4535 } 4536 4537 void 4538 kmem_mp_init(void) 4539 { 4540 mutex_enter(&cpu_lock); 4541 register_cpu_setup_func(kmem_cpu_setup, NULL); 4542 mutex_exit(&cpu_lock); 4543 4544 kmem_update_timeout(NULL); 4545 4546 taskq_mp_init(); 4547 } 4548 4549 /* 4550 * Return the slab of the allocated buffer, or NULL if the buffer is not 4551 * allocated. This function may be called with a known slab address to determine 4552 * whether or not the buffer is allocated, or with a NULL slab address to obtain 4553 * an allocated buffer's slab. 4554 */ 4555 static kmem_slab_t * 4556 kmem_slab_allocated(kmem_cache_t *cp, kmem_slab_t *sp, void *buf) 4557 { 4558 kmem_bufctl_t *bcp, *bufbcp; 4559 4560 ASSERT(MUTEX_HELD(&cp->cache_lock)); 4561 ASSERT(sp == NULL || KMEM_SLAB_MEMBER(sp, buf)); 4562 4563 if (cp->cache_flags & KMF_HASH) { 4564 for (bcp = *KMEM_HASH(cp, buf); 4565 (bcp != NULL) && (bcp->bc_addr != buf); 4566 bcp = bcp->bc_next) { 4567 continue; 4568 } 4569 ASSERT(sp != NULL && bcp != NULL ? sp == bcp->bc_slab : 1); 4570 return (bcp == NULL ? NULL : bcp->bc_slab); 4571 } 4572 4573 if (sp == NULL) { 4574 sp = KMEM_SLAB(cp, buf); 4575 } 4576 bufbcp = KMEM_BUFCTL(cp, buf); 4577 for (bcp = sp->slab_head; 4578 (bcp != NULL) && (bcp != bufbcp); 4579 bcp = bcp->bc_next) { 4580 continue; 4581 } 4582 return (bcp == NULL ? sp : NULL); 4583 } 4584 4585 static boolean_t 4586 kmem_slab_is_reclaimable(kmem_cache_t *cp, kmem_slab_t *sp, int flags) 4587 { 4588 long refcnt = sp->slab_refcnt; 4589 4590 ASSERT(cp->cache_defrag != NULL); 4591 4592 /* 4593 * For code coverage we want to be able to move an object within the 4594 * same slab (the only partial slab) even if allocating the destination 4595 * buffer resulted in a completely allocated slab. 4596 */ 4597 if (flags & KMM_DEBUG) { 4598 return ((flags & KMM_DESPERATE) || 4599 ((sp->slab_flags & KMEM_SLAB_NOMOVE) == 0)); 4600 } 4601 4602 /* If we're desperate, we don't care if the client said NO. */ 4603 if (flags & KMM_DESPERATE) { 4604 return (refcnt < sp->slab_chunks); /* any partial */ 4605 } 4606 4607 if (sp->slab_flags & KMEM_SLAB_NOMOVE) { 4608 return (B_FALSE); 4609 } 4610 4611 if ((refcnt == 1) || kmem_move_any_partial) { 4612 return (refcnt < sp->slab_chunks); 4613 } 4614 4615 /* 4616 * The reclaim threshold is adjusted at each kmem_cache_scan() so that 4617 * slabs with a progressively higher percentage of used buffers can be 4618 * reclaimed until the cache as a whole is no longer fragmented. 
4619 * 4620 * sp->slab_refcnt kmd_reclaim_numer 4621 * --------------- < ------------------ 4622 * sp->slab_chunks KMEM_VOID_FRACTION 4623 */ 4624 return ((refcnt * KMEM_VOID_FRACTION) < 4625 (sp->slab_chunks * cp->cache_defrag->kmd_reclaim_numer)); 4626 } 4627 4628 /* 4629 * May be called from the kmem_move_taskq, from kmem_cache_move_notify_task(), 4630 * or when the buffer is freed. 4631 */ 4632 static void 4633 kmem_slab_move_yes(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf) 4634 { 4635 ASSERT(MUTEX_HELD(&cp->cache_lock)); 4636 ASSERT(KMEM_SLAB_MEMBER(sp, from_buf)); 4637 4638 if (!KMEM_SLAB_IS_PARTIAL(sp)) { 4639 return; 4640 } 4641 4642 if (sp->slab_flags & KMEM_SLAB_NOMOVE) { 4643 if (KMEM_SLAB_OFFSET(sp, from_buf) == sp->slab_stuck_offset) { 4644 avl_remove(&cp->cache_partial_slabs, sp); 4645 sp->slab_flags &= ~KMEM_SLAB_NOMOVE; 4646 sp->slab_stuck_offset = (uint32_t)-1; 4647 avl_add(&cp->cache_partial_slabs, sp); 4648 } 4649 } else { 4650 sp->slab_later_count = 0; 4651 sp->slab_stuck_offset = (uint32_t)-1; 4652 } 4653 } 4654 4655 static void 4656 kmem_slab_move_no(kmem_cache_t *cp, kmem_slab_t *sp, void *from_buf) 4657 { 4658 ASSERT(taskq_member(kmem_move_taskq, curthread)); 4659 ASSERT(MUTEX_HELD(&cp->cache_lock)); 4660 ASSERT(KMEM_SLAB_MEMBER(sp, from_buf)); 4661 4662 if (!KMEM_SLAB_IS_PARTIAL(sp)) { 4663 return; 4664 } 4665 4666 avl_remove(&cp->cache_partial_slabs, sp); 4667 sp->slab_later_count = 0; 4668 sp->slab_flags |= KMEM_SLAB_NOMOVE; 4669 sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp, from_buf); 4670 avl_add(&cp->cache_partial_slabs, sp); 4671 } 4672 4673 static void kmem_move_end(kmem_cache_t *, kmem_move_t *); 4674 4675 /* 4676 * The move callback takes two buffer addresses, the buffer to be moved, and a 4677 * newly allocated and constructed buffer selected by kmem as the destination. 4678 * It also takes the size of the buffer and an optional user argument specified 4679 * at cache creation time. kmem guarantees that the buffer to be moved has not 4680 * been unmapped by the virtual memory subsystem. Beyond that, it cannot 4681 * guarantee the present whereabouts of the buffer to be moved, so it is up to 4682 * the client to safely determine whether or not it is still using the buffer. 4683 * The client must not free either of the buffers passed to the move callback, 4684 * since kmem wants to free them directly to the slab layer. The client response 4685 * tells kmem which of the two buffers to free: 4686 * 4687 * YES kmem frees the old buffer (the move was successful) 4688 * NO kmem frees the new buffer, marks the slab of the old buffer 4689 * non-reclaimable to avoid bothering the client again 4690 * LATER kmem frees the new buffer, increments slab_later_count 4691 * DONT_KNOW kmem frees the new buffer 4692 * DONT_NEED kmem frees both the old buffer and the new buffer 4693 * 4694 * The pending callback argument now being processed contains both of the 4695 * buffers (old and new) passed to the move callback function, the slab of the 4696 * old buffer, and flags related to the move request, such as whether or not the 4697 * system was desperate for memory. 4698 * 4699 * Slabs are not freed while there is a pending callback, but instead are kept 4700 * on a deadlist, which is drained after the last callback completes. This means 4701 * that slabs are safe to access until kmem_move_end(), no matter how many of 4702 * their buffers have been freed. 
Once slab_refcnt reaches zero, it stays at 4703 * zero for as long as the slab remains on the deadlist and until the slab is 4704 * freed. 4705 */ 4706 static void 4707 kmem_move_buffer(kmem_move_t *callback) 4708 { 4709 kmem_cbrc_t response; 4710 kmem_slab_t *sp = callback->kmm_from_slab; 4711 kmem_cache_t *cp = sp->slab_cache; 4712 boolean_t free_on_slab; 4713 4714 ASSERT(taskq_member(kmem_move_taskq, curthread)); 4715 ASSERT(MUTEX_NOT_HELD(&cp->cache_lock)); 4716 ASSERT(KMEM_SLAB_MEMBER(sp, callback->kmm_from_buf)); 4717 4718 /* 4719 * The number of allocated buffers on the slab may have changed since we 4720 * last checked the slab's reclaimability (when the pending move was 4721 * enqueued), or the client may have responded NO when asked to move 4722 * another buffer on the same slab. 4723 */ 4724 if (!kmem_slab_is_reclaimable(cp, sp, callback->kmm_flags)) { 4725 kmem_slab_free(cp, callback->kmm_to_buf); 4726 kmem_move_end(cp, callback); 4727 return; 4728 } 4729 4730 /* 4731 * Checking the slab layer is easy, so we might as well do that here 4732 * in case we can avoid bothering the client. 4733 */ 4734 mutex_enter(&cp->cache_lock); 4735 free_on_slab = (kmem_slab_allocated(cp, sp, 4736 callback->kmm_from_buf) == NULL); 4737 mutex_exit(&cp->cache_lock); 4738 4739 if (free_on_slab) { 4740 kmem_slab_free(cp, callback->kmm_to_buf); 4741 kmem_move_end(cp, callback); 4742 return; 4743 } 4744 4745 if (cp->cache_flags & KMF_BUFTAG) { 4746 /* 4747 * Make kmem_cache_alloc_debug() apply the constructor for us. 4748 */ 4749 if (kmem_cache_alloc_debug(cp, callback->kmm_to_buf, 4750 KM_NOSLEEP, 1, caller()) != 0) { 4751 kmem_move_end(cp, callback); 4752 return; 4753 } 4754 } else if (cp->cache_constructor != NULL && 4755 cp->cache_constructor(callback->kmm_to_buf, cp->cache_private, 4756 KM_NOSLEEP) != 0) { 4757 atomic_inc_64(&cp->cache_alloc_fail); 4758 kmem_slab_free(cp, callback->kmm_to_buf); 4759 kmem_move_end(cp, callback); 4760 return; 4761 } 4762 4763 cp->cache_defrag->kmd_callbacks++; 4764 cp->cache_defrag->kmd_thread = curthread; 4765 cp->cache_defrag->kmd_from_buf = callback->kmm_from_buf; 4766 cp->cache_defrag->kmd_to_buf = callback->kmm_to_buf; 4767 DTRACE_PROBE2(kmem__move__start, kmem_cache_t *, cp, kmem_move_t *, 4768 callback); 4769 4770 response = cp->cache_move(callback->kmm_from_buf, 4771 callback->kmm_to_buf, cp->cache_bufsize, cp->cache_private); 4772 4773 DTRACE_PROBE3(kmem__move__end, kmem_cache_t *, cp, kmem_move_t *, 4774 callback, kmem_cbrc_t, response); 4775 cp->cache_defrag->kmd_thread = NULL; 4776 cp->cache_defrag->kmd_from_buf = NULL; 4777 cp->cache_defrag->kmd_to_buf = NULL; 4778 4779 if (response == KMEM_CBRC_YES) { 4780 cp->cache_defrag->kmd_yes++; 4781 kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE); 4782 /* slab safe to access until kmem_move_end() */ 4783 if (sp->slab_refcnt == 0) 4784 cp->cache_defrag->kmd_slabs_freed++; 4785 mutex_enter(&cp->cache_lock); 4786 kmem_slab_move_yes(cp, sp, callback->kmm_from_buf); 4787 mutex_exit(&cp->cache_lock); 4788 kmem_move_end(cp, callback); 4789 return; 4790 } 4791 4792 switch (response) { 4793 case KMEM_CBRC_NO: 4794 cp->cache_defrag->kmd_no++; 4795 mutex_enter(&cp->cache_lock); 4796 kmem_slab_move_no(cp, sp, callback->kmm_from_buf); 4797 mutex_exit(&cp->cache_lock); 4798 break; 4799 case KMEM_CBRC_LATER: 4800 cp->cache_defrag->kmd_later++; 4801 mutex_enter(&cp->cache_lock); 4802 if (!KMEM_SLAB_IS_PARTIAL(sp)) { 4803 mutex_exit(&cp->cache_lock); 4804 break; 4805 } 4806 4807 if (++sp->slab_later_count >= 
			kmem_slab_move_no(cp, sp, callback->kmm_from_buf);
		} else if (!(sp->slab_flags & KMEM_SLAB_NOMOVE)) {
			sp->slab_stuck_offset = KMEM_SLAB_OFFSET(sp,
			    callback->kmm_from_buf);
		}
		mutex_exit(&cp->cache_lock);
		break;
	case KMEM_CBRC_DONT_NEED:
		cp->cache_defrag->kmd_dont_need++;
		kmem_slab_free_constructed(cp, callback->kmm_from_buf, B_FALSE);
		if (sp->slab_refcnt == 0)
			cp->cache_defrag->kmd_slabs_freed++;
		mutex_enter(&cp->cache_lock);
		kmem_slab_move_yes(cp, sp, callback->kmm_from_buf);
		mutex_exit(&cp->cache_lock);
		break;
	case KMEM_CBRC_DONT_KNOW:
		/*
		 * If we don't know whether we can move this buffer, we just
		 * assume that we can't: if the buffer is in fact free,
		 * then it is sitting in one of the per-CPU magazines or in
		 * a full magazine in the depot layer. Either way, because
		 * defrag is induced in the same logic that reaps a cache,
		 * it's likely that full magazines will be returned to the
		 * system soon (thereby accomplishing what we're trying to
		 * accomplish here: return those magazines to their slabs).
		 * Given this, any work that we might do now to locate a buffer
		 * in a magazine is wasted (and expensive!) work; we bump
		 * a counter in this case and otherwise assume that we can't
		 * move it.
		 */
		cp->cache_defrag->kmd_dont_know++;
		break;
	default:
		panic("'%s' (%p) unexpected move callback response %d\n",
		    cp->cache_name, (void *)cp, response);
	}

	kmem_slab_free_constructed(cp, callback->kmm_to_buf, B_FALSE);
	kmem_move_end(cp, callback);
}

/* Return B_FALSE if there is insufficient memory for the move request. */
static boolean_t
kmem_move_begin(kmem_cache_t *cp, kmem_slab_t *sp, void *buf, int flags)
{
	void *to_buf;
	avl_index_t index;
	kmem_move_t *callback, *pending;
	ulong_t n;

	ASSERT(taskq_member(kmem_taskq, curthread));
	ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));
	ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);

	callback = kmem_cache_alloc(kmem_move_cache, KM_NOSLEEP);

	if (callback == NULL)
		return (B_FALSE);

	callback->kmm_from_slab = sp;
	callback->kmm_from_buf = buf;
	callback->kmm_flags = flags;

	mutex_enter(&cp->cache_lock);

	n = avl_numnodes(&cp->cache_partial_slabs);
	if ((n == 0) || ((n == 1) && !(flags & KMM_DEBUG))) {
		mutex_exit(&cp->cache_lock);
		kmem_cache_free(kmem_move_cache, callback);
		return (B_TRUE); /* there is no need for the move request */
	}

	pending = avl_find(&cp->cache_defrag->kmd_moves_pending, buf, &index);
	if (pending != NULL) {
		/*
		 * If the move is already pending and we're desperate now,
		 * update the move flags.
		 */
		if (flags & KMM_DESPERATE) {
			pending->kmm_flags |= KMM_DESPERATE;
		}
		mutex_exit(&cp->cache_lock);
		kmem_cache_free(kmem_move_cache, callback);
		return (B_TRUE);
	}

	to_buf = kmem_slab_alloc_impl(cp, avl_first(&cp->cache_partial_slabs),
	    B_FALSE);
	callback->kmm_to_buf = to_buf;
	avl_insert(&cp->cache_defrag->kmd_moves_pending, callback, index);

	mutex_exit(&cp->cache_lock);

	if (!taskq_dispatch(kmem_move_taskq, (task_func_t *)kmem_move_buffer,
	    callback, TQ_NOSLEEP)) {
		mutex_enter(&cp->cache_lock);
		avl_remove(&cp->cache_defrag->kmd_moves_pending, callback);
		mutex_exit(&cp->cache_lock);
		kmem_slab_free(cp, to_buf);
		kmem_cache_free(kmem_move_cache, callback);
		return (B_FALSE);
	}

	return (B_TRUE);
}

static void
kmem_move_end(kmem_cache_t *cp, kmem_move_t *callback)
{
	avl_index_t index;

	ASSERT(cp->cache_defrag != NULL);
	ASSERT(taskq_member(kmem_move_taskq, curthread));
	ASSERT(MUTEX_NOT_HELD(&cp->cache_lock));

	mutex_enter(&cp->cache_lock);
	VERIFY(avl_find(&cp->cache_defrag->kmd_moves_pending,
	    callback->kmm_from_buf, &index) != NULL);
	avl_remove(&cp->cache_defrag->kmd_moves_pending, callback);
	if (avl_is_empty(&cp->cache_defrag->kmd_moves_pending)) {
		list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
		kmem_slab_t *sp;

		/*
		 * The last pending move completed. Release all slabs from the
		 * front of the dead list except for any slab at the tail that
		 * needs to be released from the context of kmem_move_buffers().
		 * kmem deferred unmapping the buffers on these slabs in order
		 * to guarantee that buffers passed to the move callback have
		 * been touched only by kmem or by the client itself.
		 */
		while ((sp = list_remove_head(deadlist)) != NULL) {
			if (sp->slab_flags & KMEM_SLAB_MOVE_PENDING) {
				list_insert_tail(deadlist, sp);
				break;
			}
			cp->cache_defrag->kmd_deadcount--;
			cp->cache_slab_destroy++;
			mutex_exit(&cp->cache_lock);
			kmem_slab_destroy(cp, sp);
			mutex_enter(&cp->cache_lock);
		}
	}
	mutex_exit(&cp->cache_lock);
	kmem_cache_free(kmem_move_cache, callback);
}

/*
 * Move buffers from least used slabs first by scanning backwards from the end
 * of the partial slab list. Scan at most max_scan candidate slabs and move
 * buffers from at most max_slabs slabs (0 for all partial slabs in both cases).
 * If desperate to reclaim memory, move buffers from any partial slab, otherwise
 * skip slabs with a ratio of allocated buffers at or above the current
 * threshold. Return the number of unskipped slabs (at most max_slabs, -1 if the
 * scan is aborted) so that the caller can adjust the reclaimability threshold
 * depending on how many reclaimable slabs it finds.
 *
 * kmem_move_buffers() drops and reacquires cache_lock every time it issues a
 * move request, since it is not valid for kmem_move_begin() to call
 * kmem_cache_alloc() or taskq_dispatch() with cache_lock held.
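 *
 * For illustration, a periodic caller can use the return value to tune the
 * reclaimability threshold, along the lines of kmem_cache_scan() below (a
 * sketch only; adjust_threshold() is a hypothetical stand-in for the
 * kmem_adjust_reclaim_threshold() bookkeeping):
 *
 *	mutex_enter(&cp->cache_lock);
 *	found = kmem_move_buffers(cp, kmem_reclaim_scan_range,
 *	    kmem_reclaim_max_slabs, 0);
 *	if (found >= 0)
 *		adjust_threshold(found, kmem_reclaim_max_slabs);
 *	mutex_exit(&cp->cache_lock);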
 */
static int
kmem_move_buffers(kmem_cache_t *cp, size_t max_scan, size_t max_slabs,
    int flags)
{
	kmem_slab_t *sp;
	void *buf;
	int i, j;	/* slab index, buffer index */
	int s;		/* reclaimable slabs */
	int b;		/* allocated (movable) buffers on reclaimable slab */
	boolean_t success;
	int refcnt;
	int nomove;

	ASSERT(taskq_member(kmem_taskq, curthread));
	ASSERT(MUTEX_HELD(&cp->cache_lock));
	ASSERT(kmem_move_cache != NULL);
	ASSERT(cp->cache_move != NULL && cp->cache_defrag != NULL);
	ASSERT((flags & KMM_DEBUG) ? !avl_is_empty(&cp->cache_partial_slabs) :
	    avl_numnodes(&cp->cache_partial_slabs) > 1);

	if (kmem_move_blocked) {
		return (0);
	}

	if (kmem_move_fulltilt) {
		flags |= KMM_DESPERATE;
	}

	if (max_scan == 0 || (flags & KMM_DESPERATE)) {
		/*
		 * Scan as many slabs as needed to find the desired number of
		 * candidate slabs.
		 */
		max_scan = (size_t)-1;
	}

	if (max_slabs == 0 || (flags & KMM_DESPERATE)) {
		/* Find as many candidate slabs as possible. */
		max_slabs = (size_t)-1;
	}

	sp = avl_last(&cp->cache_partial_slabs);
	ASSERT(KMEM_SLAB_IS_PARTIAL(sp));
	for (i = 0, s = 0; (i < max_scan) && (s < max_slabs) && (sp != NULL) &&
	    ((sp != avl_first(&cp->cache_partial_slabs)) ||
	    (flags & KMM_DEBUG));
	    sp = AVL_PREV(&cp->cache_partial_slabs, sp), i++) {

		if (!kmem_slab_is_reclaimable(cp, sp, flags)) {
			continue;
		}
		s++;

		/* Look for allocated buffers to move. */
		for (j = 0, b = 0, buf = sp->slab_base;
		    (j < sp->slab_chunks) && (b < sp->slab_refcnt);
		    buf = (((char *)buf) + cp->cache_chunksize), j++) {

			if (kmem_slab_allocated(cp, sp, buf) == NULL) {
				continue;
			}

			b++;

			/*
			 * Prevent the slab from being destroyed while we drop
			 * cache_lock and while the pending move is not yet
			 * registered. Flag the pending move while
			 * kmd_moves_pending may still be empty, since we can't
			 * yet rely on a non-zero pending move count to prevent
			 * the slab from being destroyed.
			 */
			ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING));
			sp->slab_flags |= KMEM_SLAB_MOVE_PENDING;
			/*
			 * Recheck refcnt and nomove after reacquiring the lock,
			 * since these control the order of partial slabs, and
			 * we want to know if we can pick up the scan where we
			 * left off.
			 */
			refcnt = sp->slab_refcnt;
			nomove = (sp->slab_flags & KMEM_SLAB_NOMOVE);
			mutex_exit(&cp->cache_lock);

			success = kmem_move_begin(cp, sp, buf, flags);

			/*
			 * Now, before the lock is reacquired, kmem could
			 * process all pending move requests and purge the
			 * deadlist, so that upon reacquiring the lock, sp has
			 * been remapped. Or, the client may free all the
			 * objects on the slab while the pending moves are still
			 * on the taskq. Therefore, the KMEM_SLAB_MOVE_PENDING
			 * flag causes the slab to be put at the end of the
			 * deadlist and prevents it from being destroyed, since
			 * we plan to destroy it here after reacquiring the
			 * lock.
			 */
			mutex_enter(&cp->cache_lock);
			ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
			sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING;

			if (sp->slab_refcnt == 0) {
				list_t *deadlist =
				    &cp->cache_defrag->kmd_deadlist;
				list_remove(deadlist, sp);

				if (!avl_is_empty(
				    &cp->cache_defrag->kmd_moves_pending)) {
					/*
					 * A pending move makes it unsafe to
					 * destroy the slab, because even though
					 * the move is no longer needed, the
					 * context where that is determined
					 * requires the slab to exist.
					 * Fortunately, a pending move also
					 * means we don't need to destroy the
					 * slab here, since it will get
					 * destroyed along with any other slabs
					 * on the deadlist after the last
					 * pending move completes.
					 */
					list_insert_head(deadlist, sp);
					return (-1);
				}

				/*
				 * Destroy the slab now if it was completely
				 * freed while we dropped cache_lock and there
				 * are no pending moves. Since slab_refcnt
				 * cannot change once it reaches zero, no new
				 * pending moves from that slab are possible.
				 */
				cp->cache_defrag->kmd_deadcount--;
				cp->cache_slab_destroy++;
				mutex_exit(&cp->cache_lock);
				kmem_slab_destroy(cp, sp);
				mutex_enter(&cp->cache_lock);
				/*
				 * Since we can't pick up the scan where we left
				 * off, abort the scan and say nothing about the
				 * number of reclaimable slabs.
				 */
				return (-1);
			}

			if (!success) {
				/*
				 * Abort the scan if there is not enough memory
				 * for the request and say nothing about the
				 * number of reclaimable slabs.
				 */
				return (-1);
			}

			/*
			 * The slab's position changed while the lock was
			 * dropped, so we don't know where we are in the
			 * sequence any more.
			 */
			if (sp->slab_refcnt != refcnt) {
				/*
				 * If this is a KMM_DEBUG move, the slab_refcnt
				 * may have changed because we allocated a
				 * destination buffer on the same slab. In that
				 * case, we're not interested in counting it.
				 */
				return (-1);
			}
			if ((sp->slab_flags & KMEM_SLAB_NOMOVE) != nomove)
				return (-1);

			/*
			 * Generating a move request allocates a destination
			 * buffer from the slab layer, bumping the first partial
			 * slab if it is completely allocated. If the current
			 * slab becomes the first partial slab as a result, we
			 * can't continue to scan backwards.
			 *
			 * If this is a KMM_DEBUG move and we allocated the
			 * destination buffer from the last partial slab, then
			 * the buffer we're moving is on the same slab and our
			 * slab_refcnt has changed, causing us to return before
			 * reaching here if there are no partial slabs left.
			 */
			ASSERT(!avl_is_empty(&cp->cache_partial_slabs));
			if (sp == avl_first(&cp->cache_partial_slabs)) {
				/*
				 * We're not interested in a second KMM_DEBUG
				 * move.
				 */
				goto end_scan;
			}
		}
	}
end_scan:

	return (s);
}

typedef struct kmem_move_notify_args {
	kmem_cache_t *kmna_cache;
	void *kmna_buf;
} kmem_move_notify_args_t;

static void
kmem_cache_move_notify_task(void *arg)
{
	kmem_move_notify_args_t *args = arg;
	kmem_cache_t *cp = args->kmna_cache;
	void *buf = args->kmna_buf;
	kmem_slab_t *sp;

	ASSERT(taskq_member(kmem_taskq, curthread));
	ASSERT(list_link_active(&cp->cache_link));

	kmem_free(args, sizeof (kmem_move_notify_args_t));
	mutex_enter(&cp->cache_lock);
	sp = kmem_slab_allocated(cp, NULL, buf);

	/* Ignore the notification if the buffer is no longer allocated. */
	if (sp == NULL) {
		mutex_exit(&cp->cache_lock);
		return;
	}

	/* Ignore the notification if there's no reason to move the buffer. */
	if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
		/*
		 * The notification is relevant only if the slab is marked by
		 * an earlier refusal (a NO or LATER response) to move a
		 * buffer; otherwise ignore it.
		 */
		if (!(sp->slab_flags & KMEM_SLAB_NOMOVE) &&
		    (sp->slab_later_count == 0)) {
			mutex_exit(&cp->cache_lock);
			return;
		}

		kmem_slab_move_yes(cp, sp, buf);
		ASSERT(!(sp->slab_flags & KMEM_SLAB_MOVE_PENDING));
		sp->slab_flags |= KMEM_SLAB_MOVE_PENDING;
		mutex_exit(&cp->cache_lock);
		/* see kmem_move_buffers() about dropping the lock */
		(void) kmem_move_begin(cp, sp, buf, KMM_NOTIFY);
		mutex_enter(&cp->cache_lock);
		ASSERT(sp->slab_flags & KMEM_SLAB_MOVE_PENDING);
		sp->slab_flags &= ~KMEM_SLAB_MOVE_PENDING;
		if (sp->slab_refcnt == 0) {
			list_t *deadlist = &cp->cache_defrag->kmd_deadlist;
			list_remove(deadlist, sp);

			if (!avl_is_empty(
			    &cp->cache_defrag->kmd_moves_pending)) {
				list_insert_head(deadlist, sp);
				mutex_exit(&cp->cache_lock);
				return;
			}

			cp->cache_defrag->kmd_deadcount--;
			cp->cache_slab_destroy++;
			mutex_exit(&cp->cache_lock);
			kmem_slab_destroy(cp, sp);
			return;
		}
	} else {
		kmem_slab_move_yes(cp, sp, buf);
	}
	mutex_exit(&cp->cache_lock);
}

void
kmem_cache_move_notify(kmem_cache_t *cp, void *buf)
{
	kmem_move_notify_args_t *args;

	args = kmem_alloc(sizeof (kmem_move_notify_args_t), KM_NOSLEEP);
	if (args != NULL) {
		args->kmna_cache = cp;
		args->kmna_buf = buf;
		if (!taskq_dispatch(kmem_taskq,
		    (task_func_t *)kmem_cache_move_notify_task, args,
		    TQ_NOSLEEP))
			kmem_free(args, sizeof (kmem_move_notify_args_t));
	}
}
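/*
 * For illustration only: a client that earlier responded KMEM_CBRC_LATER (or
 * NO) can prompt kmem to retry once the buffer becomes movable. The event
 * hook, object type, and object_cache below are hypothetical; the point is
 * simply that the client hands the cache and the now-movable buffer to
 * kmem_cache_move_notify():
 *
 *	static void
 *	object_now_movable(object_t *op)
 *	{
 *		kmem_cache_move_notify(object_cache, op);
 *	}
 *
 * The notification is processed from a taskq context and is ignored if the
 * buffer has since been freed or if there is no longer any reason to move it.
 */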
static void
kmem_cache_defrag(kmem_cache_t *cp)
{
	size_t n;

	ASSERT(cp->cache_defrag != NULL);

	mutex_enter(&cp->cache_lock);
	n = avl_numnodes(&cp->cache_partial_slabs);
	if (n > 1) {
		/* kmem_move_buffers() drops and reacquires cache_lock */
		cp->cache_defrag->kmd_defrags++;
		(void) kmem_move_buffers(cp, n, 0, KMM_DESPERATE);
	}
	mutex_exit(&cp->cache_lock);
}

/* Is this cache above the fragmentation threshold? */
static boolean_t
kmem_cache_frag_threshold(kmem_cache_t *cp, uint64_t nfree)
{
	/*
	 *	       nfree		kmem_frag_numer
	 *	------------------  >  ---------------
	 *	cp->cache_buftotal	kmem_frag_denom
	 */
	return ((nfree * kmem_frag_denom) >
	    (cp->cache_buftotal * kmem_frag_numer));
}
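/*
 * For a concrete (purely illustrative) reading of the ratio: if
 * kmem_frag_numer / kmem_frag_denom were 1/8, then a cache with
 * cache_buftotal == 1000 would be above the threshold once more than 125 of
 * its buffers were free in the slab layer, since 126 * 8 > 1000 * 1.
 */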
static boolean_t
kmem_cache_is_fragmented(kmem_cache_t *cp, boolean_t *doreap)
{
	boolean_t fragmented;
	uint64_t nfree;

	ASSERT(MUTEX_HELD(&cp->cache_lock));
	*doreap = B_FALSE;

	if (kmem_move_fulltilt) {
		if (avl_numnodes(&cp->cache_partial_slabs) > 1) {
			return (B_TRUE);
		}
	} else {
		if ((cp->cache_complete_slab_count +
		    avl_numnodes(&cp->cache_partial_slabs)) <
		    kmem_frag_minslabs) {
			return (B_FALSE);
		}
	}

	nfree = cp->cache_bufslab;
	fragmented = ((avl_numnodes(&cp->cache_partial_slabs) > 1) &&
	    kmem_cache_frag_threshold(cp, nfree));

	/*
	 * Free buffers in the magazine layer appear allocated from the point of
	 * view of the slab layer. We want to know if the slab layer would
	 * appear fragmented if we included free buffers from magazines that
	 * have fallen out of the working set.
	 */
	if (!fragmented) {
		long reap;

		mutex_enter(&cp->cache_depot_lock);
		reap = MIN(cp->cache_full.ml_reaplimit, cp->cache_full.ml_min);
		reap = MIN(reap, cp->cache_full.ml_total);
		mutex_exit(&cp->cache_depot_lock);

		nfree += ((uint64_t)reap * cp->cache_magtype->mt_magsize);
		if (kmem_cache_frag_threshold(cp, nfree)) {
			*doreap = B_TRUE;
		}
	}

	return (fragmented);
}

/* Called periodically from kmem_taskq */
static void
kmem_cache_scan(kmem_cache_t *cp)
{
	boolean_t reap = B_FALSE;
	kmem_defrag_t *kmd;

	ASSERT(taskq_member(kmem_taskq, curthread));

	mutex_enter(&cp->cache_lock);

	kmd = cp->cache_defrag;
	if (kmd->kmd_consolidate > 0) {
		kmd->kmd_consolidate--;
		mutex_exit(&cp->cache_lock);
		kmem_cache_reap(cp);
		return;
	}

	if (kmem_cache_is_fragmented(cp, &reap)) {
		int slabs_found;

		/*
		 * Consolidate reclaimable slabs from the end of the partial
		 * slab list (scan at most kmem_reclaim_scan_range slabs to find
		 * reclaimable slabs). Keep track of how many candidate slabs we
		 * looked for and how many we actually found so we can adjust
		 * the definition of a candidate slab if we're having trouble
		 * finding them.
		 *
		 * kmem_move_buffers() drops and reacquires cache_lock.
		 */
		kmd->kmd_scans++;
		slabs_found = kmem_move_buffers(cp, kmem_reclaim_scan_range,
		    kmem_reclaim_max_slabs, 0);
		if (slabs_found >= 0) {
			kmd->kmd_slabs_sought += kmem_reclaim_max_slabs;
			kmd->kmd_slabs_found += slabs_found;
		}

		if (++kmd->kmd_tries >= kmem_reclaim_scan_range) {
			kmd->kmd_tries = 0;

			/*
			 * If we had difficulty finding candidate slabs in
			 * previous scans, adjust the threshold so that
			 * candidates are easier to find.
			 */
			if (kmd->kmd_slabs_found == kmd->kmd_slabs_sought) {
				kmem_adjust_reclaim_threshold(kmd, -1);
			} else if ((kmd->kmd_slabs_found * 2) <
			    kmd->kmd_slabs_sought) {
				kmem_adjust_reclaim_threshold(kmd, 1);
			}
			kmd->kmd_slabs_sought = 0;
			kmd->kmd_slabs_found = 0;
		}
	} else {
		kmem_reset_reclaim_threshold(cp->cache_defrag);
#ifdef	DEBUG
		if (!avl_is_empty(&cp->cache_partial_slabs)) {
			/*
			 * In a debug kernel we want the consolidator to
			 * run occasionally even when there is plenty of
			 * memory.
			 */
			uint16_t debug_rand;

			(void) random_get_bytes((uint8_t *)&debug_rand, 2);
			if (!kmem_move_noreap &&
			    ((debug_rand % kmem_mtb_reap) == 0)) {
				mutex_exit(&cp->cache_lock);
				kmem_cache_reap(cp);
				return;
			} else if ((debug_rand % kmem_mtb_move) == 0) {
				kmd->kmd_scans++;
				(void) kmem_move_buffers(cp,
				    kmem_reclaim_scan_range, 1, KMM_DEBUG);
			}
		}
#endif	/* DEBUG */
	}

	mutex_exit(&cp->cache_lock);

	if (reap)
		kmem_depot_ws_reap(cp);
}