/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2018, Joyent, Inc.
 * Copyright (c) 2011, 2020, Delphix. All rights reserved.
 * Copyright (c) 2014, Saso Kiselkov. All rights reserved.
 * Copyright (c) 2017, Nexenta Systems, Inc. All rights reserved.
 * Copyright (c) 2019, loli10K <ezomori.nozomu@gmail.com>. All rights reserved.
 * Copyright (c) 2020, George Amanakis. All rights reserved.
 * Copyright (c) 2019, Klara Inc.
 * Copyright (c) 2019, Allan Jude
 * Copyright (c) 2020, The FreeBSD Foundation [1]
 *
 * [1] Portions of this software were developed by Allan Jude
 *     under sponsorship from the FreeBSD Foundation.
 */

/*
 * DVA-based Adjustable Replacement Cache
 *
 * While much of the theory of operation used here is
 * based on the self-tuning, low overhead replacement cache
 * presented by Megiddo and Modha at FAST 2003, there are some
 * significant differences:
 *
 * 1. The Megiddo and Modha model assumes any page is evictable.
 * Pages in its cache cannot be "locked" into memory. This makes
 * the eviction algorithm simple: evict the last page in the list.
 * This also makes the performance characteristics easy to reason
 * about. Our cache is not so simple. At any given moment, some
 * subset of the blocks in the cache are un-evictable because we
 * have handed out a reference to them. Blocks are only evictable
 * when there are no external references active. This makes
 * eviction far more problematic: we choose to evict the evictable
 * blocks that are the "lowest" in the list.
 *
 * There are times when it is not possible to evict the requested
 * space. In these circumstances we are unable to adjust the cache
 * size. To prevent the cache growing unbounded at these times we
 * implement a "cache throttle" that slows the flow of new data
 * into the cache until we can make space available.
 *
 * 2. The Megiddo and Modha model assumes a fixed cache size.
 * Pages are evicted when the cache is full and there is a cache
 * miss. Our model has a variable sized cache. It grows with
 * high use, but also tries to react to memory pressure from the
 * operating system: decreasing its size when system memory is
 * tight.
 *
 * 3. The Megiddo and Modha model assumes a fixed page size. All
 * elements of the cache are therefore exactly the same size. So
 * when adjusting the cache size following a cache miss, it's simply
 * a matter of choosing a single page to evict. In our model, we
 * have variable sized cache blocks (ranging from 512 bytes to
 * 128K bytes). We therefore choose a set of blocks to evict to make
 * space for a cache miss that approximates as closely as possible
 * the space used by the new block.
 *
 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
 */

/*
 * The locking model:
 *
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists. The arc_read() interface
 * uses method 1, while the internal ARC algorithms for
 * adjusting the cache use method 2. We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * ARC list locks.
 *
 * Buffers do not have their own mutexes, rather they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table. It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each ARC state also has a mutex which is used to protect the
 * buffer list associated with the state. When attempting to
 * obtain a hash table lock while holding an ARC list lock you
 * must use mutex_tryenter() to avoid deadlock. Also note that
 * the active state mutex must be held before the ghost state mutex.
 *
 * It is also possible to register a callback which is run when the
 * arc_meta_limit is reached and no buffers can be safely evicted. In
 * this case the arc user should drop a reference on some arc buffers so
 * they can be reclaimed and the arc_meta_limit honored. For example,
 * when using the ZPL each dentry holds a reference on a znode. These
 * dentries must be pruned before the arc buffer holding the znode can
 * be safely evicted.
 *
 * Note that the majority of the performance stats are manipulated
 * with atomic operations.
 *
 * The L2ARC uses the l2ad_mtx on each vdev for the following:
 *
 * - L2ARC buflist creation
 * - L2ARC buflist eviction
 * - L2ARC write completion, which walks L2ARC buflists
 * - ARC header destruction, as it removes from L2ARC buflists
 * - ARC header release, as it removes from L2ARC buflists
 */

/*
 * ARC operation:
 *
 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
 * This structure can point either to a block that is still in the cache or to
 * one that is only accessible in an L2 ARC device, or it can provide
 * information about a block that was recently evicted. If a block is
 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
 * information to retrieve it from the L2ARC device. This information is
 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
 * that is in this state cannot access the data directly.
 *
 * Blocks that are actively being referenced or have not been evicted
 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
 * the arc_buf_hdr_t that will point to the data block in memory. A block can
 * only be read by a consumer if it has an l1arc_buf_hdr_t.
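 * (As an illustrative sketch of what that means for code in this file:
 * before touching hdr->b_l1hdr, callers check HDR_HAS_L1HDR(hdr), e.g.
 *
 *	if (HDR_HAS_L1HDR(hdr))
 *		buf = hdr->b_l1hdr.b_buf;
 *
 * because an L2-only header is allocated from a smaller kmem cache that
 * ends before the b_l1hdr field; see HDR_L2ONLY_SIZE below.)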
The L1ARC 146 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and 147 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd). 148 * 149 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the 150 * ability to store the physical data (b_pabd) associated with the DVA of the 151 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block, 152 * it will match its on-disk compression characteristics. This behavior can be 153 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the 154 * compressed ARC functionality is disabled, the b_pabd will point to an 155 * uncompressed version of the on-disk data. 156 * 157 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each 158 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it. 159 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC 160 * consumer. The ARC will provide references to this data and will keep it 161 * cached until it is no longer in use. The ARC caches only the L1ARC's physical 162 * data block and will evict any arc_buf_t that is no longer referenced. The 163 * amount of memory consumed by the arc_buf_ts' data buffers can be seen via the 164 * "overhead_size" kstat. 165 * 166 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or 167 * compressed form. The typical case is that consumers will want uncompressed 168 * data, and when that happens a new data buffer is allocated where the data is 169 * decompressed for them to use. Currently the only consumer who wants 170 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it 171 * exists on disk. When this happens, the arc_buf_t's data buffer is shared 172 * with the arc_buf_hdr_t. 173 * 174 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's. The 175 * first one is owned by a compressed send consumer (and therefore references 176 * the same compressed data buffer as the arc_buf_hdr_t) and the second could be 177 * used by any other consumer (and has its own uncompressed copy of the data 178 * buffer). 179 * 180 * arc_buf_hdr_t 181 * +-----------+ 182 * | fields | 183 * | common to | 184 * | L1- and | 185 * | L2ARC | 186 * +-----------+ 187 * | l2arc_buf_hdr_t 188 * | | 189 * +-----------+ 190 * | l1arc_buf_hdr_t 191 * | | arc_buf_t 192 * | b_buf +------------>+-----------+ arc_buf_t 193 * | b_pabd +-+ |b_next +---->+-----------+ 194 * +-----------+ | |-----------| |b_next +-->NULL 195 * | |b_comp = T | +-----------+ 196 * | |b_data +-+ |b_comp = F | 197 * | +-----------+ | |b_data +-+ 198 * +->+------+ | +-----------+ | 199 * compressed | | | | 200 * data | |<--------------+ | uncompressed 201 * +------+ compressed, | data 202 * shared +-->+------+ 203 * data | | 204 * | | 205 * +------+ 206 * 207 * When a consumer reads a block, the ARC must first look to see if the 208 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new 209 * arc_buf_t and either copies uncompressed data into a new data buffer from an 210 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a 211 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the 212 * hdr is compressed and the desired compression characteristics of the 213 * arc_buf_t consumer. 
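 * A rough sketch of that decision (illustrative only -- the real code also
 * has to honor the encryption and buffer-sharing rules described below):
 *
 *	if (consumer wants compressed data && hdr's b_pabd is compressed)
 *		share b_pabd with the new arc_buf_t;
 *	else if (another arc_buf_t on this hdr is already uncompressed)
 *		copy its b_data into the new arc_buf_t;
 *	else
 *		decompress b_pabd into the new arc_buf_t's b_data;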
If the arc_buf_t ends up sharing data with the 214 * arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be 215 * the last buffer in the hdr's b_buf list, however a shared compressed buf can 216 * be anywhere in the hdr's list. 217 * 218 * The diagram below shows an example of an uncompressed ARC hdr that is 219 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is 220 * the last element in the buf list): 221 * 222 * arc_buf_hdr_t 223 * +-----------+ 224 * | | 225 * | | 226 * | | 227 * +-----------+ 228 * l2arc_buf_hdr_t| | 229 * | | 230 * +-----------+ 231 * l1arc_buf_hdr_t| | 232 * | | arc_buf_t (shared) 233 * | b_buf +------------>+---------+ arc_buf_t 234 * | | |b_next +---->+---------+ 235 * | b_pabd +-+ |---------| |b_next +-->NULL 236 * +-----------+ | | | +---------+ 237 * | |b_data +-+ | | 238 * | +---------+ | |b_data +-+ 239 * +->+------+ | +---------+ | 240 * | | | | 241 * uncompressed | | | | 242 * data +------+ | | 243 * ^ +->+------+ | 244 * | uncompressed | | | 245 * | data | | | 246 * | +------+ | 247 * +---------------------------------+ 248 * 249 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd 250 * since the physical block is about to be rewritten. The new data contents 251 * will be contained in the arc_buf_t. As the I/O pipeline performs the write, 252 * it may compress the data before writing it to disk. The ARC will be called 253 * with the transformed data and will bcopy the transformed on-disk block into 254 * a newly allocated b_pabd. Writes are always done into buffers which have 255 * either been loaned (and hence are new and don't have other readers) or 256 * buffers which have been released (and hence have their own hdr, if there 257 * were originally other readers of the buf's original hdr). This ensures that 258 * the ARC only needs to update a single buf and its hdr after a write occurs. 259 * 260 * When the L2ARC is in use, it will also take advantage of the b_pabd. The 261 * L2ARC will always write the contents of b_pabd to the L2ARC. This means 262 * that when compressed ARC is enabled that the L2ARC blocks are identical 263 * to the on-disk block in the main data pool. This provides a significant 264 * advantage since the ARC can leverage the bp's checksum when reading from the 265 * L2ARC to determine if the contents are valid. However, if the compressed 266 * ARC is disabled, then the L2ARC's block must be transformed to look 267 * like the physical block in the main data pool before comparing the 268 * checksum and determining its validity. 269 * 270 * The L1ARC has a slightly different system for storing encrypted data. 271 * Raw (encrypted + possibly compressed) data has a few subtle differences from 272 * data that is just compressed. The biggest difference is that it is not 273 * possible to decrypt encrypted data (or vice-versa) if the keys aren't loaded. 274 * The other difference is that encryption cannot be treated as a suggestion. 275 * If a caller would prefer compressed data, but they actually wind up with 276 * uncompressed data the worst thing that could happen is there might be a 277 * performance hit. If the caller requests encrypted data, however, we must be 278 * sure they actually get it or else secret information could be leaked. Raw 279 * data is stored in hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore, 280 * may have both an encrypted version and a decrypted version of its data at 281 * once. 
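 * (In that situation the raw ciphertext lives in b_crypt_hdr.b_rabd while
 * the decrypted copy sits in the usual b_pabd; the HDR_HAS_RABD() macro
 * defined below is the test for whether the raw copy is present. This is
 * only a summary of the layout, not a statement of every rule the
 * crypto-aware paths enforce.)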
When a caller needs a raw arc_buf_t, it is allocated and the data is 282 * copied out of this header. To avoid complications with b_pabd, raw buffers 283 * cannot be shared. 284 */ 285 286 #include <sys/spa.h> 287 #include <sys/zio.h> 288 #include <sys/spa_impl.h> 289 #include <sys/zio_compress.h> 290 #include <sys/zio_checksum.h> 291 #include <sys/zfs_context.h> 292 #include <sys/arc.h> 293 #include <sys/zfs_refcount.h> 294 #include <sys/vdev.h> 295 #include <sys/vdev_impl.h> 296 #include <sys/dsl_pool.h> 297 #include <sys/zio_checksum.h> 298 #include <sys/multilist.h> 299 #include <sys/abd.h> 300 #include <sys/zil.h> 301 #include <sys/fm/fs/zfs.h> 302 #include <sys/callb.h> 303 #include <sys/kstat.h> 304 #include <sys/zthr.h> 305 #include <zfs_fletcher.h> 306 #include <sys/arc_impl.h> 307 #include <sys/trace_zfs.h> 308 #include <sys/aggsum.h> 309 #include <cityhash.h> 310 #include <sys/vdev_trim.h> 311 #include <sys/zstd/zstd.h> 312 313 #ifndef _KERNEL 314 /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */ 315 boolean_t arc_watch = B_FALSE; 316 #endif 317 318 /* 319 * This thread's job is to keep enough free memory in the system, by 320 * calling arc_kmem_reap_soon() plus arc_reduce_target_size(), which improves 321 * arc_available_memory(). 322 */ 323 static zthr_t *arc_reap_zthr; 324 325 /* 326 * This thread's job is to keep arc_size under arc_c, by calling 327 * arc_evict(), which improves arc_is_overflowing(). 328 */ 329 static zthr_t *arc_evict_zthr; 330 331 static kmutex_t arc_evict_lock; 332 static boolean_t arc_evict_needed = B_FALSE; 333 334 /* 335 * Count of bytes evicted since boot. 336 */ 337 static uint64_t arc_evict_count; 338 339 /* 340 * List of arc_evict_waiter_t's, representing threads waiting for the 341 * arc_evict_count to reach specific values. 342 */ 343 static list_t arc_evict_waiters; 344 345 /* 346 * When arc_is_overflowing(), arc_get_data_impl() waits for this percent of 347 * the requested amount of data to be evicted. For example, by default for 348 * every 2KB that's evicted, 1KB of it may be "reused" by a new allocation. 349 * Since this is above 100%, it ensures that progress is made towards getting 350 * arc_size under arc_c. Since this is finite, it ensures that allocations 351 * can still happen, even during the potentially long time that arc_size is 352 * more than arc_c. 353 */ 354 int zfs_arc_eviction_pct = 200; 355 356 /* 357 * The number of headers to evict in arc_evict_state_impl() before 358 * dropping the sublist lock and evicting from another sublist. A lower 359 * value means we're more likely to evict the "correct" header (i.e. the 360 * oldest header in the arc state), but comes with higher overhead 361 * (i.e. more invocations of arc_evict_state_impl()). 362 */ 363 int zfs_arc_evict_batch_limit = 10; 364 365 /* number of seconds before growing cache again */ 366 int arc_grow_retry = 5; 367 368 /* 369 * Minimum time between calls to arc_kmem_reap_soon(). 370 */ 371 int arc_kmem_cache_reap_retry_ms = 1000; 372 373 /* shift of arc_c for calculating overflow limit in arc_get_data_impl */ 374 int zfs_arc_overflow_shift = 8; 375 376 /* shift of arc_c for calculating both min and max arc_p */ 377 int arc_p_min_shift = 4; 378 379 /* log2(fraction of arc to reclaim) */ 380 int arc_shrink_shift = 7; 381 382 /* percent of pagecache to reclaim arc to */ 383 #ifdef _KERNEL 384 uint_t zfs_arc_pc_percent = 0; 385 #endif 386 387 /* 388 * log2(fraction of ARC which must be free to allow growing). 389 * I.e. 
If there is less than arc_c >> arc_no_grow_shift free memory, 390 * when reading a new block into the ARC, we will evict an equal-sized block 391 * from the ARC. 392 * 393 * This must be less than arc_shrink_shift, so that when we shrink the ARC, 394 * we will still not allow it to grow. 395 */ 396 int arc_no_grow_shift = 5; 397 398 399 /* 400 * minimum lifespan of a prefetch block in clock ticks 401 * (initialized in arc_init()) 402 */ 403 static int arc_min_prefetch_ms; 404 static int arc_min_prescient_prefetch_ms; 405 406 /* 407 * If this percent of memory is free, don't throttle. 408 */ 409 int arc_lotsfree_percent = 10; 410 411 /* 412 * The arc has filled available memory and has now warmed up. 413 */ 414 boolean_t arc_warm; 415 416 /* 417 * These tunables are for performance analysis. 418 */ 419 unsigned long zfs_arc_max = 0; 420 unsigned long zfs_arc_min = 0; 421 unsigned long zfs_arc_meta_limit = 0; 422 unsigned long zfs_arc_meta_min = 0; 423 unsigned long zfs_arc_dnode_limit = 0; 424 unsigned long zfs_arc_dnode_reduce_percent = 10; 425 int zfs_arc_grow_retry = 0; 426 int zfs_arc_shrink_shift = 0; 427 int zfs_arc_p_min_shift = 0; 428 int zfs_arc_average_blocksize = 8 * 1024; /* 8KB */ 429 430 /* 431 * ARC dirty data constraints for arc_tempreserve_space() throttle. 432 */ 433 unsigned long zfs_arc_dirty_limit_percent = 50; /* total dirty data limit */ 434 unsigned long zfs_arc_anon_limit_percent = 25; /* anon block dirty limit */ 435 unsigned long zfs_arc_pool_dirty_percent = 20; /* each pool's anon allowance */ 436 437 /* 438 * Enable or disable compressed arc buffers. 439 */ 440 int zfs_compressed_arc_enabled = B_TRUE; 441 442 /* 443 * ARC will evict meta buffers that exceed arc_meta_limit. This 444 * tunable make arc_meta_limit adjustable for different workloads. 445 */ 446 unsigned long zfs_arc_meta_limit_percent = 75; 447 448 /* 449 * Percentage that can be consumed by dnodes of ARC meta buffers. 
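 * In other words (illustrative arithmetic, assuming zfs_arc_dnode_limit
 * above is left at its default of 0 so that this percentage applies):
 *
 *	arc_dnode_size_limit ~=
 *	    arc_meta_limit * zfs_arc_dnode_limit_percent / 100
 *
 * e.g. with a 12 GiB arc_meta_limit and the default 10%, dnodes may
 * consume roughly 1.2 GiB before the ARC starts asking its consumers to
 * prune dnode references.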
450 */ 451 unsigned long zfs_arc_dnode_limit_percent = 10; 452 453 /* 454 * These tunables are Linux specific 455 */ 456 unsigned long zfs_arc_sys_free = 0; 457 int zfs_arc_min_prefetch_ms = 0; 458 int zfs_arc_min_prescient_prefetch_ms = 0; 459 int zfs_arc_p_dampener_disable = 1; 460 int zfs_arc_meta_prune = 10000; 461 int zfs_arc_meta_strategy = ARC_STRATEGY_META_BALANCED; 462 int zfs_arc_meta_adjust_restarts = 4096; 463 int zfs_arc_lotsfree_percent = 10; 464 465 /* The 6 states: */ 466 arc_state_t ARC_anon; 467 arc_state_t ARC_mru; 468 arc_state_t ARC_mru_ghost; 469 arc_state_t ARC_mfu; 470 arc_state_t ARC_mfu_ghost; 471 arc_state_t ARC_l2c_only; 472 473 arc_stats_t arc_stats = { 474 { "hits", KSTAT_DATA_UINT64 }, 475 { "misses", KSTAT_DATA_UINT64 }, 476 { "demand_data_hits", KSTAT_DATA_UINT64 }, 477 { "demand_data_misses", KSTAT_DATA_UINT64 }, 478 { "demand_metadata_hits", KSTAT_DATA_UINT64 }, 479 { "demand_metadata_misses", KSTAT_DATA_UINT64 }, 480 { "prefetch_data_hits", KSTAT_DATA_UINT64 }, 481 { "prefetch_data_misses", KSTAT_DATA_UINT64 }, 482 { "prefetch_metadata_hits", KSTAT_DATA_UINT64 }, 483 { "prefetch_metadata_misses", KSTAT_DATA_UINT64 }, 484 { "mru_hits", KSTAT_DATA_UINT64 }, 485 { "mru_ghost_hits", KSTAT_DATA_UINT64 }, 486 { "mfu_hits", KSTAT_DATA_UINT64 }, 487 { "mfu_ghost_hits", KSTAT_DATA_UINT64 }, 488 { "deleted", KSTAT_DATA_UINT64 }, 489 { "mutex_miss", KSTAT_DATA_UINT64 }, 490 { "access_skip", KSTAT_DATA_UINT64 }, 491 { "evict_skip", KSTAT_DATA_UINT64 }, 492 { "evict_not_enough", KSTAT_DATA_UINT64 }, 493 { "evict_l2_cached", KSTAT_DATA_UINT64 }, 494 { "evict_l2_eligible", KSTAT_DATA_UINT64 }, 495 { "evict_l2_eligible_mfu", KSTAT_DATA_UINT64 }, 496 { "evict_l2_eligible_mru", KSTAT_DATA_UINT64 }, 497 { "evict_l2_ineligible", KSTAT_DATA_UINT64 }, 498 { "evict_l2_skip", KSTAT_DATA_UINT64 }, 499 { "hash_elements", KSTAT_DATA_UINT64 }, 500 { "hash_elements_max", KSTAT_DATA_UINT64 }, 501 { "hash_collisions", KSTAT_DATA_UINT64 }, 502 { "hash_chains", KSTAT_DATA_UINT64 }, 503 { "hash_chain_max", KSTAT_DATA_UINT64 }, 504 { "p", KSTAT_DATA_UINT64 }, 505 { "c", KSTAT_DATA_UINT64 }, 506 { "c_min", KSTAT_DATA_UINT64 }, 507 { "c_max", KSTAT_DATA_UINT64 }, 508 { "size", KSTAT_DATA_UINT64 }, 509 { "compressed_size", KSTAT_DATA_UINT64 }, 510 { "uncompressed_size", KSTAT_DATA_UINT64 }, 511 { "overhead_size", KSTAT_DATA_UINT64 }, 512 { "hdr_size", KSTAT_DATA_UINT64 }, 513 { "data_size", KSTAT_DATA_UINT64 }, 514 { "metadata_size", KSTAT_DATA_UINT64 }, 515 { "dbuf_size", KSTAT_DATA_UINT64 }, 516 { "dnode_size", KSTAT_DATA_UINT64 }, 517 { "bonus_size", KSTAT_DATA_UINT64 }, 518 #if defined(COMPAT_FREEBSD11) 519 { "other_size", KSTAT_DATA_UINT64 }, 520 #endif 521 { "anon_size", KSTAT_DATA_UINT64 }, 522 { "anon_evictable_data", KSTAT_DATA_UINT64 }, 523 { "anon_evictable_metadata", KSTAT_DATA_UINT64 }, 524 { "mru_size", KSTAT_DATA_UINT64 }, 525 { "mru_evictable_data", KSTAT_DATA_UINT64 }, 526 { "mru_evictable_metadata", KSTAT_DATA_UINT64 }, 527 { "mru_ghost_size", KSTAT_DATA_UINT64 }, 528 { "mru_ghost_evictable_data", KSTAT_DATA_UINT64 }, 529 { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 530 { "mfu_size", KSTAT_DATA_UINT64 }, 531 { "mfu_evictable_data", KSTAT_DATA_UINT64 }, 532 { "mfu_evictable_metadata", KSTAT_DATA_UINT64 }, 533 { "mfu_ghost_size", KSTAT_DATA_UINT64 }, 534 { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64 }, 535 { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 536 { "l2_hits", KSTAT_DATA_UINT64 }, 537 { "l2_misses", KSTAT_DATA_UINT64 }, 538 { "l2_prefetch_asize", 
KSTAT_DATA_UINT64 }, 539 { "l2_mru_asize", KSTAT_DATA_UINT64 }, 540 { "l2_mfu_asize", KSTAT_DATA_UINT64 }, 541 { "l2_bufc_data_asize", KSTAT_DATA_UINT64 }, 542 { "l2_bufc_metadata_asize", KSTAT_DATA_UINT64 }, 543 { "l2_feeds", KSTAT_DATA_UINT64 }, 544 { "l2_rw_clash", KSTAT_DATA_UINT64 }, 545 { "l2_read_bytes", KSTAT_DATA_UINT64 }, 546 { "l2_write_bytes", KSTAT_DATA_UINT64 }, 547 { "l2_writes_sent", KSTAT_DATA_UINT64 }, 548 { "l2_writes_done", KSTAT_DATA_UINT64 }, 549 { "l2_writes_error", KSTAT_DATA_UINT64 }, 550 { "l2_writes_lock_retry", KSTAT_DATA_UINT64 }, 551 { "l2_evict_lock_retry", KSTAT_DATA_UINT64 }, 552 { "l2_evict_reading", KSTAT_DATA_UINT64 }, 553 { "l2_evict_l1cached", KSTAT_DATA_UINT64 }, 554 { "l2_free_on_write", KSTAT_DATA_UINT64 }, 555 { "l2_abort_lowmem", KSTAT_DATA_UINT64 }, 556 { "l2_cksum_bad", KSTAT_DATA_UINT64 }, 557 { "l2_io_error", KSTAT_DATA_UINT64 }, 558 { "l2_size", KSTAT_DATA_UINT64 }, 559 { "l2_asize", KSTAT_DATA_UINT64 }, 560 { "l2_hdr_size", KSTAT_DATA_UINT64 }, 561 { "l2_log_blk_writes", KSTAT_DATA_UINT64 }, 562 { "l2_log_blk_avg_asize", KSTAT_DATA_UINT64 }, 563 { "l2_log_blk_asize", KSTAT_DATA_UINT64 }, 564 { "l2_log_blk_count", KSTAT_DATA_UINT64 }, 565 { "l2_data_to_meta_ratio", KSTAT_DATA_UINT64 }, 566 { "l2_rebuild_success", KSTAT_DATA_UINT64 }, 567 { "l2_rebuild_unsupported", KSTAT_DATA_UINT64 }, 568 { "l2_rebuild_io_errors", KSTAT_DATA_UINT64 }, 569 { "l2_rebuild_dh_errors", KSTAT_DATA_UINT64 }, 570 { "l2_rebuild_cksum_lb_errors", KSTAT_DATA_UINT64 }, 571 { "l2_rebuild_lowmem", KSTAT_DATA_UINT64 }, 572 { "l2_rebuild_size", KSTAT_DATA_UINT64 }, 573 { "l2_rebuild_asize", KSTAT_DATA_UINT64 }, 574 { "l2_rebuild_bufs", KSTAT_DATA_UINT64 }, 575 { "l2_rebuild_bufs_precached", KSTAT_DATA_UINT64 }, 576 { "l2_rebuild_log_blks", KSTAT_DATA_UINT64 }, 577 { "memory_throttle_count", KSTAT_DATA_UINT64 }, 578 { "memory_direct_count", KSTAT_DATA_UINT64 }, 579 { "memory_indirect_count", KSTAT_DATA_UINT64 }, 580 { "memory_all_bytes", KSTAT_DATA_UINT64 }, 581 { "memory_free_bytes", KSTAT_DATA_UINT64 }, 582 { "memory_available_bytes", KSTAT_DATA_INT64 }, 583 { "arc_no_grow", KSTAT_DATA_UINT64 }, 584 { "arc_tempreserve", KSTAT_DATA_UINT64 }, 585 { "arc_loaned_bytes", KSTAT_DATA_UINT64 }, 586 { "arc_prune", KSTAT_DATA_UINT64 }, 587 { "arc_meta_used", KSTAT_DATA_UINT64 }, 588 { "arc_meta_limit", KSTAT_DATA_UINT64 }, 589 { "arc_dnode_limit", KSTAT_DATA_UINT64 }, 590 { "arc_meta_max", KSTAT_DATA_UINT64 }, 591 { "arc_meta_min", KSTAT_DATA_UINT64 }, 592 { "async_upgrade_sync", KSTAT_DATA_UINT64 }, 593 { "demand_hit_predictive_prefetch", KSTAT_DATA_UINT64 }, 594 { "demand_hit_prescient_prefetch", KSTAT_DATA_UINT64 }, 595 { "arc_need_free", KSTAT_DATA_UINT64 }, 596 { "arc_sys_free", KSTAT_DATA_UINT64 }, 597 { "arc_raw_size", KSTAT_DATA_UINT64 }, 598 { "cached_only_in_progress", KSTAT_DATA_UINT64 }, 599 { "abd_chunk_waste_size", KSTAT_DATA_UINT64 }, 600 }; 601 602 #define ARCSTAT_MAX(stat, val) { \ 603 uint64_t m; \ 604 while ((val) > (m = arc_stats.stat.value.ui64) && \ 605 (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \ 606 continue; \ 607 } 608 609 #define ARCSTAT_MAXSTAT(stat) \ 610 ARCSTAT_MAX(stat##_max, arc_stats.stat.value.ui64) 611 612 /* 613 * We define a macro to allow ARC hits/misses to be easily broken down by 614 * two separate conditions, giving a total of four different subtypes for 615 * each of hits and misses (so eight statistics total). 
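 * For instance, a hit is typically recorded with something like (a sketch
 * of the intended usage, not a verbatim call site):
 *
 *	ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), demand, prefetch,
 *	    !HDR_ISTYPE_METADATA(hdr), data, metadata, hits);
 *
 * which bumps exactly one of arcstat_demand_data_hits,
 * arcstat_demand_metadata_hits, arcstat_prefetch_data_hits or
 * arcstat_prefetch_metadata_hits.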
616 */ 617 #define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \ 618 if (cond1) { \ 619 if (cond2) { \ 620 ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \ 621 } else { \ 622 ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \ 623 } \ 624 } else { \ 625 if (cond2) { \ 626 ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \ 627 } else { \ 628 ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\ 629 } \ 630 } 631 632 /* 633 * This macro allows us to use kstats as floating averages. Each time we 634 * update this kstat, we first factor it and the update value by 635 * ARCSTAT_AVG_FACTOR to shrink the new value's contribution to the overall 636 * average. This macro assumes that integer loads and stores are atomic, but 637 * is not safe for multiple writers updating the kstat in parallel (only the 638 * last writer's update will remain). 639 */ 640 #define ARCSTAT_F_AVG_FACTOR 3 641 #define ARCSTAT_F_AVG(stat, value) \ 642 do { \ 643 uint64_t x = ARCSTAT(stat); \ 644 x = x - x / ARCSTAT_F_AVG_FACTOR + \ 645 (value) / ARCSTAT_F_AVG_FACTOR; \ 646 ARCSTAT(stat) = x; \ 647 _NOTE(CONSTCOND) \ 648 } while (0) 649 650 kstat_t *arc_ksp; 651 static arc_state_t *arc_anon; 652 static arc_state_t *arc_mru_ghost; 653 static arc_state_t *arc_mfu_ghost; 654 static arc_state_t *arc_l2c_only; 655 656 arc_state_t *arc_mru; 657 arc_state_t *arc_mfu; 658 659 /* 660 * There are several ARC variables that are critical to export as kstats -- 661 * but we don't want to have to grovel around in the kstat whenever we wish to 662 * manipulate them. For these variables, we therefore define them to be in 663 * terms of the statistic variable. This assures that we are not introducing 664 * the possibility of inconsistency by having shadow copies of the variables, 665 * while still allowing the code to be readable. 666 */ 667 #define arc_tempreserve ARCSTAT(arcstat_tempreserve) 668 #define arc_loaned_bytes ARCSTAT(arcstat_loaned_bytes) 669 #define arc_meta_limit ARCSTAT(arcstat_meta_limit) /* max size for metadata */ 670 /* max size for dnodes */ 671 #define arc_dnode_size_limit ARCSTAT(arcstat_dnode_limit) 672 #define arc_meta_min ARCSTAT(arcstat_meta_min) /* min size for metadata */ 673 #define arc_meta_max ARCSTAT(arcstat_meta_max) /* max size of metadata */ 674 #define arc_need_free ARCSTAT(arcstat_need_free) /* waiting to be evicted */ 675 676 /* size of all b_rabd's in entire arc */ 677 #define arc_raw_size ARCSTAT(arcstat_raw_size) 678 /* compressed size of entire arc */ 679 #define arc_compressed_size ARCSTAT(arcstat_compressed_size) 680 /* uncompressed size of entire arc */ 681 #define arc_uncompressed_size ARCSTAT(arcstat_uncompressed_size) 682 /* number of bytes in the arc from arc_buf_t's */ 683 #define arc_overhead_size ARCSTAT(arcstat_overhead_size) 684 685 /* 686 * There are also some ARC variables that we want to export, but that are 687 * updated so often that having the canonical representation be the statistic 688 * variable causes a performance bottleneck. We want to use aggsum_t's for these 689 * instead, but still be able to export the kstat in the same way as before. 690 * The solution is to always use the aggsum version, except in the kstat update 691 * callback. 
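 * The pattern is roughly (an illustrative sketch, not a verbatim copy of
 * the code below): hot paths do
 *
 *	aggsum_add(&arc_size, space);
 *
 * while only the kstat update callback pays the cost of an exact read:
 *
 *	ARCSTAT(arcstat_size) = aggsum_value(&arc_size);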
692 */ 693 aggsum_t arc_size; 694 aggsum_t arc_meta_used; 695 aggsum_t astat_data_size; 696 aggsum_t astat_metadata_size; 697 aggsum_t astat_dbuf_size; 698 aggsum_t astat_dnode_size; 699 aggsum_t astat_bonus_size; 700 aggsum_t astat_hdr_size; 701 aggsum_t astat_l2_hdr_size; 702 aggsum_t astat_abd_chunk_waste_size; 703 704 hrtime_t arc_growtime; 705 list_t arc_prune_list; 706 kmutex_t arc_prune_mtx; 707 taskq_t *arc_prune_taskq; 708 709 #define GHOST_STATE(state) \ 710 ((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \ 711 (state) == arc_l2c_only) 712 713 #define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE) 714 #define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) 715 #define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_FLAG_IO_ERROR) 716 #define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_FLAG_PREFETCH) 717 #define HDR_PRESCIENT_PREFETCH(hdr) \ 718 ((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH) 719 #define HDR_COMPRESSION_ENABLED(hdr) \ 720 ((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC) 721 722 #define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_FLAG_L2CACHE) 723 #define HDR_L2_READING(hdr) \ 724 (((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \ 725 ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)) 726 #define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITING) 727 #define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_FLAG_L2_EVICTED) 728 #define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD) 729 #define HDR_PROTECTED(hdr) ((hdr)->b_flags & ARC_FLAG_PROTECTED) 730 #define HDR_NOAUTH(hdr) ((hdr)->b_flags & ARC_FLAG_NOAUTH) 731 #define HDR_SHARED_DATA(hdr) ((hdr)->b_flags & ARC_FLAG_SHARED_DATA) 732 733 #define HDR_ISTYPE_METADATA(hdr) \ 734 ((hdr)->b_flags & ARC_FLAG_BUFC_METADATA) 735 #define HDR_ISTYPE_DATA(hdr) (!HDR_ISTYPE_METADATA(hdr)) 736 737 #define HDR_HAS_L1HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L1HDR) 738 #define HDR_HAS_L2HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR) 739 #define HDR_HAS_RABD(hdr) \ 740 (HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) && \ 741 (hdr)->b_crypt_hdr.b_rabd != NULL) 742 #define HDR_ENCRYPTED(hdr) \ 743 (HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot)) 744 #define HDR_AUTHENTICATED(hdr) \ 745 (HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot)) 746 747 /* For storing compression mode in b_flags */ 748 #define HDR_COMPRESS_OFFSET (highbit64(ARC_FLAG_COMPRESS_0) - 1) 749 750 #define HDR_GET_COMPRESS(hdr) ((enum zio_compress)BF32_GET((hdr)->b_flags, \ 751 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS)) 752 #define HDR_SET_COMPRESS(hdr, cmp) BF32_SET((hdr)->b_flags, \ 753 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp)); 754 755 #define ARC_BUF_LAST(buf) ((buf)->b_next == NULL) 756 #define ARC_BUF_SHARED(buf) ((buf)->b_flags & ARC_BUF_FLAG_SHARED) 757 #define ARC_BUF_COMPRESSED(buf) ((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED) 758 #define ARC_BUF_ENCRYPTED(buf) ((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED) 759 760 /* 761 * Other sizes 762 */ 763 764 #define HDR_FULL_CRYPT_SIZE ((int64_t)sizeof (arc_buf_hdr_t)) 765 #define HDR_FULL_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_crypt_hdr)) 766 #define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr)) 767 768 /* 769 * Hash table routines 770 */ 771 772 #define HT_LOCK_ALIGN 64 773 #define HT_LOCK_PAD (P2NPHASE(sizeof (kmutex_t), (HT_LOCK_ALIGN))) 774 775 struct ht_lock { 776 kmutex_t ht_lock; 777 #ifdef _KERNEL 778 unsigned char pad[HT_LOCK_PAD]; 779 #endif 780 }; 781 782 #define BUF_LOCKS 8192 783 typedef struct buf_hash_table { 784 uint64_t ht_mask; 
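	/*
	 * ht_mask is always (number of hash buckets - 1), so a header's
	 * bucket is found with a single AND; see BUF_HASH_INDEX() below,
	 * which computes buf_hash(spa, dva, birth) & ht_mask.
	 */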
785 arc_buf_hdr_t **ht_table; 786 struct ht_lock ht_locks[BUF_LOCKS]; 787 } buf_hash_table_t; 788 789 static buf_hash_table_t buf_hash_table; 790 791 #define BUF_HASH_INDEX(spa, dva, birth) \ 792 (buf_hash(spa, dva, birth) & buf_hash_table.ht_mask) 793 #define BUF_HASH_LOCK_NTRY(idx) (buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)]) 794 #define BUF_HASH_LOCK(idx) (&(BUF_HASH_LOCK_NTRY(idx).ht_lock)) 795 #define HDR_LOCK(hdr) \ 796 (BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth))) 797 798 uint64_t zfs_crc64_table[256]; 799 800 /* 801 * Level 2 ARC 802 */ 803 804 #define L2ARC_WRITE_SIZE (8 * 1024 * 1024) /* initial write max */ 805 #define L2ARC_HEADROOM 2 /* num of writes */ 806 807 /* 808 * If we discover during ARC scan any buffers to be compressed, we boost 809 * our headroom for the next scanning cycle by this percentage multiple. 810 */ 811 #define L2ARC_HEADROOM_BOOST 200 812 #define L2ARC_FEED_SECS 1 /* caching interval secs */ 813 #define L2ARC_FEED_MIN_MS 200 /* min caching interval ms */ 814 815 /* 816 * We can feed L2ARC from two states of ARC buffers, mru and mfu, 817 * and each of the state has two types: data and metadata. 818 */ 819 #define L2ARC_FEED_TYPES 4 820 821 #define l2arc_writes_sent ARCSTAT(arcstat_l2_writes_sent) 822 #define l2arc_writes_done ARCSTAT(arcstat_l2_writes_done) 823 824 /* L2ARC Performance Tunables */ 825 unsigned long l2arc_write_max = L2ARC_WRITE_SIZE; /* def max write size */ 826 unsigned long l2arc_write_boost = L2ARC_WRITE_SIZE; /* extra warmup write */ 827 unsigned long l2arc_headroom = L2ARC_HEADROOM; /* # of dev writes */ 828 unsigned long l2arc_headroom_boost = L2ARC_HEADROOM_BOOST; 829 unsigned long l2arc_feed_secs = L2ARC_FEED_SECS; /* interval seconds */ 830 unsigned long l2arc_feed_min_ms = L2ARC_FEED_MIN_MS; /* min interval msecs */ 831 int l2arc_noprefetch = B_TRUE; /* don't cache prefetch bufs */ 832 int l2arc_feed_again = B_TRUE; /* turbo warmup */ 833 int l2arc_norw = B_FALSE; /* no reads during writes */ 834 int l2arc_meta_percent = 33; /* limit on headers size */ 835 836 /* 837 * L2ARC Internals 838 */ 839 static list_t L2ARC_dev_list; /* device list */ 840 static list_t *l2arc_dev_list; /* device list pointer */ 841 static kmutex_t l2arc_dev_mtx; /* device list mutex */ 842 static l2arc_dev_t *l2arc_dev_last; /* last device used */ 843 static list_t L2ARC_free_on_write; /* free after write buf list */ 844 static list_t *l2arc_free_on_write; /* free after write list ptr */ 845 static kmutex_t l2arc_free_on_write_mtx; /* mutex for list */ 846 static uint64_t l2arc_ndev; /* number of devices */ 847 848 typedef struct l2arc_read_callback { 849 arc_buf_hdr_t *l2rcb_hdr; /* read header */ 850 blkptr_t l2rcb_bp; /* original blkptr */ 851 zbookmark_phys_t l2rcb_zb; /* original bookmark */ 852 int l2rcb_flags; /* original flags */ 853 abd_t *l2rcb_abd; /* temporary buffer */ 854 } l2arc_read_callback_t; 855 856 typedef struct l2arc_data_free { 857 /* protected by l2arc_free_on_write_mtx */ 858 abd_t *l2df_abd; 859 size_t l2df_size; 860 arc_buf_contents_t l2df_type; 861 list_node_t l2df_list_node; 862 } l2arc_data_free_t; 863 864 typedef enum arc_fill_flags { 865 ARC_FILL_LOCKED = 1 << 0, /* hdr lock is held */ 866 ARC_FILL_COMPRESSED = 1 << 1, /* fill with compressed data */ 867 ARC_FILL_ENCRYPTED = 1 << 2, /* fill with encrypted data */ 868 ARC_FILL_NOAUTH = 1 << 3, /* don't attempt to authenticate */ 869 ARC_FILL_IN_PLACE = 1 << 4 /* fill in place (special case) */ 870 } arc_fill_flags_t; 871 872 static kmutex_t 
l2arc_feed_thr_lock; 873 static kcondvar_t l2arc_feed_thr_cv; 874 static uint8_t l2arc_thread_exit; 875 876 static kmutex_t l2arc_rebuild_thr_lock; 877 static kcondvar_t l2arc_rebuild_thr_cv; 878 879 enum arc_hdr_alloc_flags { 880 ARC_HDR_ALLOC_RDATA = 0x1, 881 ARC_HDR_DO_ADAPT = 0x2, 882 }; 883 884 885 static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, void *, boolean_t); 886 static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, void *); 887 static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, void *, boolean_t); 888 static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, void *); 889 static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, void *); 890 static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag); 891 static void arc_hdr_free_abd(arc_buf_hdr_t *, boolean_t); 892 static void arc_hdr_alloc_abd(arc_buf_hdr_t *, int); 893 static void arc_access(arc_buf_hdr_t *, kmutex_t *); 894 static void arc_buf_watch(arc_buf_t *); 895 896 static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *); 897 static uint32_t arc_bufc_to_flags(arc_buf_contents_t); 898 static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags); 899 static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags); 900 901 static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *); 902 static void l2arc_read_done(zio_t *); 903 static void l2arc_do_free_on_write(void); 904 static void l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr, 905 boolean_t state_only); 906 907 #define l2arc_hdr_arcstats_increment(hdr) \ 908 l2arc_hdr_arcstats_update((hdr), B_TRUE, B_FALSE) 909 #define l2arc_hdr_arcstats_decrement(hdr) \ 910 l2arc_hdr_arcstats_update((hdr), B_FALSE, B_FALSE) 911 #define l2arc_hdr_arcstats_increment_state(hdr) \ 912 l2arc_hdr_arcstats_update((hdr), B_TRUE, B_TRUE) 913 #define l2arc_hdr_arcstats_decrement_state(hdr) \ 914 l2arc_hdr_arcstats_update((hdr), B_FALSE, B_TRUE) 915 916 /* 917 * l2arc_mfuonly : A ZFS module parameter that controls whether only MFU 918 * metadata and data are cached from ARC into L2ARC. 919 */ 920 int l2arc_mfuonly = 0; 921 922 /* 923 * L2ARC TRIM 924 * l2arc_trim_ahead : A ZFS module parameter that controls how much ahead of 925 * the current write size (l2arc_write_max) we should TRIM if we 926 * have filled the device. It is defined as a percentage of the 927 * write size. If set to 100 we trim twice the space required to 928 * accommodate upcoming writes. A minimum of 64MB will be trimmed. 929 * It also enables TRIM of the whole L2ARC device upon creation or 930 * addition to an existing pool or if the header of the device is 931 * invalid upon importing a pool or onlining a cache device. The 932 * default is 0, which disables TRIM on L2ARC altogether as it can 933 * put significant stress on the underlying storage devices. This 934 * will vary depending of how well the specific device handles 935 * these commands. 936 */ 937 unsigned long l2arc_trim_ahead = 0; 938 939 /* 940 * Performance tuning of L2ARC persistence: 941 * 942 * l2arc_rebuild_enabled : A ZFS module parameter that controls whether adding 943 * an L2ARC device (either at pool import or later) will attempt 944 * to rebuild L2ARC buffer contents. 945 * l2arc_rebuild_blocks_min_l2size : A ZFS module parameter that controls 946 * whether log blocks are written to the L2ARC device. 
If the L2ARC 947 * device is less than 1GB, the amount of data l2arc_evict() 948 * evicts is significant compared to the amount of restored L2ARC 949 * data. In this case do not write log blocks in L2ARC in order 950 * not to waste space. 951 */ 952 int l2arc_rebuild_enabled = B_TRUE; 953 unsigned long l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024; 954 955 /* L2ARC persistence rebuild control routines. */ 956 void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen); 957 static void l2arc_dev_rebuild_thread(void *arg); 958 static int l2arc_rebuild(l2arc_dev_t *dev); 959 960 /* L2ARC persistence read I/O routines. */ 961 static int l2arc_dev_hdr_read(l2arc_dev_t *dev); 962 static int l2arc_log_blk_read(l2arc_dev_t *dev, 963 const l2arc_log_blkptr_t *this_lp, const l2arc_log_blkptr_t *next_lp, 964 l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb, 965 zio_t *this_io, zio_t **next_io); 966 static zio_t *l2arc_log_blk_fetch(vdev_t *vd, 967 const l2arc_log_blkptr_t *lp, l2arc_log_blk_phys_t *lb); 968 static void l2arc_log_blk_fetch_abort(zio_t *zio); 969 970 /* L2ARC persistence block restoration routines. */ 971 static void l2arc_log_blk_restore(l2arc_dev_t *dev, 972 const l2arc_log_blk_phys_t *lb, uint64_t lb_asize); 973 static void l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, 974 l2arc_dev_t *dev); 975 976 /* L2ARC persistence write I/O routines. */ 977 static void l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, 978 l2arc_write_callback_t *cb); 979 980 /* L2ARC persistence auxiliary routines. */ 981 boolean_t l2arc_log_blkptr_valid(l2arc_dev_t *dev, 982 const l2arc_log_blkptr_t *lbp); 983 static boolean_t l2arc_log_blk_insert(l2arc_dev_t *dev, 984 const arc_buf_hdr_t *ab); 985 boolean_t l2arc_range_check_overlap(uint64_t bottom, 986 uint64_t top, uint64_t check); 987 static void l2arc_blk_fetch_done(zio_t *zio); 988 static inline uint64_t 989 l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev); 990 991 /* 992 * We use Cityhash for this. It's fast, and has good hash properties without 993 * requiring any large static buffers. 994 */ 995 static uint64_t 996 buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth) 997 { 998 return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth)); 999 } 1000 1001 #define HDR_EMPTY(hdr) \ 1002 ((hdr)->b_dva.dva_word[0] == 0 && \ 1003 (hdr)->b_dva.dva_word[1] == 0) 1004 1005 #define HDR_EMPTY_OR_LOCKED(hdr) \ 1006 (HDR_EMPTY(hdr) || MUTEX_HELD(HDR_LOCK(hdr))) 1007 1008 #define HDR_EQUAL(spa, dva, birth, hdr) \ 1009 ((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \ 1010 ((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \ 1011 ((hdr)->b_birth == birth) && ((hdr)->b_spa == spa) 1012 1013 static void 1014 buf_discard_identity(arc_buf_hdr_t *hdr) 1015 { 1016 hdr->b_dva.dva_word[0] = 0; 1017 hdr->b_dva.dva_word[1] = 0; 1018 hdr->b_birth = 0; 1019 } 1020 1021 static arc_buf_hdr_t * 1022 buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp) 1023 { 1024 const dva_t *dva = BP_IDENTITY(bp); 1025 uint64_t birth = BP_PHYSICAL_BIRTH(bp); 1026 uint64_t idx = BUF_HASH_INDEX(spa, dva, birth); 1027 kmutex_t *hash_lock = BUF_HASH_LOCK(idx); 1028 arc_buf_hdr_t *hdr; 1029 1030 mutex_enter(hash_lock); 1031 for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL; 1032 hdr = hdr->b_hash_next) { 1033 if (HDR_EQUAL(spa, dva, birth, hdr)) { 1034 *lockp = hash_lock; 1035 return (hdr); 1036 } 1037 } 1038 mutex_exit(hash_lock); 1039 *lockp = NULL; 1040 return (NULL); 1041 } 1042 1043 /* 1044 * Insert an entry into the hash table. 
If there is already an element 1045 * equal to elem in the hash table, then the already existing element 1046 * will be returned and the new element will not be inserted. 1047 * Otherwise returns NULL. 1048 * If lockp == NULL, the caller is assumed to already hold the hash lock. 1049 */ 1050 static arc_buf_hdr_t * 1051 buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp) 1052 { 1053 uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth); 1054 kmutex_t *hash_lock = BUF_HASH_LOCK(idx); 1055 arc_buf_hdr_t *fhdr; 1056 uint32_t i; 1057 1058 ASSERT(!DVA_IS_EMPTY(&hdr->b_dva)); 1059 ASSERT(hdr->b_birth != 0); 1060 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 1061 1062 if (lockp != NULL) { 1063 *lockp = hash_lock; 1064 mutex_enter(hash_lock); 1065 } else { 1066 ASSERT(MUTEX_HELD(hash_lock)); 1067 } 1068 1069 for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL; 1070 fhdr = fhdr->b_hash_next, i++) { 1071 if (HDR_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr)) 1072 return (fhdr); 1073 } 1074 1075 hdr->b_hash_next = buf_hash_table.ht_table[idx]; 1076 buf_hash_table.ht_table[idx] = hdr; 1077 arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE); 1078 1079 /* collect some hash table performance data */ 1080 if (i > 0) { 1081 ARCSTAT_BUMP(arcstat_hash_collisions); 1082 if (i == 1) 1083 ARCSTAT_BUMP(arcstat_hash_chains); 1084 1085 ARCSTAT_MAX(arcstat_hash_chain_max, i); 1086 } 1087 1088 ARCSTAT_BUMP(arcstat_hash_elements); 1089 ARCSTAT_MAXSTAT(arcstat_hash_elements); 1090 1091 return (NULL); 1092 } 1093 1094 static void 1095 buf_hash_remove(arc_buf_hdr_t *hdr) 1096 { 1097 arc_buf_hdr_t *fhdr, **hdrp; 1098 uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth); 1099 1100 ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx))); 1101 ASSERT(HDR_IN_HASH_TABLE(hdr)); 1102 1103 hdrp = &buf_hash_table.ht_table[idx]; 1104 while ((fhdr = *hdrp) != hdr) { 1105 ASSERT3P(fhdr, !=, NULL); 1106 hdrp = &fhdr->b_hash_next; 1107 } 1108 *hdrp = hdr->b_hash_next; 1109 hdr->b_hash_next = NULL; 1110 arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE); 1111 1112 /* collect some hash table performance data */ 1113 ARCSTAT_BUMPDOWN(arcstat_hash_elements); 1114 1115 if (buf_hash_table.ht_table[idx] && 1116 buf_hash_table.ht_table[idx]->b_hash_next == NULL) 1117 ARCSTAT_BUMPDOWN(arcstat_hash_chains); 1118 } 1119 1120 /* 1121 * Global data structures and functions for the buf kmem cache. 1122 */ 1123 1124 static kmem_cache_t *hdr_full_cache; 1125 static kmem_cache_t *hdr_full_crypt_cache; 1126 static kmem_cache_t *hdr_l2only_cache; 1127 static kmem_cache_t *buf_cache; 1128 1129 static void 1130 buf_fini(void) 1131 { 1132 int i; 1133 1134 #if defined(_KERNEL) 1135 /* 1136 * Large allocations which do not require contiguous pages 1137 * should be using vmem_free() in the linux kernel\ 1138 */ 1139 vmem_free(buf_hash_table.ht_table, 1140 (buf_hash_table.ht_mask + 1) * sizeof (void *)); 1141 #else 1142 kmem_free(buf_hash_table.ht_table, 1143 (buf_hash_table.ht_mask + 1) * sizeof (void *)); 1144 #endif 1145 for (i = 0; i < BUF_LOCKS; i++) 1146 mutex_destroy(&buf_hash_table.ht_locks[i].ht_lock); 1147 kmem_cache_destroy(hdr_full_cache); 1148 kmem_cache_destroy(hdr_full_crypt_cache); 1149 kmem_cache_destroy(hdr_l2only_cache); 1150 kmem_cache_destroy(buf_cache); 1151 } 1152 1153 /* 1154 * Constructor callback - called when the cache is empty 1155 * and a new buf is requested. 
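 * These constructors (and the matching destructors below) are handed to
 * kmem_cache_create() by buf_init(), e.g.:
 *
 *	hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full",
 *	    HDR_FULL_SIZE, 0, hdr_full_cons, hdr_full_dest,
 *	    NULL, NULL, NULL, 0);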
1156 */ 1157 /* ARGSUSED */ 1158 static int 1159 hdr_full_cons(void *vbuf, void *unused, int kmflag) 1160 { 1161 arc_buf_hdr_t *hdr = vbuf; 1162 1163 bzero(hdr, HDR_FULL_SIZE); 1164 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 1165 cv_init(&hdr->b_l1hdr.b_cv, NULL, CV_DEFAULT, NULL); 1166 zfs_refcount_create(&hdr->b_l1hdr.b_refcnt); 1167 mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL); 1168 list_link_init(&hdr->b_l1hdr.b_arc_node); 1169 list_link_init(&hdr->b_l2hdr.b_l2node); 1170 multilist_link_init(&hdr->b_l1hdr.b_arc_node); 1171 arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS); 1172 1173 return (0); 1174 } 1175 1176 /* ARGSUSED */ 1177 static int 1178 hdr_full_crypt_cons(void *vbuf, void *unused, int kmflag) 1179 { 1180 arc_buf_hdr_t *hdr = vbuf; 1181 1182 hdr_full_cons(vbuf, unused, kmflag); 1183 bzero(&hdr->b_crypt_hdr, sizeof (hdr->b_crypt_hdr)); 1184 arc_space_consume(sizeof (hdr->b_crypt_hdr), ARC_SPACE_HDRS); 1185 1186 return (0); 1187 } 1188 1189 /* ARGSUSED */ 1190 static int 1191 hdr_l2only_cons(void *vbuf, void *unused, int kmflag) 1192 { 1193 arc_buf_hdr_t *hdr = vbuf; 1194 1195 bzero(hdr, HDR_L2ONLY_SIZE); 1196 arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS); 1197 1198 return (0); 1199 } 1200 1201 /* ARGSUSED */ 1202 static int 1203 buf_cons(void *vbuf, void *unused, int kmflag) 1204 { 1205 arc_buf_t *buf = vbuf; 1206 1207 bzero(buf, sizeof (arc_buf_t)); 1208 mutex_init(&buf->b_evict_lock, NULL, MUTEX_DEFAULT, NULL); 1209 arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS); 1210 1211 return (0); 1212 } 1213 1214 /* 1215 * Destructor callback - called when a cached buf is 1216 * no longer required. 1217 */ 1218 /* ARGSUSED */ 1219 static void 1220 hdr_full_dest(void *vbuf, void *unused) 1221 { 1222 arc_buf_hdr_t *hdr = vbuf; 1223 1224 ASSERT(HDR_EMPTY(hdr)); 1225 cv_destroy(&hdr->b_l1hdr.b_cv); 1226 zfs_refcount_destroy(&hdr->b_l1hdr.b_refcnt); 1227 mutex_destroy(&hdr->b_l1hdr.b_freeze_lock); 1228 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 1229 arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS); 1230 } 1231 1232 /* ARGSUSED */ 1233 static void 1234 hdr_full_crypt_dest(void *vbuf, void *unused) 1235 { 1236 arc_buf_hdr_t *hdr = vbuf; 1237 1238 hdr_full_dest(vbuf, unused); 1239 arc_space_return(sizeof (hdr->b_crypt_hdr), ARC_SPACE_HDRS); 1240 } 1241 1242 /* ARGSUSED */ 1243 static void 1244 hdr_l2only_dest(void *vbuf, void *unused) 1245 { 1246 arc_buf_hdr_t *hdr __maybe_unused = vbuf; 1247 1248 ASSERT(HDR_EMPTY(hdr)); 1249 arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS); 1250 } 1251 1252 /* ARGSUSED */ 1253 static void 1254 buf_dest(void *vbuf, void *unused) 1255 { 1256 arc_buf_t *buf = vbuf; 1257 1258 mutex_destroy(&buf->b_evict_lock); 1259 arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS); 1260 } 1261 1262 static void 1263 buf_init(void) 1264 { 1265 uint64_t *ct = NULL; 1266 uint64_t hsize = 1ULL << 12; 1267 int i, j; 1268 1269 /* 1270 * The hash table is big enough to fill all of physical memory 1271 * with an average block size of zfs_arc_average_blocksize (default 8K). 1272 * By default, the table will take up 1273 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers). 
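 * For example, with the default 8K average block size, a machine with
 * 16 GiB of memory grows hsize from 2^12 until hsize * 8K >= 16 GiB,
 * i.e. hsize = 2^21 buckets, for a 2^21 * 8 bytes = 16 MiB table.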
1274 */ 1275 while (hsize * zfs_arc_average_blocksize < arc_all_memory()) 1276 hsize <<= 1; 1277 retry: 1278 buf_hash_table.ht_mask = hsize - 1; 1279 #if defined(_KERNEL) 1280 /* 1281 * Large allocations which do not require contiguous pages 1282 * should be using vmem_alloc() in the linux kernel 1283 */ 1284 buf_hash_table.ht_table = 1285 vmem_zalloc(hsize * sizeof (void*), KM_SLEEP); 1286 #else 1287 buf_hash_table.ht_table = 1288 kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP); 1289 #endif 1290 if (buf_hash_table.ht_table == NULL) { 1291 ASSERT(hsize > (1ULL << 8)); 1292 hsize >>= 1; 1293 goto retry; 1294 } 1295 1296 hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE, 1297 0, hdr_full_cons, hdr_full_dest, NULL, NULL, NULL, 0); 1298 hdr_full_crypt_cache = kmem_cache_create("arc_buf_hdr_t_full_crypt", 1299 HDR_FULL_CRYPT_SIZE, 0, hdr_full_crypt_cons, hdr_full_crypt_dest, 1300 NULL, NULL, NULL, 0); 1301 hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only", 1302 HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, NULL, 1303 NULL, NULL, 0); 1304 buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t), 1305 0, buf_cons, buf_dest, NULL, NULL, NULL, 0); 1306 1307 for (i = 0; i < 256; i++) 1308 for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--) 1309 *ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY); 1310 1311 for (i = 0; i < BUF_LOCKS; i++) { 1312 mutex_init(&buf_hash_table.ht_locks[i].ht_lock, 1313 NULL, MUTEX_DEFAULT, NULL); 1314 } 1315 } 1316 1317 #define ARC_MINTIME (hz>>4) /* 62 ms */ 1318 1319 /* 1320 * This is the size that the buf occupies in memory. If the buf is compressed, 1321 * it will correspond to the compressed size. You should use this method of 1322 * getting the buf size unless you explicitly need the logical size. 1323 */ 1324 uint64_t 1325 arc_buf_size(arc_buf_t *buf) 1326 { 1327 return (ARC_BUF_COMPRESSED(buf) ? 1328 HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr)); 1329 } 1330 1331 uint64_t 1332 arc_buf_lsize(arc_buf_t *buf) 1333 { 1334 return (HDR_GET_LSIZE(buf->b_hdr)); 1335 } 1336 1337 /* 1338 * This function will return B_TRUE if the buffer is encrypted in memory. 1339 * This buffer can be decrypted by calling arc_untransform(). 1340 */ 1341 boolean_t 1342 arc_is_encrypted(arc_buf_t *buf) 1343 { 1344 return (ARC_BUF_ENCRYPTED(buf) != 0); 1345 } 1346 1347 /* 1348 * Returns B_TRUE if the buffer represents data that has not had its MAC 1349 * verified yet. 1350 */ 1351 boolean_t 1352 arc_is_unauthenticated(arc_buf_t *buf) 1353 { 1354 return (HDR_NOAUTH(buf->b_hdr) != 0); 1355 } 1356 1357 void 1358 arc_get_raw_params(arc_buf_t *buf, boolean_t *byteorder, uint8_t *salt, 1359 uint8_t *iv, uint8_t *mac) 1360 { 1361 arc_buf_hdr_t *hdr = buf->b_hdr; 1362 1363 ASSERT(HDR_PROTECTED(hdr)); 1364 1365 bcopy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); 1366 bcopy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); 1367 bcopy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); 1368 *byteorder = (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? 1369 ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; 1370 } 1371 1372 /* 1373 * Indicates how this buffer is compressed in memory. If it is not compressed 1374 * the value will be ZIO_COMPRESS_OFF. It can be made normally readable with 1375 * arc_untransform() as long as it is also unencrypted. 1376 */ 1377 enum zio_compress 1378 arc_get_compression(arc_buf_t *buf) 1379 { 1380 return (ARC_BUF_COMPRESSED(buf) ? 
1381 HDR_GET_COMPRESS(buf->b_hdr) : ZIO_COMPRESS_OFF); 1382 } 1383 1384 /* 1385 * Return the compression algorithm used to store this data in the ARC. If ARC 1386 * compression is enabled or this is an encrypted block, this will be the same 1387 * as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF. 1388 */ 1389 static inline enum zio_compress 1390 arc_hdr_get_compress(arc_buf_hdr_t *hdr) 1391 { 1392 return (HDR_COMPRESSION_ENABLED(hdr) ? 1393 HDR_GET_COMPRESS(hdr) : ZIO_COMPRESS_OFF); 1394 } 1395 1396 uint8_t 1397 arc_get_complevel(arc_buf_t *buf) 1398 { 1399 return (buf->b_hdr->b_complevel); 1400 } 1401 1402 static inline boolean_t 1403 arc_buf_is_shared(arc_buf_t *buf) 1404 { 1405 boolean_t shared = (buf->b_data != NULL && 1406 buf->b_hdr->b_l1hdr.b_pabd != NULL && 1407 abd_is_linear(buf->b_hdr->b_l1hdr.b_pabd) && 1408 buf->b_data == abd_to_buf(buf->b_hdr->b_l1hdr.b_pabd)); 1409 IMPLY(shared, HDR_SHARED_DATA(buf->b_hdr)); 1410 IMPLY(shared, ARC_BUF_SHARED(buf)); 1411 IMPLY(shared, ARC_BUF_COMPRESSED(buf) || ARC_BUF_LAST(buf)); 1412 1413 /* 1414 * It would be nice to assert arc_can_share() too, but the "hdr isn't 1415 * already being shared" requirement prevents us from doing that. 1416 */ 1417 1418 return (shared); 1419 } 1420 1421 /* 1422 * Free the checksum associated with this header. If there is no checksum, this 1423 * is a no-op. 1424 */ 1425 static inline void 1426 arc_cksum_free(arc_buf_hdr_t *hdr) 1427 { 1428 ASSERT(HDR_HAS_L1HDR(hdr)); 1429 1430 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1431 if (hdr->b_l1hdr.b_freeze_cksum != NULL) { 1432 kmem_free(hdr->b_l1hdr.b_freeze_cksum, sizeof (zio_cksum_t)); 1433 hdr->b_l1hdr.b_freeze_cksum = NULL; 1434 } 1435 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1436 } 1437 1438 /* 1439 * Return true iff at least one of the bufs on hdr is not compressed. 1440 * Encrypted buffers count as compressed. 1441 */ 1442 static boolean_t 1443 arc_hdr_has_uncompressed_buf(arc_buf_hdr_t *hdr) 1444 { 1445 ASSERT(hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY_OR_LOCKED(hdr)); 1446 1447 for (arc_buf_t *b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) { 1448 if (!ARC_BUF_COMPRESSED(b)) { 1449 return (B_TRUE); 1450 } 1451 } 1452 return (B_FALSE); 1453 } 1454 1455 1456 /* 1457 * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data 1458 * matches the checksum that is stored in the hdr. If there is no checksum, 1459 * or if the buf is compressed, this is a no-op. 1460 */ 1461 static void 1462 arc_cksum_verify(arc_buf_t *buf) 1463 { 1464 arc_buf_hdr_t *hdr = buf->b_hdr; 1465 zio_cksum_t zc; 1466 1467 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1468 return; 1469 1470 if (ARC_BUF_COMPRESSED(buf)) 1471 return; 1472 1473 ASSERT(HDR_HAS_L1HDR(hdr)); 1474 1475 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1476 1477 if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) { 1478 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1479 return; 1480 } 1481 1482 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc); 1483 if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc)) 1484 panic("buffer modified while frozen!"); 1485 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1486 } 1487 1488 /* 1489 * This function makes the assumption that data stored in the L2ARC 1490 * will be transformed exactly as it is in the main pool. Because of 1491 * this we can verify the checksum against the reading process's bp. 
1492 */ 1493 static boolean_t 1494 arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio) 1495 { 1496 ASSERT(!BP_IS_EMBEDDED(zio->io_bp)); 1497 VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr)); 1498 1499 /* 1500 * Block pointers always store the checksum for the logical data. 1501 * If the block pointer has the gang bit set, then the checksum 1502 * it represents is for the reconstituted data and not for an 1503 * individual gang member. The zio pipeline, however, must be able to 1504 * determine the checksum of each of the gang constituents so it 1505 * treats the checksum comparison differently than what we need 1506 * for l2arc blocks. This prevents us from using the 1507 * zio_checksum_error() interface directly. Instead we must call the 1508 * zio_checksum_error_impl() so that we can ensure the checksum is 1509 * generated using the correct checksum algorithm and accounts for the 1510 * logical I/O size and not just a gang fragment. 1511 */ 1512 return (zio_checksum_error_impl(zio->io_spa, zio->io_bp, 1513 BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size, 1514 zio->io_offset, NULL) == 0); 1515 } 1516 1517 /* 1518 * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a 1519 * checksum and attaches it to the buf's hdr so that we can ensure that the buf 1520 * isn't modified later on. If buf is compressed or there is already a checksum 1521 * on the hdr, this is a no-op (we only checksum uncompressed bufs). 1522 */ 1523 static void 1524 arc_cksum_compute(arc_buf_t *buf) 1525 { 1526 arc_buf_hdr_t *hdr = buf->b_hdr; 1527 1528 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1529 return; 1530 1531 ASSERT(HDR_HAS_L1HDR(hdr)); 1532 1533 mutex_enter(&buf->b_hdr->b_l1hdr.b_freeze_lock); 1534 if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) { 1535 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1536 return; 1537 } 1538 1539 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 1540 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1541 hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t), 1542 KM_SLEEP); 1543 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, 1544 hdr->b_l1hdr.b_freeze_cksum); 1545 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1546 arc_buf_watch(buf); 1547 } 1548 1549 #ifndef _KERNEL 1550 void 1551 arc_buf_sigsegv(int sig, siginfo_t *si, void *unused) 1552 { 1553 panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr); 1554 } 1555 #endif 1556 1557 /* ARGSUSED */ 1558 static void 1559 arc_buf_unwatch(arc_buf_t *buf) 1560 { 1561 #ifndef _KERNEL 1562 if (arc_watch) { 1563 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), 1564 PROT_READ | PROT_WRITE)); 1565 } 1566 #endif 1567 } 1568 1569 /* ARGSUSED */ 1570 static void 1571 arc_buf_watch(arc_buf_t *buf) 1572 { 1573 #ifndef _KERNEL 1574 if (arc_watch) 1575 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), 1576 PROT_READ)); 1577 #endif 1578 } 1579 1580 static arc_buf_contents_t 1581 arc_buf_type(arc_buf_hdr_t *hdr) 1582 { 1583 arc_buf_contents_t type; 1584 if (HDR_ISTYPE_METADATA(hdr)) { 1585 type = ARC_BUFC_METADATA; 1586 } else { 1587 type = ARC_BUFC_DATA; 1588 } 1589 VERIFY3U(hdr->b_type, ==, type); 1590 return (type); 1591 } 1592 1593 boolean_t 1594 arc_is_metadata(arc_buf_t *buf) 1595 { 1596 return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0); 1597 } 1598 1599 static uint32_t 1600 arc_bufc_to_flags(arc_buf_contents_t type) 1601 { 1602 switch (type) { 1603 case ARC_BUFC_DATA: 1604 /* metadata field is 0 if buffer contains normal data */ 1605 return (0); 1606 case ARC_BUFC_METADATA: 1607 return (ARC_FLAG_BUFC_METADATA); 1608 
	default:
		break;
	}
	panic("undefined ARC buffer type!");
	return ((uint32_t)-1);
}

void
arc_buf_thaw(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon);
	ASSERT(!HDR_IO_IN_PROGRESS(hdr));

	arc_cksum_verify(buf);

	/*
	 * Compressed buffers do not manipulate the b_freeze_cksum.
	 */
	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));
	arc_cksum_free(hdr);
	arc_buf_unwatch(buf);
}

void
arc_buf_freeze(arc_buf_t *buf)
{
	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(buf->b_hdr));
	arc_cksum_compute(buf);
}

/*
 * The arc_buf_hdr_t's b_flags should never be modified directly. Instead,
 * the following functions should be used to ensure that the flags are
 * updated in a thread-safe way. When manipulating the flags either
 * the hash_lock must be held or the hdr must be undiscoverable. This
 * ensures that we're not racing with any other threads when updating
 * the flags.
 */
static inline void
arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
{
	ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
	hdr->b_flags |= flags;
}

static inline void
arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags)
{
	ASSERT(HDR_EMPTY_OR_LOCKED(hdr));
	hdr->b_flags &= ~flags;
}

/*
 * Setting the compression bits in the arc_buf_hdr_t's b_flags is
 * done in a special way since we have to clear and set bits
 * at the same time. Consumers that wish to set the compression bits
 * must use this function to ensure that the flags are updated in a
 * thread-safe manner.
 */
static void
arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp)
{
	ASSERT(HDR_EMPTY_OR_LOCKED(hdr));

	/*
	 * Holes and embedded blocks will always have a psize = 0, so
	 * we ignore the compression of the blkptr and mark them as
	 * uncompressed.
	 */
	if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) {
		arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
		ASSERT(!HDR_COMPRESSION_ENABLED(hdr));
	} else {
		arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC);
		ASSERT(HDR_COMPRESSION_ENABLED(hdr));
	}

	HDR_SET_COMPRESS(hdr, cmp);
	ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp);
}

/*
 * Looks for another buf on the same hdr which has the data decompressed,
 * copies from it, and returns true. If no such buf exists, returns false.
 */
static boolean_t
arc_buf_try_copy_decompressed_data(arc_buf_t *buf)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;
	boolean_t copied = B_FALSE;

	ASSERT(HDR_HAS_L1HDR(hdr));
	ASSERT3P(buf->b_data, !=, NULL);
	ASSERT(!ARC_BUF_COMPRESSED(buf));

	for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL;
	    from = from->b_next) {
		/* can't use our own data buffer */
		if (from == buf) {
			continue;
		}

		if (!ARC_BUF_COMPRESSED(from)) {
			bcopy(from->b_data, buf->b_data, arc_buf_size(buf));
			copied = B_TRUE;
			break;
		}
	}

	/*
	 * There were no decompressed bufs, so there should not be a
	 * checksum on the hdr either.
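	 * In other words, with ZFS_DEBUG_MODIFY set, the EQUIV() below
	 * asserts that "copied" is true exactly when a b_freeze_cksum exists.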
1731 */ 1732 if (zfs_flags & ZFS_DEBUG_MODIFY) 1733 EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL); 1734 1735 return (copied); 1736 } 1737 1738 /* 1739 * Allocates an ARC buf header that's in an evicted & L2-cached state. 1740 * This is used during l2arc reconstruction to make empty ARC buffers 1741 * which circumvent the regular disk->arc->l2arc path and instead come 1742 * into being in the reverse order, i.e. l2arc->arc. 1743 */ 1744 static arc_buf_hdr_t * 1745 arc_buf_alloc_l2only(size_t size, arc_buf_contents_t type, l2arc_dev_t *dev, 1746 dva_t dva, uint64_t daddr, int32_t psize, uint64_t birth, 1747 enum zio_compress compress, uint8_t complevel, boolean_t protected, 1748 boolean_t prefetch, arc_state_type_t arcs_state) 1749 { 1750 arc_buf_hdr_t *hdr; 1751 1752 ASSERT(size != 0); 1753 hdr = kmem_cache_alloc(hdr_l2only_cache, KM_SLEEP); 1754 hdr->b_birth = birth; 1755 hdr->b_type = type; 1756 hdr->b_flags = 0; 1757 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L2HDR); 1758 HDR_SET_LSIZE(hdr, size); 1759 HDR_SET_PSIZE(hdr, psize); 1760 arc_hdr_set_compress(hdr, compress); 1761 hdr->b_complevel = complevel; 1762 if (protected) 1763 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 1764 if (prefetch) 1765 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 1766 hdr->b_spa = spa_load_guid(dev->l2ad_vdev->vdev_spa); 1767 1768 hdr->b_dva = dva; 1769 1770 hdr->b_l2hdr.b_dev = dev; 1771 hdr->b_l2hdr.b_daddr = daddr; 1772 hdr->b_l2hdr.b_arcs_state = arcs_state; 1773 1774 return (hdr); 1775 } 1776 1777 /* 1778 * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t. 1779 */ 1780 static uint64_t 1781 arc_hdr_size(arc_buf_hdr_t *hdr) 1782 { 1783 uint64_t size; 1784 1785 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 1786 HDR_GET_PSIZE(hdr) > 0) { 1787 size = HDR_GET_PSIZE(hdr); 1788 } else { 1789 ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0); 1790 size = HDR_GET_LSIZE(hdr); 1791 } 1792 return (size); 1793 } 1794 1795 static int 1796 arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj) 1797 { 1798 int ret; 1799 uint64_t csize; 1800 uint64_t lsize = HDR_GET_LSIZE(hdr); 1801 uint64_t psize = HDR_GET_PSIZE(hdr); 1802 void *tmpbuf = NULL; 1803 abd_t *abd = hdr->b_l1hdr.b_pabd; 1804 1805 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1806 ASSERT(HDR_AUTHENTICATED(hdr)); 1807 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1808 1809 /* 1810 * The MAC is calculated on the compressed data that is stored on disk. 1811 * However, if compressed arc is disabled we will only have the 1812 * decompressed data available to us now. Compress it into a temporary 1813 * abd so we can verify the MAC. The performance overhead of this will 1814 * be relatively low, since most objects in an encrypted objset will 1815 * be encrypted (instead of authenticated) anyway. 1816 */ 1817 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1818 !HDR_COMPRESSION_ENABLED(hdr)) { 1819 tmpbuf = zio_buf_alloc(lsize); 1820 abd = abd_get_from_buf(tmpbuf, lsize); 1821 abd_take_ownership_of_buf(abd, B_TRUE); 1822 csize = zio_compress_data(HDR_GET_COMPRESS(hdr), 1823 hdr->b_l1hdr.b_pabd, tmpbuf, lsize, hdr->b_complevel); 1824 ASSERT3U(csize, <=, psize); 1825 abd_zero_off(abd, csize, psize - csize); 1826 } 1827 1828 /* 1829 * Authentication is best effort. We authenticate whenever the key is 1830 * available. If we succeed we clear ARC_FLAG_NOAUTH. 
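	 * If the key is simply not loaded (the MAC check below returns
	 * ENOENT), the flag is left in place and we still return success.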
1831 */ 1832 if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) { 1833 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF); 1834 ASSERT3U(lsize, ==, psize); 1835 ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd, 1836 psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1837 } else { 1838 ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize, 1839 hdr->b_crypt_hdr.b_mac); 1840 } 1841 1842 if (ret == 0) 1843 arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH); 1844 else if (ret != ENOENT) 1845 goto error; 1846 1847 if (tmpbuf != NULL) 1848 abd_free(abd); 1849 1850 return (0); 1851 1852 error: 1853 if (tmpbuf != NULL) 1854 abd_free(abd); 1855 1856 return (ret); 1857 } 1858 1859 /* 1860 * This function will take a header that only has raw encrypted data in 1861 * b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in 1862 * b_l1hdr.b_pabd. If designated in the header flags, this function will 1863 * also decompress the data. 1864 */ 1865 static int 1866 arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb) 1867 { 1868 int ret; 1869 abd_t *cabd = NULL; 1870 void *tmp = NULL; 1871 boolean_t no_crypt = B_FALSE; 1872 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1873 1874 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1875 ASSERT(HDR_ENCRYPTED(hdr)); 1876 1877 arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT); 1878 1879 ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot, 1880 B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv, 1881 hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd, 1882 hdr->b_crypt_hdr.b_rabd, &no_crypt); 1883 if (ret != 0) 1884 goto error; 1885 1886 if (no_crypt) { 1887 abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd, 1888 HDR_GET_PSIZE(hdr)); 1889 } 1890 1891 /* 1892 * If this header has disabled arc compression but the b_pabd is 1893 * compressed after decrypting it, we need to decompress the newly 1894 * decrypted data. 1895 */ 1896 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1897 !HDR_COMPRESSION_ENABLED(hdr)) { 1898 /* 1899 * We want to make sure that we are correctly honoring the 1900 * zfs_abd_scatter_enabled setting, so we allocate an abd here 1901 * and then loan a buffer from it, rather than allocating a 1902 * linear buffer and wrapping it in an abd later. 1903 */ 1904 cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, B_TRUE); 1905 tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 1906 1907 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 1908 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 1909 HDR_GET_LSIZE(hdr), &hdr->b_complevel); 1910 if (ret != 0) { 1911 abd_return_buf(cabd, tmp, arc_hdr_size(hdr)); 1912 goto error; 1913 } 1914 1915 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 1916 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 1917 arc_hdr_size(hdr), hdr); 1918 hdr->b_l1hdr.b_pabd = cabd; 1919 } 1920 1921 return (0); 1922 1923 error: 1924 arc_hdr_free_abd(hdr, B_FALSE); 1925 if (cabd != NULL) 1926 arc_free_data_buf(hdr, cabd, arc_hdr_size(hdr), hdr); 1927 1928 return (ret); 1929 } 1930 1931 /* 1932 * This function is called during arc_buf_fill() to prepare the header's 1933 * abd plaintext pointer for use. This involves authenticated protected 1934 * data and decrypting encrypted data into the plaintext abd. 
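 *
 * Two cases are handled: headers whose MAC has not been verified yet
 * (HDR_NOAUTH) are authenticated via arc_hdr_authenticate(), and headers
 * that only hold raw encrypted data (b_rabd) have it decrypted into
 * b_l1hdr.b_pabd via arc_hdr_decrypt().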
1935 */ 1936 static int 1937 arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa, 1938 const zbookmark_phys_t *zb, boolean_t noauth) 1939 { 1940 int ret; 1941 1942 ASSERT(HDR_PROTECTED(hdr)); 1943 1944 if (hash_lock != NULL) 1945 mutex_enter(hash_lock); 1946 1947 if (HDR_NOAUTH(hdr) && !noauth) { 1948 /* 1949 * The caller requested authenticated data but our data has 1950 * not been authenticated yet. Verify the MAC now if we can. 1951 */ 1952 ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset); 1953 if (ret != 0) 1954 goto error; 1955 } else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) { 1956 /* 1957 * If we only have the encrypted version of the data, but the 1958 * unencrypted version was requested we take this opportunity 1959 * to store the decrypted version in the header for future use. 1960 */ 1961 ret = arc_hdr_decrypt(hdr, spa, zb); 1962 if (ret != 0) 1963 goto error; 1964 } 1965 1966 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1967 1968 if (hash_lock != NULL) 1969 mutex_exit(hash_lock); 1970 1971 return (0); 1972 1973 error: 1974 if (hash_lock != NULL) 1975 mutex_exit(hash_lock); 1976 1977 return (ret); 1978 } 1979 1980 /* 1981 * This function is used by the dbuf code to decrypt bonus buffers in place. 1982 * The dbuf code itself doesn't have any locking for decrypting a shared dnode 1983 * block, so we use the hash lock here to protect against concurrent calls to 1984 * arc_buf_fill(). 1985 */ 1986 static void 1987 arc_buf_untransform_in_place(arc_buf_t *buf, kmutex_t *hash_lock) 1988 { 1989 arc_buf_hdr_t *hdr = buf->b_hdr; 1990 1991 ASSERT(HDR_ENCRYPTED(hdr)); 1992 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 1993 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1994 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1995 1996 zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data, 1997 arc_buf_size(buf)); 1998 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 1999 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 2000 hdr->b_crypt_hdr.b_ebufcnt -= 1; 2001 } 2002 2003 /* 2004 * Given a buf that has a data buffer attached to it, this function will 2005 * efficiently fill the buf with data of the specified compression setting from 2006 * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr 2007 * are already sharing a data buf, no copy is performed. 2008 * 2009 * If the buf is marked as compressed but uncompressed data was requested, this 2010 * will allocate a new data buffer for the buf, remove that flag, and fill the 2011 * buf with uncompressed data. You can't request a compressed buf on a hdr with 2012 * uncompressed data, and (since we haven't added support for it yet) if you 2013 * want compressed data your buf must already be marked as compressed and have 2014 * the correct-sized data buffer. 2015 */ 2016 static int 2017 arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 2018 arc_fill_flags_t flags) 2019 { 2020 int error = 0; 2021 arc_buf_hdr_t *hdr = buf->b_hdr; 2022 boolean_t hdr_compressed = 2023 (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 2024 boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0; 2025 boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0; 2026 dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap; 2027 kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? 
NULL : HDR_LOCK(hdr); 2028 2029 ASSERT3P(buf->b_data, !=, NULL); 2030 IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf)); 2031 IMPLY(compressed, ARC_BUF_COMPRESSED(buf)); 2032 IMPLY(encrypted, HDR_ENCRYPTED(hdr)); 2033 IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf)); 2034 IMPLY(encrypted, ARC_BUF_COMPRESSED(buf)); 2035 IMPLY(encrypted, !ARC_BUF_SHARED(buf)); 2036 2037 /* 2038 * If the caller wanted encrypted data we just need to copy it from 2039 * b_rabd and potentially byteswap it. We won't be able to do any 2040 * further transforms on it. 2041 */ 2042 if (encrypted) { 2043 ASSERT(HDR_HAS_RABD(hdr)); 2044 abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd, 2045 HDR_GET_PSIZE(hdr)); 2046 goto byteswap; 2047 } 2048 2049 /* 2050 * Adjust encrypted and authenticated headers to accommodate 2051 * the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are 2052 * allowed to fail decryption due to keys not being loaded 2053 * without being marked as an IO error. 2054 */ 2055 if (HDR_PROTECTED(hdr)) { 2056 error = arc_fill_hdr_crypt(hdr, hash_lock, spa, 2057 zb, !!(flags & ARC_FILL_NOAUTH)); 2058 if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) { 2059 return (error); 2060 } else if (error != 0) { 2061 if (hash_lock != NULL) 2062 mutex_enter(hash_lock); 2063 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2064 if (hash_lock != NULL) 2065 mutex_exit(hash_lock); 2066 return (error); 2067 } 2068 } 2069 2070 /* 2071 * There is a special case here for dnode blocks which are 2072 * decrypting their bonus buffers. These blocks may request to 2073 * be decrypted in-place. This is necessary because there may 2074 * be many dnodes pointing into this buffer and there is 2075 * currently no method to synchronize replacing the backing 2076 * b_data buffer and updating all of the pointers. Here we use 2077 * the hash lock to ensure there are no races. If the need 2078 * arises for other types to be decrypted in-place, they must 2079 * add handling here as well. 2080 */ 2081 if ((flags & ARC_FILL_IN_PLACE) != 0) { 2082 ASSERT(!hdr_compressed); 2083 ASSERT(!compressed); 2084 ASSERT(!encrypted); 2085 2086 if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) { 2087 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 2088 2089 if (hash_lock != NULL) 2090 mutex_enter(hash_lock); 2091 arc_buf_untransform_in_place(buf, hash_lock); 2092 if (hash_lock != NULL) 2093 mutex_exit(hash_lock); 2094 2095 /* Compute the hdr's checksum if necessary */ 2096 arc_cksum_compute(buf); 2097 } 2098 2099 return (0); 2100 } 2101 2102 if (hdr_compressed == compressed) { 2103 if (!arc_buf_is_shared(buf)) { 2104 abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd, 2105 arc_buf_size(buf)); 2106 } 2107 } else { 2108 ASSERT(hdr_compressed); 2109 ASSERT(!compressed); 2110 ASSERT3U(HDR_GET_LSIZE(hdr), !=, HDR_GET_PSIZE(hdr)); 2111 2112 /* 2113 * If the buf is sharing its data with the hdr, unlink it and 2114 * allocate a new data buffer for the buf. 
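		 * (The newly allocated copy is accounted for in
		 * arcstat_overhead_size below.)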
2115 */ 2116 if (arc_buf_is_shared(buf)) { 2117 ASSERT(ARC_BUF_COMPRESSED(buf)); 2118 2119 /* We need to give the buf its own b_data */ 2120 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 2121 buf->b_data = 2122 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 2123 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 2124 2125 /* Previously overhead was 0; just add new overhead */ 2126 ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr)); 2127 } else if (ARC_BUF_COMPRESSED(buf)) { 2128 /* We need to reallocate the buf's b_data */ 2129 arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr), 2130 buf); 2131 buf->b_data = 2132 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 2133 2134 /* We increased the size of b_data; update overhead */ 2135 ARCSTAT_INCR(arcstat_overhead_size, 2136 HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr)); 2137 } 2138 2139 /* 2140 * Regardless of the buf's previous compression settings, it 2141 * should not be compressed at the end of this function. 2142 */ 2143 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 2144 2145 /* 2146 * Try copying the data from another buf which already has a 2147 * decompressed version. If that's not possible, it's time to 2148 * bite the bullet and decompress the data from the hdr. 2149 */ 2150 if (arc_buf_try_copy_decompressed_data(buf)) { 2151 /* Skip byteswapping and checksumming (already done) */ 2152 return (0); 2153 } else { 2154 error = zio_decompress_data(HDR_GET_COMPRESS(hdr), 2155 hdr->b_l1hdr.b_pabd, buf->b_data, 2156 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr), 2157 &hdr->b_complevel); 2158 2159 /* 2160 * Absent hardware errors or software bugs, this should 2161 * be impossible, but log it anyway so we can debug it. 2162 */ 2163 if (error != 0) { 2164 zfs_dbgmsg( 2165 "hdr %px, compress %d, psize %d, lsize %d", 2166 hdr, arc_hdr_get_compress(hdr), 2167 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr)); 2168 if (hash_lock != NULL) 2169 mutex_enter(hash_lock); 2170 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2171 if (hash_lock != NULL) 2172 mutex_exit(hash_lock); 2173 return (SET_ERROR(EIO)); 2174 } 2175 } 2176 } 2177 2178 byteswap: 2179 /* Byteswap the buf's data if necessary */ 2180 if (bswap != DMU_BSWAP_NUMFUNCS) { 2181 ASSERT(!HDR_SHARED_DATA(hdr)); 2182 ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS); 2183 dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr)); 2184 } 2185 2186 /* Compute the hdr's checksum if necessary */ 2187 arc_cksum_compute(buf); 2188 2189 return (0); 2190 } 2191 2192 /* 2193 * If this function is being called to decrypt an encrypted buffer or verify an 2194 * authenticated one, the key must be loaded and a mapping must be made 2195 * available in the keystore via spa_keystore_create_mapping() or one of its 2196 * callers. 2197 */ 2198 int 2199 arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 2200 boolean_t in_place) 2201 { 2202 int ret; 2203 arc_fill_flags_t flags = 0; 2204 2205 if (in_place) 2206 flags |= ARC_FILL_IN_PLACE; 2207 2208 ret = arc_buf_fill(buf, spa, zb, flags); 2209 if (ret == ECKSUM) { 2210 /* 2211 * Convert authentication and decryption errors to EIO 2212 * (and generate an ereport) before leaving the ARC. 2213 */ 2214 ret = SET_ERROR(EIO); 2215 spa_log_error(spa, zb); 2216 (void) zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION, 2217 spa, NULL, zb, NULL, 0); 2218 } 2219 2220 return (ret); 2221 } 2222 2223 /* 2224 * Increment the amount of evictable space in the arc_state_t's refcount. 
2225 * We account for the space used by the hdr and the arc buf individually 2226 * so that we can add and remove them from the refcount individually. 2227 */ 2228 static void 2229 arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state) 2230 { 2231 arc_buf_contents_t type = arc_buf_type(hdr); 2232 2233 ASSERT(HDR_HAS_L1HDR(hdr)); 2234 2235 if (GHOST_STATE(state)) { 2236 ASSERT0(hdr->b_l1hdr.b_bufcnt); 2237 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2238 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2239 ASSERT(!HDR_HAS_RABD(hdr)); 2240 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2241 HDR_GET_LSIZE(hdr), hdr); 2242 return; 2243 } 2244 2245 ASSERT(!GHOST_STATE(state)); 2246 if (hdr->b_l1hdr.b_pabd != NULL) { 2247 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2248 arc_hdr_size(hdr), hdr); 2249 } 2250 if (HDR_HAS_RABD(hdr)) { 2251 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2252 HDR_GET_PSIZE(hdr), hdr); 2253 } 2254 2255 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2256 buf = buf->b_next) { 2257 if (arc_buf_is_shared(buf)) 2258 continue; 2259 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2260 arc_buf_size(buf), buf); 2261 } 2262 } 2263 2264 /* 2265 * Decrement the amount of evictable space in the arc_state_t's refcount. 2266 * We account for the space used by the hdr and the arc buf individually 2267 * so that we can add and remove them from the refcount individually. 2268 */ 2269 static void 2270 arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state) 2271 { 2272 arc_buf_contents_t type = arc_buf_type(hdr); 2273 2274 ASSERT(HDR_HAS_L1HDR(hdr)); 2275 2276 if (GHOST_STATE(state)) { 2277 ASSERT0(hdr->b_l1hdr.b_bufcnt); 2278 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2279 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2280 ASSERT(!HDR_HAS_RABD(hdr)); 2281 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2282 HDR_GET_LSIZE(hdr), hdr); 2283 return; 2284 } 2285 2286 ASSERT(!GHOST_STATE(state)); 2287 if (hdr->b_l1hdr.b_pabd != NULL) { 2288 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2289 arc_hdr_size(hdr), hdr); 2290 } 2291 if (HDR_HAS_RABD(hdr)) { 2292 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2293 HDR_GET_PSIZE(hdr), hdr); 2294 } 2295 2296 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2297 buf = buf->b_next) { 2298 if (arc_buf_is_shared(buf)) 2299 continue; 2300 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2301 arc_buf_size(buf), buf); 2302 } 2303 } 2304 2305 /* 2306 * Add a reference to this hdr indicating that someone is actively 2307 * referencing that memory. When the refcount transitions from 0 to 1, 2308 * we remove it from the respective arc_state_t list to indicate that 2309 * it is not evictable. 2310 */ 2311 static void 2312 add_reference(arc_buf_hdr_t *hdr, void *tag) 2313 { 2314 arc_state_t *state; 2315 2316 ASSERT(HDR_HAS_L1HDR(hdr)); 2317 if (!HDR_EMPTY(hdr) && !MUTEX_HELD(HDR_LOCK(hdr))) { 2318 ASSERT(hdr->b_l1hdr.b_state == arc_anon); 2319 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2320 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2321 } 2322 2323 state = hdr->b_l1hdr.b_state; 2324 2325 if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) && 2326 (state != arc_anon)) { 2327 /* We don't use the L2-only state list. 
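		 * (headers in the arc_l2c_only state are never kept on
		 * these multilists)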
*/ 2328 if (state != arc_l2c_only) { 2329 multilist_remove(state->arcs_list[arc_buf_type(hdr)], 2330 hdr); 2331 arc_evictable_space_decrement(hdr, state); 2332 } 2333 /* remove the prefetch flag if we get a reference */ 2334 if (HDR_HAS_L2HDR(hdr)) 2335 l2arc_hdr_arcstats_decrement_state(hdr); 2336 arc_hdr_clear_flags(hdr, ARC_FLAG_PREFETCH); 2337 if (HDR_HAS_L2HDR(hdr)) 2338 l2arc_hdr_arcstats_increment_state(hdr); 2339 } 2340 } 2341 2342 /* 2343 * Remove a reference from this hdr. When the reference transitions from 2344 * 1 to 0 and we're not anonymous, then we add this hdr to the arc_state_t's 2345 * list making it eligible for eviction. 2346 */ 2347 static int 2348 remove_reference(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, void *tag) 2349 { 2350 int cnt; 2351 arc_state_t *state = hdr->b_l1hdr.b_state; 2352 2353 ASSERT(HDR_HAS_L1HDR(hdr)); 2354 ASSERT(state == arc_anon || MUTEX_HELD(hash_lock)); 2355 ASSERT(!GHOST_STATE(state)); 2356 2357 /* 2358 * arc_l2c_only counts as a ghost state so we don't need to explicitly 2359 * check to prevent usage of the arc_l2c_only list. 2360 */ 2361 if (((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) == 0) && 2362 (state != arc_anon)) { 2363 multilist_insert(state->arcs_list[arc_buf_type(hdr)], hdr); 2364 ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0); 2365 arc_evictable_space_increment(hdr, state); 2366 } 2367 return (cnt); 2368 } 2369 2370 /* 2371 * Returns detailed information about a specific arc buffer. When the 2372 * state_index argument is set the function will calculate the arc header 2373 * list position for its arc state. Since this requires a linear traversal 2374 * callers are strongly encourage not to do this. However, it can be helpful 2375 * for targeted analysis so the functionality is provided. 2376 */ 2377 void 2378 arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index) 2379 { 2380 arc_buf_hdr_t *hdr = ab->b_hdr; 2381 l1arc_buf_hdr_t *l1hdr = NULL; 2382 l2arc_buf_hdr_t *l2hdr = NULL; 2383 arc_state_t *state = NULL; 2384 2385 memset(abi, 0, sizeof (arc_buf_info_t)); 2386 2387 if (hdr == NULL) 2388 return; 2389 2390 abi->abi_flags = hdr->b_flags; 2391 2392 if (HDR_HAS_L1HDR(hdr)) { 2393 l1hdr = &hdr->b_l1hdr; 2394 state = l1hdr->b_state; 2395 } 2396 if (HDR_HAS_L2HDR(hdr)) 2397 l2hdr = &hdr->b_l2hdr; 2398 2399 if (l1hdr) { 2400 abi->abi_bufcnt = l1hdr->b_bufcnt; 2401 abi->abi_access = l1hdr->b_arc_access; 2402 abi->abi_mru_hits = l1hdr->b_mru_hits; 2403 abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits; 2404 abi->abi_mfu_hits = l1hdr->b_mfu_hits; 2405 abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits; 2406 abi->abi_holds = zfs_refcount_count(&l1hdr->b_refcnt); 2407 } 2408 2409 if (l2hdr) { 2410 abi->abi_l2arc_dattr = l2hdr->b_daddr; 2411 abi->abi_l2arc_hits = l2hdr->b_hits; 2412 } 2413 2414 abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON; 2415 abi->abi_state_contents = arc_buf_type(hdr); 2416 abi->abi_size = arc_hdr_size(hdr); 2417 } 2418 2419 /* 2420 * Move the supplied buffer to the indicated state. The hash lock 2421 * for the buffer must be held by the caller. 2422 */ 2423 static void 2424 arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr, 2425 kmutex_t *hash_lock) 2426 { 2427 arc_state_t *old_state; 2428 int64_t refcnt; 2429 uint32_t bufcnt; 2430 boolean_t update_old, update_new; 2431 arc_buf_contents_t buftype = arc_buf_type(hdr); 2432 2433 /* 2434 * We almost always have an L1 hdr here, since we call arc_hdr_realloc() 2435 * in arc_read() when bringing a buffer out of the L2ARC. 
However, the 2436 * L1 hdr doesn't always exist when we change state to arc_anon before 2437 * destroying a header, in which case reallocating to add the L1 hdr is 2438 * pointless. 2439 */ 2440 if (HDR_HAS_L1HDR(hdr)) { 2441 old_state = hdr->b_l1hdr.b_state; 2442 refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt); 2443 bufcnt = hdr->b_l1hdr.b_bufcnt; 2444 update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL || 2445 HDR_HAS_RABD(hdr)); 2446 } else { 2447 old_state = arc_l2c_only; 2448 refcnt = 0; 2449 bufcnt = 0; 2450 update_old = B_FALSE; 2451 } 2452 update_new = update_old; 2453 2454 ASSERT(MUTEX_HELD(hash_lock)); 2455 ASSERT3P(new_state, !=, old_state); 2456 ASSERT(!GHOST_STATE(new_state) || bufcnt == 0); 2457 ASSERT(old_state != arc_anon || bufcnt <= 1); 2458 2459 /* 2460 * If this buffer is evictable, transfer it from the 2461 * old state list to the new state list. 2462 */ 2463 if (refcnt == 0) { 2464 if (old_state != arc_anon && old_state != arc_l2c_only) { 2465 ASSERT(HDR_HAS_L1HDR(hdr)); 2466 multilist_remove(old_state->arcs_list[buftype], hdr); 2467 2468 if (GHOST_STATE(old_state)) { 2469 ASSERT0(bufcnt); 2470 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2471 update_old = B_TRUE; 2472 } 2473 arc_evictable_space_decrement(hdr, old_state); 2474 } 2475 if (new_state != arc_anon && new_state != arc_l2c_only) { 2476 /* 2477 * An L1 header always exists here, since if we're 2478 * moving to some L1-cached state (i.e. not l2c_only or 2479 * anonymous), we realloc the header to add an L1hdr 2480 * beforehand. 2481 */ 2482 ASSERT(HDR_HAS_L1HDR(hdr)); 2483 multilist_insert(new_state->arcs_list[buftype], hdr); 2484 2485 if (GHOST_STATE(new_state)) { 2486 ASSERT0(bufcnt); 2487 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2488 update_new = B_TRUE; 2489 } 2490 arc_evictable_space_increment(hdr, new_state); 2491 } 2492 } 2493 2494 ASSERT(!HDR_EMPTY(hdr)); 2495 if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr)) 2496 buf_hash_remove(hdr); 2497 2498 /* adjust state sizes (ignore arc_l2c_only) */ 2499 2500 if (update_new && new_state != arc_l2c_only) { 2501 ASSERT(HDR_HAS_L1HDR(hdr)); 2502 if (GHOST_STATE(new_state)) { 2503 ASSERT0(bufcnt); 2504 2505 /* 2506 * When moving a header to a ghost state, we first 2507 * remove all arc buffers. Thus, we'll have a 2508 * bufcnt of zero, and no arc buffer to use for 2509 * the reference. As a result, we use the arc 2510 * header pointer for the reference. 2511 */ 2512 (void) zfs_refcount_add_many(&new_state->arcs_size, 2513 HDR_GET_LSIZE(hdr), hdr); 2514 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2515 ASSERT(!HDR_HAS_RABD(hdr)); 2516 } else { 2517 uint32_t buffers = 0; 2518 2519 /* 2520 * Each individual buffer holds a unique reference, 2521 * thus we must remove each of these references one 2522 * at a time. 2523 */ 2524 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2525 buf = buf->b_next) { 2526 ASSERT3U(bufcnt, !=, 0); 2527 buffers++; 2528 2529 /* 2530 * When the arc_buf_t is sharing the data 2531 * block with the hdr, the owner of the 2532 * reference belongs to the hdr. Only 2533 * add to the refcount if the arc_buf_t is 2534 * not shared. 
2535 */ 2536 if (arc_buf_is_shared(buf)) 2537 continue; 2538 2539 (void) zfs_refcount_add_many( 2540 &new_state->arcs_size, 2541 arc_buf_size(buf), buf); 2542 } 2543 ASSERT3U(bufcnt, ==, buffers); 2544 2545 if (hdr->b_l1hdr.b_pabd != NULL) { 2546 (void) zfs_refcount_add_many( 2547 &new_state->arcs_size, 2548 arc_hdr_size(hdr), hdr); 2549 } 2550 2551 if (HDR_HAS_RABD(hdr)) { 2552 (void) zfs_refcount_add_many( 2553 &new_state->arcs_size, 2554 HDR_GET_PSIZE(hdr), hdr); 2555 } 2556 } 2557 } 2558 2559 if (update_old && old_state != arc_l2c_only) { 2560 ASSERT(HDR_HAS_L1HDR(hdr)); 2561 if (GHOST_STATE(old_state)) { 2562 ASSERT0(bufcnt); 2563 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2564 ASSERT(!HDR_HAS_RABD(hdr)); 2565 2566 /* 2567 * When moving a header off of a ghost state, 2568 * the header will not contain any arc buffers. 2569 * We use the arc header pointer for the reference 2570 * which is exactly what we did when we put the 2571 * header on the ghost state. 2572 */ 2573 2574 (void) zfs_refcount_remove_many(&old_state->arcs_size, 2575 HDR_GET_LSIZE(hdr), hdr); 2576 } else { 2577 uint32_t buffers = 0; 2578 2579 /* 2580 * Each individual buffer holds a unique reference, 2581 * thus we must remove each of these references one 2582 * at a time. 2583 */ 2584 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2585 buf = buf->b_next) { 2586 ASSERT3U(bufcnt, !=, 0); 2587 buffers++; 2588 2589 /* 2590 * When the arc_buf_t is sharing the data 2591 * block with the hdr, the owner of the 2592 * reference belongs to the hdr. Only 2593 * add to the refcount if the arc_buf_t is 2594 * not shared. 2595 */ 2596 if (arc_buf_is_shared(buf)) 2597 continue; 2598 2599 (void) zfs_refcount_remove_many( 2600 &old_state->arcs_size, arc_buf_size(buf), 2601 buf); 2602 } 2603 ASSERT3U(bufcnt, ==, buffers); 2604 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 2605 HDR_HAS_RABD(hdr)); 2606 2607 if (hdr->b_l1hdr.b_pabd != NULL) { 2608 (void) zfs_refcount_remove_many( 2609 &old_state->arcs_size, arc_hdr_size(hdr), 2610 hdr); 2611 } 2612 2613 if (HDR_HAS_RABD(hdr)) { 2614 (void) zfs_refcount_remove_many( 2615 &old_state->arcs_size, HDR_GET_PSIZE(hdr), 2616 hdr); 2617 } 2618 } 2619 } 2620 2621 if (HDR_HAS_L1HDR(hdr)) { 2622 hdr->b_l1hdr.b_state = new_state; 2623 2624 if (HDR_HAS_L2HDR(hdr) && new_state != arc_l2c_only) { 2625 l2arc_hdr_arcstats_decrement_state(hdr); 2626 hdr->b_l2hdr.b_arcs_state = new_state->arcs_state; 2627 l2arc_hdr_arcstats_increment_state(hdr); 2628 } 2629 } 2630 2631 /* 2632 * L2 headers should never be on the L2 state list since they don't 2633 * have L1 headers allocated. 
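	 * (the assertion below verifies that both arc_l2c_only multilists
	 * remain empty)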
2634 */ 2635 ASSERT(multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_DATA]) && 2636 multilist_is_empty(arc_l2c_only->arcs_list[ARC_BUFC_METADATA])); 2637 } 2638 2639 void 2640 arc_space_consume(uint64_t space, arc_space_type_t type) 2641 { 2642 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2643 2644 switch (type) { 2645 default: 2646 break; 2647 case ARC_SPACE_DATA: 2648 aggsum_add(&astat_data_size, space); 2649 break; 2650 case ARC_SPACE_META: 2651 aggsum_add(&astat_metadata_size, space); 2652 break; 2653 case ARC_SPACE_BONUS: 2654 aggsum_add(&astat_bonus_size, space); 2655 break; 2656 case ARC_SPACE_DNODE: 2657 aggsum_add(&astat_dnode_size, space); 2658 break; 2659 case ARC_SPACE_DBUF: 2660 aggsum_add(&astat_dbuf_size, space); 2661 break; 2662 case ARC_SPACE_HDRS: 2663 aggsum_add(&astat_hdr_size, space); 2664 break; 2665 case ARC_SPACE_L2HDRS: 2666 aggsum_add(&astat_l2_hdr_size, space); 2667 break; 2668 case ARC_SPACE_ABD_CHUNK_WASTE: 2669 /* 2670 * Note: this includes space wasted by all scatter ABD's, not 2671 * just those allocated by the ARC. But the vast majority of 2672 * scatter ABD's come from the ARC, because other users are 2673 * very short-lived. 2674 */ 2675 aggsum_add(&astat_abd_chunk_waste_size, space); 2676 break; 2677 } 2678 2679 if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) 2680 aggsum_add(&arc_meta_used, space); 2681 2682 aggsum_add(&arc_size, space); 2683 } 2684 2685 void 2686 arc_space_return(uint64_t space, arc_space_type_t type) 2687 { 2688 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2689 2690 switch (type) { 2691 default: 2692 break; 2693 case ARC_SPACE_DATA: 2694 aggsum_add(&astat_data_size, -space); 2695 break; 2696 case ARC_SPACE_META: 2697 aggsum_add(&astat_metadata_size, -space); 2698 break; 2699 case ARC_SPACE_BONUS: 2700 aggsum_add(&astat_bonus_size, -space); 2701 break; 2702 case ARC_SPACE_DNODE: 2703 aggsum_add(&astat_dnode_size, -space); 2704 break; 2705 case ARC_SPACE_DBUF: 2706 aggsum_add(&astat_dbuf_size, -space); 2707 break; 2708 case ARC_SPACE_HDRS: 2709 aggsum_add(&astat_hdr_size, -space); 2710 break; 2711 case ARC_SPACE_L2HDRS: 2712 aggsum_add(&astat_l2_hdr_size, -space); 2713 break; 2714 case ARC_SPACE_ABD_CHUNK_WASTE: 2715 aggsum_add(&astat_abd_chunk_waste_size, -space); 2716 break; 2717 } 2718 2719 if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) { 2720 ASSERT(aggsum_compare(&arc_meta_used, space) >= 0); 2721 /* 2722 * We use the upper bound here rather than the precise value 2723 * because the arc_meta_max value doesn't need to be 2724 * precise. It's only consumed by humans via arcstats. 2725 */ 2726 if (arc_meta_max < aggsum_upper_bound(&arc_meta_used)) 2727 arc_meta_max = aggsum_upper_bound(&arc_meta_used); 2728 aggsum_add(&arc_meta_used, -space); 2729 } 2730 2731 ASSERT(aggsum_compare(&arc_size, space) >= 0); 2732 aggsum_add(&arc_size, -space); 2733 } 2734 2735 /* 2736 * Given a hdr and a buf, returns whether that buf can share its b_data buffer 2737 * with the hdr's b_pabd. 2738 */ 2739 static boolean_t 2740 arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2741 { 2742 /* 2743 * The criteria for sharing a hdr's data are: 2744 * 1. the buffer is not encrypted 2745 * 2. the hdr's compression matches the buf's compression 2746 * 3. the hdr doesn't need to be byteswapped 2747 * 4. the hdr isn't already being shared 2748 * 5. 
the buf is either compressed or it is the last buf in the hdr list 2749 * 2750 * Criterion #5 maintains the invariant that shared uncompressed 2751 * bufs must be the final buf in the hdr's b_buf list. Reading this, you 2752 * might ask, "if a compressed buf is allocated first, won't that be the 2753 * last thing in the list?", but in that case it's impossible to create 2754 * a shared uncompressed buf anyway (because the hdr must be compressed 2755 * to have the compressed buf). You might also think that #3 is 2756 * sufficient to make this guarantee, however it's possible 2757 * (specifically in the rare L2ARC write race mentioned in 2758 * arc_buf_alloc_impl()) there will be an existing uncompressed buf that 2759 * is shareable, but wasn't at the time of its allocation. Rather than 2760 * allow a new shared uncompressed buf to be created and then shuffle 2761 * the list around to make it the last element, this simply disallows 2762 * sharing if the new buf isn't the first to be added. 2763 */ 2764 ASSERT3P(buf->b_hdr, ==, hdr); 2765 boolean_t hdr_compressed = 2766 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF; 2767 boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0; 2768 return (!ARC_BUF_ENCRYPTED(buf) && 2769 buf_compressed == hdr_compressed && 2770 hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS && 2771 !HDR_SHARED_DATA(hdr) && 2772 (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf))); 2773 } 2774 2775 /* 2776 * Allocate a buf for this hdr. If you care about the data that's in the hdr, 2777 * or if you want a compressed buffer, pass those flags in. Returns 0 if the 2778 * copy was made successfully, or an error code otherwise. 2779 */ 2780 static int 2781 arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb, 2782 void *tag, boolean_t encrypted, boolean_t compressed, boolean_t noauth, 2783 boolean_t fill, arc_buf_t **ret) 2784 { 2785 arc_buf_t *buf; 2786 arc_fill_flags_t flags = ARC_FILL_LOCKED; 2787 2788 ASSERT(HDR_HAS_L1HDR(hdr)); 2789 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 2790 VERIFY(hdr->b_type == ARC_BUFC_DATA || 2791 hdr->b_type == ARC_BUFC_METADATA); 2792 ASSERT3P(ret, !=, NULL); 2793 ASSERT3P(*ret, ==, NULL); 2794 IMPLY(encrypted, compressed); 2795 2796 hdr->b_l1hdr.b_mru_hits = 0; 2797 hdr->b_l1hdr.b_mru_ghost_hits = 0; 2798 hdr->b_l1hdr.b_mfu_hits = 0; 2799 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 2800 hdr->b_l1hdr.b_l2_hits = 0; 2801 2802 buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE); 2803 buf->b_hdr = hdr; 2804 buf->b_data = NULL; 2805 buf->b_next = hdr->b_l1hdr.b_buf; 2806 buf->b_flags = 0; 2807 2808 add_reference(hdr, tag); 2809 2810 /* 2811 * We're about to change the hdr's b_flags. We must either 2812 * hold the hash_lock or be undiscoverable. 2813 */ 2814 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2815 2816 /* 2817 * Only honor requests for compressed bufs if the hdr is actually 2818 * compressed. This must be overridden if the buffer is encrypted since 2819 * encrypted buffers cannot be decompressed. 
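	 *
	 * For example: an encrypted buf is always marked both
	 * ARC_BUF_FLAG_ENCRYPTED and ARC_BUF_FLAG_COMPRESSED, while a plain
	 * compressed request only keeps ARC_BUF_FLAG_COMPRESSED when the
	 * hdr itself holds compressed data.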
2820 */ 2821 if (encrypted) { 2822 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2823 buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED; 2824 flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED; 2825 } else if (compressed && 2826 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 2827 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2828 flags |= ARC_FILL_COMPRESSED; 2829 } 2830 2831 if (noauth) { 2832 ASSERT0(encrypted); 2833 flags |= ARC_FILL_NOAUTH; 2834 } 2835 2836 /* 2837 * If the hdr's data can be shared then we share the data buffer and 2838 * set the appropriate bit in the hdr's b_flags to indicate the hdr is 2839 * sharing it's b_pabd with the arc_buf_t. Otherwise, we allocate a new 2840 * buffer to store the buf's data. 2841 * 2842 * There are two additional restrictions here because we're sharing 2843 * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be 2844 * actively involved in an L2ARC write, because if this buf is used by 2845 * an arc_write() then the hdr's data buffer will be released when the 2846 * write completes, even though the L2ARC write might still be using it. 2847 * Second, the hdr's ABD must be linear so that the buf's user doesn't 2848 * need to be ABD-aware. It must be allocated via 2849 * zio_[data_]buf_alloc(), not as a page, because we need to be able 2850 * to abd_release_ownership_of_buf(), which isn't allowed on "linear 2851 * page" buffers because the ABD code needs to handle freeing them 2852 * specially. 2853 */ 2854 boolean_t can_share = arc_can_share(hdr, buf) && 2855 !HDR_L2_WRITING(hdr) && 2856 hdr->b_l1hdr.b_pabd != NULL && 2857 abd_is_linear(hdr->b_l1hdr.b_pabd) && 2858 !abd_is_linear_page(hdr->b_l1hdr.b_pabd); 2859 2860 /* Set up b_data and sharing */ 2861 if (can_share) { 2862 buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd); 2863 buf->b_flags |= ARC_BUF_FLAG_SHARED; 2864 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 2865 } else { 2866 buf->b_data = 2867 arc_get_data_buf(hdr, arc_buf_size(buf), buf); 2868 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 2869 } 2870 VERIFY3P(buf->b_data, !=, NULL); 2871 2872 hdr->b_l1hdr.b_buf = buf; 2873 hdr->b_l1hdr.b_bufcnt += 1; 2874 if (encrypted) 2875 hdr->b_crypt_hdr.b_ebufcnt += 1; 2876 2877 /* 2878 * If the user wants the data from the hdr, we need to either copy or 2879 * decompress the data. 2880 */ 2881 if (fill) { 2882 ASSERT3P(zb, !=, NULL); 2883 return (arc_buf_fill(buf, spa, zb, flags)); 2884 } 2885 2886 return (0); 2887 } 2888 2889 static char *arc_onloan_tag = "onloan"; 2890 2891 static inline void 2892 arc_loaned_bytes_update(int64_t delta) 2893 { 2894 atomic_add_64(&arc_loaned_bytes, delta); 2895 2896 /* assert that it did not wrap around */ 2897 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 2898 } 2899 2900 /* 2901 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in 2902 * flight data by arc_tempreserve_space() until they are "returned". Loaned 2903 * buffers must be returned to the arc before they can be used by the DMU or 2904 * freed. 2905 */ 2906 arc_buf_t * 2907 arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size) 2908 { 2909 arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag, 2910 is_metadata ? 
ARC_BUFC_METADATA : ARC_BUFC_DATA, size); 2911 2912 arc_loaned_bytes_update(arc_buf_size(buf)); 2913 2914 return (buf); 2915 } 2916 2917 arc_buf_t * 2918 arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize, 2919 enum zio_compress compression_type, uint8_t complevel) 2920 { 2921 arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag, 2922 psize, lsize, compression_type, complevel); 2923 2924 arc_loaned_bytes_update(arc_buf_size(buf)); 2925 2926 return (buf); 2927 } 2928 2929 arc_buf_t * 2930 arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder, 2931 const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, 2932 dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 2933 enum zio_compress compression_type, uint8_t complevel) 2934 { 2935 arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj, 2936 byteorder, salt, iv, mac, ot, psize, lsize, compression_type, 2937 complevel); 2938 2939 atomic_add_64(&arc_loaned_bytes, psize); 2940 return (buf); 2941 } 2942 2943 2944 /* 2945 * Return a loaned arc buffer to the arc. 2946 */ 2947 void 2948 arc_return_buf(arc_buf_t *buf, void *tag) 2949 { 2950 arc_buf_hdr_t *hdr = buf->b_hdr; 2951 2952 ASSERT3P(buf->b_data, !=, NULL); 2953 ASSERT(HDR_HAS_L1HDR(hdr)); 2954 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag); 2955 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2956 2957 arc_loaned_bytes_update(-arc_buf_size(buf)); 2958 } 2959 2960 /* Detach an arc_buf from a dbuf (tag) */ 2961 void 2962 arc_loan_inuse_buf(arc_buf_t *buf, void *tag) 2963 { 2964 arc_buf_hdr_t *hdr = buf->b_hdr; 2965 2966 ASSERT3P(buf->b_data, !=, NULL); 2967 ASSERT(HDR_HAS_L1HDR(hdr)); 2968 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2969 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag); 2970 2971 arc_loaned_bytes_update(arc_buf_size(buf)); 2972 } 2973 2974 static void 2975 l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type) 2976 { 2977 l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP); 2978 2979 df->l2df_abd = abd; 2980 df->l2df_size = size; 2981 df->l2df_type = type; 2982 mutex_enter(&l2arc_free_on_write_mtx); 2983 list_insert_head(l2arc_free_on_write, df); 2984 mutex_exit(&l2arc_free_on_write_mtx); 2985 } 2986 2987 static void 2988 arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata) 2989 { 2990 arc_state_t *state = hdr->b_l1hdr.b_state; 2991 arc_buf_contents_t type = arc_buf_type(hdr); 2992 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 2993 2994 /* protected by hash lock, if in the hash table */ 2995 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 2996 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2997 ASSERT(state != arc_anon && state != arc_l2c_only); 2998 2999 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 3000 size, hdr); 3001 } 3002 (void) zfs_refcount_remove_many(&state->arcs_size, size, hdr); 3003 if (type == ARC_BUFC_METADATA) { 3004 arc_space_return(size, ARC_SPACE_META); 3005 } else { 3006 ASSERT(type == ARC_BUFC_DATA); 3007 arc_space_return(size, ARC_SPACE_DATA); 3008 } 3009 3010 if (free_rdata) { 3011 l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type); 3012 } else { 3013 l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type); 3014 } 3015 } 3016 3017 /* 3018 * Share the arc_buf_t's data with the hdr. Whenever we are sharing the 3019 * data buffer, we transfer the refcount ownership to the hdr and update 3020 * the appropriate kstats. 
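 *
 * Concretely, sharing moves the buffer's bytes out of
 * arcstat_overhead_size and into the compressed/uncompressed size kstats
 * (see the ARCSTAT_INCR calls at the end of this function).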
3021 */ 3022 static void 3023 arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 3024 { 3025 ASSERT(arc_can_share(hdr, buf)); 3026 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3027 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 3028 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3029 3030 /* 3031 * Start sharing the data buffer. We transfer the 3032 * refcount ownership to the hdr since it always owns 3033 * the refcount whenever an arc_buf_t is shared. 3034 */ 3035 zfs_refcount_transfer_ownership_many(&hdr->b_l1hdr.b_state->arcs_size, 3036 arc_hdr_size(hdr), buf, hdr); 3037 hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf)); 3038 abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd, 3039 HDR_ISTYPE_METADATA(hdr)); 3040 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 3041 buf->b_flags |= ARC_BUF_FLAG_SHARED; 3042 3043 /* 3044 * Since we've transferred ownership to the hdr we need 3045 * to increment its compressed and uncompressed kstats and 3046 * decrement the overhead size. 3047 */ 3048 ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr)); 3049 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 3050 ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf)); 3051 } 3052 3053 static void 3054 arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 3055 { 3056 ASSERT(arc_buf_is_shared(buf)); 3057 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3058 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3059 3060 /* 3061 * We are no longer sharing this buffer so we need 3062 * to transfer its ownership to the rightful owner. 3063 */ 3064 zfs_refcount_transfer_ownership_many(&hdr->b_l1hdr.b_state->arcs_size, 3065 arc_hdr_size(hdr), hdr, buf); 3066 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 3067 abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd); 3068 abd_put(hdr->b_l1hdr.b_pabd); 3069 hdr->b_l1hdr.b_pabd = NULL; 3070 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 3071 3072 /* 3073 * Since the buffer is no longer shared between 3074 * the arc buf and the hdr, count it as overhead. 3075 */ 3076 ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr)); 3077 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3078 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 3079 } 3080 3081 /* 3082 * Remove an arc_buf_t from the hdr's buf list and return the last 3083 * arc_buf_t on the list. If no buffers remain on the list then return 3084 * NULL. 3085 */ 3086 static arc_buf_t * 3087 arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf) 3088 { 3089 ASSERT(HDR_HAS_L1HDR(hdr)); 3090 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3091 3092 arc_buf_t **bufp = &hdr->b_l1hdr.b_buf; 3093 arc_buf_t *lastbuf = NULL; 3094 3095 /* 3096 * Remove the buf from the hdr list and locate the last 3097 * remaining buffer on the list. 3098 */ 3099 while (*bufp != NULL) { 3100 if (*bufp == buf) 3101 *bufp = buf->b_next; 3102 3103 /* 3104 * If we've removed a buffer in the middle of 3105 * the list then update the lastbuf and update 3106 * bufp. 3107 */ 3108 if (*bufp != NULL) { 3109 lastbuf = *bufp; 3110 bufp = &(*bufp)->b_next; 3111 } 3112 } 3113 buf->b_next = NULL; 3114 ASSERT3P(lastbuf, !=, buf); 3115 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, lastbuf != NULL); 3116 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, hdr->b_l1hdr.b_buf != NULL); 3117 IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf)); 3118 3119 return (lastbuf); 3120 } 3121 3122 /* 3123 * Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's 3124 * list and free it. 
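 *
 * If the destroyed buf was sharing the hdr's data, the hdr may instead be
 * re-shared with the last remaining uncompressed buf (see the
 * arc_share_buf() call below).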
3125 */ 3126 static void 3127 arc_buf_destroy_impl(arc_buf_t *buf) 3128 { 3129 arc_buf_hdr_t *hdr = buf->b_hdr; 3130 3131 /* 3132 * Free up the data associated with the buf but only if we're not 3133 * sharing this with the hdr. If we are sharing it with the hdr, the 3134 * hdr is responsible for doing the free. 3135 */ 3136 if (buf->b_data != NULL) { 3137 /* 3138 * We're about to change the hdr's b_flags. We must either 3139 * hold the hash_lock or be undiscoverable. 3140 */ 3141 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3142 3143 arc_cksum_verify(buf); 3144 arc_buf_unwatch(buf); 3145 3146 if (arc_buf_is_shared(buf)) { 3147 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 3148 } else { 3149 uint64_t size = arc_buf_size(buf); 3150 arc_free_data_buf(hdr, buf->b_data, size, buf); 3151 ARCSTAT_INCR(arcstat_overhead_size, -size); 3152 } 3153 buf->b_data = NULL; 3154 3155 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 3156 hdr->b_l1hdr.b_bufcnt -= 1; 3157 3158 if (ARC_BUF_ENCRYPTED(buf)) { 3159 hdr->b_crypt_hdr.b_ebufcnt -= 1; 3160 3161 /* 3162 * If we have no more encrypted buffers and we've 3163 * already gotten a copy of the decrypted data we can 3164 * free b_rabd to save some space. 3165 */ 3166 if (hdr->b_crypt_hdr.b_ebufcnt == 0 && 3167 HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd != NULL && 3168 !HDR_IO_IN_PROGRESS(hdr)) { 3169 arc_hdr_free_abd(hdr, B_TRUE); 3170 } 3171 } 3172 } 3173 3174 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 3175 3176 if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) { 3177 /* 3178 * If the current arc_buf_t is sharing its data buffer with the 3179 * hdr, then reassign the hdr's b_pabd to share it with the new 3180 * buffer at the end of the list. The shared buffer is always 3181 * the last one on the hdr's buffer list. 3182 * 3183 * There is an equivalent case for compressed bufs, but since 3184 * they aren't guaranteed to be the last buf in the list and 3185 * that is an exceedingly rare case, we just allow that space be 3186 * wasted temporarily. We must also be careful not to share 3187 * encrypted buffers, since they cannot be shared. 3188 */ 3189 if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) { 3190 /* Only one buf can be shared at once */ 3191 VERIFY(!arc_buf_is_shared(lastbuf)); 3192 /* hdr is uncompressed so can't have compressed buf */ 3193 VERIFY(!ARC_BUF_COMPRESSED(lastbuf)); 3194 3195 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3196 arc_hdr_free_abd(hdr, B_FALSE); 3197 3198 /* 3199 * We must setup a new shared block between the 3200 * last buffer and the hdr. The data would have 3201 * been allocated by the arc buf so we need to transfer 3202 * ownership to the hdr since it's now being shared. 3203 */ 3204 arc_share_buf(hdr, lastbuf); 3205 } 3206 } else if (HDR_SHARED_DATA(hdr)) { 3207 /* 3208 * Uncompressed shared buffers are always at the end 3209 * of the list. Compressed buffers don't have the 3210 * same requirements. This makes it hard to 3211 * simply assert that the lastbuf is shared so 3212 * we rely on the hdr's compression flags to determine 3213 * if we have a compressed, shared buffer. 3214 */ 3215 ASSERT3P(lastbuf, !=, NULL); 3216 ASSERT(arc_buf_is_shared(lastbuf) || 3217 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 3218 } 3219 3220 /* 3221 * Free the checksum if we're removing the last uncompressed buf from 3222 * this hdr. 
3223 */ 3224 if (!arc_hdr_has_uncompressed_buf(hdr)) { 3225 arc_cksum_free(hdr); 3226 } 3227 3228 /* clean up the buf */ 3229 buf->b_hdr = NULL; 3230 kmem_cache_free(buf_cache, buf); 3231 } 3232 3233 static void 3234 arc_hdr_alloc_abd(arc_buf_hdr_t *hdr, int alloc_flags) 3235 { 3236 uint64_t size; 3237 boolean_t alloc_rdata = ((alloc_flags & ARC_HDR_ALLOC_RDATA) != 0); 3238 boolean_t do_adapt = ((alloc_flags & ARC_HDR_DO_ADAPT) != 0); 3239 3240 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 3241 ASSERT(HDR_HAS_L1HDR(hdr)); 3242 ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata); 3243 IMPLY(alloc_rdata, HDR_PROTECTED(hdr)); 3244 3245 if (alloc_rdata) { 3246 size = HDR_GET_PSIZE(hdr); 3247 ASSERT3P(hdr->b_crypt_hdr.b_rabd, ==, NULL); 3248 hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr, 3249 do_adapt); 3250 ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL); 3251 ARCSTAT_INCR(arcstat_raw_size, size); 3252 } else { 3253 size = arc_hdr_size(hdr); 3254 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3255 hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr, 3256 do_adapt); 3257 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3258 } 3259 3260 ARCSTAT_INCR(arcstat_compressed_size, size); 3261 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 3262 } 3263 3264 static void 3265 arc_hdr_free_abd(arc_buf_hdr_t *hdr, boolean_t free_rdata) 3266 { 3267 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 3268 3269 ASSERT(HDR_HAS_L1HDR(hdr)); 3270 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 3271 IMPLY(free_rdata, HDR_HAS_RABD(hdr)); 3272 3273 /* 3274 * If the hdr is currently being written to the l2arc then 3275 * we defer freeing the data by adding it to the l2arc_free_on_write 3276 * list. The l2arc will free the data once it's finished 3277 * writing it to the l2arc device. 3278 */ 3279 if (HDR_L2_WRITING(hdr)) { 3280 arc_hdr_free_on_write(hdr, free_rdata); 3281 ARCSTAT_BUMP(arcstat_l2_free_on_write); 3282 } else if (free_rdata) { 3283 arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr); 3284 } else { 3285 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, size, hdr); 3286 } 3287 3288 if (free_rdata) { 3289 hdr->b_crypt_hdr.b_rabd = NULL; 3290 ARCSTAT_INCR(arcstat_raw_size, -size); 3291 } else { 3292 hdr->b_l1hdr.b_pabd = NULL; 3293 } 3294 3295 if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr)) 3296 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 3297 3298 ARCSTAT_INCR(arcstat_compressed_size, -size); 3299 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3300 } 3301 3302 static arc_buf_hdr_t * 3303 arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize, 3304 boolean_t protected, enum zio_compress compression_type, uint8_t complevel, 3305 arc_buf_contents_t type, boolean_t alloc_rdata) 3306 { 3307 arc_buf_hdr_t *hdr; 3308 int flags = ARC_HDR_DO_ADAPT; 3309 3310 VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA); 3311 if (protected) { 3312 hdr = kmem_cache_alloc(hdr_full_crypt_cache, KM_PUSHPAGE); 3313 } else { 3314 hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE); 3315 } 3316 flags |= alloc_rdata ? 
ARC_HDR_ALLOC_RDATA : 0; 3317 3318 ASSERT(HDR_EMPTY(hdr)); 3319 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3320 HDR_SET_PSIZE(hdr, psize); 3321 HDR_SET_LSIZE(hdr, lsize); 3322 hdr->b_spa = spa; 3323 hdr->b_type = type; 3324 hdr->b_flags = 0; 3325 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR); 3326 arc_hdr_set_compress(hdr, compression_type); 3327 hdr->b_complevel = complevel; 3328 if (protected) 3329 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 3330 3331 hdr->b_l1hdr.b_state = arc_anon; 3332 hdr->b_l1hdr.b_arc_access = 0; 3333 hdr->b_l1hdr.b_bufcnt = 0; 3334 hdr->b_l1hdr.b_buf = NULL; 3335 3336 /* 3337 * Allocate the hdr's buffer. This will contain either 3338 * the compressed or uncompressed data depending on the block 3339 * it references and compressed arc enablement. 3340 */ 3341 arc_hdr_alloc_abd(hdr, flags); 3342 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3343 3344 return (hdr); 3345 } 3346 3347 /* 3348 * Transition between the two allocation states for the arc_buf_hdr struct. 3349 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without 3350 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller 3351 * version is used when a cache buffer is only in the L2ARC in order to reduce 3352 * memory usage. 3353 */ 3354 static arc_buf_hdr_t * 3355 arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new) 3356 { 3357 ASSERT(HDR_HAS_L2HDR(hdr)); 3358 3359 arc_buf_hdr_t *nhdr; 3360 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3361 3362 ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) || 3363 (old == hdr_l2only_cache && new == hdr_full_cache)); 3364 3365 /* 3366 * if the caller wanted a new full header and the header is to be 3367 * encrypted we will actually allocate the header from the full crypt 3368 * cache instead. The same applies to freeing from the old cache. 3369 */ 3370 if (HDR_PROTECTED(hdr) && new == hdr_full_cache) 3371 new = hdr_full_crypt_cache; 3372 if (HDR_PROTECTED(hdr) && old == hdr_full_cache) 3373 old = hdr_full_crypt_cache; 3374 3375 nhdr = kmem_cache_alloc(new, KM_PUSHPAGE); 3376 3377 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 3378 buf_hash_remove(hdr); 3379 3380 bcopy(hdr, nhdr, HDR_L2ONLY_SIZE); 3381 3382 if (new == hdr_full_cache || new == hdr_full_crypt_cache) { 3383 arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3384 /* 3385 * arc_access and arc_change_state need to be aware that a 3386 * header has just come out of L2ARC, so we set its state to 3387 * l2c_only even though it's about to change. 3388 */ 3389 nhdr->b_l1hdr.b_state = arc_l2c_only; 3390 3391 /* Verify previous threads set to NULL before freeing */ 3392 ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL); 3393 ASSERT(!HDR_HAS_RABD(hdr)); 3394 } else { 3395 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3396 ASSERT0(hdr->b_l1hdr.b_bufcnt); 3397 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3398 3399 /* 3400 * If we've reached here, We must have been called from 3401 * arc_evict_hdr(), as such we should have already been 3402 * removed from any ghost list we were previously on 3403 * (which protects us from racing with arc_evict_state), 3404 * thus no locking is needed during this check. 3405 */ 3406 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3407 3408 /* 3409 * A buffer must not be moved into the arc_l2c_only 3410 * state if it's not finished being written out to the 3411 * l2arc device. Otherwise, the b_l1hdr.b_pabd field 3412 * might try to be accessed, even though it was removed. 
3413 */ 3414 VERIFY(!HDR_L2_WRITING(hdr)); 3415 VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3416 ASSERT(!HDR_HAS_RABD(hdr)); 3417 3418 arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3419 } 3420 /* 3421 * The header has been reallocated so we need to re-insert it into any 3422 * lists it was on. 3423 */ 3424 (void) buf_hash_insert(nhdr, NULL); 3425 3426 ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node)); 3427 3428 mutex_enter(&dev->l2ad_mtx); 3429 3430 /* 3431 * We must place the realloc'ed header back into the list at 3432 * the same spot. Otherwise, if it's placed earlier in the list, 3433 * l2arc_write_buffers() could find it during the function's 3434 * write phase, and try to write it out to the l2arc. 3435 */ 3436 list_insert_after(&dev->l2ad_buflist, hdr, nhdr); 3437 list_remove(&dev->l2ad_buflist, hdr); 3438 3439 mutex_exit(&dev->l2ad_mtx); 3440 3441 /* 3442 * Since we're using the pointer address as the tag when 3443 * incrementing and decrementing the l2ad_alloc refcount, we 3444 * must remove the old pointer (that we're about to destroy) and 3445 * add the new pointer to the refcount. Otherwise we'd remove 3446 * the wrong pointer address when calling arc_hdr_destroy() later. 3447 */ 3448 3449 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, 3450 arc_hdr_size(hdr), hdr); 3451 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 3452 arc_hdr_size(nhdr), nhdr); 3453 3454 buf_discard_identity(hdr); 3455 kmem_cache_free(old, hdr); 3456 3457 return (nhdr); 3458 } 3459 3460 /* 3461 * This function allows an L1 header to be reallocated as a crypt 3462 * header and vice versa. If we are going to a crypt header, the 3463 * new fields will be zeroed out. 3464 */ 3465 static arc_buf_hdr_t * 3466 arc_hdr_realloc_crypt(arc_buf_hdr_t *hdr, boolean_t need_crypt) 3467 { 3468 arc_buf_hdr_t *nhdr; 3469 arc_buf_t *buf; 3470 kmem_cache_t *ncache, *ocache; 3471 unsigned nsize, osize; 3472 3473 /* 3474 * This function requires that hdr is in the arc_anon state. 3475 * Therefore it won't have any L2ARC data for us to worry 3476 * about copying. 3477 */ 3478 ASSERT(HDR_HAS_L1HDR(hdr)); 3479 ASSERT(!HDR_HAS_L2HDR(hdr)); 3480 ASSERT3U(!!HDR_PROTECTED(hdr), !=, need_crypt); 3481 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3482 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3483 ASSERT(!list_link_active(&hdr->b_l2hdr.b_l2node)); 3484 ASSERT3P(hdr->b_hash_next, ==, NULL); 3485 3486 if (need_crypt) { 3487 ncache = hdr_full_crypt_cache; 3488 nsize = sizeof (hdr->b_crypt_hdr); 3489 ocache = hdr_full_cache; 3490 osize = HDR_FULL_SIZE; 3491 } else { 3492 ncache = hdr_full_cache; 3493 nsize = HDR_FULL_SIZE; 3494 ocache = hdr_full_crypt_cache; 3495 osize = sizeof (hdr->b_crypt_hdr); 3496 } 3497 3498 nhdr = kmem_cache_alloc(ncache, KM_PUSHPAGE); 3499 3500 /* 3501 * Copy all members that aren't locks or condvars to the new header. 3502 * No lists are pointing to us (as we asserted above), so we don't 3503 * need to worry about the list nodes. 
3504 */ 3505 nhdr->b_dva = hdr->b_dva; 3506 nhdr->b_birth = hdr->b_birth; 3507 nhdr->b_type = hdr->b_type; 3508 nhdr->b_flags = hdr->b_flags; 3509 nhdr->b_psize = hdr->b_psize; 3510 nhdr->b_lsize = hdr->b_lsize; 3511 nhdr->b_spa = hdr->b_spa; 3512 nhdr->b_l1hdr.b_freeze_cksum = hdr->b_l1hdr.b_freeze_cksum; 3513 nhdr->b_l1hdr.b_bufcnt = hdr->b_l1hdr.b_bufcnt; 3514 nhdr->b_l1hdr.b_byteswap = hdr->b_l1hdr.b_byteswap; 3515 nhdr->b_l1hdr.b_state = hdr->b_l1hdr.b_state; 3516 nhdr->b_l1hdr.b_arc_access = hdr->b_l1hdr.b_arc_access; 3517 nhdr->b_l1hdr.b_mru_hits = hdr->b_l1hdr.b_mru_hits; 3518 nhdr->b_l1hdr.b_mru_ghost_hits = hdr->b_l1hdr.b_mru_ghost_hits; 3519 nhdr->b_l1hdr.b_mfu_hits = hdr->b_l1hdr.b_mfu_hits; 3520 nhdr->b_l1hdr.b_mfu_ghost_hits = hdr->b_l1hdr.b_mfu_ghost_hits; 3521 nhdr->b_l1hdr.b_l2_hits = hdr->b_l1hdr.b_l2_hits; 3522 nhdr->b_l1hdr.b_acb = hdr->b_l1hdr.b_acb; 3523 nhdr->b_l1hdr.b_pabd = hdr->b_l1hdr.b_pabd; 3524 3525 /* 3526 * This zfs_refcount_add() exists only to ensure that the individual 3527 * arc buffers always point to a header that is referenced, avoiding 3528 * a small race condition that could trigger ASSERTs. 3529 */ 3530 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, FTAG); 3531 nhdr->b_l1hdr.b_buf = hdr->b_l1hdr.b_buf; 3532 for (buf = nhdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) { 3533 mutex_enter(&buf->b_evict_lock); 3534 buf->b_hdr = nhdr; 3535 mutex_exit(&buf->b_evict_lock); 3536 } 3537 3538 zfs_refcount_transfer(&nhdr->b_l1hdr.b_refcnt, &hdr->b_l1hdr.b_refcnt); 3539 (void) zfs_refcount_remove(&nhdr->b_l1hdr.b_refcnt, FTAG); 3540 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3541 3542 if (need_crypt) { 3543 arc_hdr_set_flags(nhdr, ARC_FLAG_PROTECTED); 3544 } else { 3545 arc_hdr_clear_flags(nhdr, ARC_FLAG_PROTECTED); 3546 } 3547 3548 /* unset all members of the original hdr */ 3549 bzero(&hdr->b_dva, sizeof (dva_t)); 3550 hdr->b_birth = 0; 3551 hdr->b_type = ARC_BUFC_INVALID; 3552 hdr->b_flags = 0; 3553 hdr->b_psize = 0; 3554 hdr->b_lsize = 0; 3555 hdr->b_spa = 0; 3556 hdr->b_l1hdr.b_freeze_cksum = NULL; 3557 hdr->b_l1hdr.b_buf = NULL; 3558 hdr->b_l1hdr.b_bufcnt = 0; 3559 hdr->b_l1hdr.b_byteswap = 0; 3560 hdr->b_l1hdr.b_state = NULL; 3561 hdr->b_l1hdr.b_arc_access = 0; 3562 hdr->b_l1hdr.b_mru_hits = 0; 3563 hdr->b_l1hdr.b_mru_ghost_hits = 0; 3564 hdr->b_l1hdr.b_mfu_hits = 0; 3565 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 3566 hdr->b_l1hdr.b_l2_hits = 0; 3567 hdr->b_l1hdr.b_acb = NULL; 3568 hdr->b_l1hdr.b_pabd = NULL; 3569 3570 if (ocache == hdr_full_crypt_cache) { 3571 ASSERT(!HDR_HAS_RABD(hdr)); 3572 hdr->b_crypt_hdr.b_ot = DMU_OT_NONE; 3573 hdr->b_crypt_hdr.b_ebufcnt = 0; 3574 hdr->b_crypt_hdr.b_dsobj = 0; 3575 bzero(hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); 3576 bzero(hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); 3577 bzero(hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); 3578 } 3579 3580 buf_discard_identity(hdr); 3581 kmem_cache_free(ocache, hdr); 3582 3583 return (nhdr); 3584 } 3585 3586 /* 3587 * This function is used by the send / receive code to convert a newly 3588 * allocated arc_buf_t to one that is suitable for a raw encrypted write. It 3589 * is also used to allow the root objset block to be updated without altering 3590 * its embedded MACs. Both block types will always be uncompressed so we do not 3591 * have to worry about compression type or psize. 
3592 */ 3593 void 3594 arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder, 3595 dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv, 3596 const uint8_t *mac) 3597 { 3598 arc_buf_hdr_t *hdr = buf->b_hdr; 3599 3600 ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET); 3601 ASSERT(HDR_HAS_L1HDR(hdr)); 3602 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3603 3604 buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED); 3605 if (!HDR_PROTECTED(hdr)) 3606 hdr = arc_hdr_realloc_crypt(hdr, B_TRUE); 3607 hdr->b_crypt_hdr.b_dsobj = dsobj; 3608 hdr->b_crypt_hdr.b_ot = ot; 3609 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3610 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3611 if (!arc_hdr_has_uncompressed_buf(hdr)) 3612 arc_cksum_free(hdr); 3613 3614 if (salt != NULL) 3615 bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); 3616 if (iv != NULL) 3617 bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); 3618 if (mac != NULL) 3619 bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); 3620 } 3621 3622 /* 3623 * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller. 3624 * The buf is returned thawed since we expect the consumer to modify it. 3625 */ 3626 arc_buf_t * 3627 arc_alloc_buf(spa_t *spa, void *tag, arc_buf_contents_t type, int32_t size) 3628 { 3629 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size, 3630 B_FALSE, ZIO_COMPRESS_OFF, 0, type, B_FALSE); 3631 3632 arc_buf_t *buf = NULL; 3633 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE, 3634 B_FALSE, B_FALSE, &buf)); 3635 arc_buf_thaw(buf); 3636 3637 return (buf); 3638 } 3639 3640 /* 3641 * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this 3642 * for bufs containing metadata. 3643 */ 3644 arc_buf_t * 3645 arc_alloc_compressed_buf(spa_t *spa, void *tag, uint64_t psize, uint64_t lsize, 3646 enum zio_compress compression_type, uint8_t complevel) 3647 { 3648 ASSERT3U(lsize, >, 0); 3649 ASSERT3U(lsize, >=, psize); 3650 ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF); 3651 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3652 3653 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 3654 B_FALSE, compression_type, complevel, ARC_BUFC_DATA, B_FALSE); 3655 3656 arc_buf_t *buf = NULL; 3657 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, 3658 B_TRUE, B_FALSE, B_FALSE, &buf)); 3659 arc_buf_thaw(buf); 3660 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3661 3662 if (!arc_buf_is_shared(buf)) { 3663 /* 3664 * To ensure that the hdr has the correct data in it if we call 3665 * arc_untransform() on this buf before it's been written to 3666 * disk, it's easiest if we just set up sharing between the 3667 * buf and the hdr. 3668 */ 3669 arc_hdr_free_abd(hdr, B_FALSE); 3670 arc_share_buf(hdr, buf); 3671 } 3672 3673 return (buf); 3674 } 3675 3676 arc_buf_t * 3677 arc_alloc_raw_buf(spa_t *spa, void *tag, uint64_t dsobj, boolean_t byteorder, 3678 const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, 3679 dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 3680 enum zio_compress compression_type, uint8_t complevel) 3681 { 3682 arc_buf_hdr_t *hdr; 3683 arc_buf_t *buf; 3684 arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ? 
3685 ARC_BUFC_METADATA : ARC_BUFC_DATA; 3686 3687 ASSERT3U(lsize, >, 0); 3688 ASSERT3U(lsize, >=, psize); 3689 ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF); 3690 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3691 3692 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE, 3693 compression_type, complevel, type, B_TRUE); 3694 3695 hdr->b_crypt_hdr.b_dsobj = dsobj; 3696 hdr->b_crypt_hdr.b_ot = ot; 3697 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3698 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3699 bcopy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); 3700 bcopy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); 3701 bcopy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); 3702 3703 /* 3704 * This buffer will be considered encrypted even if the ot is not an 3705 * encrypted type. It will become authenticated instead in 3706 * arc_write_ready(). 3707 */ 3708 buf = NULL; 3709 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE, 3710 B_FALSE, B_FALSE, &buf)); 3711 arc_buf_thaw(buf); 3712 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3713 3714 return (buf); 3715 } 3716 3717 static void 3718 l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr, 3719 boolean_t state_only) 3720 { 3721 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3722 l2arc_dev_t *dev = l2hdr->b_dev; 3723 uint64_t lsize = HDR_GET_LSIZE(hdr); 3724 uint64_t psize = HDR_GET_PSIZE(hdr); 3725 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3726 arc_buf_contents_t type = hdr->b_type; 3727 int64_t lsize_s; 3728 int64_t psize_s; 3729 int64_t asize_s; 3730 3731 if (incr) { 3732 lsize_s = lsize; 3733 psize_s = psize; 3734 asize_s = asize; 3735 } else { 3736 lsize_s = -lsize; 3737 psize_s = -psize; 3738 asize_s = -asize; 3739 } 3740 3741 /* If the buffer is a prefetch, count it as such. */ 3742 if (HDR_PREFETCH(hdr)) { 3743 ARCSTAT_INCR(arcstat_l2_prefetch_asize, asize_s); 3744 } else { 3745 /* 3746 * We use the value stored in the L2 header upon initial 3747 * caching in L2ARC. This value will be updated in case 3748 * an MRU/MRU_ghost buffer transitions to MFU but the L2ARC 3749 * metadata (log entry) cannot currently be updated. Having 3750 * the ARC state in the L2 header solves the problem of a 3751 * possibly absent L1 header (apparent in buffers restored 3752 * from persistent L2ARC). 
3753 */ 3754 switch (hdr->b_l2hdr.b_arcs_state) { 3755 case ARC_STATE_MRU_GHOST: 3756 case ARC_STATE_MRU: 3757 ARCSTAT_INCR(arcstat_l2_mru_asize, asize_s); 3758 break; 3759 case ARC_STATE_MFU_GHOST: 3760 case ARC_STATE_MFU: 3761 ARCSTAT_INCR(arcstat_l2_mfu_asize, asize_s); 3762 break; 3763 default: 3764 break; 3765 } 3766 } 3767 3768 if (state_only) 3769 return; 3770 3771 ARCSTAT_INCR(arcstat_l2_psize, psize_s); 3772 ARCSTAT_INCR(arcstat_l2_lsize, lsize_s); 3773 3774 switch (type) { 3775 case ARC_BUFC_DATA: 3776 ARCSTAT_INCR(arcstat_l2_bufc_data_asize, asize_s); 3777 break; 3778 case ARC_BUFC_METADATA: 3779 ARCSTAT_INCR(arcstat_l2_bufc_metadata_asize, asize_s); 3780 break; 3781 default: 3782 break; 3783 } 3784 } 3785 3786 3787 static void 3788 arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr) 3789 { 3790 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3791 l2arc_dev_t *dev = l2hdr->b_dev; 3792 uint64_t psize = HDR_GET_PSIZE(hdr); 3793 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3794 3795 ASSERT(MUTEX_HELD(&dev->l2ad_mtx)); 3796 ASSERT(HDR_HAS_L2HDR(hdr)); 3797 3798 list_remove(&dev->l2ad_buflist, hdr); 3799 3800 l2arc_hdr_arcstats_decrement(hdr); 3801 vdev_space_update(dev->l2ad_vdev, -asize, 0, 0); 3802 3803 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), 3804 hdr); 3805 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 3806 } 3807 3808 static void 3809 arc_hdr_destroy(arc_buf_hdr_t *hdr) 3810 { 3811 if (HDR_HAS_L1HDR(hdr)) { 3812 ASSERT(hdr->b_l1hdr.b_buf == NULL || 3813 hdr->b_l1hdr.b_bufcnt > 0); 3814 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3815 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3816 } 3817 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3818 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 3819 3820 if (HDR_HAS_L2HDR(hdr)) { 3821 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3822 boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx); 3823 3824 if (!buflist_held) 3825 mutex_enter(&dev->l2ad_mtx); 3826 3827 /* 3828 * Even though we checked this conditional above, we 3829 * need to check this again now that we have the 3830 * l2ad_mtx. This is because we could be racing with 3831 * another thread calling l2arc_evict() which might have 3832 * destroyed this header's L2 portion as we were waiting 3833 * to acquire the l2ad_mtx. If that happens, we don't 3834 * want to re-destroy the header's L2 portion. 3835 */ 3836 if (HDR_HAS_L2HDR(hdr)) 3837 arc_hdr_l2hdr_destroy(hdr); 3838 3839 if (!buflist_held) 3840 mutex_exit(&dev->l2ad_mtx); 3841 } 3842 3843 /* 3844 * The header's identity can only be safely discarded once it is no 3845 * longer discoverable. This requires removing it from the hash table 3846 * and the l2arc header list. After this point the hash lock cannot 3847 * be used to protect the header.
3848 */ 3849 if (!HDR_EMPTY(hdr)) 3850 buf_discard_identity(hdr); 3851 3852 if (HDR_HAS_L1HDR(hdr)) { 3853 arc_cksum_free(hdr); 3854 3855 while (hdr->b_l1hdr.b_buf != NULL) 3856 arc_buf_destroy_impl(hdr->b_l1hdr.b_buf); 3857 3858 if (hdr->b_l1hdr.b_pabd != NULL) 3859 arc_hdr_free_abd(hdr, B_FALSE); 3860 3861 if (HDR_HAS_RABD(hdr)) 3862 arc_hdr_free_abd(hdr, B_TRUE); 3863 } 3864 3865 ASSERT3P(hdr->b_hash_next, ==, NULL); 3866 if (HDR_HAS_L1HDR(hdr)) { 3867 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3868 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 3869 3870 if (!HDR_PROTECTED(hdr)) { 3871 kmem_cache_free(hdr_full_cache, hdr); 3872 } else { 3873 kmem_cache_free(hdr_full_crypt_cache, hdr); 3874 } 3875 } else { 3876 kmem_cache_free(hdr_l2only_cache, hdr); 3877 } 3878 } 3879 3880 void 3881 arc_buf_destroy(arc_buf_t *buf, void* tag) 3882 { 3883 arc_buf_hdr_t *hdr = buf->b_hdr; 3884 3885 if (hdr->b_l1hdr.b_state == arc_anon) { 3886 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 3887 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3888 VERIFY0(remove_reference(hdr, NULL, tag)); 3889 arc_hdr_destroy(hdr); 3890 return; 3891 } 3892 3893 kmutex_t *hash_lock = HDR_LOCK(hdr); 3894 mutex_enter(hash_lock); 3895 3896 ASSERT3P(hdr, ==, buf->b_hdr); 3897 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 3898 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 3899 ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon); 3900 ASSERT3P(buf->b_data, !=, NULL); 3901 3902 (void) remove_reference(hdr, hash_lock, tag); 3903 arc_buf_destroy_impl(buf); 3904 mutex_exit(hash_lock); 3905 } 3906 3907 /* 3908 * Evict the arc_buf_hdr that is provided as a parameter. The resultant 3909 * state of the header is dependent on its state prior to entering this 3910 * function. The following transitions are possible: 3911 * 3912 * - arc_mru -> arc_mru_ghost 3913 * - arc_mfu -> arc_mfu_ghost 3914 * - arc_mru_ghost -> arc_l2c_only 3915 * - arc_mru_ghost -> deleted 3916 * - arc_mfu_ghost -> arc_l2c_only 3917 * - arc_mfu_ghost -> deleted 3918 */ 3919 static int64_t 3920 arc_evict_hdr(arc_buf_hdr_t *hdr, kmutex_t *hash_lock) 3921 { 3922 arc_state_t *evicted_state, *state; 3923 int64_t bytes_evicted = 0; 3924 int min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ? 3925 arc_min_prescient_prefetch_ms : arc_min_prefetch_ms; 3926 3927 ASSERT(MUTEX_HELD(hash_lock)); 3928 ASSERT(HDR_HAS_L1HDR(hdr)); 3929 3930 state = hdr->b_l1hdr.b_state; 3931 if (GHOST_STATE(state)) { 3932 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3933 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3934 3935 /* 3936 * l2arc_write_buffers() relies on a header's L1 portion 3937 * (i.e. its b_pabd field) during its write phase. 3938 * Thus, we cannot push a header onto the arc_l2c_only 3939 * state (removing its L1 piece) until the header is 3940 * done being written to the l2arc. 3941 */ 3942 if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) { 3943 ARCSTAT_BUMP(arcstat_evict_l2_skip); 3944 return (bytes_evicted); 3945 } 3946 3947 ARCSTAT_BUMP(arcstat_deleted); 3948 bytes_evicted += HDR_GET_LSIZE(hdr); 3949 3950 DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr); 3951 3952 if (HDR_HAS_L2HDR(hdr)) { 3953 ASSERT(hdr->b_l1hdr.b_pabd == NULL); 3954 ASSERT(!HDR_HAS_RABD(hdr)); 3955 /* 3956 * This buffer is cached on the 2nd Level ARC; 3957 * don't destroy the header. 3958 */ 3959 arc_change_state(arc_l2c_only, hdr, hash_lock); 3960 /* 3961 * dropping from L1+L2 cached to L2-only, 3962 * realloc to remove the L1 header.
3963 */ 3964 hdr = arc_hdr_realloc(hdr, hdr_full_cache, 3965 hdr_l2only_cache); 3966 } else { 3967 arc_change_state(arc_anon, hdr, hash_lock); 3968 arc_hdr_destroy(hdr); 3969 } 3970 return (bytes_evicted); 3971 } 3972 3973 ASSERT(state == arc_mru || state == arc_mfu); 3974 evicted_state = (state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost; 3975 3976 /* prefetch buffers have a minimum lifespan */ 3977 if (HDR_IO_IN_PROGRESS(hdr) || 3978 ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) && 3979 ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access < 3980 MSEC_TO_TICK(min_lifetime))) { 3981 ARCSTAT_BUMP(arcstat_evict_skip); 3982 return (bytes_evicted); 3983 } 3984 3985 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3986 while (hdr->b_l1hdr.b_buf) { 3987 arc_buf_t *buf = hdr->b_l1hdr.b_buf; 3988 if (!mutex_tryenter(&buf->b_evict_lock)) { 3989 ARCSTAT_BUMP(arcstat_mutex_miss); 3990 break; 3991 } 3992 if (buf->b_data != NULL) 3993 bytes_evicted += HDR_GET_LSIZE(hdr); 3994 mutex_exit(&buf->b_evict_lock); 3995 arc_buf_destroy_impl(buf); 3996 } 3997 3998 if (HDR_HAS_L2HDR(hdr)) { 3999 ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr)); 4000 } else { 4001 if (l2arc_write_eligible(hdr->b_spa, hdr)) { 4002 ARCSTAT_INCR(arcstat_evict_l2_eligible, 4003 HDR_GET_LSIZE(hdr)); 4004 4005 switch (state->arcs_state) { 4006 case ARC_STATE_MRU: 4007 ARCSTAT_INCR( 4008 arcstat_evict_l2_eligible_mru, 4009 HDR_GET_LSIZE(hdr)); 4010 break; 4011 case ARC_STATE_MFU: 4012 ARCSTAT_INCR( 4013 arcstat_evict_l2_eligible_mfu, 4014 HDR_GET_LSIZE(hdr)); 4015 break; 4016 default: 4017 break; 4018 } 4019 } else { 4020 ARCSTAT_INCR(arcstat_evict_l2_ineligible, 4021 HDR_GET_LSIZE(hdr)); 4022 } 4023 } 4024 4025 if (hdr->b_l1hdr.b_bufcnt == 0) { 4026 arc_cksum_free(hdr); 4027 4028 bytes_evicted += arc_hdr_size(hdr); 4029 4030 /* 4031 * If this hdr is being evicted and has a compressed 4032 * buffer then we discard it here before we change states. 4033 * This ensures that the accounting is updated correctly 4034 * in arc_free_data_impl(). 
4035 */ 4036 if (hdr->b_l1hdr.b_pabd != NULL) 4037 arc_hdr_free_abd(hdr, B_FALSE); 4038 4039 if (HDR_HAS_RABD(hdr)) 4040 arc_hdr_free_abd(hdr, B_TRUE); 4041 4042 arc_change_state(evicted_state, hdr, hash_lock); 4043 ASSERT(HDR_IN_HASH_TABLE(hdr)); 4044 arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE); 4045 DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr); 4046 } 4047 4048 return (bytes_evicted); 4049 } 4050 4051 static void 4052 arc_set_need_free(void) 4053 { 4054 ASSERT(MUTEX_HELD(&arc_evict_lock)); 4055 int64_t remaining = arc_free_memory() - arc_sys_free / 2; 4056 arc_evict_waiter_t *aw = list_tail(&arc_evict_waiters); 4057 if (aw == NULL) { 4058 arc_need_free = MAX(-remaining, 0); 4059 } else { 4060 arc_need_free = 4061 MAX(-remaining, (int64_t)(aw->aew_count - arc_evict_count)); 4062 } 4063 } 4064 4065 static uint64_t 4066 arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker, 4067 uint64_t spa, int64_t bytes) 4068 { 4069 multilist_sublist_t *mls; 4070 uint64_t bytes_evicted = 0; 4071 arc_buf_hdr_t *hdr; 4072 kmutex_t *hash_lock; 4073 int evict_count = 0; 4074 4075 ASSERT3P(marker, !=, NULL); 4076 IMPLY(bytes < 0, bytes == ARC_EVICT_ALL); 4077 4078 mls = multilist_sublist_lock(ml, idx); 4079 4080 for (hdr = multilist_sublist_prev(mls, marker); hdr != NULL; 4081 hdr = multilist_sublist_prev(mls, marker)) { 4082 if ((bytes != ARC_EVICT_ALL && bytes_evicted >= bytes) || 4083 (evict_count >= zfs_arc_evict_batch_limit)) 4084 break; 4085 4086 /* 4087 * To keep our iteration location, move the marker 4088 * forward. Since we're not holding hdr's hash lock, we 4089 * must be very careful and not remove 'hdr' from the 4090 * sublist. Otherwise, other consumers might mistake the 4091 * 'hdr' as not being on a sublist when they call the 4092 * multilist_link_active() function (they all rely on 4093 * the hash lock protecting concurrent insertions and 4094 * removals). multilist_sublist_move_forward() was 4095 * specifically implemented to ensure this is the case 4096 * (only 'marker' will be removed and re-inserted). 4097 */ 4098 multilist_sublist_move_forward(mls, marker); 4099 4100 /* 4101 * The only case where the b_spa field should ever be 4102 * zero, is the marker headers inserted by 4103 * arc_evict_state(). It's possible for multiple threads 4104 * to be calling arc_evict_state() concurrently (e.g. 4105 * dsl_pool_close() and zio_inject_fault()), so we must 4106 * skip any markers we see from these other threads. 4107 */ 4108 if (hdr->b_spa == 0) 4109 continue; 4110 4111 /* we're only interested in evicting buffers of a certain spa */ 4112 if (spa != 0 && hdr->b_spa != spa) { 4113 ARCSTAT_BUMP(arcstat_evict_skip); 4114 continue; 4115 } 4116 4117 hash_lock = HDR_LOCK(hdr); 4118 4119 /* 4120 * We aren't calling this function from any code path 4121 * that would already be holding a hash lock, so we're 4122 * asserting on this assumption to be defensive in case 4123 * this ever changes. Without this check, it would be 4124 * possible to incorrectly increment arcstat_mutex_miss 4125 * below (e.g. if the code changed such that we called 4126 * this function with a hash lock held). 4127 */ 4128 ASSERT(!MUTEX_HELD(hash_lock)); 4129 4130 if (mutex_tryenter(hash_lock)) { 4131 uint64_t evicted = arc_evict_hdr(hdr, hash_lock); 4132 mutex_exit(hash_lock); 4133 4134 bytes_evicted += evicted; 4135 4136 /* 4137 * If evicted is zero, arc_evict_hdr() must have 4138 * decided to skip this header, don't increment 4139 * evict_count in this case. 
4140 */ 4141 if (evicted != 0) 4142 evict_count++; 4143 4144 } else { 4145 ARCSTAT_BUMP(arcstat_mutex_miss); 4146 } 4147 } 4148 4149 multilist_sublist_unlock(mls); 4150 4151 /* 4152 * Increment the count of evicted bytes, and wake up any threads that 4153 * are waiting for the count to reach this value. Since the list is 4154 * ordered by ascending aew_count, we pop off the beginning of the 4155 * list until we reach the end, or a waiter that's past the current 4156 * "count". Doing this outside the loop reduces the number of times 4157 * we need to acquire the global arc_evict_lock. 4158 * 4159 * Only wake when there's sufficient free memory in the system 4160 * (specifically, arc_sys_free/2, which by default is a bit more than 4161 * 1/64th of RAM). See the comments in arc_wait_for_eviction(). 4162 */ 4163 mutex_enter(&arc_evict_lock); 4164 arc_evict_count += bytes_evicted; 4165 4166 if ((int64_t)(arc_free_memory() - arc_sys_free / 2) > 0) { 4167 arc_evict_waiter_t *aw; 4168 while ((aw = list_head(&arc_evict_waiters)) != NULL && 4169 aw->aew_count <= arc_evict_count) { 4170 list_remove(&arc_evict_waiters, aw); 4171 cv_broadcast(&aw->aew_cv); 4172 } 4173 } 4174 arc_set_need_free(); 4175 mutex_exit(&arc_evict_lock); 4176 4177 /* 4178 * If the ARC size is reduced from arc_c_max to arc_c_min (especially 4179 * if the average cached block is small), eviction can be on-CPU for 4180 * many seconds. To ensure that other threads that may be bound to 4181 * this CPU are able to make progress, make a voluntary preemption 4182 * call here. 4183 */ 4184 cond_resched(); 4185 4186 return (bytes_evicted); 4187 } 4188 4189 /* 4190 * Evict buffers from the given arc state, until we've removed the 4191 * specified number of bytes. Move the removed buffers to the 4192 * appropriate evict state. 4193 * 4194 * This function makes a "best effort". It skips over any buffers 4195 * it can't get a hash_lock on, and so, may not catch all candidates. 4196 * It may also return without evicting as much space as requested. 4197 * 4198 * If bytes is specified using the special value ARC_EVICT_ALL, this 4199 * will evict all available (i.e. unlocked and evictable) buffers from 4200 * the given arc state; which is used by arc_flush(). 4201 */ 4202 static uint64_t 4203 arc_evict_state(arc_state_t *state, uint64_t spa, int64_t bytes, 4204 arc_buf_contents_t type) 4205 { 4206 uint64_t total_evicted = 0; 4207 multilist_t *ml = state->arcs_list[type]; 4208 int num_sublists; 4209 arc_buf_hdr_t **markers; 4210 4211 IMPLY(bytes < 0, bytes == ARC_EVICT_ALL); 4212 4213 num_sublists = multilist_get_num_sublists(ml); 4214 4215 /* 4216 * If we've tried to evict from each sublist, made some 4217 * progress, but still have not hit the target number of bytes 4218 * to evict, we want to keep trying. The markers allow us to 4219 * pick up where we left off for each individual sublist, rather 4220 * than starting from the tail each time. 4221 */ 4222 markers = kmem_zalloc(sizeof (*markers) * num_sublists, KM_SLEEP); 4223 for (int i = 0; i < num_sublists; i++) { 4224 multilist_sublist_t *mls; 4225 4226 markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP); 4227 4228 /* 4229 * A b_spa of 0 is used to indicate that this header is 4230 * a marker. This fact is used in arc_evict_type() and 4231 * arc_evict_state_impl(). 
4232 */ 4233 markers[i]->b_spa = 0; 4234 4235 mls = multilist_sublist_lock(ml, i); 4236 multilist_sublist_insert_tail(mls, markers[i]); 4237 multilist_sublist_unlock(mls); 4238 } 4239 4240 /* 4241 * While we haven't hit our target number of bytes to evict, or 4242 * we're evicting all available buffers. 4243 */ 4244 while (total_evicted < bytes || bytes == ARC_EVICT_ALL) { 4245 int sublist_idx = multilist_get_random_index(ml); 4246 uint64_t scan_evicted = 0; 4247 4248 /* 4249 * Try to reduce pinned dnodes with a floor of arc_dnode_limit. 4250 * Request that 10% of the LRUs be scanned by the superblock 4251 * shrinker. 4252 */ 4253 if (type == ARC_BUFC_DATA && aggsum_compare(&astat_dnode_size, 4254 arc_dnode_size_limit) > 0) { 4255 arc_prune_async((aggsum_upper_bound(&astat_dnode_size) - 4256 arc_dnode_size_limit) / sizeof (dnode_t) / 4257 zfs_arc_dnode_reduce_percent); 4258 } 4259 4260 /* 4261 * Start eviction using a randomly selected sublist, 4262 * this is to try and evenly balance eviction across all 4263 * sublists. Always starting at the same sublist 4264 * (e.g. index 0) would cause evictions to favor certain 4265 * sublists over others. 4266 */ 4267 for (int i = 0; i < num_sublists; i++) { 4268 uint64_t bytes_remaining; 4269 uint64_t bytes_evicted; 4270 4271 if (bytes == ARC_EVICT_ALL) 4272 bytes_remaining = ARC_EVICT_ALL; 4273 else if (total_evicted < bytes) 4274 bytes_remaining = bytes - total_evicted; 4275 else 4276 break; 4277 4278 bytes_evicted = arc_evict_state_impl(ml, sublist_idx, 4279 markers[sublist_idx], spa, bytes_remaining); 4280 4281 scan_evicted += bytes_evicted; 4282 total_evicted += bytes_evicted; 4283 4284 /* we've reached the end, wrap to the beginning */ 4285 if (++sublist_idx >= num_sublists) 4286 sublist_idx = 0; 4287 } 4288 4289 /* 4290 * If we didn't evict anything during this scan, we have 4291 * no reason to believe we'll evict more during another 4292 * scan, so break the loop. 4293 */ 4294 if (scan_evicted == 0) { 4295 /* This isn't possible, let's make that obvious */ 4296 ASSERT3S(bytes, !=, 0); 4297 4298 /* 4299 * When bytes is ARC_EVICT_ALL, the only way to 4300 * break the loop is when scan_evicted is zero. 4301 * In that case, we actually have evicted enough, 4302 * so we don't want to increment the kstat. 4303 */ 4304 if (bytes != ARC_EVICT_ALL) { 4305 ASSERT3S(total_evicted, <, bytes); 4306 ARCSTAT_BUMP(arcstat_evict_not_enough); 4307 } 4308 4309 break; 4310 } 4311 } 4312 4313 for (int i = 0; i < num_sublists; i++) { 4314 multilist_sublist_t *mls = multilist_sublist_lock(ml, i); 4315 multilist_sublist_remove(mls, markers[i]); 4316 multilist_sublist_unlock(mls); 4317 4318 kmem_cache_free(hdr_full_cache, markers[i]); 4319 } 4320 kmem_free(markers, sizeof (*markers) * num_sublists); 4321 4322 return (total_evicted); 4323 } 4324 4325 /* 4326 * Flush all "evictable" data of the given type from the arc state 4327 * specified. This will not evict any "active" buffers (i.e. referenced). 4328 * 4329 * When 'retry' is set to B_FALSE, the function will make a single pass 4330 * over the state and evict any buffers that it can. Since it doesn't 4331 * continually retry the eviction, it might end up leaving some buffers 4332 * in the ARC due to lock misses. 4333 * 4334 * When 'retry' is set to B_TRUE, the function will continually retry the 4335 * eviction until *all* evictable buffers have been removed from the 4336 * state. As a result, if concurrent insertions into the state are 4337 * allowed (e.g. 
if the ARC isn't shutting down), this function might 4338 * wind up in an infinite loop, continually trying to evict buffers. 4339 */ 4340 static uint64_t 4341 arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type, 4342 boolean_t retry) 4343 { 4344 uint64_t evicted = 0; 4345 4346 while (zfs_refcount_count(&state->arcs_esize[type]) != 0) { 4347 evicted += arc_evict_state(state, spa, ARC_EVICT_ALL, type); 4348 4349 if (!retry) 4350 break; 4351 } 4352 4353 return (evicted); 4354 } 4355 4356 /* 4357 * Evict the specified number of bytes from the state specified, 4358 * restricting eviction to the spa and type given. This function 4359 * prevents us from trying to evict more from a state's list than 4360 * is "evictable", and to skip evicting altogether when passed a 4361 * negative value for "bytes". In contrast, arc_evict_state() will 4362 * evict everything it can, when passed a negative value for "bytes". 4363 */ 4364 static uint64_t 4365 arc_evict_impl(arc_state_t *state, uint64_t spa, int64_t bytes, 4366 arc_buf_contents_t type) 4367 { 4368 int64_t delta; 4369 4370 if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) { 4371 delta = MIN(zfs_refcount_count(&state->arcs_esize[type]), 4372 bytes); 4373 return (arc_evict_state(state, spa, delta, type)); 4374 } 4375 4376 return (0); 4377 } 4378 4379 /* 4380 * The goal of this function is to evict enough meta data buffers from the 4381 * ARC in order to enforce the arc_meta_limit. Achieving this is slightly 4382 * more complicated than it appears because it is common for data buffers 4383 * to have holds on meta data buffers. In addition, dnode meta data buffers 4384 * will be held by the dnodes in the block, preventing them from being freed. 4385 * This means we can't simply traverse the ARC and expect to always find 4386 * enough unheld meta data buffers to release. 4387 * 4388 * Therefore, this function has been updated to make alternating passes 4389 * over the ARC releasing data buffers and then newly unheld meta data 4390 * buffers. This ensures forward progress is maintained and meta_used 4391 * will decrease. Normally this is sufficient, but if required the ARC 4392 * will call the registered prune callbacks causing dentries and inodes to 4393 * be dropped from the VFS cache. This will make dnode meta data buffers 4394 * available for reclaim. 4395 */ 4396 static uint64_t 4397 arc_evict_meta_balanced(uint64_t meta_used) 4398 { 4399 int64_t delta, prune = 0, adjustmnt; 4400 uint64_t total_evicted = 0; 4401 arc_buf_contents_t type = ARC_BUFC_DATA; 4402 int restarts = MAX(zfs_arc_meta_adjust_restarts, 0); 4403 4404 restart: 4405 /* 4406 * This slightly differs from the way we evict from the mru in 4407 * arc_evict because we don't have a "target" value (i.e. no 4408 * "meta" arc_p). As a result, I think we can completely 4409 * cannibalize the metadata in the MRU before we evict the 4410 * metadata from the MFU. I think we probably need to implement a 4411 * "metadata arc_p" value to do this properly. 4412 */ 4413 adjustmnt = meta_used - arc_meta_limit; 4414 4415 if (adjustmnt > 0 && 4416 zfs_refcount_count(&arc_mru->arcs_esize[type]) > 0) { 4417 delta = MIN(zfs_refcount_count(&arc_mru->arcs_esize[type]), 4418 adjustmnt); 4419 total_evicted += arc_evict_impl(arc_mru, 0, delta, type); 4420 adjustmnt -= delta; 4421 } 4422 4423 /* 4424 * We can't afford to recalculate adjustmnt here. If we do, 4425 * new metadata buffers can sneak into the MRU or ANON lists, 4426 * thus penalizing the MFU metadata.
Although the fudge factor is 4427 * small, it has been empirically shown to be significant for 4428 * certain workloads (e.g. creating many empty directories). As 4429 * such, we use the original calculation for adjustmnt, and 4430 * simply decrement the amount of data evicted from the MRU. 4431 */ 4432 4433 if (adjustmnt > 0 && 4434 zfs_refcount_count(&arc_mfu->arcs_esize[type]) > 0) { 4435 delta = MIN(zfs_refcount_count(&arc_mfu->arcs_esize[type]), 4436 adjustmnt); 4437 total_evicted += arc_evict_impl(arc_mfu, 0, delta, type); 4438 } 4439 4440 adjustmnt = meta_used - arc_meta_limit; 4441 4442 if (adjustmnt > 0 && 4443 zfs_refcount_count(&arc_mru_ghost->arcs_esize[type]) > 0) { 4444 delta = MIN(adjustmnt, 4445 zfs_refcount_count(&arc_mru_ghost->arcs_esize[type])); 4446 total_evicted += arc_evict_impl(arc_mru_ghost, 0, delta, type); 4447 adjustmnt -= delta; 4448 } 4449 4450 if (adjustmnt > 0 && 4451 zfs_refcount_count(&arc_mfu_ghost->arcs_esize[type]) > 0) { 4452 delta = MIN(adjustmnt, 4453 zfs_refcount_count(&arc_mfu_ghost->arcs_esize[type])); 4454 total_evicted += arc_evict_impl(arc_mfu_ghost, 0, delta, type); 4455 } 4456 4457 /* 4458 * If after attempting to make the requested adjustment to the ARC 4459 * the meta limit is still being exceeded then request that the 4460 * higher layers drop some cached objects which have holds on ARC 4461 * meta buffers. Requests to the upper layers will be made with 4462 * increasingly large scan sizes until the ARC is below the limit. 4463 */ 4464 if (meta_used > arc_meta_limit) { 4465 if (type == ARC_BUFC_DATA) { 4466 type = ARC_BUFC_METADATA; 4467 } else { 4468 type = ARC_BUFC_DATA; 4469 4470 if (zfs_arc_meta_prune) { 4471 prune += zfs_arc_meta_prune; 4472 arc_prune_async(prune); 4473 } 4474 } 4475 4476 if (restarts > 0) { 4477 restarts--; 4478 goto restart; 4479 } 4480 } 4481 return (total_evicted); 4482 } 4483 4484 /* 4485 * Evict metadata buffers from the cache, such that arc_meta_used is 4486 * capped by the arc_meta_limit tunable. 4487 */ 4488 static uint64_t 4489 arc_evict_meta_only(uint64_t meta_used) 4490 { 4491 uint64_t total_evicted = 0; 4492 int64_t target; 4493 4494 /* 4495 * If we're over the meta limit, we want to evict enough 4496 * metadata to get back under the meta limit. We don't want to 4497 * evict so much that we drop the MRU below arc_p, though. If 4498 * we're over the meta limit more than we're over arc_p, we 4499 * evict some from the MRU here, and some from the MFU below. 4500 */ 4501 target = MIN((int64_t)(meta_used - arc_meta_limit), 4502 (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) + 4503 zfs_refcount_count(&arc_mru->arcs_size) - arc_p)); 4504 4505 total_evicted += arc_evict_impl(arc_mru, 0, target, ARC_BUFC_METADATA); 4506 4507 /* 4508 * Similar to the above, we want to evict enough bytes to get us 4509 * below the meta limit, but not so much as to drop us below the 4510 * space allotted to the MFU (which is defined as arc_c - arc_p). 
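 *
 * As a rough, illustrative example (hypothetical numbers, not defaults):
 * with arc_c = 1000, arc_p = 600, an MFU size of 500, and meta_used
 * exceeding arc_meta_limit by 50, the target below works out to
 * MIN(50, 500 - (1000 - 600)) = 50, so the whole overage can be evicted
 * from MFU metadata without dropping the MFU below its arc_c - arc_p
 * allotment.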
4511 */ 4512 target = MIN((int64_t)(meta_used - arc_meta_limit), 4513 (int64_t)(zfs_refcount_count(&arc_mfu->arcs_size) - 4514 (arc_c - arc_p))); 4515 4516 total_evicted += arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); 4517 4518 return (total_evicted); 4519 } 4520 4521 static uint64_t 4522 arc_evict_meta(uint64_t meta_used) 4523 { 4524 if (zfs_arc_meta_strategy == ARC_STRATEGY_META_ONLY) 4525 return (arc_evict_meta_only(meta_used)); 4526 else 4527 return (arc_evict_meta_balanced(meta_used)); 4528 } 4529 4530 /* 4531 * Return the type of the oldest buffer in the given arc state 4532 * 4533 * This function will select a random sublist of type ARC_BUFC_DATA and 4534 * a random sublist of type ARC_BUFC_METADATA. The tail of each sublist 4535 * is compared, and the type which contains the "older" buffer will be 4536 * returned. 4537 */ 4538 static arc_buf_contents_t 4539 arc_evict_type(arc_state_t *state) 4540 { 4541 multilist_t *data_ml = state->arcs_list[ARC_BUFC_DATA]; 4542 multilist_t *meta_ml = state->arcs_list[ARC_BUFC_METADATA]; 4543 int data_idx = multilist_get_random_index(data_ml); 4544 int meta_idx = multilist_get_random_index(meta_ml); 4545 multilist_sublist_t *data_mls; 4546 multilist_sublist_t *meta_mls; 4547 arc_buf_contents_t type; 4548 arc_buf_hdr_t *data_hdr; 4549 arc_buf_hdr_t *meta_hdr; 4550 4551 /* 4552 * We keep the sublist lock until we're finished, to prevent 4553 * the headers from being destroyed via arc_evict_state(). 4554 */ 4555 data_mls = multilist_sublist_lock(data_ml, data_idx); 4556 meta_mls = multilist_sublist_lock(meta_ml, meta_idx); 4557 4558 /* 4559 * These two loops are to ensure we skip any markers that 4560 * might be at the tail of the lists due to arc_evict_state(). 4561 */ 4562 4563 for (data_hdr = multilist_sublist_tail(data_mls); data_hdr != NULL; 4564 data_hdr = multilist_sublist_prev(data_mls, data_hdr)) { 4565 if (data_hdr->b_spa != 0) 4566 break; 4567 } 4568 4569 for (meta_hdr = multilist_sublist_tail(meta_mls); meta_hdr != NULL; 4570 meta_hdr = multilist_sublist_prev(meta_mls, meta_hdr)) { 4571 if (meta_hdr->b_spa != 0) 4572 break; 4573 } 4574 4575 if (data_hdr == NULL && meta_hdr == NULL) { 4576 type = ARC_BUFC_DATA; 4577 } else if (data_hdr == NULL) { 4578 ASSERT3P(meta_hdr, !=, NULL); 4579 type = ARC_BUFC_METADATA; 4580 } else if (meta_hdr == NULL) { 4581 ASSERT3P(data_hdr, !=, NULL); 4582 type = ARC_BUFC_DATA; 4583 } else { 4584 ASSERT3P(data_hdr, !=, NULL); 4585 ASSERT3P(meta_hdr, !=, NULL); 4586 4587 /* The headers can't be on the sublist without an L1 header */ 4588 ASSERT(HDR_HAS_L1HDR(data_hdr)); 4589 ASSERT(HDR_HAS_L1HDR(meta_hdr)); 4590 4591 if (data_hdr->b_l1hdr.b_arc_access < 4592 meta_hdr->b_l1hdr.b_arc_access) { 4593 type = ARC_BUFC_DATA; 4594 } else { 4595 type = ARC_BUFC_METADATA; 4596 } 4597 } 4598 4599 multilist_sublist_unlock(meta_mls); 4600 multilist_sublist_unlock(data_mls); 4601 4602 return (type); 4603 } 4604 4605 /* 4606 * Evict buffers from the cache, such that arc_size is capped by arc_c. 4607 */ 4608 static uint64_t 4609 arc_evict(void) 4610 { 4611 uint64_t total_evicted = 0; 4612 uint64_t bytes; 4613 int64_t target; 4614 uint64_t asize = aggsum_value(&arc_size); 4615 uint64_t ameta = aggsum_value(&arc_meta_used); 4616 4617 /* 4618 * If we're over arc_meta_limit, we want to correct that before 4619 * potentially evicting data buffers below. 
4620 */ 4621 total_evicted += arc_evict_meta(ameta); 4622 4623 /* 4624 * Adjust MRU size 4625 * 4626 * If we're over the target cache size, we want to evict enough 4627 * from the list to get back to our target size. We don't want 4628 * to evict too much from the MRU, such that it drops below 4629 * arc_p. So, if we're over our target cache size more than 4630 * the MRU is over arc_p, we'll evict enough to get back to 4631 * arc_p here, and then evict more from the MFU below. 4632 */ 4633 target = MIN((int64_t)(asize - arc_c), 4634 (int64_t)(zfs_refcount_count(&arc_anon->arcs_size) + 4635 zfs_refcount_count(&arc_mru->arcs_size) + ameta - arc_p)); 4636 4637 /* 4638 * If we're below arc_meta_min, always prefer to evict data. 4639 * Otherwise, try to satisfy the requested number of bytes to 4640 * evict from the type which contains older buffers; in an 4641 * effort to keep newer buffers in the cache regardless of their 4642 * type. If we cannot satisfy the number of bytes from this 4643 * type, spill over into the next type. 4644 */ 4645 if (arc_evict_type(arc_mru) == ARC_BUFC_METADATA && 4646 ameta > arc_meta_min) { 4647 bytes = arc_evict_impl(arc_mru, 0, target, ARC_BUFC_METADATA); 4648 total_evicted += bytes; 4649 4650 /* 4651 * If we couldn't evict our target number of bytes from 4652 * metadata, we try to get the rest from data. 4653 */ 4654 target -= bytes; 4655 4656 total_evicted += 4657 arc_evict_impl(arc_mru, 0, target, ARC_BUFC_DATA); 4658 } else { 4659 bytes = arc_evict_impl(arc_mru, 0, target, ARC_BUFC_DATA); 4660 total_evicted += bytes; 4661 4662 /* 4663 * If we couldn't evict our target number of bytes from 4664 * data, we try to get the rest from metadata. 4665 */ 4666 target -= bytes; 4667 4668 total_evicted += 4669 arc_evict_impl(arc_mru, 0, target, ARC_BUFC_METADATA); 4670 } 4671 4672 /* 4673 * Re-sum ARC stats after the first round of evictions. 4674 */ 4675 asize = aggsum_value(&arc_size); 4676 ameta = aggsum_value(&arc_meta_used); 4677 4678 4679 /* 4680 * Adjust MFU size 4681 * 4682 * Now that we've tried to evict enough from the MRU to get its 4683 * size back to arc_p, if we're still above the target cache 4684 * size, we evict the rest from the MFU. 4685 */ 4686 target = asize - arc_c; 4687 4688 if (arc_evict_type(arc_mfu) == ARC_BUFC_METADATA && 4689 ameta > arc_meta_min) { 4690 bytes = arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); 4691 total_evicted += bytes; 4692 4693 /* 4694 * If we couldn't evict our target number of bytes from 4695 * metadata, we try to get the rest from data. 4696 */ 4697 target -= bytes; 4698 4699 total_evicted += 4700 arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_DATA); 4701 } else { 4702 bytes = arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_DATA); 4703 total_evicted += bytes; 4704 4705 /* 4706 * If we couldn't evict our target number of bytes from 4707 * data, we try to get the rest from metadata. 4708 */ 4709 target -= bytes; 4710 4711 total_evicted += 4712 arc_evict_impl(arc_mfu, 0, target, ARC_BUFC_METADATA); 4713 } 4714 4715 /* 4716 * Adjust ghost lists 4717 * 4718 * In addition to the above, the ARC also defines target values 4719 * for the ghost lists. The sum of the mru list and mru ghost 4720 * list should never exceed the target size of the cache, and 4721 * the sum of the mru list, mfu list, mru ghost list, and mfu 4722 * ghost list should never exceed twice the target size of the 4723 * cache. The following logic enforces these limits on the ghost 4724 * caches, and evicts from them as needed.
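 *
 * For a concrete (purely illustrative) example: with arc_c = 1000, an
 * mru list of 700 and an mru ghost list of 400, the first target below
 * is 700 + 400 - 1000 = 100, i.e. roughly 100 bytes are trimmed from
 * the mru ghost list to restore mru + mru ghost <= arc_c.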
4725 */ 4726 target = zfs_refcount_count(&arc_mru->arcs_size) + 4727 zfs_refcount_count(&arc_mru_ghost->arcs_size) - arc_c; 4728 4729 bytes = arc_evict_impl(arc_mru_ghost, 0, target, ARC_BUFC_DATA); 4730 total_evicted += bytes; 4731 4732 target -= bytes; 4733 4734 total_evicted += 4735 arc_evict_impl(arc_mru_ghost, 0, target, ARC_BUFC_METADATA); 4736 4737 /* 4738 * We assume the sum of the mru list and mfu list is less than 4739 * or equal to arc_c (we enforced this above), which means we 4740 * can use the simpler of the two equations below: 4741 * 4742 * mru + mfu + mru ghost + mfu ghost <= 2 * arc_c 4743 * mru ghost + mfu ghost <= arc_c 4744 */ 4745 target = zfs_refcount_count(&arc_mru_ghost->arcs_size) + 4746 zfs_refcount_count(&arc_mfu_ghost->arcs_size) - arc_c; 4747 4748 bytes = arc_evict_impl(arc_mfu_ghost, 0, target, ARC_BUFC_DATA); 4749 total_evicted += bytes; 4750 4751 target -= bytes; 4752 4753 total_evicted += 4754 arc_evict_impl(arc_mfu_ghost, 0, target, ARC_BUFC_METADATA); 4755 4756 return (total_evicted); 4757 } 4758 4759 void 4760 arc_flush(spa_t *spa, boolean_t retry) 4761 { 4762 uint64_t guid = 0; 4763 4764 /* 4765 * If retry is B_TRUE, a spa must not be specified since we have 4766 * no good way to determine if all of a spa's buffers have been 4767 * evicted from an arc state. 4768 */ 4769 ASSERT(!retry || spa == 0); 4770 4771 if (spa != NULL) 4772 guid = spa_load_guid(spa); 4773 4774 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry); 4775 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry); 4776 4777 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry); 4778 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry); 4779 4780 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry); 4781 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry); 4782 4783 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry); 4784 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry); 4785 } 4786 4787 void 4788 arc_reduce_target_size(int64_t to_free) 4789 { 4790 uint64_t asize = aggsum_value(&arc_size); 4791 4792 /* 4793 * All callers want the ARC to actually evict (at least) this much 4794 * memory. Therefore we reduce from the lower of the current size and 4795 * the target size. This way, even if arc_c is much higher than 4796 * arc_size (as can be the case after many calls to arc_freed()), we will 4797 * immediately have arc_c < arc_size and therefore the arc_evict_zthr 4798 * will evict. 4799 */ 4800 uint64_t c = MIN(arc_c, asize); 4801 4802 if (c > to_free && c - to_free > arc_c_min) { 4803 arc_c = c - to_free; 4804 atomic_add_64(&arc_p, -(arc_p >> arc_shrink_shift)); 4805 if (arc_p > arc_c) 4806 arc_p = (arc_c >> 1); 4807 ASSERT(arc_c >= arc_c_min); 4808 ASSERT((int64_t)arc_p >= 0); 4809 } else { 4810 arc_c = arc_c_min; 4811 } 4812 4813 if (asize > arc_c) { 4814 /* See comment in arc_evict_cb_check() on why lock+flag */ 4815 mutex_enter(&arc_evict_lock); 4816 arc_evict_needed = B_TRUE; 4817 mutex_exit(&arc_evict_lock); 4818 zthr_wakeup(arc_evict_zthr); 4819 } 4820 } 4821 4822 /* 4823 * Determine if the system is under memory pressure and is asking 4824 * to reclaim memory. A return value of B_TRUE indicates that the system 4825 * is under memory pressure and that the arc should adjust accordingly.
4826 */ 4827 boolean_t 4828 arc_reclaim_needed(void) 4829 { 4830 return (arc_available_memory() < 0); 4831 } 4832 4833 void 4834 arc_kmem_reap_soon(void) 4835 { 4836 size_t i; 4837 kmem_cache_t *prev_cache = NULL; 4838 kmem_cache_t *prev_data_cache = NULL; 4839 extern kmem_cache_t *zio_buf_cache[]; 4840 extern kmem_cache_t *zio_data_buf_cache[]; 4841 4842 #ifdef _KERNEL 4843 if ((aggsum_compare(&arc_meta_used, arc_meta_limit) >= 0) && 4844 zfs_arc_meta_prune) { 4845 /* 4846 * We are exceeding our meta-data cache limit. 4847 * Prune some entries to release holds on meta-data. 4848 */ 4849 arc_prune_async(zfs_arc_meta_prune); 4850 } 4851 #if defined(_ILP32) 4852 /* 4853 * Reclaim unused memory from all kmem caches. 4854 */ 4855 kmem_reap(); 4856 #endif 4857 #endif 4858 4859 for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) { 4860 #if defined(_ILP32) 4861 /* reach upper limit of cache size on 32-bit */ 4862 if (zio_buf_cache[i] == NULL) 4863 break; 4864 #endif 4865 if (zio_buf_cache[i] != prev_cache) { 4866 prev_cache = zio_buf_cache[i]; 4867 kmem_cache_reap_now(zio_buf_cache[i]); 4868 } 4869 if (zio_data_buf_cache[i] != prev_data_cache) { 4870 prev_data_cache = zio_data_buf_cache[i]; 4871 kmem_cache_reap_now(zio_data_buf_cache[i]); 4872 } 4873 } 4874 kmem_cache_reap_now(buf_cache); 4875 kmem_cache_reap_now(hdr_full_cache); 4876 kmem_cache_reap_now(hdr_l2only_cache); 4877 kmem_cache_reap_now(zfs_btree_leaf_cache); 4878 abd_cache_reap_now(); 4879 } 4880 4881 /* ARGSUSED */ 4882 static boolean_t 4883 arc_evict_cb_check(void *arg, zthr_t *zthr) 4884 { 4885 #ifdef ZFS_DEBUG 4886 /* 4887 * This is necessary in order to keep the kstat information 4888 * up to date for tools that display kstat data such as the 4889 * mdb ::arc dcmd and the Linux crash utility. These tools 4890 * typically do not call kstat's update function, but simply 4891 * dump out stats from the most recent update. Without 4892 * this call, these commands may show stale stats for the 4893 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even 4894 * with this call, the data might be out of date if the 4895 * evict thread hasn't been woken recently; but that should 4896 * suffice. The arc_state_t structures can be queried 4897 * directly if more accurate information is needed. 4898 */ 4899 if (arc_ksp != NULL) 4900 arc_ksp->ks_update(arc_ksp, KSTAT_READ); 4901 #endif 4902 4903 /* 4904 * We have to rely on arc_wait_for_eviction() to tell us when to 4905 * evict, rather than checking if we are overflowing here, so that we 4906 * are sure to not leave arc_wait_for_eviction() waiting on aew_cv. 4907 * If we have become "not overflowing" since arc_wait_for_eviction() 4908 * checked, we need to wake it up. We could broadcast the CV here, 4909 * but arc_wait_for_eviction() may have not yet gone to sleep. We 4910 * would need to use a mutex to ensure that this function doesn't 4911 * broadcast until arc_wait_for_eviction() has gone to sleep (e.g. 4912 * the arc_evict_lock). However, the lock ordering of such a lock 4913 * would necessarily be incorrect with respect to the zthr_lock, 4914 * which is held before this function is called, and is held by 4915 * arc_wait_for_eviction() when it calls zthr_wakeup(). 4916 */ 4917 return (arc_evict_needed); 4918 } 4919 4920 /* 4921 * Keep arc_size under arc_c by running arc_evict which evicts data 4922 * from the ARC. 
4923 */ 4924 /* ARGSUSED */ 4925 static void 4926 arc_evict_cb(void *arg, zthr_t *zthr) 4927 { 4928 uint64_t evicted = 0; 4929 fstrans_cookie_t cookie = spl_fstrans_mark(); 4930 4931 /* Evict from cache */ 4932 evicted = arc_evict(); 4933 4934 /* 4935 * If evicted is zero, we couldn't evict anything 4936 * via arc_evict(). This could be due to hash lock 4937 * collisions, but more likely due to the majority of 4938 * arc buffers being unevictable. Therefore, even if 4939 * arc_size is above arc_c, another pass is unlikely to 4940 * be helpful and could potentially cause us to enter an 4941 * infinite loop. Additionally, zthr_iscancelled() is 4942 * checked here so that if the arc is shutting down, the 4943 * broadcast will wake any remaining arc evict waiters. 4944 */ 4945 mutex_enter(&arc_evict_lock); 4946 arc_evict_needed = !zthr_iscancelled(arc_evict_zthr) && 4947 evicted > 0 && aggsum_compare(&arc_size, arc_c) > 0; 4948 if (!arc_evict_needed) { 4949 /* 4950 * We're either no longer overflowing, or we 4951 * can't evict anything more, so we should wake 4952 * arc_get_data_impl() sooner. 4953 */ 4954 arc_evict_waiter_t *aw; 4955 while ((aw = list_remove_head(&arc_evict_waiters)) != NULL) { 4956 cv_broadcast(&aw->aew_cv); 4957 } 4958 arc_set_need_free(); 4959 } 4960 mutex_exit(&arc_evict_lock); 4961 spl_fstrans_unmark(cookie); 4962 } 4963 4964 /* ARGSUSED */ 4965 static boolean_t 4966 arc_reap_cb_check(void *arg, zthr_t *zthr) 4967 { 4968 int64_t free_memory = arc_available_memory(); 4969 static int reap_cb_check_counter = 0; 4970 4971 /* 4972 * If a kmem reap is already active, don't schedule more. We must 4973 * check for this because kmem_cache_reap_soon() won't actually 4974 * block on the cache being reaped (this is to prevent callers from 4975 * becoming implicitly blocked by a system-wide kmem reap -- which, 4976 * on a system with many, many full magazines, can take minutes). 4977 */ 4978 if (!kmem_cache_reap_active() && free_memory < 0) { 4979 4980 arc_no_grow = B_TRUE; 4981 arc_warm = B_TRUE; 4982 /* 4983 * Wait at least zfs_grow_retry (default 5) seconds 4984 * before considering growing. 4985 */ 4986 arc_growtime = gethrtime() + SEC2NSEC(arc_grow_retry); 4987 return (B_TRUE); 4988 } else if (free_memory < arc_c >> arc_no_grow_shift) { 4989 arc_no_grow = B_TRUE; 4990 } else if (gethrtime() >= arc_growtime) { 4991 arc_no_grow = B_FALSE; 4992 } 4993 4994 /* 4995 * Called unconditionally every 60 seconds to reclaim unused 4996 * zstd compression and decompression context. This is done 4997 * here to avoid the need for an independent thread. 4998 */ 4999 if (!((reap_cb_check_counter++) % 60)) 5000 zfs_zstd_cache_reap_now(); 5001 5002 return (B_FALSE); 5003 } 5004 5005 /* 5006 * Keep enough free memory in the system by reaping the ARC's kmem 5007 * caches. To cause more slabs to be reapable, we may reduce the 5008 * target size of the cache (arc_c), causing the arc_evict_cb() 5009 * to free more buffers. 5010 */ 5011 /* ARGSUSED */ 5012 static void 5013 arc_reap_cb(void *arg, zthr_t *zthr) 5014 { 5015 int64_t free_memory; 5016 fstrans_cookie_t cookie = spl_fstrans_mark(); 5017 5018 /* 5019 * Kick off asynchronous kmem_reap()'s of all our caches. 5020 */ 5021 arc_kmem_reap_soon(); 5022 5023 /* 5024 * Wait at least arc_kmem_cache_reap_retry_ms between 5025 * arc_kmem_reap_soon() calls. Without this check it is possible to 5026 * end up in a situation where we spend lots of time reaping 5027 * caches, while we're near arc_c_min. 
Waiting here also gives the 5028 * subsequent free memory check a chance of finding that the 5029 * asynchronous reap has already freed enough memory, and we don't 5030 * need to call arc_reduce_target_size(). 5031 */ 5032 delay((hz * arc_kmem_cache_reap_retry_ms + 999) / 1000); 5033 5034 /* 5035 * Reduce the target size as needed to maintain the amount of free 5036 * memory in the system at a fraction of the arc_size (1/128th by 5037 * default). If oversubscribed (free_memory < 0) then reduce the 5038 * target arc_size by the deficit amount plus the fractional 5039 * amount. If free memory is positive but less than the fractional 5040 * amount, reduce by what is needed to hit the fractional amount. 5041 */ 5042 free_memory = arc_available_memory(); 5043 5044 int64_t to_free = 5045 (arc_c >> arc_shrink_shift) - free_memory; 5046 if (to_free > 0) { 5047 arc_reduce_target_size(to_free); 5048 } 5049 spl_fstrans_unmark(cookie); 5050 } 5051 5052 #ifdef _KERNEL 5053 /* 5054 * Determine the amount of memory eligible for eviction contained in the 5055 * ARC. All clean data reported by the ghost lists can always be safely 5056 * evicted. Due to arc_c_min, the same does not hold for all clean data 5057 * contained by the regular mru and mfu lists. 5058 * 5059 * In the case of the regular mru and mfu lists, we need to report as 5060 * much clean data as possible, such that evicting that same reported 5061 * data will not bring arc_size below arc_c_min. Thus, in certain 5062 * circumstances, the total amount of clean data in the mru and mfu 5063 * lists might not actually be evictable. 5064 * 5065 * The following two distinct cases are accounted for: 5066 * 5067 * 1. The sum of the amount of dirty data contained by both the mru and 5068 * mfu lists, plus the ARC's other accounting (e.g. the anon list), 5069 * is greater than or equal to arc_c_min. 5070 * (i.e. amount of dirty data >= arc_c_min) 5071 * 5072 * This is the easy case; all clean data contained by the mru and mfu 5073 * lists is evictable. Evicting all clean data can only drop arc_size 5074 * to the amount of dirty data, which is greater than arc_c_min. 5075 * 5076 * 2. The sum of the amount of dirty data contained by both the mru and 5077 * mfu lists, plus the ARC's other accounting (e.g. the anon list), 5078 * is less than arc_c_min. 5079 * (i.e. arc_c_min > amount of dirty data) 5080 * 5081 * 2.1. arc_size is greater than or equal to arc_c_min. 5082 * (i.e. arc_size >= arc_c_min > amount of dirty data) 5083 * 5084 * In this case, not all clean data from the regular mru and mfu 5085 * lists is actually evictable; we must leave enough clean data 5086 * to keep arc_size above arc_c_min. Thus, the maximum amount of 5087 * evictable data from the two lists combined is exactly the 5088 * difference between arc_size and arc_c_min. 5089 * 5090 * 2.2. arc_size is less than arc_c_min 5091 * (i.e. arc_c_min > arc_size > amount of dirty data) 5092 * 5093 * In this case, none of the data contained in the mru and mfu 5094 * lists is evictable, even if it's clean. Since arc_size is 5095 * already below arc_c_min, evicting any more would only 5096 * increase this negative difference. 5097 */ 5098 5099 #endif /* _KERNEL */ 5100 5101 /* 5102 * Adapt arc info given the number of bytes we are trying to add and 5103 * the state that we are coming from. This function is only called 5104 * when we are adding new content to the cache.
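 * A hit in the MRU ghost list grows the MRU target (arc_p), while a hit in the MFU ghost list shrinks it, shifting space toward the MFU list. When memory permits, the overall target size (arc_c) may also be grown here.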
5105 */ 5106 static void 5107 arc_adapt(int bytes, arc_state_t *state) 5108 { 5109 int mult; 5110 uint64_t arc_p_min = (arc_c >> arc_p_min_shift); 5111 int64_t mrug_size = zfs_refcount_count(&arc_mru_ghost->arcs_size); 5112 int64_t mfug_size = zfs_refcount_count(&arc_mfu_ghost->arcs_size); 5113 5114 ASSERT(bytes > 0); 5115 /* 5116 * Adapt the target size of the MRU list: 5117 * - if we just hit in the MRU ghost list, then increase 5118 * the target size of the MRU list. 5119 * - if we just hit in the MFU ghost list, then increase 5120 * the target size of the MFU list by decreasing the 5121 * target size of the MRU list. 5122 */ 5123 if (state == arc_mru_ghost) { 5124 mult = (mrug_size >= mfug_size) ? 1 : (mfug_size / mrug_size); 5125 if (!zfs_arc_p_dampener_disable) 5126 mult = MIN(mult, 10); /* avoid wild arc_p adjustment */ 5127 5128 arc_p = MIN(arc_c - arc_p_min, arc_p + bytes * mult); 5129 } else if (state == arc_mfu_ghost) { 5130 uint64_t delta; 5131 5132 mult = (mfug_size >= mrug_size) ? 1 : (mrug_size / mfug_size); 5133 if (!zfs_arc_p_dampener_disable) 5134 mult = MIN(mult, 10); 5135 5136 delta = MIN(bytes * mult, arc_p); 5137 arc_p = MAX(arc_p_min, arc_p - delta); 5138 } 5139 ASSERT((int64_t)arc_p >= 0); 5140 5141 /* 5142 * Wake reap thread if we do not have any available memory 5143 */ 5144 if (arc_reclaim_needed()) { 5145 zthr_wakeup(arc_reap_zthr); 5146 return; 5147 } 5148 5149 if (arc_no_grow) 5150 return; 5151 5152 if (arc_c >= arc_c_max) 5153 return; 5154 5155 /* 5156 * If we're within (2 * maxblocksize) bytes of the target 5157 * cache size, increment the target cache size 5158 */ 5159 ASSERT3U(arc_c, >=, 2ULL << SPA_MAXBLOCKSHIFT); 5160 if (aggsum_upper_bound(&arc_size) >= 5161 arc_c - (2ULL << SPA_MAXBLOCKSHIFT)) { 5162 atomic_add_64(&arc_c, (int64_t)bytes); 5163 if (arc_c > arc_c_max) 5164 arc_c = arc_c_max; 5165 else if (state == arc_anon) 5166 atomic_add_64(&arc_p, (int64_t)bytes); 5167 if (arc_p > arc_c) 5168 arc_p = arc_c; 5169 } 5170 ASSERT((int64_t)arc_p >= 0); 5171 } 5172 5173 /* 5174 * Check if arc_size has grown past our upper threshold, determined by 5175 * zfs_arc_overflow_shift. 5176 */ 5177 boolean_t 5178 arc_is_overflowing(void) 5179 { 5180 /* Always allow at least one block of overflow */ 5181 int64_t overflow = MAX(SPA_MAXBLOCKSIZE, 5182 arc_c >> zfs_arc_overflow_shift); 5183 5184 /* 5185 * We just compare the lower bound here for performance reasons. Our 5186 * primary goals are to make sure that the arc never grows without 5187 * bound, and that it can reach its maximum size. This check 5188 * accomplishes both goals. The maximum amount we could run over by is 5189 * 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block 5190 * in the ARC. In practice, that's in the tens of MB, which is low 5191 * enough to be safe. 
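 * In other words, the ARC is considered to be overflowing once the lower bound of arc_size exceeds arc_c by at least 'overflow' bytes.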
5192 */ 5193 return (aggsum_lower_bound(&arc_size) >= (int64_t)arc_c + overflow); 5194 } 5195 5196 static abd_t * 5197 arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, void *tag, 5198 boolean_t do_adapt) 5199 { 5200 arc_buf_contents_t type = arc_buf_type(hdr); 5201 5202 arc_get_data_impl(hdr, size, tag, do_adapt); 5203 if (type == ARC_BUFC_METADATA) { 5204 return (abd_alloc(size, B_TRUE)); 5205 } else { 5206 ASSERT(type == ARC_BUFC_DATA); 5207 return (abd_alloc(size, B_FALSE)); 5208 } 5209 } 5210 5211 static void * 5212 arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, void *tag) 5213 { 5214 arc_buf_contents_t type = arc_buf_type(hdr); 5215 5216 arc_get_data_impl(hdr, size, tag, B_TRUE); 5217 if (type == ARC_BUFC_METADATA) { 5218 return (zio_buf_alloc(size)); 5219 } else { 5220 ASSERT(type == ARC_BUFC_DATA); 5221 return (zio_data_buf_alloc(size)); 5222 } 5223 } 5224 5225 /* 5226 * Wait for the specified amount of data (in bytes) to be evicted from the 5227 * ARC, and for there to be sufficient free memory in the system. Waiting for 5228 * eviction ensures that the memory used by the ARC decreases. Waiting for 5229 * free memory ensures that the system won't run out of free pages, regardless 5230 * of ARC behavior and settings. See arc_lowmem_init(). 5231 */ 5232 void 5233 arc_wait_for_eviction(uint64_t amount) 5234 { 5235 mutex_enter(&arc_evict_lock); 5236 if (arc_is_overflowing()) { 5237 arc_evict_needed = B_TRUE; 5238 zthr_wakeup(arc_evict_zthr); 5239 5240 if (amount != 0) { 5241 arc_evict_waiter_t aw; 5242 list_link_init(&aw.aew_node); 5243 cv_init(&aw.aew_cv, NULL, CV_DEFAULT, NULL); 5244 5245 arc_evict_waiter_t *last = 5246 list_tail(&arc_evict_waiters); 5247 if (last != NULL) { 5248 ASSERT3U(last->aew_count, >, arc_evict_count); 5249 aw.aew_count = last->aew_count + amount; 5250 } else { 5251 aw.aew_count = arc_evict_count + amount; 5252 } 5253 5254 list_insert_tail(&arc_evict_waiters, &aw); 5255 5256 arc_set_need_free(); 5257 5258 DTRACE_PROBE3(arc__wait__for__eviction, 5259 uint64_t, amount, 5260 uint64_t, arc_evict_count, 5261 uint64_t, aw.aew_count); 5262 5263 /* 5264 * We will be woken up either when arc_evict_count 5265 * reaches aew_count, or when the ARC is no longer 5266 * overflowing and eviction completes. 5267 */ 5268 cv_wait(&aw.aew_cv, &arc_evict_lock); 5269 5270 /* 5271 * In case of "false" wakeup, we will still be on the 5272 * list. 5273 */ 5274 if (list_link_active(&aw.aew_node)) 5275 list_remove(&arc_evict_waiters, &aw); 5276 5277 cv_destroy(&aw.aew_cv); 5278 } 5279 } 5280 mutex_exit(&arc_evict_lock); 5281 } 5282 5283 /* 5284 * Allocate a block and return it to the caller. If we are hitting the 5285 * hard limit for the cache size, we must sleep, waiting for the eviction 5286 * thread to catch up. If we're past the target size but below the hard 5287 * limit, we'll only signal the reclaim thread and continue on. 5288 */ 5289 static void 5290 arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag, 5291 boolean_t do_adapt) 5292 { 5293 arc_state_t *state = hdr->b_l1hdr.b_state; 5294 arc_buf_contents_t type = arc_buf_type(hdr); 5295 5296 if (do_adapt) 5297 arc_adapt(size, state); 5298 5299 /* 5300 * If arc_size is currently overflowing, we must be adding data 5301 * faster than we are evicting. To ensure we don't compound the 5302 * problem by adding more data and forcing arc_size to grow even 5303 * further past its target size, we wait for the eviction thread to 5304 * make some progress.
We also wait for there to be sufficient free 5305 * memory in the system, as measured by arc_free_memory(). 5306 * 5307 * Specifically, we wait for zfs_arc_eviction_pct percent of the 5308 * requested size to be evicted. This should be more than 100%, to 5309 * ensure that that progress is also made towards getting arc_size 5310 * under arc_c. See the comment above zfs_arc_eviction_pct. 5311 * 5312 * We do the overflowing check without holding the arc_evict_lock to 5313 * reduce lock contention in this hot path. Note that 5314 * arc_wait_for_eviction() will acquire the lock and check again to 5315 * ensure we are truly overflowing before blocking. 5316 */ 5317 if (arc_is_overflowing()) { 5318 arc_wait_for_eviction(size * 5319 zfs_arc_eviction_pct / 100); 5320 } 5321 5322 VERIFY3U(hdr->b_type, ==, type); 5323 if (type == ARC_BUFC_METADATA) { 5324 arc_space_consume(size, ARC_SPACE_META); 5325 } else { 5326 arc_space_consume(size, ARC_SPACE_DATA); 5327 } 5328 5329 /* 5330 * Update the state size. Note that ghost states have a 5331 * "ghost size" and so don't need to be updated. 5332 */ 5333 if (!GHOST_STATE(state)) { 5334 5335 (void) zfs_refcount_add_many(&state->arcs_size, size, tag); 5336 5337 /* 5338 * If this is reached via arc_read, the link is 5339 * protected by the hash lock. If reached via 5340 * arc_buf_alloc, the header should not be accessed by 5341 * any other thread. And, if reached via arc_read_done, 5342 * the hash lock will protect it if it's found in the 5343 * hash table; otherwise no other thread should be 5344 * trying to [add|remove]_reference it. 5345 */ 5346 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 5347 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 5348 (void) zfs_refcount_add_many(&state->arcs_esize[type], 5349 size, tag); 5350 } 5351 5352 /* 5353 * If we are growing the cache, and we are adding anonymous 5354 * data, and we have outgrown arc_p, update arc_p 5355 */ 5356 if (aggsum_upper_bound(&arc_size) < arc_c && 5357 hdr->b_l1hdr.b_state == arc_anon && 5358 (zfs_refcount_count(&arc_anon->arcs_size) + 5359 zfs_refcount_count(&arc_mru->arcs_size) > arc_p)) 5360 arc_p = MIN(arc_c, arc_p + size); 5361 } 5362 } 5363 5364 static void 5365 arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, void *tag) 5366 { 5367 arc_free_data_impl(hdr, size, tag); 5368 abd_free(abd); 5369 } 5370 5371 static void 5372 arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, void *tag) 5373 { 5374 arc_buf_contents_t type = arc_buf_type(hdr); 5375 5376 arc_free_data_impl(hdr, size, tag); 5377 if (type == ARC_BUFC_METADATA) { 5378 zio_buf_free(buf, size); 5379 } else { 5380 ASSERT(type == ARC_BUFC_DATA); 5381 zio_data_buf_free(buf, size); 5382 } 5383 } 5384 5385 /* 5386 * Free the arc data buffer. 
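 * This undoes the accounting performed by arc_get_data_impl(): the state's size (and, for unreferenced headers, evictable-size) refcounts are decremented and the space is returned via arc_space_return().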
5387 */ 5388 static void 5389 arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, void *tag) 5390 { 5391 arc_state_t *state = hdr->b_l1hdr.b_state; 5392 arc_buf_contents_t type = arc_buf_type(hdr); 5393 5394 /* protected by hash lock, if in the hash table */ 5395 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 5396 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 5397 ASSERT(state != arc_anon && state != arc_l2c_only); 5398 5399 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 5400 size, tag); 5401 } 5402 (void) zfs_refcount_remove_many(&state->arcs_size, size, tag); 5403 5404 VERIFY3U(hdr->b_type, ==, type); 5405 if (type == ARC_BUFC_METADATA) { 5406 arc_space_return(size, ARC_SPACE_META); 5407 } else { 5408 ASSERT(type == ARC_BUFC_DATA); 5409 arc_space_return(size, ARC_SPACE_DATA); 5410 } 5411 } 5412 5413 /* 5414 * This routine is called whenever a buffer is accessed. 5415 * NOTE: the hash lock is dropped in this function. 5416 */ 5417 static void 5418 arc_access(arc_buf_hdr_t *hdr, kmutex_t *hash_lock) 5419 { 5420 clock_t now; 5421 5422 ASSERT(MUTEX_HELD(hash_lock)); 5423 ASSERT(HDR_HAS_L1HDR(hdr)); 5424 5425 if (hdr->b_l1hdr.b_state == arc_anon) { 5426 /* 5427 * This buffer is not in the cache, and does not 5428 * appear in our "ghost" list. Add the new buffer 5429 * to the MRU state. 5430 */ 5431 5432 ASSERT0(hdr->b_l1hdr.b_arc_access); 5433 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5434 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5435 arc_change_state(arc_mru, hdr, hash_lock); 5436 5437 } else if (hdr->b_l1hdr.b_state == arc_mru) { 5438 now = ddi_get_lbolt(); 5439 5440 /* 5441 * If this buffer is here because of a prefetch, then either: 5442 * - clear the flag if this is a "referencing" read 5443 * (any subsequent access will bump this into the MFU state). 5444 * or 5445 * - move the buffer to the head of the list if this is 5446 * another prefetch (to make it less likely to be evicted). 5447 */ 5448 if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { 5449 if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 0) { 5450 /* link protected by hash lock */ 5451 ASSERT(multilist_link_active( 5452 &hdr->b_l1hdr.b_arc_node)); 5453 } else { 5454 if (HDR_HAS_L2HDR(hdr)) 5455 l2arc_hdr_arcstats_decrement_state(hdr); 5456 arc_hdr_clear_flags(hdr, 5457 ARC_FLAG_PREFETCH | 5458 ARC_FLAG_PRESCIENT_PREFETCH); 5459 atomic_inc_32(&hdr->b_l1hdr.b_mru_hits); 5460 ARCSTAT_BUMP(arcstat_mru_hits); 5461 if (HDR_HAS_L2HDR(hdr)) 5462 l2arc_hdr_arcstats_increment_state(hdr); 5463 } 5464 hdr->b_l1hdr.b_arc_access = now; 5465 return; 5466 } 5467 5468 /* 5469 * This buffer has been "accessed" only once so far, 5470 * but it is still in the cache. Move it to the MFU 5471 * state. 5472 */ 5473 if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access + 5474 ARC_MINTIME)) { 5475 /* 5476 * More than 125ms have passed since we 5477 * instantiated this buffer. Move it to the 5478 * most frequently used state. 5479 */ 5480 hdr->b_l1hdr.b_arc_access = now; 5481 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5482 arc_change_state(arc_mfu, hdr, hash_lock); 5483 } 5484 atomic_inc_32(&hdr->b_l1hdr.b_mru_hits); 5485 ARCSTAT_BUMP(arcstat_mru_hits); 5486 } else if (hdr->b_l1hdr.b_state == arc_mru_ghost) { 5487 arc_state_t *new_state; 5488 /* 5489 * This buffer has been "accessed" recently, but 5490 * was evicted from the cache. Move it to the 5491 * MFU state. 
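 * (Unless the access is a prefetch, in which case the buffer goes back to the MRU state instead; see below.)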
5492 */ 5493 if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { 5494 new_state = arc_mru; 5495 if (zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) > 0) { 5496 if (HDR_HAS_L2HDR(hdr)) 5497 l2arc_hdr_arcstats_decrement_state(hdr); 5498 arc_hdr_clear_flags(hdr, 5499 ARC_FLAG_PREFETCH | 5500 ARC_FLAG_PRESCIENT_PREFETCH); 5501 if (HDR_HAS_L2HDR(hdr)) 5502 l2arc_hdr_arcstats_increment_state(hdr); 5503 } 5504 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5505 } else { 5506 new_state = arc_mfu; 5507 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5508 } 5509 5510 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5511 arc_change_state(new_state, hdr, hash_lock); 5512 5513 atomic_inc_32(&hdr->b_l1hdr.b_mru_ghost_hits); 5514 ARCSTAT_BUMP(arcstat_mru_ghost_hits); 5515 } else if (hdr->b_l1hdr.b_state == arc_mfu) { 5516 /* 5517 * This buffer has been accessed more than once and is 5518 * still in the cache. Keep it in the MFU state. 5519 * 5520 * NOTE: an add_reference() that occurred when we did 5521 * the arc_read() will have kicked this off the list. 5522 * If it was a prefetch, we will explicitly move it to 5523 * the head of the list now. 5524 */ 5525 5526 atomic_inc_32(&hdr->b_l1hdr.b_mfu_hits); 5527 ARCSTAT_BUMP(arcstat_mfu_hits); 5528 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5529 } else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) { 5530 arc_state_t *new_state = arc_mfu; 5531 /* 5532 * This buffer has been accessed more than once but has 5533 * been evicted from the cache. Move it back to the 5534 * MFU state. 5535 */ 5536 5537 if (HDR_PREFETCH(hdr) || HDR_PRESCIENT_PREFETCH(hdr)) { 5538 /* 5539 * This is a prefetch access... 5540 * move this block back to the MRU state. 5541 */ 5542 new_state = arc_mru; 5543 } 5544 5545 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5546 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5547 arc_change_state(new_state, hdr, hash_lock); 5548 5549 atomic_inc_32(&hdr->b_l1hdr.b_mfu_ghost_hits); 5550 ARCSTAT_BUMP(arcstat_mfu_ghost_hits); 5551 } else if (hdr->b_l1hdr.b_state == arc_l2c_only) { 5552 /* 5553 * This buffer is on the 2nd Level ARC. 5554 */ 5555 5556 hdr->b_l1hdr.b_arc_access = ddi_get_lbolt(); 5557 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5558 arc_change_state(arc_mfu, hdr, hash_lock); 5559 } else { 5560 cmn_err(CE_PANIC, "invalid arc state 0x%p", 5561 hdr->b_l1hdr.b_state); 5562 } 5563 } 5564 5565 /* 5566 * This routine is called by dbuf_hold() to update the arc_access() state 5567 * which otherwise would be skipped for entries in the dbuf cache. 5568 */ 5569 void 5570 arc_buf_access(arc_buf_t *buf) 5571 { 5572 mutex_enter(&buf->b_evict_lock); 5573 arc_buf_hdr_t *hdr = buf->b_hdr; 5574 5575 /* 5576 * Avoid taking the hash_lock when possible as an optimization. 5577 * The header must be checked again under the hash_lock in order 5578 * to handle the case where it is concurrently being released. 
5579 */ 5580 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { 5581 mutex_exit(&buf->b_evict_lock); 5582 return; 5583 } 5584 5585 kmutex_t *hash_lock = HDR_LOCK(hdr); 5586 mutex_enter(hash_lock); 5587 5588 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { 5589 mutex_exit(hash_lock); 5590 mutex_exit(&buf->b_evict_lock); 5591 ARCSTAT_BUMP(arcstat_access_skip); 5592 return; 5593 } 5594 5595 mutex_exit(&buf->b_evict_lock); 5596 5597 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5598 hdr->b_l1hdr.b_state == arc_mfu); 5599 5600 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5601 arc_access(hdr, hash_lock); 5602 mutex_exit(hash_lock); 5603 5604 ARCSTAT_BUMP(arcstat_hits); 5605 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr) && !HDR_PRESCIENT_PREFETCH(hdr), 5606 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, metadata, hits); 5607 } 5608 5609 /* a generic arc_read_done_func_t which you can use */ 5610 /* ARGSUSED */ 5611 void 5612 arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5613 arc_buf_t *buf, void *arg) 5614 { 5615 if (buf == NULL) 5616 return; 5617 5618 bcopy(buf->b_data, arg, arc_buf_size(buf)); 5619 arc_buf_destroy(buf, arg); 5620 } 5621 5622 /* a generic arc_read_done_func_t */ 5623 /* ARGSUSED */ 5624 void 5625 arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5626 arc_buf_t *buf, void *arg) 5627 { 5628 arc_buf_t **bufp = arg; 5629 5630 if (buf == NULL) { 5631 ASSERT(zio == NULL || zio->io_error != 0); 5632 *bufp = NULL; 5633 } else { 5634 ASSERT(zio == NULL || zio->io_error == 0); 5635 *bufp = buf; 5636 ASSERT(buf->b_data != NULL); 5637 } 5638 } 5639 5640 static void 5641 arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp) 5642 { 5643 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 5644 ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0); 5645 ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF); 5646 } else { 5647 if (HDR_COMPRESSION_ENABLED(hdr)) { 5648 ASSERT3U(arc_hdr_get_compress(hdr), ==, 5649 BP_GET_COMPRESS(bp)); 5650 } 5651 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 5652 ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp)); 5653 ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp)); 5654 } 5655 } 5656 5657 static void 5658 arc_read_done(zio_t *zio) 5659 { 5660 blkptr_t *bp = zio->io_bp; 5661 arc_buf_hdr_t *hdr = zio->io_private; 5662 kmutex_t *hash_lock = NULL; 5663 arc_callback_t *callback_list; 5664 arc_callback_t *acb; 5665 boolean_t freeable = B_FALSE; 5666 5667 /* 5668 * The hdr was inserted into hash-table and removed from lists 5669 * prior to starting I/O. We should find this header, since 5670 * it's in the hash table, and it should be legit since it's 5671 * not possible to evict it during the I/O. The only possible 5672 * reason for it not to be found is if we were freed during the 5673 * read. 
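 * In that case the hdr will no longer be in the hash table, hash_lock stays NULL, and the freed case is handled near the end of this function.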
5674 */ 5675 if (HDR_IN_HASH_TABLE(hdr)) { 5676 arc_buf_hdr_t *found; 5677 5678 ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp)); 5679 ASSERT3U(hdr->b_dva.dva_word[0], ==, 5680 BP_IDENTITY(zio->io_bp)->dva_word[0]); 5681 ASSERT3U(hdr->b_dva.dva_word[1], ==, 5682 BP_IDENTITY(zio->io_bp)->dva_word[1]); 5683 5684 found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock); 5685 5686 ASSERT((found == hdr && 5687 DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) || 5688 (found == hdr && HDR_L2_READING(hdr))); 5689 ASSERT3P(hash_lock, !=, NULL); 5690 } 5691 5692 if (BP_IS_PROTECTED(bp)) { 5693 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 5694 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 5695 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 5696 hdr->b_crypt_hdr.b_iv); 5697 5698 if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) { 5699 void *tmpbuf; 5700 5701 tmpbuf = abd_borrow_buf_copy(zio->io_abd, 5702 sizeof (zil_chain_t)); 5703 zio_crypt_decode_mac_zil(tmpbuf, 5704 hdr->b_crypt_hdr.b_mac); 5705 abd_return_buf(zio->io_abd, tmpbuf, 5706 sizeof (zil_chain_t)); 5707 } else { 5708 zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); 5709 } 5710 } 5711 5712 if (zio->io_error == 0) { 5713 /* byteswap if necessary */ 5714 if (BP_SHOULD_BYTESWAP(zio->io_bp)) { 5715 if (BP_GET_LEVEL(zio->io_bp) > 0) { 5716 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 5717 } else { 5718 hdr->b_l1hdr.b_byteswap = 5719 DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp)); 5720 } 5721 } else { 5722 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 5723 } 5724 if (!HDR_L2_READING(hdr)) { 5725 hdr->b_complevel = zio->io_prop.zp_complevel; 5726 } 5727 } 5728 5729 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED); 5730 if (l2arc_noprefetch && HDR_PREFETCH(hdr)) 5731 arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE); 5732 5733 callback_list = hdr->b_l1hdr.b_acb; 5734 ASSERT3P(callback_list, !=, NULL); 5735 5736 if (hash_lock && zio->io_error == 0 && 5737 hdr->b_l1hdr.b_state == arc_anon) { 5738 /* 5739 * Only call arc_access on anonymous buffers. This is because 5740 * if we've issued an I/O for an evicted buffer, we've already 5741 * called arc_access (to prevent any simultaneous readers from 5742 * getting confused). 5743 */ 5744 arc_access(hdr, hash_lock); 5745 } 5746 5747 /* 5748 * If a read request has a callback (i.e. acb_done is not NULL), then we 5749 * make a buf containing the data according to the parameters which were 5750 * passed in. The implementation of arc_buf_alloc_impl() ensures that we 5751 * aren't needlessly decompressing the data multiple times. 5752 */ 5753 int callback_cnt = 0; 5754 for (acb = callback_list; acb != NULL; acb = acb->acb_next) { 5755 if (!acb->acb_done || acb->acb_nobuf) 5756 continue; 5757 5758 callback_cnt++; 5759 5760 if (zio->io_error != 0) 5761 continue; 5762 5763 int error = arc_buf_alloc_impl(hdr, zio->io_spa, 5764 &acb->acb_zb, acb->acb_private, acb->acb_encrypted, 5765 acb->acb_compressed, acb->acb_noauth, B_TRUE, 5766 &acb->acb_buf); 5767 5768 /* 5769 * Assert non-speculative zios didn't fail because an 5770 * encryption key wasn't loaded 5771 */ 5772 ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) || 5773 error != EACCES); 5774 5775 /* 5776 * If we failed to decrypt, report an error now (as the zio 5777 * layer would have done if it had done the transforms). 
5778 */ 5779 if (error == ECKSUM) { 5780 ASSERT(BP_IS_PROTECTED(bp)); 5781 error = SET_ERROR(EIO); 5782 if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5783 spa_log_error(zio->io_spa, &acb->acb_zb); 5784 (void) zfs_ereport_post( 5785 FM_EREPORT_ZFS_AUTHENTICATION, 5786 zio->io_spa, NULL, &acb->acb_zb, zio, 0); 5787 } 5788 } 5789 5790 if (error != 0) { 5791 /* 5792 * Decompression or decryption failed. Set 5793 * io_error so that when we call acb_done 5794 * (below), we will indicate that the read 5795 * failed. Note that in the unusual case 5796 * where one callback is compressed and another 5797 * uncompressed, we will mark all of them 5798 * as failed, even though the uncompressed 5799 * one can't actually fail. In this case, 5800 * the hdr will not be anonymous, because 5801 * if there are multiple callbacks, it's 5802 * because multiple threads found the same 5803 * arc buf in the hash table. 5804 */ 5805 zio->io_error = error; 5806 } 5807 } 5808 5809 /* 5810 * If there are multiple callbacks, we must have the hash lock, 5811 * because the only way for multiple threads to find this hdr is 5812 * in the hash table. This ensures that if there are multiple 5813 * callbacks, the hdr is not anonymous. If it were anonymous, 5814 * we couldn't use arc_buf_destroy() in the error case below. 5815 */ 5816 ASSERT(callback_cnt < 2 || hash_lock != NULL); 5817 5818 hdr->b_l1hdr.b_acb = NULL; 5819 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5820 if (callback_cnt == 0) 5821 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 5822 5823 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt) || 5824 callback_list != NULL); 5825 5826 if (zio->io_error == 0) { 5827 arc_hdr_verify(hdr, zio->io_bp); 5828 } else { 5829 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 5830 if (hdr->b_l1hdr.b_state != arc_anon) 5831 arc_change_state(arc_anon, hdr, hash_lock); 5832 if (HDR_IN_HASH_TABLE(hdr)) 5833 buf_hash_remove(hdr); 5834 freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt); 5835 } 5836 5837 /* 5838 * Broadcast before we drop the hash_lock to avoid the possibility 5839 * that the hdr (and hence the cv) might be freed before we get to 5840 * the cv_broadcast(). 5841 */ 5842 cv_broadcast(&hdr->b_l1hdr.b_cv); 5843 5844 if (hash_lock != NULL) { 5845 mutex_exit(hash_lock); 5846 } else { 5847 /* 5848 * This block was freed while we waited for the read to 5849 * complete. It has been removed from the hash table and 5850 * moved to the anonymous state (so that it won't show up 5851 * in the cache). 5852 */ 5853 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 5854 freeable = zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt); 5855 } 5856 5857 /* execute each callback and free its structure */ 5858 while ((acb = callback_list) != NULL) { 5859 if (acb->acb_done != NULL) { 5860 if (zio->io_error != 0 && acb->acb_buf != NULL) { 5861 /* 5862 * If arc_buf_alloc_impl() fails during 5863 * decompression, the buf will still be 5864 * allocated, and needs to be freed here. 
5865 */ 5866 arc_buf_destroy(acb->acb_buf, 5867 acb->acb_private); 5868 acb->acb_buf = NULL; 5869 } 5870 acb->acb_done(zio, &zio->io_bookmark, zio->io_bp, 5871 acb->acb_buf, acb->acb_private); 5872 } 5873 5874 if (acb->acb_zio_dummy != NULL) { 5875 acb->acb_zio_dummy->io_error = zio->io_error; 5876 zio_nowait(acb->acb_zio_dummy); 5877 } 5878 5879 callback_list = acb->acb_next; 5880 kmem_free(acb, sizeof (arc_callback_t)); 5881 } 5882 5883 if (freeable) 5884 arc_hdr_destroy(hdr); 5885 } 5886 5887 /* 5888 * "Read" the block at the specified DVA (in bp) via the 5889 * cache. If the block is found in the cache, invoke the provided 5890 * callback immediately and return. Note that the `zio' parameter 5891 * in the callback will be NULL in this case, since no IO was 5892 * required. If the block is not in the cache pass the read request 5893 * on to the spa with a substitute callback function, so that the 5894 * requested block will be added to the cache. 5895 * 5896 * If a read request arrives for a block that has a read in-progress, 5897 * either wait for the in-progress read to complete (and return the 5898 * results); or, if this is a read with a "done" func, add a record 5899 * to the read to invoke the "done" func when the read completes, 5900 * and return; or just return. 5901 * 5902 * arc_read_done() will invoke all the requested "done" functions 5903 * for readers of this block. 5904 */ 5905 int 5906 arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, 5907 arc_read_done_func_t *done, void *private, zio_priority_t priority, 5908 int zio_flags, arc_flags_t *arc_flags, const zbookmark_phys_t *zb) 5909 { 5910 arc_buf_hdr_t *hdr = NULL; 5911 kmutex_t *hash_lock = NULL; 5912 zio_t *rzio; 5913 uint64_t guid = spa_load_guid(spa); 5914 boolean_t compressed_read = (zio_flags & ZIO_FLAG_RAW_COMPRESS) != 0; 5915 boolean_t encrypted_read = BP_IS_ENCRYPTED(bp) && 5916 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5917 boolean_t noauth_read = BP_IS_AUTHENTICATED(bp) && 5918 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5919 boolean_t embedded_bp = !!BP_IS_EMBEDDED(bp); 5920 boolean_t no_buf = *arc_flags & ARC_FLAG_NO_BUF; 5921 int rc = 0; 5922 5923 ASSERT(!embedded_bp || 5924 BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA); 5925 ASSERT(!BP_IS_HOLE(bp)); 5926 ASSERT(!BP_IS_REDACTED(bp)); 5927 5928 /* 5929 * Normally SPL_FSTRANS will already be set since kernel threads which 5930 * expect to call the DMU interfaces will set it when created. System 5931 * calls are similarly handled by setting/cleaning the bit in the 5932 * registered callback (module/os/.../zfs/zpl_*). 5933 * 5934 * External consumers such as Lustre which call the exported DMU 5935 * interfaces may not have set SPL_FSTRANS. To avoid a deadlock 5936 * on the hash_lock always set and clear the bit. 5937 */ 5938 fstrans_cookie_t cookie = spl_fstrans_mark(); 5939 top: 5940 if (!embedded_bp) { 5941 /* 5942 * Embedded BP's have no DVA and require no I/O to "read". 5943 * Create an anonymous arc buf to back it. 5944 */ 5945 hdr = buf_hash_find(guid, bp, &hash_lock); 5946 } 5947 5948 /* 5949 * Determine if we have an L1 cache hit or a cache miss. For simplicity 5950 * we maintain encrypted data separately from compressed / uncompressed 5951 * data. If the user is requesting raw encrypted data and we don't have 5952 * that in the header we will read from disk to guarantee that we can 5953 * get it even if the encryption keys aren't loaded. 
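 * (A raw encrypted copy, when cached, is kept in b_crypt_hdr.b_rabd and indicated by HDR_HAS_RABD(); decrypted data, possibly still compressed, is kept in b_l1hdr.b_pabd.)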
5954 */ 5955 if (hdr != NULL && HDR_HAS_L1HDR(hdr) && (HDR_HAS_RABD(hdr) || 5956 (hdr->b_l1hdr.b_pabd != NULL && !encrypted_read))) { 5957 arc_buf_t *buf = NULL; 5958 *arc_flags |= ARC_FLAG_CACHED; 5959 5960 if (HDR_IO_IN_PROGRESS(hdr)) { 5961 zio_t *head_zio = hdr->b_l1hdr.b_acb->acb_zio_head; 5962 5963 if (*arc_flags & ARC_FLAG_CACHED_ONLY) { 5964 mutex_exit(hash_lock); 5965 ARCSTAT_BUMP(arcstat_cached_only_in_progress); 5966 rc = SET_ERROR(ENOENT); 5967 goto out; 5968 } 5969 5970 ASSERT3P(head_zio, !=, NULL); 5971 if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) && 5972 priority == ZIO_PRIORITY_SYNC_READ) { 5973 /* 5974 * This is a sync read that needs to wait for 5975 * an in-flight async read. Request that the 5976 * zio have its priority upgraded. 5977 */ 5978 zio_change_priority(head_zio, priority); 5979 DTRACE_PROBE1(arc__async__upgrade__sync, 5980 arc_buf_hdr_t *, hdr); 5981 ARCSTAT_BUMP(arcstat_async_upgrade_sync); 5982 } 5983 if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) { 5984 arc_hdr_clear_flags(hdr, 5985 ARC_FLAG_PREDICTIVE_PREFETCH); 5986 } 5987 5988 if (*arc_flags & ARC_FLAG_WAIT) { 5989 cv_wait(&hdr->b_l1hdr.b_cv, hash_lock); 5990 mutex_exit(hash_lock); 5991 goto top; 5992 } 5993 ASSERT(*arc_flags & ARC_FLAG_NOWAIT); 5994 5995 if (done) { 5996 arc_callback_t *acb = NULL; 5997 5998 acb = kmem_zalloc(sizeof (arc_callback_t), 5999 KM_SLEEP); 6000 acb->acb_done = done; 6001 acb->acb_private = private; 6002 acb->acb_compressed = compressed_read; 6003 acb->acb_encrypted = encrypted_read; 6004 acb->acb_noauth = noauth_read; 6005 acb->acb_nobuf = no_buf; 6006 acb->acb_zb = *zb; 6007 if (pio != NULL) 6008 acb->acb_zio_dummy = zio_null(pio, 6009 spa, NULL, NULL, NULL, zio_flags); 6010 6011 ASSERT3P(acb->acb_done, !=, NULL); 6012 acb->acb_zio_head = head_zio; 6013 acb->acb_next = hdr->b_l1hdr.b_acb; 6014 hdr->b_l1hdr.b_acb = acb; 6015 } 6016 mutex_exit(hash_lock); 6017 goto out; 6018 } 6019 6020 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 6021 hdr->b_l1hdr.b_state == arc_mfu); 6022 6023 if (done && !no_buf) { 6024 if (hdr->b_flags & ARC_FLAG_PREDICTIVE_PREFETCH) { 6025 /* 6026 * This is a demand read which does not have to 6027 * wait for i/o because we did a predictive 6028 * prefetch i/o for it, which has completed. 6029 */ 6030 DTRACE_PROBE1( 6031 arc__demand__hit__predictive__prefetch, 6032 arc_buf_hdr_t *, hdr); 6033 ARCSTAT_BUMP( 6034 arcstat_demand_hit_predictive_prefetch); 6035 arc_hdr_clear_flags(hdr, 6036 ARC_FLAG_PREDICTIVE_PREFETCH); 6037 } 6038 6039 if (hdr->b_flags & ARC_FLAG_PRESCIENT_PREFETCH) { 6040 ARCSTAT_BUMP( 6041 arcstat_demand_hit_prescient_prefetch); 6042 arc_hdr_clear_flags(hdr, 6043 ARC_FLAG_PRESCIENT_PREFETCH); 6044 } 6045 6046 ASSERT(!embedded_bp || !BP_IS_HOLE(bp)); 6047 6048 /* Get a buf with the desired data in it. */ 6049 rc = arc_buf_alloc_impl(hdr, spa, zb, private, 6050 encrypted_read, compressed_read, noauth_read, 6051 B_TRUE, &buf); 6052 if (rc == ECKSUM) { 6053 /* 6054 * Convert authentication and decryption errors 6055 * to EIO (and generate an ereport if needed) 6056 * before leaving the ARC. 
6057 */ 6058 rc = SET_ERROR(EIO); 6059 if ((zio_flags & ZIO_FLAG_SPECULATIVE) == 0) { 6060 spa_log_error(spa, zb); 6061 (void) zfs_ereport_post( 6062 FM_EREPORT_ZFS_AUTHENTICATION, 6063 spa, NULL, zb, NULL, 0); 6064 } 6065 } 6066 if (rc != 0) { 6067 (void) remove_reference(hdr, hash_lock, 6068 private); 6069 arc_buf_destroy_impl(buf); 6070 buf = NULL; 6071 } 6072 6073 /* assert any errors weren't due to unloaded keys */ 6074 ASSERT((zio_flags & ZIO_FLAG_SPECULATIVE) || 6075 rc != EACCES); 6076 } else if (*arc_flags & ARC_FLAG_PREFETCH && 6077 zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6078 if (HDR_HAS_L2HDR(hdr)) 6079 l2arc_hdr_arcstats_decrement_state(hdr); 6080 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 6081 if (HDR_HAS_L2HDR(hdr)) 6082 l2arc_hdr_arcstats_increment_state(hdr); 6083 } 6084 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 6085 arc_access(hdr, hash_lock); 6086 if (*arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) 6087 arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); 6088 if (*arc_flags & ARC_FLAG_L2CACHE) 6089 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 6090 mutex_exit(hash_lock); 6091 ARCSTAT_BUMP(arcstat_hits); 6092 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), 6093 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), 6094 data, metadata, hits); 6095 6096 if (done) 6097 done(NULL, zb, bp, buf, private); 6098 } else { 6099 uint64_t lsize = BP_GET_LSIZE(bp); 6100 uint64_t psize = BP_GET_PSIZE(bp); 6101 arc_callback_t *acb; 6102 vdev_t *vd = NULL; 6103 uint64_t addr = 0; 6104 boolean_t devw = B_FALSE; 6105 uint64_t size; 6106 abd_t *hdr_abd; 6107 int alloc_flags = encrypted_read ? ARC_HDR_ALLOC_RDATA : 0; 6108 6109 if (*arc_flags & ARC_FLAG_CACHED_ONLY) { 6110 rc = SET_ERROR(ENOENT); 6111 if (hash_lock != NULL) 6112 mutex_exit(hash_lock); 6113 goto out; 6114 } 6115 6116 /* 6117 * Gracefully handle a damaged logical block size as a 6118 * checksum error. 6119 */ 6120 if (lsize > spa_maxblocksize(spa)) { 6121 rc = SET_ERROR(ECKSUM); 6122 if (hash_lock != NULL) 6123 mutex_exit(hash_lock); 6124 goto out; 6125 } 6126 6127 if (hdr == NULL) { 6128 /* 6129 * This block is not in the cache or it has 6130 * embedded data. 6131 */ 6132 arc_buf_hdr_t *exists = NULL; 6133 arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp); 6134 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 6135 BP_IS_PROTECTED(bp), BP_GET_COMPRESS(bp), 0, type, 6136 encrypted_read); 6137 6138 if (!embedded_bp) { 6139 hdr->b_dva = *BP_IDENTITY(bp); 6140 hdr->b_birth = BP_PHYSICAL_BIRTH(bp); 6141 exists = buf_hash_insert(hdr, &hash_lock); 6142 } 6143 if (exists != NULL) { 6144 /* somebody beat us to the hash insert */ 6145 mutex_exit(hash_lock); 6146 buf_discard_identity(hdr); 6147 arc_hdr_destroy(hdr); 6148 goto top; /* restart the IO request */ 6149 } 6150 } else { 6151 /* 6152 * This block is in the ghost cache or encrypted data 6153 * was requested and we didn't have it. If it was 6154 * L2-only (and thus didn't have an L1 hdr), 6155 * we realloc the header to add an L1 hdr. 
6156 */ 6157 if (!HDR_HAS_L1HDR(hdr)) { 6158 hdr = arc_hdr_realloc(hdr, hdr_l2only_cache, 6159 hdr_full_cache); 6160 } 6161 6162 if (GHOST_STATE(hdr->b_l1hdr.b_state)) { 6163 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6164 ASSERT(!HDR_HAS_RABD(hdr)); 6165 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6166 ASSERT0(zfs_refcount_count( 6167 &hdr->b_l1hdr.b_refcnt)); 6168 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 6169 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 6170 } else if (HDR_IO_IN_PROGRESS(hdr)) { 6171 /* 6172 * If this header already had an IO in progress 6173 * and we are performing another IO to fetch 6174 * encrypted data we must wait until the first 6175 * IO completes so as not to confuse 6176 * arc_read_done(). This should be very rare 6177 * and so the performance impact shouldn't 6178 * matter. 6179 */ 6180 cv_wait(&hdr->b_l1hdr.b_cv, hash_lock); 6181 mutex_exit(hash_lock); 6182 goto top; 6183 } 6184 6185 /* 6186 * This is a delicate dance that we play here. 6187 * This hdr might be in the ghost list so we access 6188 * it to move it out of the ghost list before we 6189 * initiate the read. If it's a prefetch then 6190 * it won't have a callback so we'll remove the 6191 * reference that arc_buf_alloc_impl() created. We 6192 * do this after we've called arc_access() to 6193 * avoid hitting an assert in remove_reference(). 6194 */ 6195 arc_adapt(arc_hdr_size(hdr), hdr->b_l1hdr.b_state); 6196 arc_access(hdr, hash_lock); 6197 arc_hdr_alloc_abd(hdr, alloc_flags); 6198 } 6199 6200 if (encrypted_read) { 6201 ASSERT(HDR_HAS_RABD(hdr)); 6202 size = HDR_GET_PSIZE(hdr); 6203 hdr_abd = hdr->b_crypt_hdr.b_rabd; 6204 zio_flags |= ZIO_FLAG_RAW; 6205 } else { 6206 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 6207 size = arc_hdr_size(hdr); 6208 hdr_abd = hdr->b_l1hdr.b_pabd; 6209 6210 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 6211 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 6212 } 6213 6214 /* 6215 * For authenticated bp's, we do not ask the ZIO layer 6216 * to authenticate them since this will cause the entire 6217 * IO to fail if the key isn't loaded. Instead, we 6218 * defer authentication until arc_buf_fill(), which will 6219 * verify the data when the key is available. 
6220 */ 6221 if (BP_IS_AUTHENTICATED(bp)) 6222 zio_flags |= ZIO_FLAG_RAW_ENCRYPT; 6223 } 6224 6225 if (*arc_flags & ARC_FLAG_PREFETCH && 6226 zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6227 if (HDR_HAS_L2HDR(hdr)) 6228 l2arc_hdr_arcstats_decrement_state(hdr); 6229 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 6230 if (HDR_HAS_L2HDR(hdr)) 6231 l2arc_hdr_arcstats_increment_state(hdr); 6232 } 6233 if (*arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) 6234 arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); 6235 if (*arc_flags & ARC_FLAG_L2CACHE) 6236 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 6237 if (BP_IS_AUTHENTICATED(bp)) 6238 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 6239 if (BP_GET_LEVEL(bp) > 0) 6240 arc_hdr_set_flags(hdr, ARC_FLAG_INDIRECT); 6241 if (*arc_flags & ARC_FLAG_PREDICTIVE_PREFETCH) 6242 arc_hdr_set_flags(hdr, ARC_FLAG_PREDICTIVE_PREFETCH); 6243 ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state)); 6244 6245 acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP); 6246 acb->acb_done = done; 6247 acb->acb_private = private; 6248 acb->acb_compressed = compressed_read; 6249 acb->acb_encrypted = encrypted_read; 6250 acb->acb_noauth = noauth_read; 6251 acb->acb_zb = *zb; 6252 6253 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6254 hdr->b_l1hdr.b_acb = acb; 6255 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6256 6257 if (HDR_HAS_L2HDR(hdr) && 6258 (vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) { 6259 devw = hdr->b_l2hdr.b_dev->l2ad_writing; 6260 addr = hdr->b_l2hdr.b_daddr; 6261 /* 6262 * Lock out L2ARC device removal. 6263 */ 6264 if (vdev_is_dead(vd) || 6265 !spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER)) 6266 vd = NULL; 6267 } 6268 6269 /* 6270 * We count both async reads and scrub IOs as asynchronous so 6271 * that both can be upgraded in the event of a cache hit while 6272 * the read IO is still in-flight. 6273 */ 6274 if (priority == ZIO_PRIORITY_ASYNC_READ || 6275 priority == ZIO_PRIORITY_SCRUB) 6276 arc_hdr_set_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 6277 else 6278 arc_hdr_clear_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 6279 6280 /* 6281 * At this point, we have a level 1 cache miss or a blkptr 6282 * with embedded data. Try again in L2ARC if possible. 6283 */ 6284 ASSERT3U(HDR_GET_LSIZE(hdr), ==, lsize); 6285 6286 /* 6287 * Skip ARC stat bump for block pointers with embedded 6288 * data. The data are read from the blkptr itself via 6289 * decode_embedded_bp_compressed(). 6290 */ 6291 if (!embedded_bp) { 6292 DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr, 6293 blkptr_t *, bp, uint64_t, lsize, 6294 zbookmark_phys_t *, zb); 6295 ARCSTAT_BUMP(arcstat_misses); 6296 ARCSTAT_CONDSTAT(!HDR_PREFETCH(hdr), 6297 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, 6298 metadata, misses); 6299 } 6300 6301 /* Check if the spa even has l2 configured */ 6302 const boolean_t spa_has_l2 = l2arc_ndev != 0 && 6303 spa->spa_l2cache.sav_count > 0; 6304 6305 if (vd != NULL && spa_has_l2 && !(l2arc_norw && devw)) { 6306 /* 6307 * Read from the L2ARC if the following are true: 6308 * 1. The L2ARC vdev was previously cached. 6309 * 2. This buffer still has L2ARC metadata. 6310 * 3. This buffer isn't currently writing to the L2ARC. 6311 * 4. The L2ARC entry wasn't evicted, which may 6312 * also have invalidated the vdev. 6313 * 5. This isn't prefetch or l2arc_noprefetch is 0. 
6314 */ 6315 if (HDR_HAS_L2HDR(hdr) && 6316 !HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr) && 6317 !(l2arc_noprefetch && HDR_PREFETCH(hdr))) { 6318 l2arc_read_callback_t *cb; 6319 abd_t *abd; 6320 uint64_t asize; 6321 6322 DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr); 6323 ARCSTAT_BUMP(arcstat_l2_hits); 6324 atomic_inc_32(&hdr->b_l2hdr.b_hits); 6325 6326 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), 6327 KM_SLEEP); 6328 cb->l2rcb_hdr = hdr; 6329 cb->l2rcb_bp = *bp; 6330 cb->l2rcb_zb = *zb; 6331 cb->l2rcb_flags = zio_flags; 6332 6333 /* 6334 * When Compressed ARC is disabled, but the 6335 * L2ARC block is compressed, arc_hdr_size() 6336 * will have returned LSIZE rather than PSIZE. 6337 */ 6338 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 6339 !HDR_COMPRESSION_ENABLED(hdr) && 6340 HDR_GET_PSIZE(hdr) != 0) { 6341 size = HDR_GET_PSIZE(hdr); 6342 } 6343 6344 asize = vdev_psize_to_asize(vd, size); 6345 if (asize != size) { 6346 abd = abd_alloc_for_io(asize, 6347 HDR_ISTYPE_METADATA(hdr)); 6348 cb->l2rcb_abd = abd; 6349 } else { 6350 abd = hdr_abd; 6351 } 6352 6353 ASSERT(addr >= VDEV_LABEL_START_SIZE && 6354 addr + asize <= vd->vdev_psize - 6355 VDEV_LABEL_END_SIZE); 6356 6357 /* 6358 * l2arc read. The SCL_L2ARC lock will be 6359 * released by l2arc_read_done(). 6360 * Issue a null zio if the underlying buffer 6361 * was squashed to zero size by compression. 6362 */ 6363 ASSERT3U(arc_hdr_get_compress(hdr), !=, 6364 ZIO_COMPRESS_EMPTY); 6365 rzio = zio_read_phys(pio, vd, addr, 6366 asize, abd, 6367 ZIO_CHECKSUM_OFF, 6368 l2arc_read_done, cb, priority, 6369 zio_flags | ZIO_FLAG_DONT_CACHE | 6370 ZIO_FLAG_CANFAIL | 6371 ZIO_FLAG_DONT_PROPAGATE | 6372 ZIO_FLAG_DONT_RETRY, B_FALSE); 6373 acb->acb_zio_head = rzio; 6374 6375 if (hash_lock != NULL) 6376 mutex_exit(hash_lock); 6377 6378 DTRACE_PROBE2(l2arc__read, vdev_t *, vd, 6379 zio_t *, rzio); 6380 ARCSTAT_INCR(arcstat_l2_read_bytes, 6381 HDR_GET_PSIZE(hdr)); 6382 6383 if (*arc_flags & ARC_FLAG_NOWAIT) { 6384 zio_nowait(rzio); 6385 goto out; 6386 } 6387 6388 ASSERT(*arc_flags & ARC_FLAG_WAIT); 6389 if (zio_wait(rzio) == 0) 6390 goto out; 6391 6392 /* l2arc read error; goto zio_read() */ 6393 if (hash_lock != NULL) 6394 mutex_enter(hash_lock); 6395 } else { 6396 DTRACE_PROBE1(l2arc__miss, 6397 arc_buf_hdr_t *, hdr); 6398 ARCSTAT_BUMP(arcstat_l2_misses); 6399 if (HDR_L2_WRITING(hdr)) 6400 ARCSTAT_BUMP(arcstat_l2_rw_clash); 6401 spa_config_exit(spa, SCL_L2ARC, vd); 6402 } 6403 } else { 6404 if (vd != NULL) 6405 spa_config_exit(spa, SCL_L2ARC, vd); 6406 6407 /* 6408 * Only a spa with l2 should contribute to l2 6409 * miss stats. (Including the case of having a 6410 * faulted cache device - that's also a miss.) 6411 */ 6412 if (spa_has_l2) { 6413 /* 6414 * Skip ARC stat bump for block pointers with 6415 * embedded data. The data are read from the 6416 * blkptr itself via 6417 * decode_embedded_bp_compressed(). 
6418 */ 6419 if (!embedded_bp) { 6420 DTRACE_PROBE1(l2arc__miss, 6421 arc_buf_hdr_t *, hdr); 6422 ARCSTAT_BUMP(arcstat_l2_misses); 6423 } 6424 } 6425 } 6426 6427 rzio = zio_read(pio, spa, bp, hdr_abd, size, 6428 arc_read_done, hdr, priority, zio_flags, zb); 6429 acb->acb_zio_head = rzio; 6430 6431 if (hash_lock != NULL) 6432 mutex_exit(hash_lock); 6433 6434 if (*arc_flags & ARC_FLAG_WAIT) { 6435 rc = zio_wait(rzio); 6436 goto out; 6437 } 6438 6439 ASSERT(*arc_flags & ARC_FLAG_NOWAIT); 6440 zio_nowait(rzio); 6441 } 6442 6443 out: 6444 /* embedded bps don't actually go to disk */ 6445 if (!embedded_bp) 6446 spa_read_history_add(spa, zb, *arc_flags); 6447 spl_fstrans_unmark(cookie); 6448 return (rc); 6449 } 6450 6451 arc_prune_t * 6452 arc_add_prune_callback(arc_prune_func_t *func, void *private) 6453 { 6454 arc_prune_t *p; 6455 6456 p = kmem_alloc(sizeof (*p), KM_SLEEP); 6457 p->p_pfunc = func; 6458 p->p_private = private; 6459 list_link_init(&p->p_node); 6460 zfs_refcount_create(&p->p_refcnt); 6461 6462 mutex_enter(&arc_prune_mtx); 6463 zfs_refcount_add(&p->p_refcnt, &arc_prune_list); 6464 list_insert_head(&arc_prune_list, p); 6465 mutex_exit(&arc_prune_mtx); 6466 6467 return (p); 6468 } 6469 6470 void 6471 arc_remove_prune_callback(arc_prune_t *p) 6472 { 6473 boolean_t wait = B_FALSE; 6474 mutex_enter(&arc_prune_mtx); 6475 list_remove(&arc_prune_list, p); 6476 if (zfs_refcount_remove(&p->p_refcnt, &arc_prune_list) > 0) 6477 wait = B_TRUE; 6478 mutex_exit(&arc_prune_mtx); 6479 6480 /* wait for arc_prune_task to finish */ 6481 if (wait) 6482 taskq_wait_outstanding(arc_prune_taskq, 0); 6483 ASSERT0(zfs_refcount_count(&p->p_refcnt)); 6484 zfs_refcount_destroy(&p->p_refcnt); 6485 kmem_free(p, sizeof (*p)); 6486 } 6487 6488 /* 6489 * Notify the arc that a block was freed, and thus will never be used again. 6490 */ 6491 void 6492 arc_freed(spa_t *spa, const blkptr_t *bp) 6493 { 6494 arc_buf_hdr_t *hdr; 6495 kmutex_t *hash_lock; 6496 uint64_t guid = spa_load_guid(spa); 6497 6498 ASSERT(!BP_IS_EMBEDDED(bp)); 6499 6500 hdr = buf_hash_find(guid, bp, &hash_lock); 6501 if (hdr == NULL) 6502 return; 6503 6504 /* 6505 * We might be trying to free a block that is still doing I/O 6506 * (i.e. prefetch) or has a reference (i.e. a dedup-ed, 6507 * dmu_sync-ed block). If this block is being prefetched, then it 6508 * would still have the ARC_FLAG_IO_IN_PROGRESS flag set on the hdr 6509 * until the I/O completes. A block may also have a reference if it is 6510 * part of a dedup-ed, dmu_synced write. The dmu_sync() function would 6511 * have written the new block to its final resting place on disk but 6512 * without the dedup flag set. This would have left the hdr in the MRU 6513 * state and discoverable. When the txg finally syncs it detects that 6514 * the block was overridden in open context and issues an override I/O. 6515 * Since this is a dedup block, the override I/O will determine if the 6516 * block is already in the DDT. If so, then it will replace the io_bp 6517 * with the bp from the DDT and allow the I/O to finish. When the I/O 6518 * reaches the done callback, dbuf_write_override_done, it will 6519 * check to see if the io_bp and io_bp_override are identical. 6520 * If they are not, then it indicates that the bp was replaced with 6521 * the bp in the DDT and the override bp is freed. This allows 6522 * us to arrive here with a reference on a block that is being 6523 * freed. So if we have an I/O in progress, or a reference to 6524 * this hdr, then we don't destroy the hdr. 
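 * In that case we simply drop the hash lock and leave the hdr alone.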
6525 */ 6526 if (!HDR_HAS_L1HDR(hdr) || (!HDR_IO_IN_PROGRESS(hdr) && 6527 zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt))) { 6528 arc_change_state(arc_anon, hdr, hash_lock); 6529 arc_hdr_destroy(hdr); 6530 mutex_exit(hash_lock); 6531 } else { 6532 mutex_exit(hash_lock); 6533 } 6534 6535 } 6536 6537 /* 6538 * Release this buffer from the cache, making it an anonymous buffer. This 6539 * must be done after a read and prior to modifying the buffer contents. 6540 * If the buffer has more than one reference, we must make 6541 * a new hdr for the buffer. 6542 */ 6543 void 6544 arc_release(arc_buf_t *buf, void *tag) 6545 { 6546 arc_buf_hdr_t *hdr = buf->b_hdr; 6547 6548 /* 6549 * It would be nice to assert that if it's DMU metadata (level > 6550 * 0 || it's the dnode file), then it must be syncing context. 6551 * But we don't know that information at this level. 6552 */ 6553 6554 mutex_enter(&buf->b_evict_lock); 6555 6556 ASSERT(HDR_HAS_L1HDR(hdr)); 6557 6558 /* 6559 * We don't grab the hash lock prior to this check, because if 6560 * the buffer's header is in the arc_anon state, it won't be 6561 * linked into the hash table. 6562 */ 6563 if (hdr->b_l1hdr.b_state == arc_anon) { 6564 mutex_exit(&buf->b_evict_lock); 6565 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6566 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 6567 ASSERT(!HDR_HAS_L2HDR(hdr)); 6568 ASSERT(HDR_EMPTY(hdr)); 6569 6570 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 6571 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1); 6572 ASSERT(!list_link_active(&hdr->b_l1hdr.b_arc_node)); 6573 6574 hdr->b_l1hdr.b_arc_access = 0; 6575 6576 /* 6577 * If the buf is being overridden then it may already 6578 * have a hdr that is not empty. 6579 */ 6580 buf_discard_identity(hdr); 6581 arc_buf_thaw(buf); 6582 6583 return; 6584 } 6585 6586 kmutex_t *hash_lock = HDR_LOCK(hdr); 6587 mutex_enter(hash_lock); 6588 6589 /* 6590 * This assignment is only valid as long as the hash_lock is 6591 * held; we must be careful not to reference state or the 6592 * b_state field after dropping the lock. 6593 */ 6594 arc_state_t *state = hdr->b_l1hdr.b_state; 6595 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 6596 ASSERT3P(state, !=, arc_anon); 6597 6598 /* this buffer is not on any list */ 6599 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0); 6600 6601 if (HDR_HAS_L2HDR(hdr)) { 6602 mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6603 6604 /* 6605 * We have to recheck this conditional now that 6606 * we're holding the l2ad_mtx to prevent a race with 6607 * another thread which might be concurrently calling 6608 * l2arc_evict(). In that case, l2arc_evict() might have 6609 * destroyed the header's L2 portion as we were waiting 6610 * to acquire the l2ad_mtx. 6611 */ 6612 if (HDR_HAS_L2HDR(hdr)) 6613 arc_hdr_l2hdr_destroy(hdr); 6614 6615 mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6616 } 6617 6618 /* 6619 * Do we have more than one buf?
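 * If so, this buf is split off onto a new anonymous hdr below, while the remaining bufs stay with the existing hdr.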
6620 */ 6621 if (hdr->b_l1hdr.b_bufcnt > 1) { 6622 arc_buf_hdr_t *nhdr; 6623 uint64_t spa = hdr->b_spa; 6624 uint64_t psize = HDR_GET_PSIZE(hdr); 6625 uint64_t lsize = HDR_GET_LSIZE(hdr); 6626 boolean_t protected = HDR_PROTECTED(hdr); 6627 enum zio_compress compress = arc_hdr_get_compress(hdr); 6628 arc_buf_contents_t type = arc_buf_type(hdr); 6629 VERIFY3U(hdr->b_type, ==, type); 6630 6631 ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL); 6632 (void) remove_reference(hdr, hash_lock, tag); 6633 6634 if (arc_buf_is_shared(buf) && !ARC_BUF_COMPRESSED(buf)) { 6635 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6636 ASSERT(ARC_BUF_LAST(buf)); 6637 } 6638 6639 /* 6640 * Pull the data off of this hdr and attach it to 6641 * a new anonymous hdr. Also find the last buffer 6642 * in the hdr's buffer list. 6643 */ 6644 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 6645 ASSERT3P(lastbuf, !=, NULL); 6646 6647 /* 6648 * If the current arc_buf_t and the hdr are sharing their data 6649 * buffer, then we must stop sharing that block. 6650 */ 6651 if (arc_buf_is_shared(buf)) { 6652 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6653 VERIFY(!arc_buf_is_shared(lastbuf)); 6654 6655 /* 6656 * First, sever the block sharing relationship between 6657 * buf and the arc_buf_hdr_t. 6658 */ 6659 arc_unshare_buf(hdr, buf); 6660 6661 /* 6662 * Now we need to recreate the hdr's b_pabd. Since we 6663 * have lastbuf handy, we try to share with it, but if 6664 * we can't then we allocate a new b_pabd and copy the 6665 * data from buf into it. 6666 */ 6667 if (arc_can_share(hdr, lastbuf)) { 6668 arc_share_buf(hdr, lastbuf); 6669 } else { 6670 arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT); 6671 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, 6672 buf->b_data, psize); 6673 } 6674 VERIFY3P(lastbuf->b_data, !=, NULL); 6675 } else if (HDR_SHARED_DATA(hdr)) { 6676 /* 6677 * Uncompressed shared buffers are always at the end 6678 * of the list. Compressed buffers don't have the 6679 * same requirements. This makes it hard to 6680 * simply assert that the lastbuf is shared so 6681 * we rely on the hdr's compression flags to determine 6682 * if we have a compressed, shared buffer. 6683 */ 6684 ASSERT(arc_buf_is_shared(lastbuf) || 6685 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 6686 ASSERT(!ARC_BUF_SHARED(buf)); 6687 } 6688 6689 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 6690 ASSERT3P(state, !=, arc_l2c_only); 6691 6692 (void) zfs_refcount_remove_many(&state->arcs_size, 6693 arc_buf_size(buf), buf); 6694 6695 if (zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6696 ASSERT3P(state, !=, arc_l2c_only); 6697 (void) zfs_refcount_remove_many( 6698 &state->arcs_esize[type], 6699 arc_buf_size(buf), buf); 6700 } 6701 6702 hdr->b_l1hdr.b_bufcnt -= 1; 6703 if (ARC_BUF_ENCRYPTED(buf)) 6704 hdr->b_crypt_hdr.b_ebufcnt -= 1; 6705 6706 arc_cksum_verify(buf); 6707 arc_buf_unwatch(buf); 6708 6709 /* if this is the last uncompressed buf free the checksum */ 6710 if (!arc_hdr_has_uncompressed_buf(hdr)) 6711 arc_cksum_free(hdr); 6712 6713 mutex_exit(hash_lock); 6714 6715 /* 6716 * Allocate a new hdr. The new hdr will contain a b_pabd 6717 * buffer which will be freed in arc_write(). 
6718 */ 6719 nhdr = arc_hdr_alloc(spa, psize, lsize, protected, 6720 compress, hdr->b_complevel, type, HDR_HAS_RABD(hdr)); 6721 ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL); 6722 ASSERT0(nhdr->b_l1hdr.b_bufcnt); 6723 ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt)); 6724 VERIFY3U(nhdr->b_type, ==, type); 6725 ASSERT(!HDR_SHARED_DATA(nhdr)); 6726 6727 nhdr->b_l1hdr.b_buf = buf; 6728 nhdr->b_l1hdr.b_bufcnt = 1; 6729 if (ARC_BUF_ENCRYPTED(buf)) 6730 nhdr->b_crypt_hdr.b_ebufcnt = 1; 6731 nhdr->b_l1hdr.b_mru_hits = 0; 6732 nhdr->b_l1hdr.b_mru_ghost_hits = 0; 6733 nhdr->b_l1hdr.b_mfu_hits = 0; 6734 nhdr->b_l1hdr.b_mfu_ghost_hits = 0; 6735 nhdr->b_l1hdr.b_l2_hits = 0; 6736 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag); 6737 buf->b_hdr = nhdr; 6738 6739 mutex_exit(&buf->b_evict_lock); 6740 (void) zfs_refcount_add_many(&arc_anon->arcs_size, 6741 arc_buf_size(buf), buf); 6742 } else { 6743 mutex_exit(&buf->b_evict_lock); 6744 ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1); 6745 /* protected by hash lock, or hdr is on arc_anon */ 6746 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 6747 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6748 hdr->b_l1hdr.b_mru_hits = 0; 6749 hdr->b_l1hdr.b_mru_ghost_hits = 0; 6750 hdr->b_l1hdr.b_mfu_hits = 0; 6751 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 6752 hdr->b_l1hdr.b_l2_hits = 0; 6753 arc_change_state(arc_anon, hdr, hash_lock); 6754 hdr->b_l1hdr.b_arc_access = 0; 6755 6756 mutex_exit(hash_lock); 6757 buf_discard_identity(hdr); 6758 arc_buf_thaw(buf); 6759 } 6760 } 6761 6762 int 6763 arc_released(arc_buf_t *buf) 6764 { 6765 int released; 6766 6767 mutex_enter(&buf->b_evict_lock); 6768 released = (buf->b_data != NULL && 6769 buf->b_hdr->b_l1hdr.b_state == arc_anon); 6770 mutex_exit(&buf->b_evict_lock); 6771 return (released); 6772 } 6773 6774 #ifdef ZFS_DEBUG 6775 int 6776 arc_referenced(arc_buf_t *buf) 6777 { 6778 int referenced; 6779 6780 mutex_enter(&buf->b_evict_lock); 6781 referenced = (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt)); 6782 mutex_exit(&buf->b_evict_lock); 6783 return (referenced); 6784 } 6785 #endif 6786 6787 static void 6788 arc_write_ready(zio_t *zio) 6789 { 6790 arc_write_callback_t *callback = zio->io_private; 6791 arc_buf_t *buf = callback->awcb_buf; 6792 arc_buf_hdr_t *hdr = buf->b_hdr; 6793 blkptr_t *bp = zio->io_bp; 6794 uint64_t psize = BP_IS_HOLE(bp) ? 0 : BP_GET_PSIZE(bp); 6795 fstrans_cookie_t cookie = spl_fstrans_mark(); 6796 6797 ASSERT(HDR_HAS_L1HDR(hdr)); 6798 ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt)); 6799 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 6800 6801 /* 6802 * If we're reexecuting this zio because the pool suspended, then 6803 * cleanup any state that was previously set the first time the 6804 * callback was invoked. 
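 * This includes freeing any checksum and header ABDs (b_pabd and b_rabd) set up on the previous attempt and breaking any buf/hdr data sharing.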
6805 */ 6806 if (zio->io_flags & ZIO_FLAG_REEXECUTED) { 6807 arc_cksum_free(hdr); 6808 arc_buf_unwatch(buf); 6809 if (hdr->b_l1hdr.b_pabd != NULL) { 6810 if (arc_buf_is_shared(buf)) { 6811 arc_unshare_buf(hdr, buf); 6812 } else { 6813 arc_hdr_free_abd(hdr, B_FALSE); 6814 } 6815 } 6816 6817 if (HDR_HAS_RABD(hdr)) 6818 arc_hdr_free_abd(hdr, B_TRUE); 6819 } 6820 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6821 ASSERT(!HDR_HAS_RABD(hdr)); 6822 ASSERT(!HDR_SHARED_DATA(hdr)); 6823 ASSERT(!arc_buf_is_shared(buf)); 6824 6825 callback->awcb_ready(zio, buf, callback->awcb_private); 6826 6827 if (HDR_IO_IN_PROGRESS(hdr)) 6828 ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED); 6829 6830 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6831 6832 if (BP_IS_PROTECTED(bp) != !!HDR_PROTECTED(hdr)) 6833 hdr = arc_hdr_realloc_crypt(hdr, BP_IS_PROTECTED(bp)); 6834 6835 if (BP_IS_PROTECTED(bp)) { 6836 /* ZIL blocks are written through zio_rewrite */ 6837 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 6838 ASSERT(HDR_PROTECTED(hdr)); 6839 6840 if (BP_SHOULD_BYTESWAP(bp)) { 6841 if (BP_GET_LEVEL(bp) > 0) { 6842 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 6843 } else { 6844 hdr->b_l1hdr.b_byteswap = 6845 DMU_OT_BYTESWAP(BP_GET_TYPE(bp)); 6846 } 6847 } else { 6848 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 6849 } 6850 6851 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 6852 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 6853 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 6854 hdr->b_crypt_hdr.b_iv); 6855 zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); 6856 } 6857 6858 /* 6859 * If this block was written for raw encryption but the zio layer 6860 * ended up only authenticating it, adjust the buffer flags now. 6861 */ 6862 if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) { 6863 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 6864 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6865 if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF) 6866 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6867 } else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) { 6868 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6869 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6870 } 6871 6872 /* this must be done after the buffer flags are adjusted */ 6873 arc_cksum_compute(buf); 6874 6875 enum zio_compress compress; 6876 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 6877 compress = ZIO_COMPRESS_OFF; 6878 } else { 6879 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 6880 compress = BP_GET_COMPRESS(bp); 6881 } 6882 HDR_SET_PSIZE(hdr, psize); 6883 arc_hdr_set_compress(hdr, compress); 6884 hdr->b_complevel = zio->io_prop.zp_complevel; 6885 6886 if (zio->io_error != 0 || psize == 0) 6887 goto out; 6888 6889 /* 6890 * Fill the hdr with data. If the buffer is encrypted, we have no choice 6891 * but to copy the data into b_rabd. If the hdr is compressed, the data 6892 * we want is available from the zio, otherwise we can take it from 6893 * the buf. 6894 * 6895 * We might be able to share the buf's data with the hdr here. However, 6896 * doing so would cause the ARC to be full of linear ABDs if we write a 6897 * lot of shareable data. As a compromise, we check whether scattered 6898 * ABDs are allowed, and assume that if they are then the user wants 6899 * the ARC to be primarily filled with them regardless of the data being 6900 * written. Therefore, if they're allowed then we allocate one and copy 6901 * the data into it; otherwise, we share the data directly if we can.
6902 */ 6903 if (ARC_BUF_ENCRYPTED(buf)) { 6904 ASSERT3U(psize, >, 0); 6905 ASSERT(ARC_BUF_COMPRESSED(buf)); 6906 arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT|ARC_HDR_ALLOC_RDATA); 6907 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6908 } else if (zfs_abd_scatter_enabled || !arc_can_share(hdr, buf)) { 6909 /* 6910 * Ideally, we would always copy the io_abd into b_pabd, but the 6911 * user may have disabled compressed ARC, thus we must check the 6912 * hdr's compression setting rather than the io_bp's. 6913 */ 6914 if (BP_IS_ENCRYPTED(bp)) { 6915 ASSERT3U(psize, >, 0); 6916 arc_hdr_alloc_abd(hdr, 6917 ARC_HDR_DO_ADAPT|ARC_HDR_ALLOC_RDATA); 6918 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6919 } else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 6920 !ARC_BUF_COMPRESSED(buf)) { 6921 ASSERT3U(psize, >, 0); 6922 arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT); 6923 abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize); 6924 } else { 6925 ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr)); 6926 arc_hdr_alloc_abd(hdr, ARC_HDR_DO_ADAPT); 6927 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data, 6928 arc_buf_size(buf)); 6929 } 6930 } else { 6931 ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd)); 6932 ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf)); 6933 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 6934 6935 arc_share_buf(hdr, buf); 6936 } 6937 6938 out: 6939 arc_hdr_verify(hdr, bp); 6940 spl_fstrans_unmark(cookie); 6941 } 6942 6943 static void 6944 arc_write_children_ready(zio_t *zio) 6945 { 6946 arc_write_callback_t *callback = zio->io_private; 6947 arc_buf_t *buf = callback->awcb_buf; 6948 6949 callback->awcb_children_ready(zio, buf, callback->awcb_private); 6950 } 6951 6952 /* 6953 * The SPA calls this callback for each physical write that happens on behalf 6954 * of a logical write. See the comment in dbuf_write_physdone() for details. 6955 */ 6956 static void 6957 arc_write_physdone(zio_t *zio) 6958 { 6959 arc_write_callback_t *cb = zio->io_private; 6960 if (cb->awcb_physdone != NULL) 6961 cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private); 6962 } 6963 6964 static void 6965 arc_write_done(zio_t *zio) 6966 { 6967 arc_write_callback_t *callback = zio->io_private; 6968 arc_buf_t *buf = callback->awcb_buf; 6969 arc_buf_hdr_t *hdr = buf->b_hdr; 6970 6971 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6972 6973 if (zio->io_error == 0) { 6974 arc_hdr_verify(hdr, zio->io_bp); 6975 6976 if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) { 6977 buf_discard_identity(hdr); 6978 } else { 6979 hdr->b_dva = *BP_IDENTITY(zio->io_bp); 6980 hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp); 6981 } 6982 } else { 6983 ASSERT(HDR_EMPTY(hdr)); 6984 } 6985 6986 /* 6987 * If the block to be written was all-zero or compressed enough to be 6988 * embedded in the BP, no write was performed so there will be no 6989 * dva/birth/checksum. The buffer must therefore remain anonymous 6990 * (and uncached). 6991 */ 6992 if (!HDR_EMPTY(hdr)) { 6993 arc_buf_hdr_t *exists; 6994 kmutex_t *hash_lock; 6995 6996 ASSERT3U(zio->io_error, ==, 0); 6997 6998 arc_cksum_verify(buf); 6999 7000 exists = buf_hash_insert(hdr, &hash_lock); 7001 if (exists != NULL) { 7002 /* 7003 * This can only happen if we overwrite for 7004 * sync-to-convergence, because we remove 7005 * buffers from the hash table when we arc_free(). 
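* When that happens, the existing header describes the same on-disk * block we just rewrote, so it is moved to arc_anon and destroyed before * our header is inserted in its place (the rewrite case below).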
7006 */ 7007 if (zio->io_flags & ZIO_FLAG_IO_REWRITE) { 7008 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 7009 panic("bad overwrite, hdr=%p exists=%p", 7010 (void *)hdr, (void *)exists); 7011 ASSERT(zfs_refcount_is_zero( 7012 &exists->b_l1hdr.b_refcnt)); 7013 arc_change_state(arc_anon, exists, hash_lock); 7014 arc_hdr_destroy(exists); 7015 mutex_exit(hash_lock); 7016 exists = buf_hash_insert(hdr, &hash_lock); 7017 ASSERT3P(exists, ==, NULL); 7018 } else if (zio->io_flags & ZIO_FLAG_NOPWRITE) { 7019 /* nopwrite */ 7020 ASSERT(zio->io_prop.zp_nopwrite); 7021 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 7022 panic("bad nopwrite, hdr=%p exists=%p", 7023 (void *)hdr, (void *)exists); 7024 } else { 7025 /* Dedup */ 7026 ASSERT(hdr->b_l1hdr.b_bufcnt == 1); 7027 ASSERT(hdr->b_l1hdr.b_state == arc_anon); 7028 ASSERT(BP_GET_DEDUP(zio->io_bp)); 7029 ASSERT(BP_GET_LEVEL(zio->io_bp) == 0); 7030 } 7031 } 7032 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 7033 /* if it's not anon, we are doing a scrub */ 7034 if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon) 7035 arc_access(hdr, hash_lock); 7036 mutex_exit(hash_lock); 7037 } else { 7038 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 7039 } 7040 7041 ASSERT(!zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 7042 callback->awcb_done(zio, buf, callback->awcb_private); 7043 7044 abd_put(zio->io_abd); 7045 kmem_free(callback, sizeof (arc_write_callback_t)); 7046 } 7047 7048 zio_t * 7049 arc_write(zio_t *pio, spa_t *spa, uint64_t txg, 7050 blkptr_t *bp, arc_buf_t *buf, boolean_t l2arc, 7051 const zio_prop_t *zp, arc_write_done_func_t *ready, 7052 arc_write_done_func_t *children_ready, arc_write_done_func_t *physdone, 7053 arc_write_done_func_t *done, void *private, zio_priority_t priority, 7054 int zio_flags, const zbookmark_phys_t *zb) 7055 { 7056 arc_buf_hdr_t *hdr = buf->b_hdr; 7057 arc_write_callback_t *callback; 7058 zio_t *zio; 7059 zio_prop_t localprop = *zp; 7060 7061 ASSERT3P(ready, !=, NULL); 7062 ASSERT3P(done, !=, NULL); 7063 ASSERT(!HDR_IO_ERROR(hdr)); 7064 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 7065 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 7066 ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0); 7067 if (l2arc) 7068 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 7069 7070 if (ARC_BUF_ENCRYPTED(buf)) { 7071 ASSERT(ARC_BUF_COMPRESSED(buf)); 7072 localprop.zp_encrypt = B_TRUE; 7073 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 7074 localprop.zp_complevel = hdr->b_complevel; 7075 localprop.zp_byteorder = 7076 (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? 
7077 ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; 7078 bcopy(hdr->b_crypt_hdr.b_salt, localprop.zp_salt, 7079 ZIO_DATA_SALT_LEN); 7080 bcopy(hdr->b_crypt_hdr.b_iv, localprop.zp_iv, 7081 ZIO_DATA_IV_LEN); 7082 bcopy(hdr->b_crypt_hdr.b_mac, localprop.zp_mac, 7083 ZIO_DATA_MAC_LEN); 7084 if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) { 7085 localprop.zp_nopwrite = B_FALSE; 7086 localprop.zp_copies = 7087 MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1); 7088 } 7089 zio_flags |= ZIO_FLAG_RAW; 7090 } else if (ARC_BUF_COMPRESSED(buf)) { 7091 ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf)); 7092 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 7093 localprop.zp_complevel = hdr->b_complevel; 7094 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 7095 } 7096 callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP); 7097 callback->awcb_ready = ready; 7098 callback->awcb_children_ready = children_ready; 7099 callback->awcb_physdone = physdone; 7100 callback->awcb_done = done; 7101 callback->awcb_private = private; 7102 callback->awcb_buf = buf; 7103 7104 /* 7105 * The hdr's b_pabd is now stale, free it now. A new data block 7106 * will be allocated when the zio pipeline calls arc_write_ready(). 7107 */ 7108 if (hdr->b_l1hdr.b_pabd != NULL) { 7109 /* 7110 * If the buf is currently sharing the data block with 7111 * the hdr then we need to break that relationship here. 7112 * The hdr will remain with a NULL data pointer and the 7113 * buf will take sole ownership of the block. 7114 */ 7115 if (arc_buf_is_shared(buf)) { 7116 arc_unshare_buf(hdr, buf); 7117 } else { 7118 arc_hdr_free_abd(hdr, B_FALSE); 7119 } 7120 VERIFY3P(buf->b_data, !=, NULL); 7121 } 7122 7123 if (HDR_HAS_RABD(hdr)) 7124 arc_hdr_free_abd(hdr, B_TRUE); 7125 7126 if (!(zio_flags & ZIO_FLAG_RAW)) 7127 arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF); 7128 7129 ASSERT(!arc_buf_is_shared(buf)); 7130 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 7131 7132 zio = zio_write(pio, spa, txg, bp, 7133 abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)), 7134 HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready, 7135 (children_ready != NULL) ? arc_write_children_ready : NULL, 7136 arc_write_physdone, arc_write_done, callback, 7137 priority, zio_flags, zb); 7138 7139 return (zio); 7140 } 7141 7142 void 7143 arc_tempreserve_clear(uint64_t reserve) 7144 { 7145 atomic_add_64(&arc_tempreserve, -reserve); 7146 ASSERT((int64_t)arc_tempreserve >= 0); 7147 } 7148 7149 int 7150 arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg) 7151 { 7152 int error; 7153 uint64_t anon_size; 7154 7155 if (!arc_no_grow && 7156 reserve > arc_c/4 && 7157 reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT)) 7158 arc_c = MIN(arc_c_max, reserve * 4); 7159 7160 /* 7161 * Throttle when the calculated memory footprint for the TXG 7162 * exceeds the target ARC size. 7163 */ 7164 if (reserve > arc_c) { 7165 DMU_TX_STAT_BUMP(dmu_tx_memory_reserve); 7166 return (SET_ERROR(ERESTART)); 7167 } 7168 7169 /* 7170 * Don't count loaned bufs as in flight dirty data to prevent long 7171 * network delays from blocking transactions that are ready to be 7172 * assigned to a txg. 7173 */ 7174 7175 /* assert that it has not wrapped around */ 7176 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 7177 7178 anon_size = MAX((int64_t)(zfs_refcount_count(&arc_anon->arcs_size) - 7179 arc_loaned_bytes), 0); 7180 7181 /* 7182 * Writes will, almost always, require additional memory allocations 7183 * in order to compress/encrypt/etc the data. 
We therefore need to 7184 * make sure that there is sufficient available memory for this. 7185 */ 7186 error = arc_memory_throttle(spa, reserve, txg); 7187 if (error != 0) 7188 return (error); 7189 7190 /* 7191 * Throttle writes when the amount of dirty data in the cache 7192 * gets too large. We try to keep the cache less than half full 7193 * of dirty blocks so that our sync times don't grow too large. 7194 * 7195 * In the case of one pool being built on another pool, we want 7196 * to make sure we don't end up throttling the lower (backing) 7197 * pool when the upper pool is the majority contributor to dirty 7198 * data. To ensure we make forward progress during throttling, we 7199 * also check the current pool's net dirty data and only throttle 7200 * if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty 7201 * data in the cache. 7202 * 7203 * Note: if two requests come in concurrently, we might let them 7204 * both succeed, when one of them should fail. Not a huge deal. 7205 */ 7206 uint64_t total_dirty = reserve + arc_tempreserve + anon_size; 7207 uint64_t spa_dirty_anon = spa_dirty_data(spa); 7208 uint64_t rarc_c = arc_warm ? arc_c : arc_c_max; 7209 if (total_dirty > rarc_c * zfs_arc_dirty_limit_percent / 100 && 7210 anon_size > rarc_c * zfs_arc_anon_limit_percent / 100 && 7211 spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) { 7212 #ifdef ZFS_DEBUG 7213 uint64_t meta_esize = zfs_refcount_count( 7214 &arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7215 uint64_t data_esize = 7216 zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7217 dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK " 7218 "anon_data=%lluK tempreserve=%lluK rarc_c=%lluK\n", 7219 arc_tempreserve >> 10, meta_esize >> 10, 7220 data_esize >> 10, reserve >> 10, rarc_c >> 10); 7221 #endif 7222 DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle); 7223 return (SET_ERROR(ERESTART)); 7224 } 7225 atomic_add_64(&arc_tempreserve, reserve); 7226 return (0); 7227 } 7228 7229 static void 7230 arc_kstat_update_state(arc_state_t *state, kstat_named_t *size, 7231 kstat_named_t *evict_data, kstat_named_t *evict_metadata) 7232 { 7233 size->value.ui64 = zfs_refcount_count(&state->arcs_size); 7234 evict_data->value.ui64 = 7235 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]); 7236 evict_metadata->value.ui64 = 7237 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]); 7238 } 7239 7240 static int 7241 arc_kstat_update(kstat_t *ksp, int rw) 7242 { 7243 arc_stats_t *as = ksp->ks_data; 7244 7245 if (rw == KSTAT_WRITE) { 7246 return (SET_ERROR(EACCES)); 7247 } else { 7248 arc_kstat_update_state(arc_anon, 7249 &as->arcstat_anon_size, 7250 &as->arcstat_anon_evictable_data, 7251 &as->arcstat_anon_evictable_metadata); 7252 arc_kstat_update_state(arc_mru, 7253 &as->arcstat_mru_size, 7254 &as->arcstat_mru_evictable_data, 7255 &as->arcstat_mru_evictable_metadata); 7256 arc_kstat_update_state(arc_mru_ghost, 7257 &as->arcstat_mru_ghost_size, 7258 &as->arcstat_mru_ghost_evictable_data, 7259 &as->arcstat_mru_ghost_evictable_metadata); 7260 arc_kstat_update_state(arc_mfu, 7261 &as->arcstat_mfu_size, 7262 &as->arcstat_mfu_evictable_data, 7263 &as->arcstat_mfu_evictable_metadata); 7264 arc_kstat_update_state(arc_mfu_ghost, 7265 &as->arcstat_mfu_ghost_size, 7266 &as->arcstat_mfu_ghost_evictable_data, 7267 &as->arcstat_mfu_ghost_evictable_metadata); 7268 7269 ARCSTAT(arcstat_size) = aggsum_value(&arc_size); 7270 ARCSTAT(arcstat_meta_used) = aggsum_value(&arc_meta_used); 7271 ARCSTAT(arcstat_data_size) =
aggsum_value(&astat_data_size); 7272 ARCSTAT(arcstat_metadata_size) = 7273 aggsum_value(&astat_metadata_size); 7274 ARCSTAT(arcstat_hdr_size) = aggsum_value(&astat_hdr_size); 7275 ARCSTAT(arcstat_l2_hdr_size) = aggsum_value(&astat_l2_hdr_size); 7276 ARCSTAT(arcstat_dbuf_size) = aggsum_value(&astat_dbuf_size); 7277 #if defined(COMPAT_FREEBSD11) 7278 ARCSTAT(arcstat_other_size) = aggsum_value(&astat_bonus_size) + 7279 aggsum_value(&astat_dnode_size) + 7280 aggsum_value(&astat_dbuf_size); 7281 #endif 7282 ARCSTAT(arcstat_dnode_size) = aggsum_value(&astat_dnode_size); 7283 ARCSTAT(arcstat_bonus_size) = aggsum_value(&astat_bonus_size); 7284 ARCSTAT(arcstat_abd_chunk_waste_size) = 7285 aggsum_value(&astat_abd_chunk_waste_size); 7286 7287 as->arcstat_memory_all_bytes.value.ui64 = 7288 arc_all_memory(); 7289 as->arcstat_memory_free_bytes.value.ui64 = 7290 arc_free_memory(); 7291 as->arcstat_memory_available_bytes.value.i64 = 7292 arc_available_memory(); 7293 } 7294 7295 return (0); 7296 } 7297 7298 /* 7299 * This function *must* return indices evenly distributed between all 7300 * sublists of the multilist. This is needed due to how the ARC eviction 7301 * code is laid out; arc_evict_state() assumes ARC buffers are evenly 7302 * distributed between all sublists and uses this assumption when 7303 * deciding which sublist to evict from and how much to evict from it. 7304 */ 7305 static unsigned int 7306 arc_state_multilist_index_func(multilist_t *ml, void *obj) 7307 { 7308 arc_buf_hdr_t *hdr = obj; 7309 7310 /* 7311 * We rely on b_dva to generate evenly distributed index 7312 * numbers using buf_hash below. So, as an added precaution, 7313 * let's make sure we never add empty buffers to the arc lists. 7314 */ 7315 ASSERT(!HDR_EMPTY(hdr)); 7316 7317 /* 7318 * The assumption here is that the hash value for a given 7319 * arc_buf_hdr_t will remain constant throughout its lifetime 7320 * (i.e. its b_spa, b_dva, and b_birth fields don't change). 7321 * Thus, we don't need to store the header's sublist index 7322 * on insertion, as this index can be recalculated on removal. 7323 * 7324 * Also, the low order bits of the hash value are thought to be 7325 * distributed evenly. Otherwise, in the case that the multilist 7326 * has a power of two number of sublists, each sublist's usage 7327 * would not be evenly distributed. 7328 */ 7329 return (buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) % 7330 multilist_get_num_sublists(ml)); 7331 } 7332 7333 #define WARN_IF_TUNING_IGNORED(tuning, value, do_warn) do { \ 7334 if ((do_warn) && (tuning) && ((tuning) != (value))) { \ 7335 cmn_err(CE_WARN, \ 7336 "ignoring tunable %s (using %llu instead)", \ 7337 (#tuning), (value)); \ 7338 } \ 7339 } while (0) 7340 7341 /* 7342 * Called during module initialization and periodically thereafter to 7343 * apply reasonable changes to the exposed performance tunings. Can also be 7344 * called explicitly by param_set_arc_*() functions when ARC tunables are 7345 * updated manually. Non-zero zfs_* values which differ from the currently set 7346 * values will be applied.
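* For example, lowering zfs_arc_max at runtime shrinks arc_c_max, clamps * arc_c to the new maximum, resets arc_p to half of arc_c, and pulls * arc_meta_limit and arc_dnode_size_limit down if they now exceed the * new ceiling; out-of-range requests are ignored and reported via * WARN_IF_TUNING_IGNORED().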
7347 */ 7348 void 7349 arc_tuning_update(boolean_t verbose) 7350 { 7351 uint64_t allmem = arc_all_memory(); 7352 unsigned long limit; 7353 7354 /* Valid range: 32M - <arc_c_max> */ 7355 if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) && 7356 (zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) && 7357 (zfs_arc_min <= arc_c_max)) { 7358 arc_c_min = zfs_arc_min; 7359 arc_c = MAX(arc_c, arc_c_min); 7360 } 7361 WARN_IF_TUNING_IGNORED(zfs_arc_min, arc_c_min, verbose); 7362 7363 /* Valid range: 64M - <all physical memory> */ 7364 if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) && 7365 (zfs_arc_max >= 64 << 20) && (zfs_arc_max < allmem) && 7366 (zfs_arc_max > arc_c_min)) { 7367 arc_c_max = zfs_arc_max; 7368 arc_c = MIN(arc_c, arc_c_max); 7369 arc_p = (arc_c >> 1); 7370 if (arc_meta_limit > arc_c_max) 7371 arc_meta_limit = arc_c_max; 7372 if (arc_dnode_size_limit > arc_meta_limit) 7373 arc_dnode_size_limit = arc_meta_limit; 7374 } 7375 WARN_IF_TUNING_IGNORED(zfs_arc_max, arc_c_max, verbose); 7376 7377 /* Valid range: 16M - <arc_c_max> */ 7378 if ((zfs_arc_meta_min) && (zfs_arc_meta_min != arc_meta_min) && 7379 (zfs_arc_meta_min >= 1ULL << SPA_MAXBLOCKSHIFT) && 7380 (zfs_arc_meta_min <= arc_c_max)) { 7381 arc_meta_min = zfs_arc_meta_min; 7382 if (arc_meta_limit < arc_meta_min) 7383 arc_meta_limit = arc_meta_min; 7384 if (arc_dnode_size_limit < arc_meta_min) 7385 arc_dnode_size_limit = arc_meta_min; 7386 } 7387 WARN_IF_TUNING_IGNORED(zfs_arc_meta_min, arc_meta_min, verbose); 7388 7389 /* Valid range: <arc_meta_min> - <arc_c_max> */ 7390 limit = zfs_arc_meta_limit ? zfs_arc_meta_limit : 7391 MIN(zfs_arc_meta_limit_percent, 100) * arc_c_max / 100; 7392 if ((limit != arc_meta_limit) && 7393 (limit >= arc_meta_min) && 7394 (limit <= arc_c_max)) 7395 arc_meta_limit = limit; 7396 WARN_IF_TUNING_IGNORED(zfs_arc_meta_limit, arc_meta_limit, verbose); 7397 7398 /* Valid range: <arc_meta_min> - <arc_meta_limit> */ 7399 limit = zfs_arc_dnode_limit ? 
zfs_arc_dnode_limit : 7400 MIN(zfs_arc_dnode_limit_percent, 100) * arc_meta_limit / 100; 7401 if ((limit != arc_dnode_size_limit) && 7402 (limit >= arc_meta_min) && 7403 (limit <= arc_meta_limit)) 7404 arc_dnode_size_limit = limit; 7405 WARN_IF_TUNING_IGNORED(zfs_arc_dnode_limit, arc_dnode_size_limit, 7406 verbose); 7407 7408 /* Valid range: 1 - N */ 7409 if (zfs_arc_grow_retry) 7410 arc_grow_retry = zfs_arc_grow_retry; 7411 7412 /* Valid range: 1 - N */ 7413 if (zfs_arc_shrink_shift) { 7414 arc_shrink_shift = zfs_arc_shrink_shift; 7415 arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift -1); 7416 } 7417 7418 /* Valid range: 1 - N */ 7419 if (zfs_arc_p_min_shift) 7420 arc_p_min_shift = zfs_arc_p_min_shift; 7421 7422 /* Valid range: 1 - N ms */ 7423 if (zfs_arc_min_prefetch_ms) 7424 arc_min_prefetch_ms = zfs_arc_min_prefetch_ms; 7425 7426 /* Valid range: 1 - N ms */ 7427 if (zfs_arc_min_prescient_prefetch_ms) { 7428 arc_min_prescient_prefetch_ms = 7429 zfs_arc_min_prescient_prefetch_ms; 7430 } 7431 7432 /* Valid range: 0 - 100 */ 7433 if ((zfs_arc_lotsfree_percent >= 0) && 7434 (zfs_arc_lotsfree_percent <= 100)) 7435 arc_lotsfree_percent = zfs_arc_lotsfree_percent; 7436 WARN_IF_TUNING_IGNORED(zfs_arc_lotsfree_percent, arc_lotsfree_percent, 7437 verbose); 7438 7439 /* Valid range: 0 - <all physical memory> */ 7440 if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free)) 7441 arc_sys_free = MIN(MAX(zfs_arc_sys_free, 0), allmem); 7442 WARN_IF_TUNING_IGNORED(zfs_arc_sys_free, arc_sys_free, verbose); 7443 } 7444 7445 static void 7446 arc_state_init(void) 7447 { 7448 arc_anon = &ARC_anon; 7449 arc_mru = &ARC_mru; 7450 arc_mru_ghost = &ARC_mru_ghost; 7451 arc_mfu = &ARC_mfu; 7452 arc_mfu_ghost = &ARC_mfu_ghost; 7453 arc_l2c_only = &ARC_l2c_only; 7454 7455 arc_mru->arcs_list[ARC_BUFC_METADATA] = 7456 multilist_create(sizeof (arc_buf_hdr_t), 7457 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7458 arc_state_multilist_index_func); 7459 arc_mru->arcs_list[ARC_BUFC_DATA] = 7460 multilist_create(sizeof (arc_buf_hdr_t), 7461 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7462 arc_state_multilist_index_func); 7463 arc_mru_ghost->arcs_list[ARC_BUFC_METADATA] = 7464 multilist_create(sizeof (arc_buf_hdr_t), 7465 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7466 arc_state_multilist_index_func); 7467 arc_mru_ghost->arcs_list[ARC_BUFC_DATA] = 7468 multilist_create(sizeof (arc_buf_hdr_t), 7469 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7470 arc_state_multilist_index_func); 7471 arc_mfu->arcs_list[ARC_BUFC_METADATA] = 7472 multilist_create(sizeof (arc_buf_hdr_t), 7473 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7474 arc_state_multilist_index_func); 7475 arc_mfu->arcs_list[ARC_BUFC_DATA] = 7476 multilist_create(sizeof (arc_buf_hdr_t), 7477 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7478 arc_state_multilist_index_func); 7479 arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA] = 7480 multilist_create(sizeof (arc_buf_hdr_t), 7481 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7482 arc_state_multilist_index_func); 7483 arc_mfu_ghost->arcs_list[ARC_BUFC_DATA] = 7484 multilist_create(sizeof (arc_buf_hdr_t), 7485 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7486 arc_state_multilist_index_func); 7487 arc_l2c_only->arcs_list[ARC_BUFC_METADATA] = 7488 multilist_create(sizeof (arc_buf_hdr_t), 7489 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 7490 arc_state_multilist_index_func); 7491 arc_l2c_only->arcs_list[ARC_BUFC_DATA] = 7492 multilist_create(sizeof (arc_buf_hdr_t), 7493 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), 
7494 arc_state_multilist_index_func); 7495 7496 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7497 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7498 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 7499 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 7500 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 7501 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 7502 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 7503 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 7504 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 7505 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 7506 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 7507 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 7508 7509 zfs_refcount_create(&arc_anon->arcs_size); 7510 zfs_refcount_create(&arc_mru->arcs_size); 7511 zfs_refcount_create(&arc_mru_ghost->arcs_size); 7512 zfs_refcount_create(&arc_mfu->arcs_size); 7513 zfs_refcount_create(&arc_mfu_ghost->arcs_size); 7514 zfs_refcount_create(&arc_l2c_only->arcs_size); 7515 7516 aggsum_init(&arc_meta_used, 0); 7517 aggsum_init(&arc_size, 0); 7518 aggsum_init(&astat_data_size, 0); 7519 aggsum_init(&astat_metadata_size, 0); 7520 aggsum_init(&astat_hdr_size, 0); 7521 aggsum_init(&astat_l2_hdr_size, 0); 7522 aggsum_init(&astat_bonus_size, 0); 7523 aggsum_init(&astat_dnode_size, 0); 7524 aggsum_init(&astat_dbuf_size, 0); 7525 aggsum_init(&astat_abd_chunk_waste_size, 0); 7526 7527 arc_anon->arcs_state = ARC_STATE_ANON; 7528 arc_mru->arcs_state = ARC_STATE_MRU; 7529 arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST; 7530 arc_mfu->arcs_state = ARC_STATE_MFU; 7531 arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST; 7532 arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY; 7533 } 7534 7535 static void 7536 arc_state_fini(void) 7537 { 7538 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7539 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7540 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 7541 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 7542 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 7543 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 7544 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 7545 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 7546 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 7547 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 7548 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 7549 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 7550 7551 zfs_refcount_destroy(&arc_anon->arcs_size); 7552 zfs_refcount_destroy(&arc_mru->arcs_size); 7553 zfs_refcount_destroy(&arc_mru_ghost->arcs_size); 7554 zfs_refcount_destroy(&arc_mfu->arcs_size); 7555 zfs_refcount_destroy(&arc_mfu_ghost->arcs_size); 7556 zfs_refcount_destroy(&arc_l2c_only->arcs_size); 7557 7558 multilist_destroy(arc_mru->arcs_list[ARC_BUFC_METADATA]); 7559 multilist_destroy(arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]); 7560 multilist_destroy(arc_mfu->arcs_list[ARC_BUFC_METADATA]); 7561 multilist_destroy(arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]); 7562 multilist_destroy(arc_mru->arcs_list[ARC_BUFC_DATA]); 7563 multilist_destroy(arc_mru_ghost->arcs_list[ARC_BUFC_DATA]); 7564 multilist_destroy(arc_mfu->arcs_list[ARC_BUFC_DATA]); 7565 
multilist_destroy(arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]); 7566 multilist_destroy(arc_l2c_only->arcs_list[ARC_BUFC_METADATA]); 7567 multilist_destroy(arc_l2c_only->arcs_list[ARC_BUFC_DATA]); 7568 7569 aggsum_fini(&arc_meta_used); 7570 aggsum_fini(&arc_size); 7571 aggsum_fini(&astat_data_size); 7572 aggsum_fini(&astat_metadata_size); 7573 aggsum_fini(&astat_hdr_size); 7574 aggsum_fini(&astat_l2_hdr_size); 7575 aggsum_fini(&astat_bonus_size); 7576 aggsum_fini(&astat_dnode_size); 7577 aggsum_fini(&astat_dbuf_size); 7578 aggsum_fini(&astat_abd_chunk_waste_size); 7579 } 7580 7581 uint64_t 7582 arc_target_bytes(void) 7583 { 7584 return (arc_c); 7585 } 7586 7587 void 7588 arc_set_limits(uint64_t allmem) 7589 { 7590 /* Set min cache to 1/32 of all memory, or 32MB, whichever is more. */ 7591 arc_c_min = MAX(allmem / 32, 2ULL << SPA_MAXBLOCKSHIFT); 7592 7593 /* How to set default max varies by platform. */ 7594 arc_c_max = arc_default_max(arc_c_min, allmem); 7595 } 7596 void 7597 arc_init(void) 7598 { 7599 uint64_t percent, allmem = arc_all_memory(); 7600 mutex_init(&arc_evict_lock, NULL, MUTEX_DEFAULT, NULL); 7601 list_create(&arc_evict_waiters, sizeof (arc_evict_waiter_t), 7602 offsetof(arc_evict_waiter_t, aew_node)); 7603 7604 arc_min_prefetch_ms = 1000; 7605 arc_min_prescient_prefetch_ms = 6000; 7606 7607 #if defined(_KERNEL) 7608 arc_lowmem_init(); 7609 #endif 7610 7611 arc_set_limits(allmem); 7612 7613 #ifndef _KERNEL 7614 /* 7615 * In userland, there's only the memory pressure that we artificially 7616 * create (see arc_available_memory()). Don't let arc_c get too 7617 * small, because it can cause transactions to be larger than 7618 * arc_c, causing arc_tempreserve_space() to fail. 7619 */ 7620 arc_c_min = MAX(arc_c_max / 2, 2ULL << SPA_MAXBLOCKSHIFT); 7621 #endif 7622 7623 arc_c = arc_c_min; 7624 arc_p = (arc_c >> 1); 7625 7626 /* Set min to 1/2 of arc_c_min */ 7627 arc_meta_min = 1ULL << SPA_MAXBLOCKSHIFT; 7628 /* Initialize maximum observed usage to zero */ 7629 arc_meta_max = 0; 7630 /* 7631 * Set arc_meta_limit to a percent of arc_c_max with a floor of 7632 * arc_meta_min, and a ceiling of arc_c_max. 
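* For example, assuming zfs_arc_meta_limit_percent of 75 and * zfs_arc_dnode_limit_percent of 10 (the usual defaults), an 8 GiB * arc_c_max yields a 6 GiB arc_meta_limit and an arc_dnode_size_limit of * roughly 614 MiB.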
7633 */ 7634 percent = MIN(zfs_arc_meta_limit_percent, 100); 7635 arc_meta_limit = MAX(arc_meta_min, (percent * arc_c_max) / 100); 7636 percent = MIN(zfs_arc_dnode_limit_percent, 100); 7637 arc_dnode_size_limit = (percent * arc_meta_limit) / 100; 7638 7639 /* Apply user specified tunings */ 7640 arc_tuning_update(B_TRUE); 7641 7642 /* if kmem_flags are set, lets try to use less memory */ 7643 if (kmem_debugging()) 7644 arc_c = arc_c / 2; 7645 if (arc_c < arc_c_min) 7646 arc_c = arc_c_min; 7647 7648 arc_register_hotplug(); 7649 7650 arc_state_init(); 7651 7652 buf_init(); 7653 7654 list_create(&arc_prune_list, sizeof (arc_prune_t), 7655 offsetof(arc_prune_t, p_node)); 7656 mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL); 7657 7658 arc_prune_taskq = taskq_create("arc_prune", 100, defclsyspri, 7659 boot_ncpus, INT_MAX, TASKQ_PREPOPULATE | TASKQ_DYNAMIC | 7660 TASKQ_THREADS_CPU_PCT); 7661 7662 arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED, 7663 sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL); 7664 7665 if (arc_ksp != NULL) { 7666 arc_ksp->ks_data = &arc_stats; 7667 arc_ksp->ks_update = arc_kstat_update; 7668 kstat_install(arc_ksp); 7669 } 7670 7671 arc_evict_zthr = zthr_create("arc_evict", 7672 arc_evict_cb_check, arc_evict_cb, NULL); 7673 arc_reap_zthr = zthr_create_timer("arc_reap", 7674 arc_reap_cb_check, arc_reap_cb, NULL, SEC2NSEC(1)); 7675 7676 arc_warm = B_FALSE; 7677 7678 /* 7679 * Calculate maximum amount of dirty data per pool. 7680 * 7681 * If it has been set by a module parameter, take that. 7682 * Otherwise, use a percentage of physical memory defined by 7683 * zfs_dirty_data_max_percent (default 10%) with a cap at 7684 * zfs_dirty_data_max_max (default 4G or 25% of physical memory). 7685 */ 7686 #ifdef __LP64__ 7687 if (zfs_dirty_data_max_max == 0) 7688 zfs_dirty_data_max_max = MIN(4ULL * 1024 * 1024 * 1024, 7689 allmem * zfs_dirty_data_max_max_percent / 100); 7690 #else 7691 if (zfs_dirty_data_max_max == 0) 7692 zfs_dirty_data_max_max = MIN(1ULL * 1024 * 1024 * 1024, 7693 allmem * zfs_dirty_data_max_max_percent / 100); 7694 #endif 7695 7696 if (zfs_dirty_data_max == 0) { 7697 zfs_dirty_data_max = allmem * 7698 zfs_dirty_data_max_percent / 100; 7699 zfs_dirty_data_max = MIN(zfs_dirty_data_max, 7700 zfs_dirty_data_max_max); 7701 } 7702 } 7703 7704 void 7705 arc_fini(void) 7706 { 7707 arc_prune_t *p; 7708 7709 #ifdef _KERNEL 7710 arc_lowmem_fini(); 7711 #endif /* _KERNEL */ 7712 7713 /* Use B_TRUE to ensure *all* buffers are evicted */ 7714 arc_flush(NULL, B_TRUE); 7715 7716 if (arc_ksp != NULL) { 7717 kstat_delete(arc_ksp); 7718 arc_ksp = NULL; 7719 } 7720 7721 taskq_wait(arc_prune_taskq); 7722 taskq_destroy(arc_prune_taskq); 7723 7724 mutex_enter(&arc_prune_mtx); 7725 while ((p = list_head(&arc_prune_list)) != NULL) { 7726 list_remove(&arc_prune_list, p); 7727 zfs_refcount_remove(&p->p_refcnt, &arc_prune_list); 7728 zfs_refcount_destroy(&p->p_refcnt); 7729 kmem_free(p, sizeof (*p)); 7730 } 7731 mutex_exit(&arc_prune_mtx); 7732 7733 list_destroy(&arc_prune_list); 7734 mutex_destroy(&arc_prune_mtx); 7735 7736 (void) zthr_cancel(arc_evict_zthr); 7737 (void) zthr_cancel(arc_reap_zthr); 7738 7739 mutex_destroy(&arc_evict_lock); 7740 list_destroy(&arc_evict_waiters); 7741 7742 /* 7743 * Free any buffers that were tagged for destruction. This needs 7744 * to occur before arc_state_fini() runs and destroys the aggsum 7745 * values which are updated when freeing scatter ABDs. 
7746 */ 7747 l2arc_do_free_on_write(); 7748 7749 /* 7750 * buf_fini() must precede arc_state_fini() because buf_fini() may 7751 * trigger the release of kmem magazines, which can call back to 7752 * arc_space_return() which accesses aggsums freed in arc_state_fini(). 7753 */ 7754 buf_fini(); 7755 arc_state_fini(); 7756 7757 arc_unregister_hotplug(); 7758 7759 /* 7760 * We destroy the zthrs after all the ARC state has been 7761 * torn down to avoid the case of them receiving any 7762 * wakeup() signals after they are destroyed. 7763 */ 7764 zthr_destroy(arc_evict_zthr); 7765 zthr_destroy(arc_reap_zthr); 7766 7767 ASSERT0(arc_loaned_bytes); 7768 } 7769 7770 /* 7771 * Level 2 ARC 7772 * 7773 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk. 7774 * It uses dedicated storage devices to hold cached data, which are populated 7775 * using large infrequent writes. The main role of this cache is to boost 7776 * the performance of random read workloads. The intended L2ARC devices 7777 * include short-stroked disks, solid state disks, and other media with 7778 * substantially faster read latency than disk. 7779 * 7780 * +-----------------------+ 7781 * | ARC | 7782 * +-----------------------+ 7783 * | ^ ^ 7784 * | | | 7785 * l2arc_feed_thread() arc_read() 7786 * | | | 7787 * | l2arc read | 7788 * V | | 7789 * +---------------+ | 7790 * | L2ARC | | 7791 * +---------------+ | 7792 * | ^ | 7793 * l2arc_write() | | 7794 * | | | 7795 * V | | 7796 * +-------+ +-------+ 7797 * | vdev | | vdev | 7798 * | cache | | cache | 7799 * +-------+ +-------+ 7800 * +=========+ .-----. 7801 * : L2ARC : |-_____-| 7802 * : devices : | Disks | 7803 * +=========+ `-_____-' 7804 * 7805 * Read requests are satisfied from the following sources, in order: 7806 * 7807 * 1) ARC 7808 * 2) vdev cache of L2ARC devices 7809 * 3) L2ARC devices 7810 * 4) vdev cache of disks 7811 * 5) disks 7812 * 7813 * Some L2ARC device types exhibit extremely slow write performance. 7814 * To accommodate this, there are some significant differences between 7815 * the L2ARC and traditional cache design: 7816 * 7817 * 1. There is no eviction path from the ARC to the L2ARC. Evictions from 7818 * the ARC behave as usual, freeing buffers and placing headers on ghost 7819 * lists. The ARC does not send buffers to the L2ARC during eviction as 7820 * this would add inflated write latencies for all ARC memory pressure. 7821 * 7822 * 2. The L2ARC attempts to cache data from the ARC before it is evicted. 7823 * It does this by periodically scanning buffers from the eviction-end of 7824 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are 7825 * not already there. It scans until a headroom of buffers is satisfied, 7826 * which itself is a buffer for ARC eviction. If a compressible buffer is 7827 * found during scanning and selected for writing to an L2ARC device, we 7828 * temporarily boost scanning headroom during the next scan cycle to make 7829 * sure we adapt to compression effects (which might significantly reduce 7830 * the data volume we write to L2ARC). The thread that does this is 7831 * l2arc_feed_thread(), illustrated below; example sizes are included to 7832 * provide a better sense of ratio than this diagram: 7833 * 7834 * head --> tail 7835 * +---------------------+----------+ 7836 * ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->.
# already on L2ARC 7837 * +---------------------+----------+ | o L2ARC eligible 7838 * ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->| : ARC buffer 7839 * +---------------------+----------+ | 7840 * 15.9 Gbytes ^ 32 Mbytes | 7841 * headroom | 7842 * l2arc_feed_thread() 7843 * | 7844 * l2arc write hand <--[oooo]--' 7845 * | 8 Mbyte 7846 * | write max 7847 * V 7848 * +==============================+ 7849 * L2ARC dev |####|#|###|###| |####| ... | 7850 * +==============================+ 7851 * 32 Gbytes 7852 * 7853 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of 7854 * evicted, then the L2ARC has cached a buffer much sooner than it probably 7855 * needed to, potentially wasting L2ARC device bandwidth and storage. It is 7856 * safe to say that this is an uncommon case, since buffers at the end of 7857 * the ARC lists have moved there due to inactivity. 7858 * 7859 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom, 7860 * then the L2ARC simply misses copying some buffers. This serves as a 7861 * pressure valve to prevent heavy read workloads from both stalling the ARC 7862 * with waits and clogging the L2ARC with writes. This also helps prevent 7863 * the potential for the L2ARC to churn if it attempts to cache content too 7864 * quickly, such as during backups of the entire pool. 7865 * 7866 * 5. After system boot and before the ARC has filled main memory, there are 7867 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru 7868 * lists can remain mostly static. Instead of searching from tail of these 7869 * lists as pictured, the l2arc_feed_thread() will search from the list heads 7870 * for eligible buffers, greatly increasing its chance of finding them. 7871 * 7872 * The L2ARC device write speed is also boosted during this time so that 7873 * the L2ARC warms up faster. Since there have been no ARC evictions yet, 7874 * there are no L2ARC reads, and no fear of degrading read performance 7875 * through increased writes. 7876 * 7877 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that 7878 * the vdev queue can aggregate them into larger and fewer writes. Each 7879 * device is written to in a rotor fashion, sweeping writes through 7880 * available space then repeating. 7881 * 7882 * 7. The L2ARC does not store dirty content. It never needs to flush 7883 * write buffers back to disk based storage. 7884 * 7885 * 8. If an ARC buffer is written (and dirtied) which also exists in the 7886 * L2ARC, the now stale L2ARC buffer is immediately dropped. 7887 * 7888 * The performance of the L2ARC can be tweaked by a number of tunables, which 7889 * may be necessary for different workloads: 7890 * 7891 * l2arc_write_max max write bytes per interval 7892 * l2arc_write_boost extra write bytes during device warmup 7893 * l2arc_noprefetch skip caching prefetched buffers 7894 * l2arc_headroom number of max device writes to precache 7895 * l2arc_headroom_boost when we find compressed buffers during ARC 7896 * scanning, we multiply headroom by this 7897 * percentage factor for the next scan cycle, 7898 * since more compressed buffers are likely to 7899 * be present 7900 * l2arc_feed_secs seconds between L2ARC writing 7901 * 7902 * Tunables may be removed or added as future performance improvements are 7903 * integrated, and also may become zpool properties. 
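* * On Linux these tunables are exposed as zfs module parameters; for * example, a larger feed size for a fast NVMe cache device might be * requested with something like * echo 67108864 > /sys/module/zfs/parameters/l2arc_write_max * (the exact interface and defaults vary by platform and release).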
7904 * 7905 * There are three key functions that control how the L2ARC warms up: 7906 * 7907 * l2arc_write_eligible() check if a buffer is eligible to cache 7908 * l2arc_write_size() calculate how much to write 7909 * l2arc_write_interval() calculate sleep delay between writes 7910 * 7911 * These three functions determine what to write, how much, and how quickly 7912 * to send writes. 7913 * 7914 * L2ARC persistence: 7915 * 7916 * When writing buffers to L2ARC, we periodically add some metadata to 7917 * make sure we can pick them up after reboot, thus dramatically reducing 7918 * the impact that any downtime has on the performance of storage systems 7919 * with large caches. 7920 * 7921 * The implementation works fairly simply by integrating the following two 7922 * modifications: 7923 * 7924 * *) When writing to the L2ARC, we occasionally write an "l2arc log block", 7925 * which is an additional piece of metadata which describes what's been 7926 * written. This allows us to rebuild the arc_buf_hdr_t structures of the 7927 * main ARC buffers. There are two linked lists of log blocks headed by 7928 * dh_start_lbps[2]. We alternate which chain we append to, so they are 7929 * time-wise and offset-wise interleaved, but that is an optimization rather 7930 * than for correctness. The log block also includes a pointer to the 7931 * previous block in its chain. 7932 * 7933 * *) We reserve SPA_MINBLOCKSIZE of space at the start of each L2ARC device 7934 * for our header bookkeeping purposes. This contains a device header, 7935 * which contains our top-level reference structures. We update it each 7936 * time we write a new log block, so that we're able to locate it in the 7937 * L2ARC device. If this write results in an inconsistent device header 7938 * (e.g. due to power failure), we detect this by verifying the header's 7939 * checksum and simply fail to reconstruct the L2ARC after reboot. 7940 * 7941 * Implementation diagram: 7942 * 7943 * +=== L2ARC device (not to scale) ======================================+ 7944 * | ___two newest log block pointers__.__________ | 7945 * | / \dh_start_lbps[1] | 7946 * | / \ \dh_start_lbps[0]| 7947 * |.___/__. V V | 7948 * ||L2 dev|....|lb |bufs |lb |bufs |lb |bufs |lb |bufs |lb |---(empty)---| 7949 * || hdr| ^ /^ /^ / / | 7950 * |+------+ ...--\-------/ \-----/--\------/ / | 7951 * | \--------------/ \--------------/ | 7952 * +======================================================================+ 7953 * 7954 * As can be seen in the diagram, rather than using a simple linked list, 7955 * we use a pair of linked lists with alternating elements. This is a 7956 * performance enhancement: we only discover the address of the next 7957 * log block once the current block has been completely read in, so a 7958 * single list would keep the device's I/O queue only one operation 7959 * deep, incurring a large amount of I/O round-trip latency. Having 7960 * two lists instead allows us to fetch two log blocks ahead of where 7961 * we are currently rebuilding 7962 * L2ARC buffers. 7963 * 7964 * On-device data structures: 7965 * 7966 * L2ARC device header: l2arc_dev_hdr_phys_t 7967 * L2ARC log block: l2arc_log_blk_phys_t 7968 * 7969 * L2ARC reconstruction: 7970 * 7971 * When writing data, we simply write in the standard rotary fashion, 7972 * evicting buffers as we go and simply writing new data over them (writing 7973 * a new log block every now and then).
This obviously means that once we 7974 * loop around the end of the device, we will start cutting into an already 7975 * committed log block (and its referenced data buffers), like so: 7976 * 7977 * current write head__ __old tail 7978 * \ / 7979 * V V 7980 * <--|bufs |lb |bufs |lb | |bufs |lb |bufs |lb |--> 7981 * ^ ^^^^^^^^^___________________________________ 7982 * | \ 7983 * <<nextwrite>> may overwrite this blk and/or its bufs --' 7984 * 7985 * When importing the pool, we detect this situation and use it to stop 7986 * our scanning process (see l2arc_rebuild). 7987 * 7988 * There is one significant caveat to consider when rebuilding ARC contents 7989 * from an L2ARC device: what about invalidated buffers? Given the above 7990 * construction, we cannot update blocks which we've already written to amend 7991 * them to remove buffers which were invalidated. Thus, during reconstruction, 7992 * we might be populating the cache with buffers for data that's not on the 7993 * main pool anymore, or may have been overwritten! 7994 * 7995 * As it turns out, this isn't a problem. Every arc_read request includes 7996 * both the DVA and, crucially, the birth TXG of the BP the caller is 7997 * looking for. So even if the cache were populated by completely rotten 7998 * blocks for data that had been long deleted and/or overwritten, we'll 7999 * never actually return bad data from the cache, since the DVA with the 8000 * birth TXG uniquely identify a block in space and time - once created, 8001 * a block is immutable on disk. The worst thing we have done is wasted 8002 * some time and memory at l2arc rebuild to reconstruct outdated ARC 8003 * entries that will get dropped from the l2arc as it is being updated 8004 * with new blocks. 8005 * 8006 * L2ARC buffers that have been evicted by l2arc_evict() ahead of the write 8007 * hand are not restored. This is done by saving the offset (in bytes) 8008 * l2arc_evict() has evicted to in the L2ARC device header and taking it 8009 * into account when restoring buffers. 8010 */ 8011 8012 static boolean_t 8013 l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr) 8014 { 8015 /* 8016 * A buffer is *not* eligible for the L2ARC if it: 8017 * 1. belongs to a different spa. 8018 * 2. is already cached on the L2ARC. 8019 * 3. has an I/O in progress (it may be an incomplete read). 8020 * 4. is flagged not eligible (zfs property). 8021 */ 8022 if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) || 8023 HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr)) 8024 return (B_FALSE); 8025 8026 return (B_TRUE); 8027 } 8028 8029 static uint64_t 8030 l2arc_write_size(l2arc_dev_t *dev) 8031 { 8032 uint64_t size, dev_size, tsize; 8033 8034 /* 8035 * Make sure our globals have meaningful values in case the user 8036 * altered them. 8037 */ 8038 size = l2arc_write_max; 8039 if (size == 0) { 8040 cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must " 8041 "be greater than zero, resetting it to the default (%d)", 8042 L2ARC_WRITE_SIZE); 8043 size = l2arc_write_max = L2ARC_WRITE_SIZE; 8044 } 8045 8046 if (arc_warm == B_FALSE) 8047 size += l2arc_write_boost; 8048 8049 /* 8050 * Make sure the write size does not exceed the size of the cache 8051 * device. This is important in l2arc_evict(), otherwise infinite 8052 * iteration can occur. 
8053 */ 8054 dev_size = dev->l2ad_end - dev->l2ad_start; 8055 tsize = size + l2arc_log_blk_overhead(size, dev); 8056 if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0) 8057 tsize += MAX(64 * 1024 * 1024, 8058 (tsize * l2arc_trim_ahead) / 100); 8059 8060 if (tsize >= dev_size) { 8061 cmn_err(CE_NOTE, "l2arc_write_max or l2arc_write_boost " 8062 "plus the overhead of log blocks (persistent L2ARC, " 8063 "%llu bytes) exceeds the size of the cache device " 8064 "(guid %llu), resetting them to the default (%d)", 8065 l2arc_log_blk_overhead(size, dev), 8066 dev->l2ad_vdev->vdev_guid, L2ARC_WRITE_SIZE); 8067 size = l2arc_write_max = l2arc_write_boost = L2ARC_WRITE_SIZE; 8068 8069 if (arc_warm == B_FALSE) 8070 size += l2arc_write_boost; 8071 } 8072 8073 return (size); 8074 8075 } 8076 8077 static clock_t 8078 l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote) 8079 { 8080 clock_t interval, next, now; 8081 8082 /* 8083 * If the ARC lists are busy, increase our write rate; if the 8084 * lists are stale, idle back. This is achieved by checking 8085 * how much we previously wrote - if it was more than half of 8086 * what we wanted, schedule the next write much sooner. 8087 */ 8088 if (l2arc_feed_again && wrote > (wanted / 2)) 8089 interval = (hz * l2arc_feed_min_ms) / 1000; 8090 else 8091 interval = hz * l2arc_feed_secs; 8092 8093 now = ddi_get_lbolt(); 8094 next = MAX(now, MIN(now + interval, began + interval)); 8095 8096 return (next); 8097 } 8098 8099 /* 8100 * Cycle through L2ARC devices. This is how L2ARC load balances. 8101 * If a device is returned, this also returns holding the spa config lock. 8102 */ 8103 static l2arc_dev_t * 8104 l2arc_dev_get_next(void) 8105 { 8106 l2arc_dev_t *first, *next = NULL; 8107 8108 /* 8109 * Lock out the removal of spas (spa_namespace_lock), then removal 8110 * of cache devices (l2arc_dev_mtx). Once a device has been selected, 8111 * both locks will be dropped and a spa config lock held instead. 8112 */ 8113 mutex_enter(&spa_namespace_lock); 8114 mutex_enter(&l2arc_dev_mtx); 8115 8116 /* if there are no vdevs, there is nothing to do */ 8117 if (l2arc_ndev == 0) 8118 goto out; 8119 8120 first = NULL; 8121 next = l2arc_dev_last; 8122 do { 8123 /* loop around the list looking for a non-faulted vdev */ 8124 if (next == NULL) { 8125 next = list_head(l2arc_dev_list); 8126 } else { 8127 next = list_next(l2arc_dev_list, next); 8128 if (next == NULL) 8129 next = list_head(l2arc_dev_list); 8130 } 8131 8132 /* if we have come back to the start, bail out */ 8133 if (first == NULL) 8134 first = next; 8135 else if (next == first) 8136 break; 8137 8138 } while (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || 8139 next->l2ad_trim_all); 8140 8141 /* if we were unable to find any usable vdevs, return NULL */ 8142 if (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || 8143 next->l2ad_trim_all) 8144 next = NULL; 8145 8146 l2arc_dev_last = next; 8147 8148 out: 8149 mutex_exit(&l2arc_dev_mtx); 8150 8151 /* 8152 * Grab the config lock to prevent the 'next' device from being 8153 * removed while we are writing to it. 8154 */ 8155 if (next != NULL) 8156 spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER); 8157 mutex_exit(&spa_namespace_lock); 8158 8159 return (next); 8160 } 8161 8162 /* 8163 * Free buffers that were tagged for destruction. 
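* These are data buffers that had to be freed while an L2ARC write * referencing them was still in flight; the frees are deferred onto the * l2arc_free_on_write list and carried out here once it is safe.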
8164 */ 8165 static void 8166 l2arc_do_free_on_write(void) 8167 { 8168 list_t *buflist; 8169 l2arc_data_free_t *df, *df_prev; 8170 8171 mutex_enter(&l2arc_free_on_write_mtx); 8172 buflist = l2arc_free_on_write; 8173 8174 for (df = list_tail(buflist); df; df = df_prev) { 8175 df_prev = list_prev(buflist, df); 8176 ASSERT3P(df->l2df_abd, !=, NULL); 8177 abd_free(df->l2df_abd); 8178 list_remove(buflist, df); 8179 kmem_free(df, sizeof (l2arc_data_free_t)); 8180 } 8181 8182 mutex_exit(&l2arc_free_on_write_mtx); 8183 } 8184 8185 /* 8186 * A write to a cache device has completed. Update all headers to allow 8187 * reads from these buffers to begin. 8188 */ 8189 static void 8190 l2arc_write_done(zio_t *zio) 8191 { 8192 l2arc_write_callback_t *cb; 8193 l2arc_lb_abd_buf_t *abd_buf; 8194 l2arc_lb_ptr_buf_t *lb_ptr_buf; 8195 l2arc_dev_t *dev; 8196 l2arc_dev_hdr_phys_t *l2dhdr; 8197 list_t *buflist; 8198 arc_buf_hdr_t *head, *hdr, *hdr_prev; 8199 kmutex_t *hash_lock; 8200 int64_t bytes_dropped = 0; 8201 8202 cb = zio->io_private; 8203 ASSERT3P(cb, !=, NULL); 8204 dev = cb->l2wcb_dev; 8205 l2dhdr = dev->l2ad_dev_hdr; 8206 ASSERT3P(dev, !=, NULL); 8207 head = cb->l2wcb_head; 8208 ASSERT3P(head, !=, NULL); 8209 buflist = &dev->l2ad_buflist; 8210 ASSERT3P(buflist, !=, NULL); 8211 DTRACE_PROBE2(l2arc__iodone, zio_t *, zio, 8212 l2arc_write_callback_t *, cb); 8213 8214 /* 8215 * All writes completed, or an error was hit. 8216 */ 8217 top: 8218 mutex_enter(&dev->l2ad_mtx); 8219 for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) { 8220 hdr_prev = list_prev(buflist, hdr); 8221 8222 hash_lock = HDR_LOCK(hdr); 8223 8224 /* 8225 * We cannot use mutex_enter or else we can deadlock 8226 * with l2arc_write_buffers (due to swapping the order 8227 * the hash lock and l2ad_mtx are taken). 8228 */ 8229 if (!mutex_tryenter(hash_lock)) { 8230 /* 8231 * Missed the hash lock. We must retry so we 8232 * don't leave the ARC_FLAG_L2_WRITING bit set. 8233 */ 8234 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry); 8235 8236 /* 8237 * We don't want to rescan the headers we've 8238 * already marked as having been written out, so 8239 * we reinsert the head node so we can pick up 8240 * where we left off. 8241 */ 8242 list_remove(buflist, head); 8243 list_insert_after(buflist, hdr, head); 8244 8245 mutex_exit(&dev->l2ad_mtx); 8246 8247 /* 8248 * We wait for the hash lock to become available 8249 * to try and prevent busy waiting, and increase 8250 * the chance we'll be able to acquire the lock 8251 * the next time around. 8252 */ 8253 mutex_enter(hash_lock); 8254 mutex_exit(hash_lock); 8255 goto top; 8256 } 8257 8258 /* 8259 * We could not have been moved into the arc_l2c_only 8260 * state while in-flight due to our ARC_FLAG_L2_WRITING 8261 * bit being set. Let's just ensure that's being enforced. 8262 */ 8263 ASSERT(HDR_HAS_L1HDR(hdr)); 8264 8265 /* 8266 * Skipped - drop L2ARC entry and mark the header as no 8267 * longer L2 eligible. 8268 */ 8269 if (zio->io_error != 0) { 8270 /* 8271 * Error - drop L2ARC entry. 8272 */ 8273 list_remove(buflist, hdr); 8274 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 8275 8276 uint64_t psize = HDR_GET_PSIZE(hdr); 8277 l2arc_hdr_arcstats_decrement(hdr); 8278 8279 bytes_dropped += 8280 vdev_psize_to_asize(dev->l2ad_vdev, psize); 8281 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, 8282 arc_hdr_size(hdr), hdr); 8283 } 8284 8285 /* 8286 * Allow ARC to begin reads and ghost list evictions to 8287 * this L2ARC entry.
8288 */ 8289 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_WRITING); 8290 8291 mutex_exit(hash_lock); 8292 } 8293 8294 /* 8295 * Free the allocated abd buffers for writing the log blocks. 8296 * If the zio failed, reclaim the allocated space and remove the 8297 * pointers to these log blocks from the log block pointer list 8298 * of the L2ARC device. 8299 */ 8300 while ((abd_buf = list_remove_tail(&cb->l2wcb_abd_list)) != NULL) { 8301 abd_free(abd_buf->abd); 8302 zio_buf_free(abd_buf, sizeof (*abd_buf)); 8303 if (zio->io_error != 0) { 8304 lb_ptr_buf = list_remove_head(&dev->l2ad_lbptr_list); 8305 /* 8306 * L2BLK_GET_PSIZE returns aligned size for log 8307 * blocks. 8308 */ 8309 uint64_t asize = 8310 L2BLK_GET_PSIZE((lb_ptr_buf->lb_ptr)->lbp_prop); 8311 bytes_dropped += asize; 8312 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 8313 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 8314 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 8315 lb_ptr_buf); 8316 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 8317 kmem_free(lb_ptr_buf->lb_ptr, 8318 sizeof (l2arc_log_blkptr_t)); 8319 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 8320 } 8321 } 8322 list_destroy(&cb->l2wcb_abd_list); 8323 8324 if (zio->io_error != 0) { 8325 ARCSTAT_BUMP(arcstat_l2_writes_error); 8326 8327 /* 8328 * Restore the lbps array in the header to its previous state. 8329 * If the list of log block pointers is empty, zero out the 8330 * log block pointers in the device header. 8331 */ 8332 lb_ptr_buf = list_head(&dev->l2ad_lbptr_list); 8333 for (int i = 0; i < 2; i++) { 8334 if (lb_ptr_buf == NULL) { 8335 /* 8336 * If the list is empty, zero out the device 8337 * header. Otherwise zero out the second log 8338 * block pointer in the header. 8339 */ 8340 if (i == 0) { 8341 bzero(l2dhdr, dev->l2ad_dev_hdr_asize); 8342 } else { 8343 bzero(&l2dhdr->dh_start_lbps[i], 8344 sizeof (l2arc_log_blkptr_t)); 8345 } 8346 break; 8347 } 8348 bcopy(lb_ptr_buf->lb_ptr, &l2dhdr->dh_start_lbps[i], 8349 sizeof (l2arc_log_blkptr_t)); 8350 lb_ptr_buf = list_next(&dev->l2ad_lbptr_list, 8351 lb_ptr_buf); 8352 } 8353 } 8354 8355 atomic_inc_64(&l2arc_writes_done); 8356 list_remove(buflist, head); 8357 ASSERT(!HDR_HAS_L1HDR(head)); 8358 kmem_cache_free(hdr_l2only_cache, head); 8359 mutex_exit(&dev->l2ad_mtx); 8360 8361 ASSERT(dev->l2ad_vdev != NULL); 8362 vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0); 8363 8364 l2arc_do_free_on_write(); 8365 8366 kmem_free(cb, sizeof (l2arc_write_callback_t)); 8367 } 8368 8369 static int 8370 l2arc_untransform(zio_t *zio, l2arc_read_callback_t *cb) 8371 { 8372 int ret; 8373 spa_t *spa = zio->io_spa; 8374 arc_buf_hdr_t *hdr = cb->l2rcb_hdr; 8375 blkptr_t *bp = zio->io_bp; 8376 uint8_t salt[ZIO_DATA_SALT_LEN]; 8377 uint8_t iv[ZIO_DATA_IV_LEN]; 8378 uint8_t mac[ZIO_DATA_MAC_LEN]; 8379 boolean_t no_crypt = B_FALSE; 8380 8381 /* 8382 * ZIL data is never written to the L2ARC, so we don't need 8383 * special handling for its unique MAC storage. 8384 */ 8385 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 8386 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 8387 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 8388 8389 /* 8390 * If the data was encrypted, decrypt it now. Note that 8391 * we must check the bp here and not the hdr, since the 8392 * hdr does not have its encryption parameters updated 8393 * until arc_read_done().
8394 */ 8395 if (BP_IS_ENCRYPTED(bp)) { 8396 abd_t *eabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 8397 B_TRUE); 8398 8399 zio_crypt_decode_params_bp(bp, salt, iv); 8400 zio_crypt_decode_mac_bp(bp, mac); 8401 8402 ret = spa_do_crypt_abd(B_FALSE, spa, &cb->l2rcb_zb, 8403 BP_GET_TYPE(bp), BP_GET_DEDUP(bp), BP_SHOULD_BYTESWAP(bp), 8404 salt, iv, mac, HDR_GET_PSIZE(hdr), eabd, 8405 hdr->b_l1hdr.b_pabd, &no_crypt); 8406 if (ret != 0) { 8407 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 8408 goto error; 8409 } 8410 8411 /* 8412 * If we actually performed decryption, replace b_pabd 8413 * with the decrypted data. Otherwise we can just throw 8414 * our decryption buffer away. 8415 */ 8416 if (!no_crypt) { 8417 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 8418 arc_hdr_size(hdr), hdr); 8419 hdr->b_l1hdr.b_pabd = eabd; 8420 zio->io_abd = eabd; 8421 } else { 8422 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 8423 } 8424 } 8425 8426 /* 8427 * If the L2ARC block was compressed, but ARC compression 8428 * is disabled we decompress the data into a new buffer and 8429 * replace the existing data. 8430 */ 8431 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 8432 !HDR_COMPRESSION_ENABLED(hdr)) { 8433 abd_t *cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 8434 B_TRUE); 8435 void *tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 8436 8437 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 8438 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 8439 HDR_GET_LSIZE(hdr), &hdr->b_complevel); 8440 if (ret != 0) { 8441 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 8442 arc_free_data_abd(hdr, cabd, arc_hdr_size(hdr), hdr); 8443 goto error; 8444 } 8445 8446 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 8447 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 8448 arc_hdr_size(hdr), hdr); 8449 hdr->b_l1hdr.b_pabd = cabd; 8450 zio->io_abd = cabd; 8451 zio->io_size = HDR_GET_LSIZE(hdr); 8452 } 8453 8454 return (0); 8455 8456 error: 8457 return (ret); 8458 } 8459 8460 8461 /* 8462 * A read to a cache device completed. Validate buffer contents before 8463 * handing over to the regular ARC routines. 8464 */ 8465 static void 8466 l2arc_read_done(zio_t *zio) 8467 { 8468 int tfm_error = 0; 8469 l2arc_read_callback_t *cb = zio->io_private; 8470 arc_buf_hdr_t *hdr; 8471 kmutex_t *hash_lock; 8472 boolean_t valid_cksum; 8473 boolean_t using_rdata = (BP_IS_ENCRYPTED(&cb->l2rcb_bp) && 8474 (cb->l2rcb_flags & ZIO_FLAG_RAW_ENCRYPT)); 8475 8476 ASSERT3P(zio->io_vd, !=, NULL); 8477 ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE); 8478 8479 spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd); 8480 8481 ASSERT3P(cb, !=, NULL); 8482 hdr = cb->l2rcb_hdr; 8483 ASSERT3P(hdr, !=, NULL); 8484 8485 hash_lock = HDR_LOCK(hdr); 8486 mutex_enter(hash_lock); 8487 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 8488 8489 /* 8490 * If the data was read into a temporary buffer, 8491 * move it and free the buffer. 
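 * (This is the case where the read issued to the cache device was
 * larger than arc_hdr_size(hdr), so the data landed in the bounce
 * buffer l2rcb_abd, presumably allocated when the L2ARC read was
 * issued; the ASSERT3U on the sizes below checks exactly that
 * relationship.)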
8492 */ 8493 if (cb->l2rcb_abd != NULL) { 8494 ASSERT3U(arc_hdr_size(hdr), <, zio->io_size); 8495 if (zio->io_error == 0) { 8496 if (using_rdata) { 8497 abd_copy(hdr->b_crypt_hdr.b_rabd, 8498 cb->l2rcb_abd, arc_hdr_size(hdr)); 8499 } else { 8500 abd_copy(hdr->b_l1hdr.b_pabd, 8501 cb->l2rcb_abd, arc_hdr_size(hdr)); 8502 } 8503 } 8504 8505 /* 8506 * The following must be done regardless of whether 8507 * there was an error: 8508 * - free the temporary buffer 8509 * - point zio to the real ARC buffer 8510 * - set zio size accordingly 8511 * These are required because zio is either re-used for 8512 * an I/O of the block in the case of the error 8513 * or the zio is passed to arc_read_done() and it 8514 * needs real data. 8515 */ 8516 abd_free(cb->l2rcb_abd); 8517 zio->io_size = zio->io_orig_size = arc_hdr_size(hdr); 8518 8519 if (using_rdata) { 8520 ASSERT(HDR_HAS_RABD(hdr)); 8521 zio->io_abd = zio->io_orig_abd = 8522 hdr->b_crypt_hdr.b_rabd; 8523 } else { 8524 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 8525 zio->io_abd = zio->io_orig_abd = hdr->b_l1hdr.b_pabd; 8526 } 8527 } 8528 8529 ASSERT3P(zio->io_abd, !=, NULL); 8530 8531 /* 8532 * Check this survived the L2ARC journey. 8533 */ 8534 ASSERT(zio->io_abd == hdr->b_l1hdr.b_pabd || 8535 (HDR_HAS_RABD(hdr) && zio->io_abd == hdr->b_crypt_hdr.b_rabd)); 8536 zio->io_bp_copy = cb->l2rcb_bp; /* XXX fix in L2ARC 2.0 */ 8537 zio->io_bp = &zio->io_bp_copy; /* XXX fix in L2ARC 2.0 */ 8538 zio->io_prop.zp_complevel = hdr->b_complevel; 8539 8540 valid_cksum = arc_cksum_is_equal(hdr, zio); 8541 8542 /* 8543 * b_rabd will always match the data as it exists on disk if it is 8544 * being used. Therefore if we are reading into b_rabd we do not 8545 * attempt to untransform the data. 8546 */ 8547 if (valid_cksum && !using_rdata) 8548 tfm_error = l2arc_untransform(zio, cb); 8549 8550 if (valid_cksum && tfm_error == 0 && zio->io_error == 0 && 8551 !HDR_L2_EVICTED(hdr)) { 8552 mutex_exit(hash_lock); 8553 zio->io_private = hdr; 8554 arc_read_done(zio); 8555 } else { 8556 /* 8557 * Buffer didn't survive caching. Increment stats and 8558 * reissue to the original storage device. 8559 */ 8560 if (zio->io_error != 0) { 8561 ARCSTAT_BUMP(arcstat_l2_io_error); 8562 } else { 8563 zio->io_error = SET_ERROR(EIO); 8564 } 8565 if (!valid_cksum || tfm_error != 0) 8566 ARCSTAT_BUMP(arcstat_l2_cksum_bad); 8567 8568 /* 8569 * If there's no waiter, issue an async i/o to the primary 8570 * storage now. If there *is* a waiter, the caller must 8571 * issue the i/o in a context where it's OK to block. 8572 */ 8573 if (zio->io_waiter == NULL) { 8574 zio_t *pio = zio_unique_parent(zio); 8575 void *abd = (using_rdata) ? 8576 hdr->b_crypt_hdr.b_rabd : hdr->b_l1hdr.b_pabd; 8577 8578 ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL); 8579 8580 zio = zio_read(pio, zio->io_spa, zio->io_bp, 8581 abd, zio->io_size, arc_read_done, 8582 hdr, zio->io_priority, cb->l2rcb_flags, 8583 &cb->l2rcb_zb); 8584 8585 /* 8586 * Original ZIO will be freed, so we need to update 8587 * ARC header with the new ZIO pointer to be used 8588 * by zio_change_priority() in arc_read(). 8589 */ 8590 for (struct arc_callback *acb = hdr->b_l1hdr.b_acb; 8591 acb != NULL; acb = acb->acb_next) 8592 acb->acb_zio_head = zio; 8593 8594 mutex_exit(hash_lock); 8595 zio_nowait(zio); 8596 } else { 8597 mutex_exit(hash_lock); 8598 } 8599 } 8600 8601 kmem_free(cb, sizeof (l2arc_read_callback_t)); 8602 } 8603 8604 /* 8605 * This is the list priority from which the L2ARC will search for pages to 8606 * cache. 
This is used within loops (0..3) to cycle through lists in the 8607 * desired order. This order can have a significant effect on cache 8608 * performance. 8609 * 8610 * Currently the metadata lists are hit first, MFU then MRU, followed by 8611 * the data lists. This function returns a locked list, and also returns 8612 * the lock pointer. 8613 */ 8614 static multilist_sublist_t * 8615 l2arc_sublist_lock(int list_num) 8616 { 8617 multilist_t *ml = NULL; 8618 unsigned int idx; 8619 8620 ASSERT(list_num >= 0 && list_num < L2ARC_FEED_TYPES); 8621 8622 switch (list_num) { 8623 case 0: 8624 ml = arc_mfu->arcs_list[ARC_BUFC_METADATA]; 8625 break; 8626 case 1: 8627 ml = arc_mru->arcs_list[ARC_BUFC_METADATA]; 8628 break; 8629 case 2: 8630 ml = arc_mfu->arcs_list[ARC_BUFC_DATA]; 8631 break; 8632 case 3: 8633 ml = arc_mru->arcs_list[ARC_BUFC_DATA]; 8634 break; 8635 default: 8636 return (NULL); 8637 } 8638 8639 /* 8640 * Return a randomly-selected sublist. This is acceptable 8641 * because the caller feeds only a little bit of data for each 8642 * call (8MB). Subsequent calls will result in different 8643 * sublists being selected. 8644 */ 8645 idx = multilist_get_random_index(ml); 8646 return (multilist_sublist_lock(ml, idx)); 8647 } 8648 8649 /* 8650 * Calculates the maximum overhead of L2ARC metadata log blocks for a given 8651 * L2ARC write size. l2arc_evict and l2arc_write_size need to include this 8652 * overhead in processing to make sure there is enough headroom available 8653 * when writing buffers. 8654 */ 8655 static inline uint64_t 8656 l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev) 8657 { 8658 if (dev->l2ad_log_entries == 0) { 8659 return (0); 8660 } else { 8661 uint64_t log_entries = write_sz >> SPA_MINBLOCKSHIFT; 8662 8663 uint64_t log_blocks = (log_entries + 8664 dev->l2ad_log_entries - 1) / 8665 dev->l2ad_log_entries; 8666 8667 return (vdev_psize_to_asize(dev->l2ad_vdev, 8668 sizeof (l2arc_log_blk_phys_t)) * log_blocks); 8669 } 8670 } 8671 8672 /* 8673 * Evict buffers from the device write hand to the distance specified in 8674 * bytes. This distance may span populated buffers, it may span nothing. 8675 * This is clearing a region on the L2ARC device ready for writing. 8676 * If the 'all' boolean is set, every buffer is evicted. 8677 */ 8678 static void 8679 l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all) 8680 { 8681 list_t *buflist; 8682 arc_buf_hdr_t *hdr, *hdr_prev; 8683 kmutex_t *hash_lock; 8684 uint64_t taddr; 8685 l2arc_lb_ptr_buf_t *lb_ptr_buf, *lb_ptr_buf_prev; 8686 vdev_t *vd = dev->l2ad_vdev; 8687 boolean_t rerun; 8688 8689 buflist = &dev->l2ad_buflist; 8690 8691 /* 8692 * We need to add in the worst case scenario of log block overhead. 8693 */ 8694 distance += l2arc_log_blk_overhead(distance, dev); 8695 if (vd->vdev_has_trim && l2arc_trim_ahead > 0) { 8696 /* 8697 * Trim ahead of the write size 64MB or (l2arc_trim_ahead/100) 8698 * times the write size, whichever is greater. 8699 */ 8700 distance += MAX(64 * 1024 * 1024, 8701 (distance * l2arc_trim_ahead) / 100); 8702 } 8703 8704 top: 8705 rerun = B_FALSE; 8706 if (dev->l2ad_hand >= (dev->l2ad_end - distance)) { 8707 /* 8708 * When there is no space to accommodate upcoming writes, 8709 * evict to the end. Then bump the write and evict hands 8710 * to the start and iterate. This iteration does not 8711 * happen indefinitely as we make sure in 8712 * l2arc_write_size() that when the write hand is reset, 8713 * the write size does not exceed the end of the device. 
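 * As an illustrative example (made-up numbers): with l2ad_start = 4M,
 * l2ad_end = 1024M, l2ad_hand = 1016M and an effective distance of
 * 16M, the hand would overrun the end of the device, so we first
 * evict up to taddr = l2ad_end, then (further down) reset l2ad_hand
 * and l2ad_evict to l2ad_start and run the eviction once more from
 * the front of the device.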
8714 */ 8715 rerun = B_TRUE; 8716 taddr = dev->l2ad_end; 8717 } else { 8718 taddr = dev->l2ad_hand + distance; 8719 } 8720 DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist, 8721 uint64_t, taddr, boolean_t, all); 8722 8723 if (!all) { 8724 /* 8725 * This check has to be placed after deciding whether to 8726 * iterate (rerun). 8727 */ 8728 if (dev->l2ad_first) { 8729 /* 8730 * This is the first sweep through the device. There is 8731 * nothing to evict. We have already trimmed the 8732 * whole device. 8733 */ 8734 goto out; 8735 } else { 8736 /* 8737 * Trim the space to be evicted. 8738 */ 8739 if (vd->vdev_has_trim && dev->l2ad_evict < taddr && 8740 l2arc_trim_ahead > 0) { 8741 /* 8742 * We have to drop the spa_config lock because 8743 * vdev_trim_range() will acquire it. 8744 * l2ad_evict already accounts for the label 8745 * size. To prevent vdev_trim_ranges() from 8746 * adding it again, we subtract it from 8747 * l2ad_evict. 8748 */ 8749 spa_config_exit(dev->l2ad_spa, SCL_L2ARC, dev); 8750 vdev_trim_simple(vd, 8751 dev->l2ad_evict - VDEV_LABEL_START_SIZE, 8752 taddr - dev->l2ad_evict); 8753 spa_config_enter(dev->l2ad_spa, SCL_L2ARC, dev, 8754 RW_READER); 8755 } 8756 8757 /* 8758 * When rebuilding L2ARC we retrieve the evict hand 8759 * from the header of the device. Of note, l2arc_evict() 8760 * does not actually delete buffers from the cache 8761 * device, but trimming may do so depending on the 8762 * hardware implementation. Thus keeping track of the 8763 * evict hand is useful. 8764 */ 8765 dev->l2ad_evict = MAX(dev->l2ad_evict, taddr); 8766 } 8767 } 8768 8769 retry: 8770 mutex_enter(&dev->l2ad_mtx); 8771 /* 8772 * We have to account for evicted log blocks. Run vdev_space_update() 8773 * on log blocks whose offset (in bytes) is before the evicted offset 8774 * (in bytes) by searching in the list of pointers to log blocks 8775 * present in the L2ARC device. 8776 */ 8777 for (lb_ptr_buf = list_tail(&dev->l2ad_lbptr_list); lb_ptr_buf; 8778 lb_ptr_buf = lb_ptr_buf_prev) { 8779 8780 lb_ptr_buf_prev = list_prev(&dev->l2ad_lbptr_list, lb_ptr_buf); 8781 8782 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 8783 uint64_t asize = L2BLK_GET_PSIZE( 8784 (lb_ptr_buf->lb_ptr)->lbp_prop); 8785 8786 /* 8787 * We don't worry about log blocks left behind (i.e. 8788 * lbp_payload_start < l2ad_hand) because l2arc_write_buffers() 8789 * will never write more than l2arc_evict() evicts. 8790 */ 8791 if (!all && l2arc_log_blkptr_valid(dev, lb_ptr_buf->lb_ptr)) { 8792 break; 8793 } else { 8794 vdev_space_update(vd, -asize, 0, 0); 8795 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 8796 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 8797 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 8798 lb_ptr_buf); 8799 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 8800 list_remove(&dev->l2ad_lbptr_list, lb_ptr_buf); 8801 kmem_free(lb_ptr_buf->lb_ptr, 8802 sizeof (l2arc_log_blkptr_t)); 8803 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 8804 } 8805 } 8806 8807 for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) { 8808 hdr_prev = list_prev(buflist, hdr); 8809 8810 ASSERT(!HDR_EMPTY(hdr)); 8811 hash_lock = HDR_LOCK(hdr); 8812 8813 /* 8814 * We cannot use mutex_enter or else we can deadlock 8815 * with l2arc_write_buffers (due to swapping the order 8816 * the hash lock and l2ad_mtx are taken). 8817 */ 8818 if (!mutex_tryenter(hash_lock)) { 8819 /* 8820 * Missed the hash lock. Retry.
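 * As in l2arc_write_done() above, we drop l2ad_mtx and briefly
 * enter/exit the missed hash lock below so that we wait for it to
 * become available instead of busy-spinning, and then rescan the
 * buflist from the tail.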
8821 */ 8822 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry); 8823 mutex_exit(&dev->l2ad_mtx); 8824 mutex_enter(hash_lock); 8825 mutex_exit(hash_lock); 8826 goto retry; 8827 } 8828 8829 /* 8830 * A header can't be on this list if it doesn't have L2 header. 8831 */ 8832 ASSERT(HDR_HAS_L2HDR(hdr)); 8833 8834 /* Ensure this header has finished being written. */ 8835 ASSERT(!HDR_L2_WRITING(hdr)); 8836 ASSERT(!HDR_L2_WRITE_HEAD(hdr)); 8837 8838 if (!all && (hdr->b_l2hdr.b_daddr >= dev->l2ad_evict || 8839 hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) { 8840 /* 8841 * We've evicted to the target address, 8842 * or the end of the device. 8843 */ 8844 mutex_exit(hash_lock); 8845 break; 8846 } 8847 8848 if (!HDR_HAS_L1HDR(hdr)) { 8849 ASSERT(!HDR_L2_READING(hdr)); 8850 /* 8851 * This doesn't exist in the ARC. Destroy. 8852 * arc_hdr_destroy() will call list_remove() 8853 * and decrement arcstat_l2_lsize. 8854 */ 8855 arc_change_state(arc_anon, hdr, hash_lock); 8856 arc_hdr_destroy(hdr); 8857 } else { 8858 ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only); 8859 ARCSTAT_BUMP(arcstat_l2_evict_l1cached); 8860 /* 8861 * Invalidate issued or about to be issued 8862 * reads, since we may be about to write 8863 * over this location. 8864 */ 8865 if (HDR_L2_READING(hdr)) { 8866 ARCSTAT_BUMP(arcstat_l2_evict_reading); 8867 arc_hdr_set_flags(hdr, ARC_FLAG_L2_EVICTED); 8868 } 8869 8870 arc_hdr_l2hdr_destroy(hdr); 8871 } 8872 mutex_exit(hash_lock); 8873 } 8874 mutex_exit(&dev->l2ad_mtx); 8875 8876 out: 8877 /* 8878 * We need to check if we evict all buffers, otherwise we may iterate 8879 * unnecessarily. 8880 */ 8881 if (!all && rerun) { 8882 /* 8883 * Bump device hand to the device start if it is approaching the 8884 * end. l2arc_evict() has already evicted ahead for this case. 8885 */ 8886 dev->l2ad_hand = dev->l2ad_start; 8887 dev->l2ad_evict = dev->l2ad_start; 8888 dev->l2ad_first = B_FALSE; 8889 goto top; 8890 } 8891 8892 if (!all) { 8893 /* 8894 * In case of cache device removal (all) the following 8895 * assertions may be violated without functional consequences 8896 * as the device is about to be removed. 8897 */ 8898 ASSERT3U(dev->l2ad_hand + distance, <, dev->l2ad_end); 8899 if (!dev->l2ad_first) 8900 ASSERT3U(dev->l2ad_hand, <, dev->l2ad_evict); 8901 } 8902 } 8903 8904 /* 8905 * Handle any abd transforms that might be required for writing to the L2ARC. 8906 * If successful, this function will always return an abd with the data 8907 * transformed as it is on disk in a new abd of asize bytes. 8908 */ 8909 static int 8910 l2arc_apply_transforms(spa_t *spa, arc_buf_hdr_t *hdr, uint64_t asize, 8911 abd_t **abd_out) 8912 { 8913 int ret; 8914 void *tmp = NULL; 8915 abd_t *cabd = NULL, *eabd = NULL, *to_write = hdr->b_l1hdr.b_pabd; 8916 enum zio_compress compress = HDR_GET_COMPRESS(hdr); 8917 uint64_t psize = HDR_GET_PSIZE(hdr); 8918 uint64_t size = arc_hdr_size(hdr); 8919 boolean_t ismd = HDR_ISTYPE_METADATA(hdr); 8920 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 8921 dsl_crypto_key_t *dck = NULL; 8922 uint8_t mac[ZIO_DATA_MAC_LEN] = { 0 }; 8923 boolean_t no_crypt = B_FALSE; 8924 8925 ASSERT((HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 8926 !HDR_COMPRESSION_ENABLED(hdr)) || 8927 HDR_ENCRYPTED(hdr) || HDR_SHARED_DATA(hdr) || psize != asize); 8928 ASSERT3U(psize, <=, asize); 8929 8930 /* 8931 * If this data simply needs its own buffer, we simply allocate it 8932 * and copy the data. This may be done to eliminate a dependency on a 8933 * shared buffer or to reallocate the buffer to match asize. 
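 * Roughly, the cases handled below are: (1) a raw encrypted copy
 * (b_rabd) that only needs to be padded out to asize; (2) data that
 * is already in its on-disk form and only needs its own buffer,
 * padded to asize; (3) data that the header says is stored compressed
 * on disk but is held uncompressed in ARC, so it is compressed here
 * first; and (4) data that must be re-encrypted with the dataset key
 * looked up through spa_keystore_lookup_key().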
8934 */ 8935 if (HDR_HAS_RABD(hdr) && asize != psize) { 8936 ASSERT3U(asize, >=, psize); 8937 to_write = abd_alloc_for_io(asize, ismd); 8938 abd_copy(to_write, hdr->b_crypt_hdr.b_rabd, psize); 8939 if (psize != asize) 8940 abd_zero_off(to_write, psize, asize - psize); 8941 goto out; 8942 } 8943 8944 if ((compress == ZIO_COMPRESS_OFF || HDR_COMPRESSION_ENABLED(hdr)) && 8945 !HDR_ENCRYPTED(hdr)) { 8946 ASSERT3U(size, ==, psize); 8947 to_write = abd_alloc_for_io(asize, ismd); 8948 abd_copy(to_write, hdr->b_l1hdr.b_pabd, size); 8949 if (size != asize) 8950 abd_zero_off(to_write, size, asize - size); 8951 goto out; 8952 } 8953 8954 if (compress != ZIO_COMPRESS_OFF && !HDR_COMPRESSION_ENABLED(hdr)) { 8955 cabd = abd_alloc_for_io(asize, ismd); 8956 tmp = abd_borrow_buf(cabd, asize); 8957 8958 psize = zio_compress_data(compress, to_write, tmp, size, 8959 hdr->b_complevel); 8960 8961 if (psize >= size) { 8962 abd_return_buf(cabd, tmp, asize); 8963 HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_OFF); 8964 to_write = cabd; 8965 abd_copy(to_write, hdr->b_l1hdr.b_pabd, size); 8966 if (size != asize) 8967 abd_zero_off(to_write, size, asize - size); 8968 goto encrypt; 8969 } 8970 ASSERT3U(psize, <=, HDR_GET_PSIZE(hdr)); 8971 if (psize < asize) 8972 bzero((char *)tmp + psize, asize - psize); 8973 psize = HDR_GET_PSIZE(hdr); 8974 abd_return_buf_copy(cabd, tmp, asize); 8975 to_write = cabd; 8976 } 8977 8978 encrypt: 8979 if (HDR_ENCRYPTED(hdr)) { 8980 eabd = abd_alloc_for_io(asize, ismd); 8981 8982 /* 8983 * If the dataset was disowned before the buffer 8984 * made it to this point, the key to re-encrypt 8985 * it won't be available. In this case we simply 8986 * won't write the buffer to the L2ARC. 8987 */ 8988 ret = spa_keystore_lookup_key(spa, hdr->b_crypt_hdr.b_dsobj, 8989 FTAG, &dck); 8990 if (ret != 0) 8991 goto error; 8992 8993 ret = zio_do_crypt_abd(B_TRUE, &dck->dck_key, 8994 hdr->b_crypt_hdr.b_ot, bswap, hdr->b_crypt_hdr.b_salt, 8995 hdr->b_crypt_hdr.b_iv, mac, psize, to_write, eabd, 8996 &no_crypt); 8997 if (ret != 0) 8998 goto error; 8999 9000 if (no_crypt) 9001 abd_copy(eabd, to_write, psize); 9002 9003 if (psize != asize) 9004 abd_zero_off(eabd, psize, asize - psize); 9005 9006 /* assert that the MAC we got here matches the one we saved */ 9007 ASSERT0(bcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN)); 9008 spa_keystore_dsl_key_rele(spa, dck, FTAG); 9009 9010 if (to_write == cabd) 9011 abd_free(cabd); 9012 9013 to_write = eabd; 9014 } 9015 9016 out: 9017 ASSERT3P(to_write, !=, hdr->b_l1hdr.b_pabd); 9018 *abd_out = to_write; 9019 return (0); 9020 9021 error: 9022 if (dck != NULL) 9023 spa_keystore_dsl_key_rele(spa, dck, FTAG); 9024 if (cabd != NULL) 9025 abd_free(cabd); 9026 if (eabd != NULL) 9027 abd_free(eabd); 9028 9029 *abd_out = NULL; 9030 return (ret); 9031 } 9032 9033 static void 9034 l2arc_blk_fetch_done(zio_t *zio) 9035 { 9036 l2arc_read_callback_t *cb; 9037 9038 cb = zio->io_private; 9039 if (cb->l2rcb_abd != NULL) 9040 abd_put(cb->l2rcb_abd); 9041 kmem_free(cb, sizeof (l2arc_read_callback_t)); 9042 } 9043 9044 /* 9045 * Find and write ARC buffers to the L2ARC device. 9046 * 9047 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid 9048 * for reading until they have completed writing. 9049 * The headroom_boost is an in-out parameter used to maintain headroom boost 9050 * state between calls to this function. 
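 * (Concretely, the headroom scanned per sublist below is
 * target_sz * l2arc_headroom, further scaled by
 * l2arc_headroom_boost / 100 when compressed ARC is enabled; e.g.
 * with target_sz = 8M, l2arc_headroom = 2 and l2arc_headroom_boost =
 * 200, illustrative values, up to 16M of list tail is examined per
 * sublist, or 32M when compressed ARC is enabled.)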
9051 * 9052 * Returns the number of bytes actually written (which may be smaller than 9053 * the delta by which the device hand has changed due to alignment and the 9054 * writing of log blocks). 9055 */ 9056 static uint64_t 9057 l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz) 9058 { 9059 arc_buf_hdr_t *hdr, *hdr_prev, *head; 9060 uint64_t write_asize, write_psize, write_lsize, headroom; 9061 boolean_t full; 9062 l2arc_write_callback_t *cb = NULL; 9063 zio_t *pio, *wzio; 9064 uint64_t guid = spa_load_guid(spa); 9065 9066 ASSERT3P(dev->l2ad_vdev, !=, NULL); 9067 9068 pio = NULL; 9069 write_lsize = write_asize = write_psize = 0; 9070 full = B_FALSE; 9071 head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE); 9072 arc_hdr_set_flags(head, ARC_FLAG_L2_WRITE_HEAD | ARC_FLAG_HAS_L2HDR); 9073 9074 /* 9075 * Copy buffers for L2ARC writing. 9076 */ 9077 for (int try = 0; try < L2ARC_FEED_TYPES; try++) { 9078 /* 9079 * If try == 1 or 3, we cache MRU metadata and data 9080 * respectively. 9081 */ 9082 if (l2arc_mfuonly) { 9083 if (try == 1 || try == 3) 9084 continue; 9085 } 9086 9087 multilist_sublist_t *mls = l2arc_sublist_lock(try); 9088 uint64_t passed_sz = 0; 9089 9090 VERIFY3P(mls, !=, NULL); 9091 9092 /* 9093 * L2ARC fast warmup. 9094 * 9095 * Until the ARC is warm and starts to evict, read from the 9096 * head of the ARC lists rather than the tail. 9097 */ 9098 if (arc_warm == B_FALSE) 9099 hdr = multilist_sublist_head(mls); 9100 else 9101 hdr = multilist_sublist_tail(mls); 9102 9103 headroom = target_sz * l2arc_headroom; 9104 if (zfs_compressed_arc_enabled) 9105 headroom = (headroom * l2arc_headroom_boost) / 100; 9106 9107 for (; hdr; hdr = hdr_prev) { 9108 kmutex_t *hash_lock; 9109 abd_t *to_write = NULL; 9110 9111 if (arc_warm == B_FALSE) 9112 hdr_prev = multilist_sublist_next(mls, hdr); 9113 else 9114 hdr_prev = multilist_sublist_prev(mls, hdr); 9115 9116 hash_lock = HDR_LOCK(hdr); 9117 if (!mutex_tryenter(hash_lock)) { 9118 /* 9119 * Skip this buffer rather than waiting. 9120 */ 9121 continue; 9122 } 9123 9124 passed_sz += HDR_GET_LSIZE(hdr); 9125 if (l2arc_headroom != 0 && passed_sz > headroom) { 9126 /* 9127 * Searched too far. 9128 */ 9129 mutex_exit(hash_lock); 9130 break; 9131 } 9132 9133 if (!l2arc_write_eligible(guid, hdr)) { 9134 mutex_exit(hash_lock); 9135 continue; 9136 } 9137 9138 /* 9139 * We rely on the L1 portion of the header below, so 9140 * it's invalid for this header to have been evicted out 9141 * of the ghost cache, prior to being written out. The 9142 * ARC_FLAG_L2_WRITING bit ensures this won't happen. 9143 */ 9144 ASSERT(HDR_HAS_L1HDR(hdr)); 9145 9146 ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); 9147 ASSERT3U(arc_hdr_size(hdr), >, 0); 9148 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 9149 HDR_HAS_RABD(hdr)); 9150 uint64_t psize = HDR_GET_PSIZE(hdr); 9151 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, 9152 psize); 9153 9154 if ((write_asize + asize) > target_sz) { 9155 full = B_TRUE; 9156 mutex_exit(hash_lock); 9157 break; 9158 } 9159 9160 /* 9161 * We rely on the L1 portion of the header below, so 9162 * it's invalid for this header to have been evicted out 9163 * of the ghost cache, prior to being written out. The 9164 * ARC_FLAG_L2_WRITING bit ensures this won't happen. 
9165 */ 9166 arc_hdr_set_flags(hdr, ARC_FLAG_L2_WRITING); 9167 ASSERT(HDR_HAS_L1HDR(hdr)); 9168 9169 ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); 9170 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 9171 HDR_HAS_RABD(hdr)); 9172 ASSERT3U(arc_hdr_size(hdr), >, 0); 9173 9174 /* 9175 * If this header has b_rabd, we can use this since it 9176 * must always match the data exactly as it exists on 9177 * disk. Otherwise, the L2ARC can normally use the 9178 * hdr's data, but if we're sharing data between the 9179 * hdr and one of its bufs, L2ARC needs its own copy of 9180 * the data so that the ZIO below can't race with the 9181 * buf consumer. To ensure that this copy will be 9182 * available for the lifetime of the ZIO and be cleaned 9183 * up afterwards, we add it to the l2arc_free_on_write 9184 * queue. If we need to apply any transforms to the 9185 * data (compression, encryption) we will also need the 9186 * extra buffer. 9187 */ 9188 if (HDR_HAS_RABD(hdr) && psize == asize) { 9189 to_write = hdr->b_crypt_hdr.b_rabd; 9190 } else if ((HDR_COMPRESSION_ENABLED(hdr) || 9191 HDR_GET_COMPRESS(hdr) == ZIO_COMPRESS_OFF) && 9192 !HDR_ENCRYPTED(hdr) && !HDR_SHARED_DATA(hdr) && 9193 psize == asize) { 9194 to_write = hdr->b_l1hdr.b_pabd; 9195 } else { 9196 int ret; 9197 arc_buf_contents_t type = arc_buf_type(hdr); 9198 9199 ret = l2arc_apply_transforms(spa, hdr, asize, 9200 &to_write); 9201 if (ret != 0) { 9202 arc_hdr_clear_flags(hdr, 9203 ARC_FLAG_L2_WRITING); 9204 mutex_exit(hash_lock); 9205 continue; 9206 } 9207 9208 l2arc_free_abd_on_write(to_write, asize, type); 9209 } 9210 9211 if (pio == NULL) { 9212 /* 9213 * Insert a dummy header on the buflist so 9214 * l2arc_write_done() can find where the 9215 * write buffers begin without searching. 9216 */ 9217 mutex_enter(&dev->l2ad_mtx); 9218 list_insert_head(&dev->l2ad_buflist, head); 9219 mutex_exit(&dev->l2ad_mtx); 9220 9221 cb = kmem_alloc( 9222 sizeof (l2arc_write_callback_t), KM_SLEEP); 9223 cb->l2wcb_dev = dev; 9224 cb->l2wcb_head = head; 9225 /* 9226 * Create a list to save allocated abd buffers 9227 * for l2arc_log_blk_commit(). 9228 */ 9229 list_create(&cb->l2wcb_abd_list, 9230 sizeof (l2arc_lb_abd_buf_t), 9231 offsetof(l2arc_lb_abd_buf_t, node)); 9232 pio = zio_root(spa, l2arc_write_done, cb, 9233 ZIO_FLAG_CANFAIL); 9234 } 9235 9236 hdr->b_l2hdr.b_dev = dev; 9237 hdr->b_l2hdr.b_hits = 0; 9238 9239 hdr->b_l2hdr.b_daddr = dev->l2ad_hand; 9240 hdr->b_l2hdr.b_arcs_state = 9241 hdr->b_l1hdr.b_state->arcs_state; 9242 arc_hdr_set_flags(hdr, ARC_FLAG_HAS_L2HDR); 9243 9244 mutex_enter(&dev->l2ad_mtx); 9245 list_insert_head(&dev->l2ad_buflist, hdr); 9246 mutex_exit(&dev->l2ad_mtx); 9247 9248 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 9249 arc_hdr_size(hdr), hdr); 9250 9251 wzio = zio_write_phys(pio, dev->l2ad_vdev, 9252 hdr->b_l2hdr.b_daddr, asize, to_write, 9253 ZIO_CHECKSUM_OFF, NULL, hdr, 9254 ZIO_PRIORITY_ASYNC_WRITE, 9255 ZIO_FLAG_CANFAIL, B_FALSE); 9256 9257 write_lsize += HDR_GET_LSIZE(hdr); 9258 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, 9259 zio_t *, wzio); 9260 9261 write_psize += psize; 9262 write_asize += asize; 9263 dev->l2ad_hand += asize; 9264 l2arc_hdr_arcstats_increment(hdr); 9265 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 9266 9267 mutex_exit(hash_lock); 9268 9269 /* 9270 * Append buf info to current log and commit if full. 9271 * arcstat_l2_{size,asize} kstats are updated 9272 * internally. 
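 * l2arc_log_blk_insert() is expected to signal a full block once it
 * has stored dev->l2ad_log_entries entries (see the VERIFY in
 * l2arc_log_blk_commit() below); the commit then compresses the log
 * block, issues its write under the same parent zio and links it to
 * the previous log block through the device header.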
9273 */ 9274 if (l2arc_log_blk_insert(dev, hdr)) 9275 l2arc_log_blk_commit(dev, pio, cb); 9276 9277 zio_nowait(wzio); 9278 } 9279 9280 multilist_sublist_unlock(mls); 9281 9282 if (full == B_TRUE) 9283 break; 9284 } 9285 9286 /* No buffers selected for writing? */ 9287 if (pio == NULL) { 9288 ASSERT0(write_lsize); 9289 ASSERT(!HDR_HAS_L1HDR(head)); 9290 kmem_cache_free(hdr_l2only_cache, head); 9291 9292 /* 9293 * Although we did not write any buffers l2ad_evict may 9294 * have advanced. 9295 */ 9296 l2arc_dev_hdr_update(dev); 9297 9298 return (0); 9299 } 9300 9301 if (!dev->l2ad_first) 9302 ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict); 9303 9304 ASSERT3U(write_asize, <=, target_sz); 9305 ARCSTAT_BUMP(arcstat_l2_writes_sent); 9306 ARCSTAT_INCR(arcstat_l2_write_bytes, write_psize); 9307 9308 dev->l2ad_writing = B_TRUE; 9309 (void) zio_wait(pio); 9310 dev->l2ad_writing = B_FALSE; 9311 9312 /* 9313 * Update the device header after the zio completes as 9314 * l2arc_write_done() may have updated the memory holding the log block 9315 * pointers in the device header. 9316 */ 9317 l2arc_dev_hdr_update(dev); 9318 9319 return (write_asize); 9320 } 9321 9322 static boolean_t 9323 l2arc_hdr_limit_reached(void) 9324 { 9325 int64_t s = aggsum_upper_bound(&astat_l2_hdr_size); 9326 9327 return (arc_reclaim_needed() || (s > arc_meta_limit * 3 / 4) || 9328 (s > (arc_warm ? arc_c : arc_c_max) * l2arc_meta_percent / 100)); 9329 } 9330 9331 /* 9332 * This thread feeds the L2ARC at regular intervals. This is the beating 9333 * heart of the L2ARC. 9334 */ 9335 /* ARGSUSED */ 9336 static void 9337 l2arc_feed_thread(void *unused) 9338 { 9339 callb_cpr_t cpr; 9340 l2arc_dev_t *dev; 9341 spa_t *spa; 9342 uint64_t size, wrote; 9343 clock_t begin, next = ddi_get_lbolt(); 9344 fstrans_cookie_t cookie; 9345 9346 CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG); 9347 9348 mutex_enter(&l2arc_feed_thr_lock); 9349 9350 cookie = spl_fstrans_mark(); 9351 while (l2arc_thread_exit == 0) { 9352 CALLB_CPR_SAFE_BEGIN(&cpr); 9353 (void) cv_timedwait_idle(&l2arc_feed_thr_cv, 9354 &l2arc_feed_thr_lock, next); 9355 CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock); 9356 next = ddi_get_lbolt() + hz; 9357 9358 /* 9359 * Quick check for L2ARC devices. 9360 */ 9361 mutex_enter(&l2arc_dev_mtx); 9362 if (l2arc_ndev == 0) { 9363 mutex_exit(&l2arc_dev_mtx); 9364 continue; 9365 } 9366 mutex_exit(&l2arc_dev_mtx); 9367 begin = ddi_get_lbolt(); 9368 9369 /* 9370 * This selects the next l2arc device to write to, and in 9371 * doing so the next spa to feed from: dev->l2ad_spa. This 9372 * will return NULL if there are now no l2arc devices or if 9373 * they are all faulted. 9374 * 9375 * If a device is returned, its spa's config lock is also 9376 * held to prevent device removal. l2arc_dev_get_next() 9377 * will grab and release l2arc_dev_mtx. 9378 */ 9379 if ((dev = l2arc_dev_get_next()) == NULL) 9380 continue; 9381 9382 spa = dev->l2ad_spa; 9383 ASSERT3P(spa, !=, NULL); 9384 9385 /* 9386 * If the pool is read-only then force the feed thread to 9387 * sleep a little longer. 9388 */ 9389 if (!spa_writeable(spa)) { 9390 next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz; 9391 spa_config_exit(spa, SCL_L2ARC, dev); 9392 continue; 9393 } 9394 9395 /* 9396 * Avoid contributing to memory pressure. 
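 * (l2arc_hdr_limit_reached() above returns true when ARC reclaim is
 * needed or when the memory used by L2ARC headers exceeds roughly 3/4
 * of arc_meta_limit or the l2arc_meta_percent share of arc_c
 * (arc_c_max before the ARC has warmed up), so we skip this feed
 * cycle rather than allocate more headers.)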
9397 */ 9398 if (l2arc_hdr_limit_reached()) { 9399 ARCSTAT_BUMP(arcstat_l2_abort_lowmem); 9400 spa_config_exit(spa, SCL_L2ARC, dev); 9401 continue; 9402 } 9403 9404 ARCSTAT_BUMP(arcstat_l2_feeds); 9405 9406 size = l2arc_write_size(dev); 9407 9408 /* 9409 * Evict L2ARC buffers that will be overwritten. 9410 */ 9411 l2arc_evict(dev, size, B_FALSE); 9412 9413 /* 9414 * Write ARC buffers. 9415 */ 9416 wrote = l2arc_write_buffers(spa, dev, size); 9417 9418 /* 9419 * Calculate interval between writes. 9420 */ 9421 next = l2arc_write_interval(begin, size, wrote); 9422 spa_config_exit(spa, SCL_L2ARC, dev); 9423 } 9424 spl_fstrans_unmark(cookie); 9425 9426 l2arc_thread_exit = 0; 9427 cv_broadcast(&l2arc_feed_thr_cv); 9428 CALLB_CPR_EXIT(&cpr); /* drops l2arc_feed_thr_lock */ 9429 thread_exit(); 9430 } 9431 9432 boolean_t 9433 l2arc_vdev_present(vdev_t *vd) 9434 { 9435 return (l2arc_vdev_get(vd) != NULL); 9436 } 9437 9438 /* 9439 * Returns the l2arc_dev_t associated with a particular vdev_t or NULL if 9440 * the vdev_t isn't an L2ARC device. 9441 */ 9442 l2arc_dev_t * 9443 l2arc_vdev_get(vdev_t *vd) 9444 { 9445 l2arc_dev_t *dev; 9446 9447 mutex_enter(&l2arc_dev_mtx); 9448 for (dev = list_head(l2arc_dev_list); dev != NULL; 9449 dev = list_next(l2arc_dev_list, dev)) { 9450 if (dev->l2ad_vdev == vd) 9451 break; 9452 } 9453 mutex_exit(&l2arc_dev_mtx); 9454 9455 return (dev); 9456 } 9457 9458 /* 9459 * Add a vdev for use by the L2ARC. By this point the spa has already 9460 * validated the vdev and opened it. 9461 */ 9462 void 9463 l2arc_add_vdev(spa_t *spa, vdev_t *vd) 9464 { 9465 l2arc_dev_t *adddev; 9466 uint64_t l2dhdr_asize; 9467 9468 ASSERT(!l2arc_vdev_present(vd)); 9469 9470 /* 9471 * Create a new l2arc device entry. 9472 */ 9473 adddev = vmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP); 9474 adddev->l2ad_spa = spa; 9475 adddev->l2ad_vdev = vd; 9476 /* leave extra size for an l2arc device header */ 9477 l2dhdr_asize = adddev->l2ad_dev_hdr_asize = 9478 MAX(sizeof (*adddev->l2ad_dev_hdr), 1 << vd->vdev_ashift); 9479 adddev->l2ad_start = VDEV_LABEL_START_SIZE + l2dhdr_asize; 9480 adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd); 9481 ASSERT3U(adddev->l2ad_start, <, adddev->l2ad_end); 9482 adddev->l2ad_hand = adddev->l2ad_start; 9483 adddev->l2ad_evict = adddev->l2ad_start; 9484 adddev->l2ad_first = B_TRUE; 9485 adddev->l2ad_writing = B_FALSE; 9486 adddev->l2ad_trim_all = B_FALSE; 9487 list_link_init(&adddev->l2ad_node); 9488 adddev->l2ad_dev_hdr = kmem_zalloc(l2dhdr_asize, KM_SLEEP); 9489 9490 mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL); 9491 /* 9492 * This is a list of all ARC buffers that are still valid on the 9493 * device. 9494 */ 9495 list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t), 9496 offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node)); 9497 9498 /* 9499 * This is a list of pointers to log blocks that are still present 9500 * on the device. 
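 * It is used by l2arc_evict() to release the space taken by evicted
 * log blocks, by l2arc_write_done() to roll the device header back if
 * a write fails, and by l2arc_rebuild() to record the log blocks it
 * restores.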
9501 */ 9502 list_create(&adddev->l2ad_lbptr_list, sizeof (l2arc_lb_ptr_buf_t), 9503 offsetof(l2arc_lb_ptr_buf_t, node)); 9504 9505 vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand); 9506 zfs_refcount_create(&adddev->l2ad_alloc); 9507 zfs_refcount_create(&adddev->l2ad_lb_asize); 9508 zfs_refcount_create(&adddev->l2ad_lb_count); 9509 9510 /* 9511 * Add device to global list 9512 */ 9513 mutex_enter(&l2arc_dev_mtx); 9514 list_insert_head(l2arc_dev_list, adddev); 9515 atomic_inc_64(&l2arc_ndev); 9516 mutex_exit(&l2arc_dev_mtx); 9517 9518 /* 9519 * Decide if vdev is eligible for L2ARC rebuild 9520 */ 9521 l2arc_rebuild_vdev(adddev->l2ad_vdev, B_FALSE); 9522 } 9523 9524 void 9525 l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen) 9526 { 9527 l2arc_dev_t *dev = NULL; 9528 l2arc_dev_hdr_phys_t *l2dhdr; 9529 uint64_t l2dhdr_asize; 9530 spa_t *spa; 9531 9532 dev = l2arc_vdev_get(vd); 9533 ASSERT3P(dev, !=, NULL); 9534 spa = dev->l2ad_spa; 9535 l2dhdr = dev->l2ad_dev_hdr; 9536 l2dhdr_asize = dev->l2ad_dev_hdr_asize; 9537 9538 /* 9539 * The L2ARC has to hold at least the payload of one log block for 9540 * them to be restored (persistent L2ARC). The payload of a log block 9541 * depends on the amount of its log entries. We always write log blocks 9542 * with 1022 entries. How many of them are committed or restored depends 9543 * on the size of the L2ARC device. Thus the maximum payload of 9544 * one log block is 1022 * SPA_MAXBLOCKSIZE = 16GB. If the L2ARC device 9545 * is less than that, we reduce the amount of committed and restored 9546 * log entries per block so as to enable persistence. 9547 */ 9548 if (dev->l2ad_end < l2arc_rebuild_blocks_min_l2size) { 9549 dev->l2ad_log_entries = 0; 9550 } else { 9551 dev->l2ad_log_entries = MIN((dev->l2ad_end - 9552 dev->l2ad_start) >> SPA_MAXBLOCKSHIFT, 9553 L2ARC_LOG_BLK_MAX_ENTRIES); 9554 } 9555 9556 /* 9557 * Read the device header, if an error is returned do not rebuild L2ARC. 9558 */ 9559 if (l2arc_dev_hdr_read(dev) == 0 && dev->l2ad_log_entries > 0) { 9560 /* 9561 * If we are onlining a cache device (vdev_reopen) that was 9562 * still present (l2arc_vdev_present()) and rebuild is enabled, 9563 * we should evict all ARC buffers and pointers to log blocks 9564 * and reclaim their space before restoring its contents to 9565 * L2ARC. 9566 */ 9567 if (reopen) { 9568 if (!l2arc_rebuild_enabled) { 9569 return; 9570 } else { 9571 l2arc_evict(dev, 0, B_TRUE); 9572 /* start a new log block */ 9573 dev->l2ad_log_ent_idx = 0; 9574 dev->l2ad_log_blk_payload_asize = 0; 9575 dev->l2ad_log_blk_payload_start = 0; 9576 } 9577 } 9578 /* 9579 * Just mark the device as pending for a rebuild. We won't 9580 * be starting a rebuild in line here as it would block pool 9581 * import. Instead spa_load_impl will hand that off to an 9582 * async task which will call l2arc_spa_rebuild_start. 9583 */ 9584 dev->l2ad_rebuild = B_TRUE; 9585 } else if (spa_writeable(spa)) { 9586 /* 9587 * In this case TRIM the whole device if l2arc_trim_ahead > 0, 9588 * otherwise create a new header. We zero out the memory holding 9589 * the header to reset dh_start_lbps. If we TRIM the whole 9590 * device the new header will be written by 9591 * vdev_trim_l2arc_thread() at the end of the TRIM to update the 9592 * trim_state in the header too. When reading the header, if 9593 * trim_state is not VDEV_TRIM_COMPLETE and l2arc_trim_ahead > 0 9594 * we opt to TRIM the whole device again. 
9595 */ 9596 if (l2arc_trim_ahead > 0) { 9597 dev->l2ad_trim_all = B_TRUE; 9598 } else { 9599 bzero(l2dhdr, l2dhdr_asize); 9600 l2arc_dev_hdr_update(dev); 9601 } 9602 } 9603 } 9604 9605 /* 9606 * Remove a vdev from the L2ARC. 9607 */ 9608 void 9609 l2arc_remove_vdev(vdev_t *vd) 9610 { 9611 l2arc_dev_t *remdev = NULL; 9612 9613 /* 9614 * Find the device by vdev 9615 */ 9616 remdev = l2arc_vdev_get(vd); 9617 ASSERT3P(remdev, !=, NULL); 9618 9619 /* 9620 * Cancel any ongoing or scheduled rebuild. 9621 */ 9622 mutex_enter(&l2arc_rebuild_thr_lock); 9623 if (remdev->l2ad_rebuild_began == B_TRUE) { 9624 remdev->l2ad_rebuild_cancel = B_TRUE; 9625 while (remdev->l2ad_rebuild == B_TRUE) 9626 cv_wait(&l2arc_rebuild_thr_cv, &l2arc_rebuild_thr_lock); 9627 } 9628 mutex_exit(&l2arc_rebuild_thr_lock); 9629 9630 /* 9631 * Remove device from global list 9632 */ 9633 mutex_enter(&l2arc_dev_mtx); 9634 list_remove(l2arc_dev_list, remdev); 9635 l2arc_dev_last = NULL; /* may have been invalidated */ 9636 atomic_dec_64(&l2arc_ndev); 9637 mutex_exit(&l2arc_dev_mtx); 9638 9639 /* 9640 * Clear all buflists and ARC references. L2ARC device flush. 9641 */ 9642 l2arc_evict(remdev, 0, B_TRUE); 9643 list_destroy(&remdev->l2ad_buflist); 9644 ASSERT(list_is_empty(&remdev->l2ad_lbptr_list)); 9645 list_destroy(&remdev->l2ad_lbptr_list); 9646 mutex_destroy(&remdev->l2ad_mtx); 9647 zfs_refcount_destroy(&remdev->l2ad_alloc); 9648 zfs_refcount_destroy(&remdev->l2ad_lb_asize); 9649 zfs_refcount_destroy(&remdev->l2ad_lb_count); 9650 kmem_free(remdev->l2ad_dev_hdr, remdev->l2ad_dev_hdr_asize); 9651 vmem_free(remdev, sizeof (l2arc_dev_t)); 9652 } 9653 9654 void 9655 l2arc_init(void) 9656 { 9657 l2arc_thread_exit = 0; 9658 l2arc_ndev = 0; 9659 l2arc_writes_sent = 0; 9660 l2arc_writes_done = 0; 9661 9662 mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL); 9663 cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL); 9664 mutex_init(&l2arc_rebuild_thr_lock, NULL, MUTEX_DEFAULT, NULL); 9665 cv_init(&l2arc_rebuild_thr_cv, NULL, CV_DEFAULT, NULL); 9666 mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL); 9667 mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL); 9668 9669 l2arc_dev_list = &L2ARC_dev_list; 9670 l2arc_free_on_write = &L2ARC_free_on_write; 9671 list_create(l2arc_dev_list, sizeof (l2arc_dev_t), 9672 offsetof(l2arc_dev_t, l2ad_node)); 9673 list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t), 9674 offsetof(l2arc_data_free_t, l2df_list_node)); 9675 } 9676 9677 void 9678 l2arc_fini(void) 9679 { 9680 mutex_destroy(&l2arc_feed_thr_lock); 9681 cv_destroy(&l2arc_feed_thr_cv); 9682 mutex_destroy(&l2arc_rebuild_thr_lock); 9683 cv_destroy(&l2arc_rebuild_thr_cv); 9684 mutex_destroy(&l2arc_dev_mtx); 9685 mutex_destroy(&l2arc_free_on_write_mtx); 9686 9687 list_destroy(l2arc_dev_list); 9688 list_destroy(l2arc_free_on_write); 9689 } 9690 9691 void 9692 l2arc_start(void) 9693 { 9694 if (!(spa_mode_global & SPA_MODE_WRITE)) 9695 return; 9696 9697 (void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0, 9698 TS_RUN, defclsyspri); 9699 } 9700 9701 void 9702 l2arc_stop(void) 9703 { 9704 if (!(spa_mode_global & SPA_MODE_WRITE)) 9705 return; 9706 9707 mutex_enter(&l2arc_feed_thr_lock); 9708 cv_signal(&l2arc_feed_thr_cv); /* kick thread out of startup */ 9709 l2arc_thread_exit = 1; 9710 while (l2arc_thread_exit != 0) 9711 cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock); 9712 mutex_exit(&l2arc_feed_thr_lock); 9713 } 9714 9715 /* 9716 * Punches out rebuild threads for the L2ARC devices in a spa. 
This should 9717 * be called after pool import from the spa async thread, since starting 9718 * these threads directly from spa_import() will make them part of the 9719 * "zpool import" context and delay process exit (and thus pool import). 9720 */ 9721 void 9722 l2arc_spa_rebuild_start(spa_t *spa) 9723 { 9724 ASSERT(MUTEX_HELD(&spa_namespace_lock)); 9725 9726 /* 9727 * Locate the spa's l2arc devices and kick off rebuild threads. 9728 */ 9729 for (int i = 0; i < spa->spa_l2cache.sav_count; i++) { 9730 l2arc_dev_t *dev = 9731 l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]); 9732 if (dev == NULL) { 9733 /* Don't attempt a rebuild if the vdev is UNAVAIL */ 9734 continue; 9735 } 9736 mutex_enter(&l2arc_rebuild_thr_lock); 9737 if (dev->l2ad_rebuild && !dev->l2ad_rebuild_cancel) { 9738 dev->l2ad_rebuild_began = B_TRUE; 9739 (void) thread_create(NULL, 0, l2arc_dev_rebuild_thread, 9740 dev, 0, &p0, TS_RUN, minclsyspri); 9741 } 9742 mutex_exit(&l2arc_rebuild_thr_lock); 9743 } 9744 } 9745 9746 /* 9747 * Main entry point for L2ARC rebuilding. 9748 */ 9749 static void 9750 l2arc_dev_rebuild_thread(void *arg) 9751 { 9752 l2arc_dev_t *dev = arg; 9753 9754 VERIFY(!dev->l2ad_rebuild_cancel); 9755 VERIFY(dev->l2ad_rebuild); 9756 (void) l2arc_rebuild(dev); 9757 mutex_enter(&l2arc_rebuild_thr_lock); 9758 dev->l2ad_rebuild_began = B_FALSE; 9759 dev->l2ad_rebuild = B_FALSE; 9760 mutex_exit(&l2arc_rebuild_thr_lock); 9761 9762 thread_exit(); 9763 } 9764 9765 /* 9766 * This function implements the actual L2ARC metadata rebuild. It: 9767 * starts reading the log block chain and restores each block's contents 9768 * to memory (reconstructing arc_buf_hdr_t's). 9769 * 9770 * Operation stops under any of the following conditions: 9771 * 9772 * 1) We reach the end of the log block chain. 9773 * 2) We encounter *any* error condition (cksum errors, io errors) 9774 */ 9775 static int 9776 l2arc_rebuild(l2arc_dev_t *dev) 9777 { 9778 vdev_t *vd = dev->l2ad_vdev; 9779 spa_t *spa = vd->vdev_spa; 9780 int err = 0; 9781 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9782 l2arc_log_blk_phys_t *this_lb, *next_lb; 9783 zio_t *this_io = NULL, *next_io = NULL; 9784 l2arc_log_blkptr_t lbps[2]; 9785 l2arc_lb_ptr_buf_t *lb_ptr_buf; 9786 boolean_t lock_held; 9787 9788 this_lb = vmem_zalloc(sizeof (*this_lb), KM_SLEEP); 9789 next_lb = vmem_zalloc(sizeof (*next_lb), KM_SLEEP); 9790 9791 /* 9792 * We prevent device removal while issuing reads to the device, 9793 * then during the rebuilding phases we drop this lock again so 9794 * that a spa_unload or device remove can be initiated - this is 9795 * safe, because the spa will signal us to stop before removing 9796 * our device and wait for us to stop. 9797 */ 9798 spa_config_enter(spa, SCL_L2ARC, vd, RW_READER); 9799 lock_held = B_TRUE; 9800 9801 /* 9802 * Retrieve the persistent L2ARC device state. 9803 * L2BLK_GET_PSIZE returns aligned size for log blocks. 9804 */ 9805 dev->l2ad_evict = MAX(l2dhdr->dh_evict, dev->l2ad_start); 9806 dev->l2ad_hand = MAX(l2dhdr->dh_start_lbps[0].lbp_daddr + 9807 L2BLK_GET_PSIZE((&l2dhdr->dh_start_lbps[0])->lbp_prop), 9808 dev->l2ad_start); 9809 dev->l2ad_first = !!(l2dhdr->dh_flags & L2ARC_DEV_HDR_EVICT_FIRST); 9810 9811 vd->vdev_trim_action_time = l2dhdr->dh_trim_action_time; 9812 vd->vdev_trim_state = l2dhdr->dh_trim_state; 9813 9814 /* 9815 * In case the zfs module parameter l2arc_rebuild_enabled is false 9816 * we do not start the rebuild process. 
9817 */ 9818 if (!l2arc_rebuild_enabled) 9819 goto out; 9820 9821 /* Prepare the rebuild process */ 9822 bcopy(l2dhdr->dh_start_lbps, lbps, sizeof (lbps)); 9823 9824 /* Start the rebuild process */ 9825 for (;;) { 9826 if (!l2arc_log_blkptr_valid(dev, &lbps[0])) 9827 break; 9828 9829 if ((err = l2arc_log_blk_read(dev, &lbps[0], &lbps[1], 9830 this_lb, next_lb, this_io, &next_io)) != 0) 9831 goto out; 9832 9833 /* 9834 * Our memory pressure valve. If the system is running low 9835 * on memory, rather than swamping memory with new ARC buf 9836 * hdrs, we opt not to rebuild the L2ARC. At this point, 9837 * however, we have already set up our L2ARC dev to chain in 9838 * new metadata log blocks, so the user may choose to offline/ 9839 * online the L2ARC dev at a later time (or re-import the pool) 9840 * to reconstruct it (when there's less memory pressure). 9841 */ 9842 if (l2arc_hdr_limit_reached()) { 9843 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_lowmem); 9844 cmn_err(CE_NOTE, "System running low on memory, " 9845 "aborting L2ARC rebuild."); 9846 err = SET_ERROR(ENOMEM); 9847 goto out; 9848 } 9849 9850 spa_config_exit(spa, SCL_L2ARC, vd); 9851 lock_held = B_FALSE; 9852 9853 /* 9854 * Now that we know that the next_lb checks out alright, we 9855 * can start reconstruction from this log block. 9856 * L2BLK_GET_PSIZE returns aligned size for log blocks. 9857 */ 9858 uint64_t asize = L2BLK_GET_PSIZE((&lbps[0])->lbp_prop); 9859 l2arc_log_blk_restore(dev, this_lb, asize); 9860 9861 /* 9862 * log block restored, include its pointer in the list of 9863 * pointers to log blocks present in the L2ARC device. 9864 */ 9865 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 9866 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), 9867 KM_SLEEP); 9868 bcopy(&lbps[0], lb_ptr_buf->lb_ptr, 9869 sizeof (l2arc_log_blkptr_t)); 9870 mutex_enter(&dev->l2ad_mtx); 9871 list_insert_tail(&dev->l2ad_lbptr_list, lb_ptr_buf); 9872 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 9873 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 9874 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 9875 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 9876 mutex_exit(&dev->l2ad_mtx); 9877 vdev_space_update(vd, asize, 0, 0); 9878 9879 /* 9880 * Protection against loops of log blocks: 9881 * 9882 * l2ad_hand l2ad_evict 9883 * V V 9884 * l2ad_start |=======================================| l2ad_end 9885 * -----|||----|||---|||----||| 9886 * (3) (2) (1) (0) 9887 * ---|||---|||----|||---||| 9888 * (7) (6) (5) (4) 9889 * 9890 * In this situation the pointer of log block (4) passes 9891 * l2arc_log_blkptr_valid() but the log block should not be 9892 * restored as it is overwritten by the payload of log block 9893 * (0). Only log blocks (0)-(3) should be restored. We check 9894 * whether l2ad_evict lies in between the payload starting 9895 * offset of the next log block (lbps[1].lbp_payload_start) 9896 * and the payload starting offset of the present log block 9897 * (lbps[0].lbp_payload_start). If true and this isn't the 9898 * first pass, we are looping from the beginning and we should 9899 * stop. 
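 * In code terms, the check below stops the rebuild when l2ad_evict
 * falls between lbps[1].lbp_payload_start and
 * lbps[0].lbp_payload_start (taking wrap-around of the device into
 * account), unless this is still the first pass through the device.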
9900 */ 9901 if (l2arc_range_check_overlap(lbps[1].lbp_payload_start, 9902 lbps[0].lbp_payload_start, dev->l2ad_evict) && 9903 !dev->l2ad_first) 9904 goto out; 9905 9906 cond_resched(); 9907 for (;;) { 9908 mutex_enter(&l2arc_rebuild_thr_lock); 9909 if (dev->l2ad_rebuild_cancel) { 9910 dev->l2ad_rebuild = B_FALSE; 9911 cv_signal(&l2arc_rebuild_thr_cv); 9912 mutex_exit(&l2arc_rebuild_thr_lock); 9913 err = SET_ERROR(ECANCELED); 9914 goto out; 9915 } 9916 mutex_exit(&l2arc_rebuild_thr_lock); 9917 if (spa_config_tryenter(spa, SCL_L2ARC, vd, 9918 RW_READER)) { 9919 lock_held = B_TRUE; 9920 break; 9921 } 9922 /* 9923 * L2ARC config lock held by somebody in writer, 9924 * possibly due to them trying to remove us. They'll 9925 * likely want us to shut down, so after a little 9926 * delay, we check l2ad_rebuild_cancel and retry 9927 * the lock again. 9928 */ 9929 delay(1); 9930 } 9931 9932 /* 9933 * Continue with the next log block. 9934 */ 9935 lbps[0] = lbps[1]; 9936 lbps[1] = this_lb->lb_prev_lbp; 9937 PTR_SWAP(this_lb, next_lb); 9938 this_io = next_io; 9939 next_io = NULL; 9940 } 9941 9942 if (this_io != NULL) 9943 l2arc_log_blk_fetch_abort(this_io); 9944 out: 9945 if (next_io != NULL) 9946 l2arc_log_blk_fetch_abort(next_io); 9947 vmem_free(this_lb, sizeof (*this_lb)); 9948 vmem_free(next_lb, sizeof (*next_lb)); 9949 9950 if (!l2arc_rebuild_enabled) { 9951 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9952 "disabled"); 9953 } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) > 0) { 9954 ARCSTAT_BUMP(arcstat_l2_rebuild_success); 9955 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9956 "successful, restored %llu blocks", 9957 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 9958 } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) == 0) { 9959 /* 9960 * No error but also nothing restored, meaning the lbps array 9961 * in the device header points to invalid/non-present log 9962 * blocks. Reset the header. 9963 */ 9964 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9965 "no valid log blocks"); 9966 bzero(l2dhdr, dev->l2ad_dev_hdr_asize); 9967 l2arc_dev_hdr_update(dev); 9968 } else if (err == ECANCELED) { 9969 /* 9970 * In case the rebuild was canceled, do not log to spa history 9971 * log as the pool may be in the process of being removed. 9972 */ 9973 zfs_dbgmsg("L2ARC rebuild aborted, restored %llu blocks", 9974 zfs_refcount_count(&dev->l2ad_lb_count)); 9975 } else if (err != 0) { 9976 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 9977 "aborted, restored %llu blocks", 9978 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 9979 } 9980 9981 if (lock_held) 9982 spa_config_exit(spa, SCL_L2ARC, vd); 9983 9984 return (err); 9985 } 9986 9987 /* 9988 * Attempts to read the device header on the provided L2ARC device and writes 9989 * it to `hdr'. On success, this function returns 0, otherwise the appropriate 9990 * error code is returned.
9991 */ 9992 static int 9993 l2arc_dev_hdr_read(l2arc_dev_t *dev) 9994 { 9995 int err; 9996 uint64_t guid; 9997 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9998 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 9999 abd_t *abd; 10000 10001 guid = spa_guid(dev->l2ad_vdev->vdev_spa); 10002 10003 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 10004 10005 err = zio_wait(zio_read_phys(NULL, dev->l2ad_vdev, 10006 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, 10007 ZIO_CHECKSUM_LABEL, NULL, NULL, ZIO_PRIORITY_SYNC_READ, 10008 ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | 10009 ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY | 10010 ZIO_FLAG_SPECULATIVE, B_FALSE)); 10011 10012 abd_put(abd); 10013 10014 if (err != 0) { 10015 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_dh_errors); 10016 zfs_dbgmsg("L2ARC IO error (%d) while reading device header, " 10017 "vdev guid: %llu", err, dev->l2ad_vdev->vdev_guid); 10018 return (err); 10019 } 10020 10021 if (l2dhdr->dh_magic == BSWAP_64(L2ARC_DEV_HDR_MAGIC)) 10022 byteswap_uint64_array(l2dhdr, sizeof (*l2dhdr)); 10023 10024 if (l2dhdr->dh_magic != L2ARC_DEV_HDR_MAGIC || 10025 l2dhdr->dh_spa_guid != guid || 10026 l2dhdr->dh_vdev_guid != dev->l2ad_vdev->vdev_guid || 10027 l2dhdr->dh_version != L2ARC_PERSISTENT_VERSION || 10028 l2dhdr->dh_log_entries != dev->l2ad_log_entries || 10029 l2dhdr->dh_end != dev->l2ad_end || 10030 !l2arc_range_check_overlap(dev->l2ad_start, dev->l2ad_end, 10031 l2dhdr->dh_evict) || 10032 (l2dhdr->dh_trim_state != VDEV_TRIM_COMPLETE && 10033 l2arc_trim_ahead > 0)) { 10034 /* 10035 * Attempt to rebuild a device containing no actual dev hdr 10036 * or containing a header from some other pool or from another 10037 * version of persistent L2ARC. 10038 */ 10039 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_unsupported); 10040 return (SET_ERROR(ENOTSUP)); 10041 } 10042 10043 return (0); 10044 } 10045 10046 /* 10047 * Reads L2ARC log blocks from storage and validates their contents. 10048 * 10049 * This function implements a simple fetcher to make sure that while 10050 * we're processing one buffer the L2ARC is already fetching the next 10051 * one in the chain. 10052 * 10053 * The arguments this_lp and next_lp point to the current and next log block 10054 * address in the block chain. Similarly, this_lb and next_lb hold the 10055 * l2arc_log_blk_phys_t's of the current and next L2ARC blk. 10056 * 10057 * The `this_io' and `next_io' arguments are used for block fetching. 10058 * When issuing the first blk IO during rebuild, you should pass NULL for 10059 * `this_io'. This function will then issue a sync IO to read the block and 10060 * also issue an async IO to fetch the next block in the block chain. The 10061 * fetched IO is returned in `next_io'. On subsequent calls to this 10062 * function, pass the value returned in `next_io' from the previous call 10063 * as `this_io' and a fresh `next_io' pointer to hold the next fetch IO. 10064 * Prior to the call, you should initialize your `next_io' pointer to be 10065 * NULL. If no fetch IO was issued, the pointer is left set at NULL. 10066 * 10067 * On success, this function returns 0, otherwise it returns an appropriate 10068 * error code. On error the fetching IO is aborted and cleared before 10069 * returning from this function. Therefore, if we return `success', the 10070 * caller can assume that we have taken care of cleanup of fetch IOs. 
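 * A minimal caller sketch, mirroring the loop in l2arc_rebuild()
 * (illustrative only, error handling and restore work elided):
 *
 *	zio_t *this_io = NULL, *next_io = NULL;
 *	while (l2arc_log_blkptr_valid(dev, &lbps[0])) {
 *		if (l2arc_log_blk_read(dev, &lbps[0], &lbps[1],
 *		    this_lb, next_lb, this_io, &next_io) != 0)
 *			break;	(fetch IOs are already cleaned up)
 *		... restore the contents of this_lb ...
 *		lbps[0] = lbps[1];
 *		lbps[1] = this_lb->lb_prev_lbp;
 *		PTR_SWAP(this_lb, next_lb);
 *		this_io = next_io;
 *		next_io = NULL;
 *	}
 *	if (this_io != NULL)
 *		l2arc_log_blk_fetch_abort(this_io);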
10071 */ 10072 static int 10073 l2arc_log_blk_read(l2arc_dev_t *dev, 10074 const l2arc_log_blkptr_t *this_lbp, const l2arc_log_blkptr_t *next_lbp, 10075 l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb, 10076 zio_t *this_io, zio_t **next_io) 10077 { 10078 int err = 0; 10079 zio_cksum_t cksum; 10080 abd_t *abd = NULL; 10081 uint64_t asize; 10082 10083 ASSERT(this_lbp != NULL && next_lbp != NULL); 10084 ASSERT(this_lb != NULL && next_lb != NULL); 10085 ASSERT(next_io != NULL && *next_io == NULL); 10086 ASSERT(l2arc_log_blkptr_valid(dev, this_lbp)); 10087 10088 /* 10089 * Check to see if we have issued the IO for this log block in a 10090 * previous run. If not, this is the first call, so issue it now. 10091 */ 10092 if (this_io == NULL) { 10093 this_io = l2arc_log_blk_fetch(dev->l2ad_vdev, this_lbp, 10094 this_lb); 10095 } 10096 10097 /* 10098 * Peek to see if we can start issuing the next IO immediately. 10099 */ 10100 if (l2arc_log_blkptr_valid(dev, next_lbp)) { 10101 /* 10102 * Start issuing IO for the next log block early - this 10103 * should help keep the L2ARC device busy while we 10104 * decompress and restore this log block. 10105 */ 10106 *next_io = l2arc_log_blk_fetch(dev->l2ad_vdev, next_lbp, 10107 next_lb); 10108 } 10109 10110 /* Wait for the IO to read this log block to complete */ 10111 if ((err = zio_wait(this_io)) != 0) { 10112 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_io_errors); 10113 zfs_dbgmsg("L2ARC IO error (%d) while reading log block, " 10114 "offset: %llu, vdev guid: %llu", err, this_lbp->lbp_daddr, 10115 dev->l2ad_vdev->vdev_guid); 10116 goto cleanup; 10117 } 10118 10119 /* 10120 * Make sure the buffer checks out. 10121 * L2BLK_GET_PSIZE returns aligned size for log blocks. 10122 */ 10123 asize = L2BLK_GET_PSIZE((this_lbp)->lbp_prop); 10124 fletcher_4_native(this_lb, asize, NULL, &cksum); 10125 if (!ZIO_CHECKSUM_EQUAL(cksum, this_lbp->lbp_cksum)) { 10126 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_cksum_lb_errors); 10127 zfs_dbgmsg("L2ARC log block cksum failed, offset: %llu, " 10128 "vdev guid: %llu, l2ad_hand: %llu, l2ad_evict: %llu", 10129 this_lbp->lbp_daddr, dev->l2ad_vdev->vdev_guid, 10130 dev->l2ad_hand, dev->l2ad_evict); 10131 err = SET_ERROR(ECKSUM); 10132 goto cleanup; 10133 } 10134 10135 /* Now we can take our time decoding this buffer */ 10136 switch (L2BLK_GET_COMPRESS((this_lbp)->lbp_prop)) { 10137 case ZIO_COMPRESS_OFF: 10138 break; 10139 case ZIO_COMPRESS_LZ4: 10140 abd = abd_alloc_for_io(asize, B_TRUE); 10141 abd_copy_from_buf_off(abd, this_lb, 0, asize); 10142 if ((err = zio_decompress_data( 10143 L2BLK_GET_COMPRESS((this_lbp)->lbp_prop), 10144 abd, this_lb, asize, sizeof (*this_lb), NULL)) != 0) { 10145 err = SET_ERROR(EINVAL); 10146 goto cleanup; 10147 } 10148 break; 10149 default: 10150 err = SET_ERROR(EINVAL); 10151 goto cleanup; 10152 } 10153 if (this_lb->lb_magic == BSWAP_64(L2ARC_LOG_BLK_MAGIC)) 10154 byteswap_uint64_array(this_lb, sizeof (*this_lb)); 10155 if (this_lb->lb_magic != L2ARC_LOG_BLK_MAGIC) { 10156 err = SET_ERROR(EINVAL); 10157 goto cleanup; 10158 } 10159 cleanup: 10160 /* Abort an in-flight fetch I/O in case of error */ 10161 if (err != 0 && *next_io != NULL) { 10162 l2arc_log_blk_fetch_abort(*next_io); 10163 *next_io = NULL; 10164 } 10165 if (abd != NULL) 10166 abd_free(abd); 10167 return (err); 10168 } 10169 10170 /* 10171 * Restores the payload of a log block to ARC. This creates empty ARC hdr 10172 * entries which only contain an l2arc hdr, essentially restoring the 10173 * buffers to their L2ARC evicted state. 
This function also updates space 10174 * usage on the L2ARC vdev to make sure it tracks restored buffers. 10175 */ 10176 static void 10177 l2arc_log_blk_restore(l2arc_dev_t *dev, const l2arc_log_blk_phys_t *lb, 10178 uint64_t lb_asize) 10179 { 10180 uint64_t size = 0, asize = 0; 10181 uint64_t log_entries = dev->l2ad_log_entries; 10182 10183 /* 10184 * Usually arc_adapt() is called only for data, not headers, but 10185 * since we may allocate significant amount of memory here, let ARC 10186 * grow its arc_c. 10187 */ 10188 arc_adapt(log_entries * HDR_L2ONLY_SIZE, arc_l2c_only); 10189 10190 for (int i = log_entries - 1; i >= 0; i--) { 10191 /* 10192 * Restore goes in the reverse temporal direction to preserve 10193 * correct temporal ordering of buffers in the l2ad_buflist. 10194 * l2arc_hdr_restore also does a list_insert_tail instead of 10195 * list_insert_head on the l2ad_buflist: 10196 * 10197 * LIST l2ad_buflist LIST 10198 * HEAD <------ (time) ------ TAIL 10199 * direction +-----+-----+-----+-----+-----+ direction 10200 * of l2arc <== | buf | buf | buf | buf | buf | ===> of rebuild 10201 * fill +-----+-----+-----+-----+-----+ 10202 * ^ ^ 10203 * | | 10204 * | | 10205 * l2arc_feed_thread l2arc_rebuild 10206 * will place new bufs here restores bufs here 10207 * 10208 * During l2arc_rebuild() the device is not used by 10209 * l2arc_feed_thread() as dev->l2ad_rebuild is set to true. 10210 */ 10211 size += L2BLK_GET_LSIZE((&lb->lb_entries[i])->le_prop); 10212 asize += vdev_psize_to_asize(dev->l2ad_vdev, 10213 L2BLK_GET_PSIZE((&lb->lb_entries[i])->le_prop)); 10214 l2arc_hdr_restore(&lb->lb_entries[i], dev); 10215 } 10216 10217 /* 10218 * Record rebuild stats: 10219 * size Logical size of restored buffers in the L2ARC 10220 * asize Aligned size of restored buffers in the L2ARC 10221 */ 10222 ARCSTAT_INCR(arcstat_l2_rebuild_size, size); 10223 ARCSTAT_INCR(arcstat_l2_rebuild_asize, asize); 10224 ARCSTAT_INCR(arcstat_l2_rebuild_bufs, log_entries); 10225 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, lb_asize); 10226 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, asize / lb_asize); 10227 ARCSTAT_BUMP(arcstat_l2_rebuild_log_blks); 10228 } 10229 10230 /* 10231 * Restores a single ARC buf hdr from a log entry. The ARC buffer is put 10232 * into a state indicating that it has been evicted to L2ARC. 10233 */ 10234 static void 10235 l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, l2arc_dev_t *dev) 10236 { 10237 arc_buf_hdr_t *hdr, *exists; 10238 kmutex_t *hash_lock; 10239 arc_buf_contents_t type = L2BLK_GET_TYPE((le)->le_prop); 10240 uint64_t asize; 10241 10242 /* 10243 * Do all the allocation before grabbing any locks, this lets us 10244 * sleep if memory is full and we don't have to deal with failed 10245 * allocations. 10246 */ 10247 hdr = arc_buf_alloc_l2only(L2BLK_GET_LSIZE((le)->le_prop), type, 10248 dev, le->le_dva, le->le_daddr, 10249 L2BLK_GET_PSIZE((le)->le_prop), le->le_birth, 10250 L2BLK_GET_COMPRESS((le)->le_prop), le->le_complevel, 10251 L2BLK_GET_PROTECTED((le)->le_prop), 10252 L2BLK_GET_PREFETCH((le)->le_prop), 10253 L2BLK_GET_STATE((le)->le_prop)); 10254 asize = vdev_psize_to_asize(dev->l2ad_vdev, 10255 L2BLK_GET_PSIZE((le)->le_prop)); 10256 10257 /* 10258 * vdev_space_update() has to be called before arc_hdr_destroy() to 10259 * avoid underflow since the latter also calls vdev_space_update(). 
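 * Concretely: if buf_hash_insert() below finds that the buffer is
 * already cached, the header we just allocated is destroyed again,
 * and arc_hdr_destroy() subtracts its space from the vdev; the
 * matching addition therefore has to have happened first or the space
 * accounting would go negative.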
10260 */ 10261 l2arc_hdr_arcstats_increment(hdr); 10262 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10263 10264 mutex_enter(&dev->l2ad_mtx); 10265 list_insert_tail(&dev->l2ad_buflist, hdr); 10266 (void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr); 10267 mutex_exit(&dev->l2ad_mtx); 10268 10269 exists = buf_hash_insert(hdr, &hash_lock); 10270 if (exists) { 10271 /* Buffer was already cached, no need to restore it. */ 10272 arc_hdr_destroy(hdr); 10273 /* 10274 * If the buffer is already cached, check whether it has 10275 * L2ARC metadata. If not, fill it in and set the flag. 10276 * This is important when onlining a cache device, since 10277 * we previously evicted all L2ARC metadata from ARC. 10278 */ 10279 if (!HDR_HAS_L2HDR(exists)) { 10280 arc_hdr_set_flags(exists, ARC_FLAG_HAS_L2HDR); 10281 exists->b_l2hdr.b_dev = dev; 10282 exists->b_l2hdr.b_daddr = le->le_daddr; 10283 exists->b_l2hdr.b_arcs_state = 10284 L2BLK_GET_STATE((le)->le_prop); 10285 mutex_enter(&dev->l2ad_mtx); 10286 list_insert_tail(&dev->l2ad_buflist, exists); 10287 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 10288 arc_hdr_size(exists), exists); 10289 mutex_exit(&dev->l2ad_mtx); 10290 l2arc_hdr_arcstats_increment(exists); 10291 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10292 } 10293 ARCSTAT_BUMP(arcstat_l2_rebuild_bufs_precached); 10294 } 10295 10296 mutex_exit(hash_lock); 10297 } 10298 10299 /* 10300 * Starts an asynchronous read IO to read a log block. This is used in log 10301 * block reconstruction to start reading the next block before we are done 10302 * decoding and reconstructing the current block, to keep the l2arc device 10303 * nice and hot with read IO to process. 10304 * The returned zio will contain newly allocated memory buffers for the IO 10305 * data, which should then be freed by the caller once the zio is no longer 10306 * needed (i.e. once it has completed). If you wish to abort this 10307 * zio, you should do so using l2arc_log_blk_fetch_abort, which takes 10308 * care of disposing of the allocated buffers correctly. 10309 */ 10310 static zio_t * 10311 l2arc_log_blk_fetch(vdev_t *vd, const l2arc_log_blkptr_t *lbp, 10312 l2arc_log_blk_phys_t *lb) 10313 { 10314 uint32_t asize; 10315 zio_t *pio; 10316 l2arc_read_callback_t *cb; 10317 10318 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 10319 asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 10320 ASSERT(asize <= sizeof (l2arc_log_blk_phys_t)); 10321 10322 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), KM_SLEEP); 10323 cb->l2rcb_abd = abd_get_from_buf(lb, asize); 10324 pio = zio_root(vd->vdev_spa, l2arc_blk_fetch_done, cb, 10325 ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | 10326 ZIO_FLAG_DONT_RETRY); 10327 (void) zio_nowait(zio_read_phys(pio, vd, lbp->lbp_daddr, asize, 10328 cb->l2rcb_abd, ZIO_CHECKSUM_OFF, NULL, NULL, 10329 ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | 10330 ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY, B_FALSE)); 10331 10332 return (pio); 10333 } 10334 10335 /* 10336 * Aborts a zio returned from l2arc_log_blk_fetch and frees the data 10337 * buffers allocated for it. 10338 */ 10339 static void 10340 l2arc_log_blk_fetch_abort(zio_t *zio) 10341 { 10342 (void) zio_wait(zio); 10343 } 10344 10345 /* 10346 * Creates a zio to update the device header on an l2arc device.
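 *
 * A minimal usage sketch (illustrative only, not an exact caller from this
 * file); the SCL_STATE_ALL reader lock is required, as verified below:
 *
 *	spa_config_enter(spa, SCL_STATE_ALL, FTAG, RW_READER);
 *	l2arc_dev_hdr_update(dev);
 *	spa_config_exit(spa, SCL_STATE_ALL, FTAG);
 *
 * The header write is issued synchronously (zio_wait() below), so on return
 * the on-disk copy matches the in-core l2arc_dev_hdr_phys_t.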
10347 */ 10348 void 10349 l2arc_dev_hdr_update(l2arc_dev_t *dev) 10350 { 10351 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10352 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 10353 abd_t *abd; 10354 int err; 10355 10356 VERIFY(spa_config_held(dev->l2ad_spa, SCL_STATE_ALL, RW_READER)); 10357 10358 l2dhdr->dh_magic = L2ARC_DEV_HDR_MAGIC; 10359 l2dhdr->dh_version = L2ARC_PERSISTENT_VERSION; 10360 l2dhdr->dh_spa_guid = spa_guid(dev->l2ad_vdev->vdev_spa); 10361 l2dhdr->dh_vdev_guid = dev->l2ad_vdev->vdev_guid; 10362 l2dhdr->dh_log_entries = dev->l2ad_log_entries; 10363 l2dhdr->dh_evict = dev->l2ad_evict; 10364 l2dhdr->dh_start = dev->l2ad_start; 10365 l2dhdr->dh_end = dev->l2ad_end; 10366 l2dhdr->dh_lb_asize = zfs_refcount_count(&dev->l2ad_lb_asize); 10367 l2dhdr->dh_lb_count = zfs_refcount_count(&dev->l2ad_lb_count); 10368 l2dhdr->dh_flags = 0; 10369 l2dhdr->dh_trim_action_time = dev->l2ad_vdev->vdev_trim_action_time; 10370 l2dhdr->dh_trim_state = dev->l2ad_vdev->vdev_trim_state; 10371 if (dev->l2ad_first) 10372 l2dhdr->dh_flags |= L2ARC_DEV_HDR_EVICT_FIRST; 10373 10374 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 10375 10376 err = zio_wait(zio_write_phys(NULL, dev->l2ad_vdev, 10377 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, ZIO_CHECKSUM_LABEL, NULL, 10378 NULL, ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE)); 10379 10380 abd_put(abd); 10381 10382 if (err != 0) { 10383 zfs_dbgmsg("L2ARC IO error (%d) while writing device header, " 10384 "vdev guid: %llu", err, dev->l2ad_vdev->vdev_guid); 10385 } 10386 } 10387 10388 /* 10389 * Commits a log block to the L2ARC device. This routine is invoked from 10390 * l2arc_write_buffers when the log block fills up. 10391 * This function allocates some memory to temporarily hold the serialized 10392 * buffer to be written. This is then released in l2arc_write_done. 10393 */ 10394 static void 10395 l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb) 10396 { 10397 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 10398 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10399 uint64_t psize, asize; 10400 zio_t *wzio; 10401 l2arc_lb_abd_buf_t *abd_buf; 10402 uint8_t *tmpbuf; 10403 l2arc_lb_ptr_buf_t *lb_ptr_buf; 10404 10405 VERIFY3S(dev->l2ad_log_ent_idx, ==, dev->l2ad_log_entries); 10406 10407 tmpbuf = zio_buf_alloc(sizeof (*lb)); 10408 abd_buf = zio_buf_alloc(sizeof (*abd_buf)); 10409 abd_buf->abd = abd_get_from_buf(lb, sizeof (*lb)); 10410 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 10411 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), KM_SLEEP); 10412 10413 /* link the buffer into the block chain */ 10414 lb->lb_prev_lbp = l2dhdr->dh_start_lbps[1]; 10415 lb->lb_magic = L2ARC_LOG_BLK_MAGIC; 10416 10417 /* 10418 * l2arc_log_blk_commit() may be called multiple times during a single 10419 * l2arc_write_buffers() call. Save the allocated abd buffers in a list 10420 * so we can free them in l2arc_write_done() later on. 10421 */ 10422 list_insert_tail(&cb->l2wcb_abd_list, abd_buf); 10423 10424 /* try to compress the buffer */ 10425 psize = zio_compress_data(ZIO_COMPRESS_LZ4, 10426 abd_buf->abd, tmpbuf, sizeof (*lb), 0); 10427 10428 /* a log block is never entirely zero */ 10429 ASSERT(psize != 0); 10430 asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 10431 ASSERT(asize <= sizeof (*lb)); 10432 10433 /* 10434 * Update the start log block pointer in the device header to point 10435 * to the log block we're about to write. 
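 * After the shift below, dh_start_lbps[0] describes the log block being
 * committed by this call and dh_start_lbps[1] the previously committed one;
 * together with lb_prev_lbp (linked above) this is what allows the rebuild
 * code to walk the chain of log blocks from newest to oldest.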
10436 */ 10437 l2dhdr->dh_start_lbps[1] = l2dhdr->dh_start_lbps[0]; 10438 l2dhdr->dh_start_lbps[0].lbp_daddr = dev->l2ad_hand; 10439 l2dhdr->dh_start_lbps[0].lbp_payload_asize = 10440 dev->l2ad_log_blk_payload_asize; 10441 l2dhdr->dh_start_lbps[0].lbp_payload_start = 10442 dev->l2ad_log_blk_payload_start; 10443 _NOTE(CONSTCOND) 10444 L2BLK_SET_LSIZE( 10445 (&l2dhdr->dh_start_lbps[0])->lbp_prop, sizeof (*lb)); 10446 L2BLK_SET_PSIZE( 10447 (&l2dhdr->dh_start_lbps[0])->lbp_prop, asize); 10448 L2BLK_SET_CHECKSUM( 10449 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10450 ZIO_CHECKSUM_FLETCHER_4); 10451 if (asize < sizeof (*lb)) { 10452 /* compression succeeded */ 10453 bzero(tmpbuf + psize, asize - psize); 10454 L2BLK_SET_COMPRESS( 10455 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10456 ZIO_COMPRESS_LZ4); 10457 } else { 10458 /* compression failed */ 10459 bcopy(lb, tmpbuf, sizeof (*lb)); 10460 L2BLK_SET_COMPRESS( 10461 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10462 ZIO_COMPRESS_OFF); 10463 } 10464 10465 /* checksum what we're about to write */ 10466 fletcher_4_native(tmpbuf, asize, NULL, 10467 &l2dhdr->dh_start_lbps[0].lbp_cksum); 10468 10469 abd_put(abd_buf->abd); 10470 10471 /* perform the write itself */ 10472 abd_buf->abd = abd_get_from_buf(tmpbuf, sizeof (*lb)); 10473 abd_take_ownership_of_buf(abd_buf->abd, B_TRUE); 10474 wzio = zio_write_phys(pio, dev->l2ad_vdev, dev->l2ad_hand, 10475 asize, abd_buf->abd, ZIO_CHECKSUM_OFF, NULL, NULL, 10476 ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE); 10477 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, zio_t *, wzio); 10478 (void) zio_nowait(wzio); 10479 10480 dev->l2ad_hand += asize; 10481 /* 10482 * Include the committed log block's pointer in the list of pointers 10483 * to log blocks present in the L2ARC device. 10484 */ 10485 bcopy(&l2dhdr->dh_start_lbps[0], lb_ptr_buf->lb_ptr, 10486 sizeof (l2arc_log_blkptr_t)); 10487 mutex_enter(&dev->l2ad_mtx); 10488 list_insert_head(&dev->l2ad_lbptr_list, lb_ptr_buf); 10489 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 10490 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 10491 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 10492 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 10493 mutex_exit(&dev->l2ad_mtx); 10494 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10495 10496 /* bump the kstats */ 10497 ARCSTAT_INCR(arcstat_l2_write_bytes, asize); 10498 ARCSTAT_BUMP(arcstat_l2_log_blk_writes); 10499 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, asize); 10500 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, 10501 dev->l2ad_log_blk_payload_asize / asize); 10502 10503 /* start a new log block */ 10504 dev->l2ad_log_ent_idx = 0; 10505 dev->l2ad_log_blk_payload_asize = 0; 10506 dev->l2ad_log_blk_payload_start = 0; 10507 } 10508 10509 /* 10510 * Validates an L2ARC log block address to make sure that it can be read 10511 * from the provided L2ARC device. 
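 *
 * Illustrative example (numbers invented for clarity): with
 * l2ad_start = 1 GiB, l2ad_end = 2 GiB, l2ad_hand = 1.5 GiB and
 * l2ad_evict = 1.6 GiB, a log block with a valid asize whose aligned extent
 * and payload both lie within [1.2 GiB, 1.3 GiB] passes all checks, whereas
 * one that overlaps the [l2ad_hand, l2ad_evict] window has been overwritten
 * or evicted and is rejected unless this is still the device's first fill
 * pass (dev->l2ad_first). See the diagram inside the function body.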
10512 */ 10513 boolean_t 10514 l2arc_log_blkptr_valid(l2arc_dev_t *dev, const l2arc_log_blkptr_t *lbp) 10515 { 10516 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 10517 uint64_t asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 10518 uint64_t end = lbp->lbp_daddr + asize - 1; 10519 uint64_t start = lbp->lbp_payload_start; 10520 boolean_t evicted = B_FALSE; 10521 10522 /* 10523 * A log block is valid if all of the following conditions are true: 10524 * - it fits entirely (including its payload) between l2ad_start and 10525 * l2ad_end 10526 * - it has a valid size 10527 * - neither the log block itself nor part of its payload was evicted 10528 * by l2arc_evict(): 10529 * 10530 * l2ad_hand l2ad_evict 10531 * | | lbp_daddr 10532 * | start | | end 10533 * | | | | | 10534 * V V V V V 10535 * l2ad_start ============================================ l2ad_end 10536 * --------------------------|||| 10537 * ^ ^ 10538 * | log block 10539 * payload 10540 */ 10541 10542 evicted = 10543 l2arc_range_check_overlap(start, end, dev->l2ad_hand) || 10544 l2arc_range_check_overlap(start, end, dev->l2ad_evict) || 10545 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, start) || 10546 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, end); 10547 10548 return (start >= dev->l2ad_start && end <= dev->l2ad_end && 10549 asize > 0 && asize <= sizeof (l2arc_log_blk_phys_t) && 10550 (!evicted || dev->l2ad_first)); 10551 } 10552 10553 /* 10554 * Inserts ARC buffer header `hdr' into the current L2ARC log block on 10555 * the device. The buffer being inserted must be present in L2ARC. 10556 * Returns B_TRUE if the L2ARC log block is full and needs to be committed 10557 * to L2ARC, or B_FALSE if it still has room for more ARC buffers. 10558 */ 10559 static boolean_t 10560 l2arc_log_blk_insert(l2arc_dev_t *dev, const arc_buf_hdr_t *hdr) 10561 { 10562 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 10563 l2arc_log_ent_phys_t *le; 10564 10565 if (dev->l2ad_log_entries == 0) 10566 return (B_FALSE); 10567 10568 int index = dev->l2ad_log_ent_idx++; 10569 10570 ASSERT3S(index, <, dev->l2ad_log_entries); 10571 ASSERT(HDR_HAS_L2HDR(hdr)); 10572 10573 le = &lb->lb_entries[index]; 10574 bzero(le, sizeof (*le)); 10575 le->le_dva = hdr->b_dva; 10576 le->le_birth = hdr->b_birth; 10577 le->le_daddr = hdr->b_l2hdr.b_daddr; 10578 if (index == 0) 10579 dev->l2ad_log_blk_payload_start = le->le_daddr; 10580 L2BLK_SET_LSIZE((le)->le_prop, HDR_GET_LSIZE(hdr)); 10581 L2BLK_SET_PSIZE((le)->le_prop, HDR_GET_PSIZE(hdr)); 10582 L2BLK_SET_COMPRESS((le)->le_prop, HDR_GET_COMPRESS(hdr)); 10583 le->le_complevel = hdr->b_complevel; 10584 L2BLK_SET_TYPE((le)->le_prop, hdr->b_type); 10585 L2BLK_SET_PROTECTED((le)->le_prop, !!(HDR_PROTECTED(hdr))); 10586 L2BLK_SET_PREFETCH((le)->le_prop, !!(HDR_PREFETCH(hdr))); 10587 L2BLK_SET_STATE((le)->le_prop, hdr->b_l1hdr.b_state->arcs_state); 10588 10589 dev->l2ad_log_blk_payload_asize += vdev_psize_to_asize(dev->l2ad_vdev, 10590 HDR_GET_PSIZE(hdr)); 10591 10592 return (dev->l2ad_log_ent_idx == dev->l2ad_log_entries); 10593 } 10594 10595 /* 10596 * Checks whether a given L2ARC device address sits in a time-sequential 10597 * range. The trick here is that the L2ARC is a rotary buffer, so we can't 10598 * just do a range comparison, we need to handle the situation in which the 10599 * range wraps around the end of the L2ARC device. Arguments: 10600 * bottom -- Lower end of the range to check (written to earlier). 10601 * top -- Upper end of the range to check (written to later). 
10602 * check -- The address for which we want to determine if it sits in 10603 * between the top and bottom. 10604 * 10605 * The 3-way conditional below represents the following cases: 10606 * 10607 * bottom < top : Sequentially ordered case: 10608 * <check>--------+-------------------+ 10609 * | (overlap here?) | 10610 * L2ARC dev V V 10611 * |---------------<bottom>============<top>--------------| 10612 * 10613 * bottom > top: Looped-around case: 10614 * <check>--------+------------------+ 10615 * | (overlap here?) | 10616 * L2ARC dev V V 10617 * |===============<top>---------------<bottom>===========| 10618 * ^ ^ 10619 * | (or here?) | 10620 * +---------------+---------<check> 10621 * 10622 * top == bottom : Just a single address comparison. 10623 */ 10624 boolean_t 10625 l2arc_range_check_overlap(uint64_t bottom, uint64_t top, uint64_t check) 10626 { 10627 if (bottom < top) 10628 return (bottom <= check && check <= top); 10629 else if (bottom > top) 10630 return (check <= top || bottom <= check); 10631 else 10632 return (check == top); 10633 } 10634 10635 EXPORT_SYMBOL(arc_buf_size); 10636 EXPORT_SYMBOL(arc_write); 10637 EXPORT_SYMBOL(arc_read); 10638 EXPORT_SYMBOL(arc_buf_info); 10639 EXPORT_SYMBOL(arc_getbuf_func); 10640 EXPORT_SYMBOL(arc_add_prune_callback); 10641 EXPORT_SYMBOL(arc_remove_prune_callback); 10642 10643 /* BEGIN CSTYLED */ 10644 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min, param_set_arc_long, 10645 param_get_long, ZMOD_RW, "Min arc size"); 10646 10647 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, max, param_set_arc_long, 10648 param_get_long, ZMOD_RW, "Max arc size"); 10649 10650 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_limit, param_set_arc_long, 10651 param_get_long, ZMOD_RW, "Metadata limit for arc size"); 10652 10653 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_limit_percent, 10654 param_set_arc_long, param_get_long, ZMOD_RW, 10655 "Percent of arc size for arc meta limit"); 10656 10657 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, meta_min, param_set_arc_long, 10658 param_get_long, ZMOD_RW, "Min arc metadata"); 10659 10660 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_prune, INT, ZMOD_RW, 10661 "Meta objects to scan for prune"); 10662 10663 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_adjust_restarts, INT, ZMOD_RW, 10664 "Limit number of restarts in arc_evict_meta"); 10665 10666 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_strategy, INT, ZMOD_RW, 10667 "Meta reclaim strategy"); 10668 10669 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, grow_retry, param_set_arc_int, 10670 param_get_int, ZMOD_RW, "Seconds before growing arc size"); 10671 10672 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, p_dampener_disable, INT, ZMOD_RW, 10673 "Disable arc_p adapt dampener"); 10674 10675 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, shrink_shift, param_set_arc_int, 10676 param_get_int, ZMOD_RW, "log2(fraction of arc to reclaim)"); 10677 10678 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, pc_percent, UINT, ZMOD_RW, 10679 "Percent of pagecache to reclaim arc to"); 10680 10681 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, p_min_shift, param_set_arc_int, 10682 param_get_int, ZMOD_RW, "arc_c shift to calc min/max arc_p"); 10683 10684 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, average_blocksize, INT, ZMOD_RD, 10685 "Target average block size"); 10686 10687 ZFS_MODULE_PARAM(zfs, zfs_, compressed_arc_enabled, INT, ZMOD_RW, 10688 "Disable compressed arc buffers"); 10689 10690 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prefetch_ms, param_set_arc_int, 10691 param_get_int, ZMOD_RW, "Min life of prefetch block in ms"); 10692 10693 
ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prescient_prefetch_ms, 10694 param_set_arc_int, param_get_int, ZMOD_RW, 10695 "Min life of prescient prefetched block in ms"); 10696 10697 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_max, ULONG, ZMOD_RW, 10698 "Max write bytes per interval"); 10699 10700 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_boost, ULONG, ZMOD_RW, 10701 "Extra write bytes during device warmup"); 10702 10703 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom, ULONG, ZMOD_RW, 10704 "Number of max device writes to precache"); 10705 10706 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom_boost, ULONG, ZMOD_RW, 10707 "Compressed l2arc_headroom multiplier"); 10708 10709 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, trim_ahead, ULONG, ZMOD_RW, 10710 "TRIM ahead L2ARC write size multiplier"); 10711 10712 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_secs, ULONG, ZMOD_RW, 10713 "Seconds between L2ARC writing"); 10714 10715 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_min_ms, ULONG, ZMOD_RW, 10716 "Min feed interval in milliseconds"); 10717 10718 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, noprefetch, INT, ZMOD_RW, 10719 "Skip caching prefetched buffers"); 10720 10721 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_again, INT, ZMOD_RW, 10722 "Turbo L2ARC warmup"); 10723 10724 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, norw, INT, ZMOD_RW, 10725 "No reads during writes"); 10726 10727 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, meta_percent, INT, ZMOD_RW, 10728 "Percent of ARC size allowed for L2ARC-only headers"); 10729 10730 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_enabled, INT, ZMOD_RW, 10731 "Rebuild the L2ARC when importing a pool"); 10732 10733 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_blocks_min_l2size, ULONG, ZMOD_RW, 10734 "Min size in bytes to write rebuild log blocks in L2ARC"); 10735 10736 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, mfuonly, INT, ZMOD_RW, 10737 "Cache only MFU data from ARC into L2ARC"); 10738 10739 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, lotsfree_percent, param_set_arc_int, 10740 param_get_int, ZMOD_RW, "System free memory I/O throttle in bytes"); 10741 10742 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, sys_free, param_set_arc_long, 10743 param_get_long, ZMOD_RW, "System free memory target size in bytes"); 10744 10745 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit, param_set_arc_long, 10746 param_get_long, ZMOD_RW, "Minimum bytes of dnodes in arc"); 10747 10748 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit_percent, 10749 param_set_arc_long, param_get_long, ZMOD_RW, 10750 "Percent of ARC meta buffers for dnodes"); 10751 10752 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, dnode_reduce_percent, ULONG, ZMOD_RW, 10753 "Percentage of excess dnodes to try to unpin"); 10754 10755 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, eviction_pct, INT, ZMOD_RW, 10756 "When full, ARC allocation waits for eviction of this % of alloc size"); 10757 10758 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, evict_batch_limit, INT, ZMOD_RW, 10759 "The number of headers to evict per sublist before moving to the next"); 10760 /* END CSTYLED */ 10761
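/*
 * Worked examples (illustrative values) for l2arc_range_check_overlap(),
 * defined earlier in this file, treating addresses as offsets on the
 * circular L2ARC device:
 *
 *	bottom	top	check	result
 *	100	900	500	B_TRUE	(plain, non-wrapped range)
 *	100	900	950	B_FALSE
 *	900	100	950	B_TRUE	(wrapped range, check past bottom)
 *	900	100	50	B_TRUE	(wrapped range, check before top)
 *	900	100	500	B_FALSE	(falls in the gap between top and bottom)
 *	500	500	500	B_TRUE	(degenerate single-address range)
 */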