/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or https://opensource.org/licenses/CDDL-1.0.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2018, Joyent, Inc.
 * Copyright (c) 2011, 2020, Delphix. All rights reserved.
 * Copyright (c) 2014, Saso Kiselkov. All rights reserved.
 * Copyright (c) 2017, Nexenta Systems, Inc. All rights reserved.
 * Copyright (c) 2019, loli10K <ezomori.nozomu@gmail.com>. All rights reserved.
 * Copyright (c) 2020, George Amanakis. All rights reserved.
 * Copyright (c) 2019, Klara Inc.
 * Copyright (c) 2019, Allan Jude
 * Copyright (c) 2020, The FreeBSD Foundation [1]
 *
 * [1] Portions of this software were developed by Allan Jude
 *     under sponsorship from the FreeBSD Foundation.
 */

/*
 * DVA-based Adjustable Replacement Cache
 *
 * While much of the theory of operation used here is
 * based on the self-tuning, low overhead replacement cache
 * presented by Megiddo and Modha at FAST 2003, there are some
 * significant differences:
 *
 * 1. The Megiddo and Modha model assumes any page is evictable.
 * Pages in its cache cannot be "locked" into memory.  This makes
 * the eviction algorithm simple: evict the last page in the list.
 * This also makes the performance characteristics easy to reason
 * about.  Our cache is not so simple.  At any given moment, some
 * subset of the blocks in the cache are un-evictable because we
 * have handed out a reference to them.  Blocks are only evictable
 * when there are no external references active.  This makes
 * eviction far more problematic: we choose to evict the evictable
 * blocks that are the "lowest" in the list.
 *
 * There are times when it is not possible to evict the requested
 * space.  In these circumstances we are unable to adjust the cache
 * size.  To prevent the cache growing unbounded at these times we
 * implement a "cache throttle" that slows the flow of new data
 * into the cache until we can make space available.
 *
 * 2. The Megiddo and Modha model assumes a fixed cache size.
 * Pages are evicted when the cache is full and there is a cache
 * miss.  Our model has a variable sized cache.  It grows with
 * high use, but also tries to react to memory pressure from the
 * operating system: decreasing its size when system memory is
 * tight.
 *
 * 3. The Megiddo and Modha model assumes a fixed page size. All
 * elements of the cache are therefore exactly the same size.  So
 * when adjusting the cache size following a cache miss, it's simply
 * a matter of choosing a single page to evict.  In our model, we
 * have variable sized cache blocks (ranging from 512 bytes to
 * 128K bytes).
 * We therefore choose a set of blocks to evict to make
 * space for a cache miss that approximates as closely as possible
 * the space used by the new block.
 *
 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
 */

/*
 * The locking model:
 *
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists.  The arc_read() interface
 * uses method 1, while the internal ARC algorithms for
 * adjusting the cache use method 2.  We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * ARC list locks.
 *
 * Buffers do not have their own mutexes, rather they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table.  It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each ARC state also has a mutex which is used to protect the
 * buffer list associated with the state.  When attempting to
 * obtain a hash table lock while holding an ARC list lock you
 * must use mutex_tryenter() to avoid deadlock.  Also note that
 * the active state mutex must be held before the ghost state mutex.
 *
 * It is also possible to register a callback which is run when the
 * metadata limit is reached and no buffers can be safely evicted.  In
 * this case the arc user should drop a reference on some arc buffers so
 * they can be reclaimed.  For example, when using the ZPL each dentry
 * holds a reference on a znode.  These dentries must be pruned before
 * the arc buffer holding the znode can be safely evicted.
 *
 * Note that the majority of the performance stats are manipulated
 * with atomic operations.
 *
 * The L2ARC uses the l2ad_mtx on each vdev for the following:
 *
 *	- L2ARC buflist creation
 *	- L2ARC buflist eviction
 *	- L2ARC write completion, which walks L2ARC buflists
 *	- ARC header destruction, as it removes from L2ARC buflists
 *	- ARC header release, as it removes from L2ARC buflists
 */

/*
 * ARC operation:
 *
 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
 * This structure can point either to a block that is still in the cache or to
 * one that is only accessible in an L2 ARC device, or it can provide
 * information about a block that was recently evicted. If a block is
 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
 * information to retrieve it from the L2ARC device. This information is
 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
 * that is in this state cannot access the data directly.
 *
 * Blocks that are actively being referenced or have not been evicted
 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
 * the arc_buf_hdr_t that will point to the data block in memory. A block can
 * only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC
 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and
 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd).
 */
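
/*
 * An illustrative sketch of the two caching forms above (an editorial
 * example, not part of the implementation): a consumer-visible
 * uncompressed copy is found by walking the hdr's buffer list, while
 * b_pabd holds the hdr's own copy of the data.  The ARC_BUF_COMPRESSED()
 * macro used here is defined later in this file; compare
 * arc_buf_try_copy_decompressed_data() below.
 *
 *	arc_buf_t *b;
 *	for (b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) {
 *		if (!ARC_BUF_COMPRESSED(b))
 *			break;		(an uncompressed copy to read from)
 *	}
 */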
/*
 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the
 * ability to store the physical data (b_pabd) associated with the DVA of the
 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block,
 * it will match its on-disk compression characteristics. This behavior can be
 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the
 * compressed ARC functionality is disabled, the b_pabd will point to an
 * uncompressed version of the on-disk data.
 *
 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each
 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it.
 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC
 * consumer. The ARC will provide references to this data and will keep it
 * cached until it is no longer in use. The ARC caches only the L1ARC's
 * physical data block and will evict any arc_buf_t that is no longer
 * referenced. The amount of memory consumed by the arc_buf_ts' data buffers
 * can be seen via the "overhead_size" kstat.
 *
 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or
 * compressed form. The typical case is that consumers will want uncompressed
 * data, and when that happens a new data buffer is allocated where the data is
 * decompressed for them to use. Currently the only consumer who wants
 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it
 * exists on disk. When this happens, the arc_buf_t's data buffer is shared
 * with the arc_buf_hdr_t.
 *
 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's.
 * The first one is owned by a compressed send consumer (and therefore
 * references the same compressed data buffer as the arc_buf_hdr_t) and the
 * second could be used by any other consumer (and has its own uncompressed
 * copy of the data buffer).
 *
 * arc_buf_hdr_t
 * +-----------+
 * | fields    |
 * | common to |
 * | L1- and   |
 * | L2ARC     |
 * +-----------+
 * | l2arc_buf_hdr_t
 * |           |
 * +-----------+
 * | l1arc_buf_hdr_t
 * |           |             arc_buf_t
 * | b_buf     +------------>+-----------+      arc_buf_t
 * | b_pabd    +-+           |b_next     +---->+-----------+
 * +-----------+ |           |-----------|     |b_next     +-->NULL
 *               |           |b_comp = T |     +-----------+
 *               |           |b_data     +-+   |b_comp = F |
 *               |           +-----------+ |   |b_data     +-+
 *               +->+------+               |   +-----------+ |
 *      compressed  |      |               |                 |
 *         data     |      |<--------------+                 | uncompressed
 *                  +------+  compressed,                    |     data
 *                             shared                +-->+------+
 *                              data                     |      |
 *                                                       |      |
 *                                                       +------+
 *
 * When a consumer reads a block, the ARC must first look to see if the
 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new
 * arc_buf_t and either copies uncompressed data into a new data buffer from an
 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a
 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the
 * hdr is compressed and the desired compression characteristics of the
 * arc_buf_t consumer. If the arc_buf_t ends up sharing data with the
 * arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be
 * the last buffer in the hdr's b_buf list, however a shared compressed buf can
 * be anywhere in the hdr's list.
 */
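
/*
 * An illustrative sketch of the sharing invariant above (not part of the
 * implementation; IMPLY() and the ARC_BUF_* macros appear later in this
 * file): an uncompressed shared buf must be the last one on the list,
 * while a compressed shared buf may sit anywhere.
 *
 *	IMPLY(ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf),
 *	    ARC_BUF_LAST(buf));
 */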
/*
 * The diagram below shows an example of an uncompressed ARC hdr that is
 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is
 * the last element in the buf list):
 *
 *                arc_buf_hdr_t
 *                +-----------+
 *                |           |
 *                |           |
 *                |           |
 *                +-----------+
 * l2arc_buf_hdr_t|           |
 *                |           |
 *                +-----------+
 * l1arc_buf_hdr_t|           |
 *                |           |             arc_buf_t    (shared)
 *                | b_buf     +------------>+---------+      arc_buf_t
 *                |           |             |b_next   +---->+---------+
 *                | b_pabd    +-+           |---------|     |b_next   +-->NULL
 *                +-----------+ |           |         |     +---------+
 *                              |           |b_data   +-+   |         |
 *                              |           +---------+ |   |b_data   +-+
 *                              +->+------+             |   +---------+ |
 *                                 |      |             |               |
 *                   uncompressed  |      |             |               |
 *                        data     +------+             |               |
 *                                    ^                 +->+------+     |
 *                                    |    uncompressed    |      |     |
 *                                    |        data        |      |     |
 *                                    |                    +------+     |
 *                                    +---------------------------------+
 *
 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd
 * since the physical block is about to be rewritten. The new data contents
 * will be contained in the arc_buf_t. As the I/O pipeline performs the write,
 * it may compress the data before writing it to disk. The ARC will be called
 * with the transformed data and will memcpy the transformed on-disk block into
 * a newly allocated b_pabd. Writes are always done into buffers which have
 * either been loaned (and hence are new and don't have other readers) or
 * buffers which have been released (and hence have their own hdr, if there
 * were originally other readers of the buf's original hdr). This ensures that
 * the ARC only needs to update a single buf and its hdr after a write occurs.
 *
 * When the L2ARC is in use, it will also take advantage of the b_pabd. The
 * L2ARC will always write the contents of b_pabd to the L2ARC. This means
 * that when compressed ARC is enabled, the L2ARC blocks are identical
 * to the on-disk blocks in the main data pool. This provides a significant
 * advantage since the ARC can leverage the bp's checksum when reading from the
 * L2ARC to determine if the contents are valid. However, if the compressed
 * ARC is disabled, then the L2ARC's block must be transformed to look
 * like the physical block in the main data pool before comparing the
 * checksum and determining its validity.
 *
 * The L1ARC has a slightly different system for storing encrypted data.
 * Raw (encrypted + possibly compressed) data has a few subtle differences from
 * data that is just compressed. The biggest difference is that it is not
 * possible to decrypt encrypted data (or vice-versa) if the keys aren't loaded.
 * The other difference is that encryption cannot be treated as a suggestion.
 * If a caller would prefer compressed data, but they actually wind up with
 * uncompressed data the worst thing that could happen is there might be a
 * performance hit. If the caller requests encrypted data, however, we must be
 * sure they actually get it or else secret information could be leaked. Raw
 * data is stored in hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore,
 * may have both an encrypted version and a decrypted version of its data at
 * once. When a caller needs a raw arc_buf_t, it is allocated and the data is
 * copied out of this header. To avoid complications with b_pabd, raw buffers
 * cannot be shared.
 */
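
/*
 * An illustrative sketch of the raw-data rules above (not part of the
 * implementation; arc_is_encrypted() and arc_untransform() are defined
 * later in this file, and the spa/zb variables stand in for a real
 * caller's context): a consumer that may have been handed a raw buffer
 * checks before reading it.
 *
 *	if (arc_is_encrypted(buf)) {
 *		(keys must be loaded before the data is readable)
 *		error = arc_untransform(buf, spa, zb, B_FALSE);
 *	}
 */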
#include <sys/spa.h>
#include <sys/zio.h>
#include <sys/spa_impl.h>
#include <sys/zio_compress.h>
#include <sys/zio_checksum.h>
#include <sys/zfs_context.h>
#include <sys/arc.h>
#include <sys/zfs_refcount.h>
#include <sys/vdev.h>
#include <sys/vdev_impl.h>
#include <sys/dsl_pool.h>
#include <sys/multilist.h>
#include <sys/abd.h>
#include <sys/zil.h>
#include <sys/fm/fs/zfs.h>
#include <sys/callb.h>
#include <sys/kstat.h>
#include <sys/zthr.h>
#include <zfs_fletcher.h>
#include <sys/arc_impl.h>
#include <sys/trace_zfs.h>
#include <sys/aggsum.h>
#include <sys/wmsum.h>
#include <cityhash.h>
#include <sys/vdev_trim.h>
#include <sys/zfs_racct.h>
#include <sys/zstd/zstd.h>

#ifndef _KERNEL
/* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */
boolean_t arc_watch = B_FALSE;
#endif

/*
 * This thread's job is to keep enough free memory in the system, by
 * calling arc_kmem_reap_soon() plus arc_reduce_target_size(), which improves
 * arc_available_memory().
 */
static zthr_t *arc_reap_zthr;

/*
 * This thread's job is to keep arc_size under arc_c, by calling
 * arc_evict(), which improves arc_is_overflowing().
 */
static zthr_t *arc_evict_zthr;
static arc_buf_hdr_t **arc_state_evict_markers;
static int arc_state_evict_marker_count;

static kmutex_t arc_evict_lock;
static boolean_t arc_evict_needed = B_FALSE;
static clock_t arc_last_uncached_flush;

/*
 * Count of bytes evicted since boot.
 */
static uint64_t arc_evict_count;

/*
 * List of arc_evict_waiter_t's, representing threads waiting for the
 * arc_evict_count to reach specific values.
 */
static list_t arc_evict_waiters;

/*
 * When arc_is_overflowing(), arc_get_data_impl() waits for this percent of
 * the requested amount of data to be evicted.  For example, by default for
 * every 2KB that's evicted, 1KB of it may be "reused" by a new allocation.
 * Since this is above 100%, it ensures that progress is made towards getting
 * arc_size under arc_c.  Since this is finite, it ensures that allocations
 * can still happen, even during the potentially long time that arc_size is
 * more than arc_c.
 */
static uint_t zfs_arc_eviction_pct = 200;

/*
 * The number of headers to evict in arc_evict_state_impl() before
 * dropping the sublist lock and evicting from another sublist. A lower
 * value means we're more likely to evict the "correct" header (i.e. the
 * oldest header in the arc state), but comes with higher overhead
 * (i.e. more invocations of arc_evict_state_impl()).
 */
static uint_t zfs_arc_evict_batch_limit = 10;

/* number of seconds before growing cache again */
uint_t arc_grow_retry = 5;

/*
 * Minimum time between calls to arc_kmem_reap_soon().
 */
static const int arc_kmem_cache_reap_retry_ms = 1000;

/* shift of arc_c for calculating overflow limit in arc_get_data_impl */
static int zfs_arc_overflow_shift = 8;

/* log2(fraction of arc to reclaim) */
uint_t arc_shrink_shift = 7;

/* percent of pagecache to reclaim arc to */
#ifdef _KERNEL
uint_t zfs_arc_pc_percent = 0;
#endif
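
/*
 * Illustrative arithmetic for the shift tunables above (an example, not
 * part of the implementation): with arc_c = 4 GiB, arc_shrink_shift = 7
 * means one shrink step reclaims arc_c >> 7 = 32 MiB, and
 * zfs_arc_overflow_shift = 8 puts the overflow threshold at
 * arc_c >> 8 = 16 MiB over the target size.
 */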
/*
 * log2(fraction of ARC which must be free to allow growing).
 * I.e. if there is less than arc_c >> arc_no_grow_shift free memory,
 * when reading a new block into the ARC, we will evict an equal-sized block
 * from the ARC.
 *
 * This must be less than arc_shrink_shift, so that when we shrink the ARC,
 * we will still not allow it to grow.
 */
uint_t arc_no_grow_shift = 5;


/*
 * minimum lifespan of a prefetch block in clock ticks
 * (initialized in arc_init())
 */
static uint_t arc_min_prefetch_ms;
static uint_t arc_min_prescient_prefetch_ms;

/*
 * If this percent of memory is free, don't throttle.
 */
uint_t arc_lotsfree_percent = 10;

/*
 * The arc has filled available memory and has now warmed up.
 */
boolean_t arc_warm;

/*
 * These tunables are for performance analysis.
 */
uint64_t zfs_arc_max = 0;
uint64_t zfs_arc_min = 0;
static uint64_t zfs_arc_dnode_limit = 0;
static uint_t zfs_arc_dnode_reduce_percent = 10;
static uint_t zfs_arc_grow_retry = 0;
static uint_t zfs_arc_shrink_shift = 0;
uint_t zfs_arc_average_blocksize = 8 * 1024; /* 8KB */

/*
 * ARC dirty data constraints for arc_tempreserve_space() throttle:
 *  * total dirty data limit
 *  * anon block dirty limit
 *  * each pool's anon allowance
 */
static const unsigned long zfs_arc_dirty_limit_percent = 50;
static const unsigned long zfs_arc_anon_limit_percent = 25;
static const unsigned long zfs_arc_pool_dirty_percent = 20;

/*
 * Enable or disable compressed arc buffers.
 */
int zfs_compressed_arc_enabled = B_TRUE;

/*
 * Balance between metadata and data on ghost hits.  Values above 100
 * increase metadata caching by proportionally reducing effect of ghost
 * data hits on target data/metadata rate.
 */
static uint_t zfs_arc_meta_balance = 500;
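
/*
 * Illustrative reading of zfs_arc_meta_balance above (an editorial
 * example, not a statement of the exact algorithm): at the default of
 * 500, a ghost hit on data is given roughly 100/500 = 1/5 the weight of
 * a ghost hit on metadata when the data/metadata target is adjusted,
 * biasing the cache toward retaining metadata.
 */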
/*
 * Percentage that can be consumed by dnodes of ARC meta buffers.
 */
static uint_t zfs_arc_dnode_limit_percent = 10;

/*
 * These tunables are Linux-specific.
 */
static uint64_t zfs_arc_sys_free = 0;
static uint_t zfs_arc_min_prefetch_ms = 0;
static uint_t zfs_arc_min_prescient_prefetch_ms = 0;
static uint_t zfs_arc_lotsfree_percent = 10;

/*
 * Number of arc_prune threads.
 */
static int zfs_arc_prune_task_threads = 1;

/* The 7 states: */
arc_state_t ARC_anon;
arc_state_t ARC_mru;
arc_state_t ARC_mru_ghost;
arc_state_t ARC_mfu;
arc_state_t ARC_mfu_ghost;
arc_state_t ARC_l2c_only;
arc_state_t ARC_uncached;

arc_stats_t arc_stats = {
	{ "hits", KSTAT_DATA_UINT64 },
	{ "iohits", KSTAT_DATA_UINT64 },
	{ "misses", KSTAT_DATA_UINT64 },
	{ "demand_data_hits", KSTAT_DATA_UINT64 },
	{ "demand_data_iohits", KSTAT_DATA_UINT64 },
	{ "demand_data_misses", KSTAT_DATA_UINT64 },
	{ "demand_metadata_hits", KSTAT_DATA_UINT64 },
	{ "demand_metadata_iohits", KSTAT_DATA_UINT64 },
	{ "demand_metadata_misses", KSTAT_DATA_UINT64 },
	{ "prefetch_data_hits", KSTAT_DATA_UINT64 },
	{ "prefetch_data_iohits", KSTAT_DATA_UINT64 },
	{ "prefetch_data_misses", KSTAT_DATA_UINT64 },
	{ "prefetch_metadata_hits", KSTAT_DATA_UINT64 },
	{ "prefetch_metadata_iohits", KSTAT_DATA_UINT64 },
	{ "prefetch_metadata_misses", KSTAT_DATA_UINT64 },
	{ "mru_hits", KSTAT_DATA_UINT64 },
	{ "mru_ghost_hits", KSTAT_DATA_UINT64 },
	{ "mfu_hits", KSTAT_DATA_UINT64 },
	{ "mfu_ghost_hits", KSTAT_DATA_UINT64 },
	{ "uncached_hits", KSTAT_DATA_UINT64 },
	{ "deleted", KSTAT_DATA_UINT64 },
	{ "mutex_miss", KSTAT_DATA_UINT64 },
	{ "access_skip", KSTAT_DATA_UINT64 },
	{ "evict_skip", KSTAT_DATA_UINT64 },
	{ "evict_not_enough", KSTAT_DATA_UINT64 },
	{ "evict_l2_cached", KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible", KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible_mfu", KSTAT_DATA_UINT64 },
	{ "evict_l2_eligible_mru", KSTAT_DATA_UINT64 },
	{ "evict_l2_ineligible", KSTAT_DATA_UINT64 },
	{ "evict_l2_skip", KSTAT_DATA_UINT64 },
	{ "hash_elements", KSTAT_DATA_UINT64 },
	{ "hash_elements_max", KSTAT_DATA_UINT64 },
	{ "hash_collisions", KSTAT_DATA_UINT64 },
	{ "hash_chains", KSTAT_DATA_UINT64 },
	{ "hash_chain_max", KSTAT_DATA_UINT64 },
	{ "meta", KSTAT_DATA_UINT64 },
	{ "pd", KSTAT_DATA_UINT64 },
	{ "pm", KSTAT_DATA_UINT64 },
	{ "c", KSTAT_DATA_UINT64 },
	{ "c_min", KSTAT_DATA_UINT64 },
	{ "c_max", KSTAT_DATA_UINT64 },
	{ "size", KSTAT_DATA_UINT64 },
	{ "compressed_size", KSTAT_DATA_UINT64 },
	{ "uncompressed_size", KSTAT_DATA_UINT64 },
	{ "overhead_size", KSTAT_DATA_UINT64 },
	{ "hdr_size", KSTAT_DATA_UINT64 },
	{ "data_size", KSTAT_DATA_UINT64 },
	{ "metadata_size", KSTAT_DATA_UINT64 },
	{ "dbuf_size", KSTAT_DATA_UINT64 },
	{ "dnode_size", KSTAT_DATA_UINT64 },
	{ "bonus_size", KSTAT_DATA_UINT64 },
#if defined(COMPAT_FREEBSD11)
	{ "other_size", KSTAT_DATA_UINT64 },
#endif
	{ "anon_size", KSTAT_DATA_UINT64 },
	{ "anon_data", KSTAT_DATA_UINT64 },
	{ "anon_metadata", KSTAT_DATA_UINT64 },
	{ "anon_evictable_data", KSTAT_DATA_UINT64 },
	{ "anon_evictable_metadata", KSTAT_DATA_UINT64 },
	{ "mru_size", KSTAT_DATA_UINT64 },
	{ "mru_data", KSTAT_DATA_UINT64 },
	{ "mru_metadata", KSTAT_DATA_UINT64 },
	{ "mru_evictable_data", KSTAT_DATA_UINT64 },
	{ "mru_evictable_metadata", KSTAT_DATA_UINT64 },
	{ "mru_ghost_size", KSTAT_DATA_UINT64 },
"mru_ghost_data", KSTAT_DATA_UINT64 }, 544 { "mru_ghost_metadata", KSTAT_DATA_UINT64 }, 545 { "mru_ghost_evictable_data", KSTAT_DATA_UINT64 }, 546 { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 547 { "mfu_size", KSTAT_DATA_UINT64 }, 548 { "mfu_data", KSTAT_DATA_UINT64 }, 549 { "mfu_metadata", KSTAT_DATA_UINT64 }, 550 { "mfu_evictable_data", KSTAT_DATA_UINT64 }, 551 { "mfu_evictable_metadata", KSTAT_DATA_UINT64 }, 552 { "mfu_ghost_size", KSTAT_DATA_UINT64 }, 553 { "mfu_ghost_data", KSTAT_DATA_UINT64 }, 554 { "mfu_ghost_metadata", KSTAT_DATA_UINT64 }, 555 { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64 }, 556 { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 557 { "uncached_size", KSTAT_DATA_UINT64 }, 558 { "uncached_data", KSTAT_DATA_UINT64 }, 559 { "uncached_metadata", KSTAT_DATA_UINT64 }, 560 { "uncached_evictable_data", KSTAT_DATA_UINT64 }, 561 { "uncached_evictable_metadata", KSTAT_DATA_UINT64 }, 562 { "l2_hits", KSTAT_DATA_UINT64 }, 563 { "l2_misses", KSTAT_DATA_UINT64 }, 564 { "l2_prefetch_asize", KSTAT_DATA_UINT64 }, 565 { "l2_mru_asize", KSTAT_DATA_UINT64 }, 566 { "l2_mfu_asize", KSTAT_DATA_UINT64 }, 567 { "l2_bufc_data_asize", KSTAT_DATA_UINT64 }, 568 { "l2_bufc_metadata_asize", KSTAT_DATA_UINT64 }, 569 { "l2_feeds", KSTAT_DATA_UINT64 }, 570 { "l2_rw_clash", KSTAT_DATA_UINT64 }, 571 { "l2_read_bytes", KSTAT_DATA_UINT64 }, 572 { "l2_write_bytes", KSTAT_DATA_UINT64 }, 573 { "l2_writes_sent", KSTAT_DATA_UINT64 }, 574 { "l2_writes_done", KSTAT_DATA_UINT64 }, 575 { "l2_writes_error", KSTAT_DATA_UINT64 }, 576 { "l2_writes_lock_retry", KSTAT_DATA_UINT64 }, 577 { "l2_evict_lock_retry", KSTAT_DATA_UINT64 }, 578 { "l2_evict_reading", KSTAT_DATA_UINT64 }, 579 { "l2_evict_l1cached", KSTAT_DATA_UINT64 }, 580 { "l2_free_on_write", KSTAT_DATA_UINT64 }, 581 { "l2_abort_lowmem", KSTAT_DATA_UINT64 }, 582 { "l2_cksum_bad", KSTAT_DATA_UINT64 }, 583 { "l2_io_error", KSTAT_DATA_UINT64 }, 584 { "l2_size", KSTAT_DATA_UINT64 }, 585 { "l2_asize", KSTAT_DATA_UINT64 }, 586 { "l2_hdr_size", KSTAT_DATA_UINT64 }, 587 { "l2_log_blk_writes", KSTAT_DATA_UINT64 }, 588 { "l2_log_blk_avg_asize", KSTAT_DATA_UINT64 }, 589 { "l2_log_blk_asize", KSTAT_DATA_UINT64 }, 590 { "l2_log_blk_count", KSTAT_DATA_UINT64 }, 591 { "l2_data_to_meta_ratio", KSTAT_DATA_UINT64 }, 592 { "l2_rebuild_success", KSTAT_DATA_UINT64 }, 593 { "l2_rebuild_unsupported", KSTAT_DATA_UINT64 }, 594 { "l2_rebuild_io_errors", KSTAT_DATA_UINT64 }, 595 { "l2_rebuild_dh_errors", KSTAT_DATA_UINT64 }, 596 { "l2_rebuild_cksum_lb_errors", KSTAT_DATA_UINT64 }, 597 { "l2_rebuild_lowmem", KSTAT_DATA_UINT64 }, 598 { "l2_rebuild_size", KSTAT_DATA_UINT64 }, 599 { "l2_rebuild_asize", KSTAT_DATA_UINT64 }, 600 { "l2_rebuild_bufs", KSTAT_DATA_UINT64 }, 601 { "l2_rebuild_bufs_precached", KSTAT_DATA_UINT64 }, 602 { "l2_rebuild_log_blks", KSTAT_DATA_UINT64 }, 603 { "memory_throttle_count", KSTAT_DATA_UINT64 }, 604 { "memory_direct_count", KSTAT_DATA_UINT64 }, 605 { "memory_indirect_count", KSTAT_DATA_UINT64 }, 606 { "memory_all_bytes", KSTAT_DATA_UINT64 }, 607 { "memory_free_bytes", KSTAT_DATA_UINT64 }, 608 { "memory_available_bytes", KSTAT_DATA_INT64 }, 609 { "arc_no_grow", KSTAT_DATA_UINT64 }, 610 { "arc_tempreserve", KSTAT_DATA_UINT64 }, 611 { "arc_loaned_bytes", KSTAT_DATA_UINT64 }, 612 { "arc_prune", KSTAT_DATA_UINT64 }, 613 { "arc_meta_used", KSTAT_DATA_UINT64 }, 614 { "arc_dnode_limit", KSTAT_DATA_UINT64 }, 615 { "async_upgrade_sync", KSTAT_DATA_UINT64 }, 616 { "predictive_prefetch", KSTAT_DATA_UINT64 }, 617 { "demand_hit_predictive_prefetch", 
	{ "demand_iohit_predictive_prefetch", KSTAT_DATA_UINT64 },
	{ "prescient_prefetch", KSTAT_DATA_UINT64 },
	{ "demand_hit_prescient_prefetch", KSTAT_DATA_UINT64 },
	{ "demand_iohit_prescient_prefetch", KSTAT_DATA_UINT64 },
	{ "arc_need_free", KSTAT_DATA_UINT64 },
	{ "arc_sys_free", KSTAT_DATA_UINT64 },
	{ "arc_raw_size", KSTAT_DATA_UINT64 },
	{ "cached_only_in_progress", KSTAT_DATA_UINT64 },
	{ "abd_chunk_waste_size", KSTAT_DATA_UINT64 },
};

arc_sums_t arc_sums;

#define ARCSTAT_MAX(stat, val) { \
	uint64_t m; \
	while ((val) > (m = arc_stats.stat.value.ui64) && \
	    (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \
		continue; \
}

/*
 * We define a macro to allow ARC hits/misses to be easily broken down by
 * two separate conditions, giving a total of four different subtypes for
 * each of hits and misses (so eight statistics total).
 */
#define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \
	if (cond1) { \
		if (cond2) { \
			ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \
		} else { \
			ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \
		} \
	} else { \
		if (cond2) { \
			ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \
		} else { \
			ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\
		} \
	}

/*
 * This macro allows us to use kstats as floating averages. Each time we
 * update this kstat, we first factor it and the update value by
 * ARCSTAT_F_AVG_FACTOR to shrink the new value's contribution to the overall
 * average. This macro assumes that integer loads and stores are atomic, but
 * is not safe for multiple writers updating the kstat in parallel (only the
 * last writer's update will remain).
 */
#define ARCSTAT_F_AVG_FACTOR	3
#define ARCSTAT_F_AVG(stat, value) \
	do { \
		uint64_t x = ARCSTAT(stat); \
		x = x - x / ARCSTAT_F_AVG_FACTOR + \
		    (value) / ARCSTAT_F_AVG_FACTOR; \
		ARCSTAT(stat) = x; \
	} while (0)

static kstat_t *arc_ksp;
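
/*
 * Illustrative arithmetic for ARCSTAT_F_AVG() above (an example, not
 * part of the implementation): with ARCSTAT_F_AVG_FACTOR == 3, a stat
 * holding 900 that is updated with the value 300 becomes
 * 900 - 900/3 + 300/3 = 700; each update contributes a third of its
 * value and decays a third of the previous average.
 */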
/*
 * There are several ARC variables that are critical to export as kstats --
 * but we don't want to have to grovel around in the kstat whenever we wish to
 * manipulate them. For these variables, we therefore define them to be in
 * terms of the statistic variable. This assures that we are not introducing
 * the possibility of inconsistency by having shadow copies of the variables,
 * while still allowing the code to be readable.
 */
#define arc_tempreserve	ARCSTAT(arcstat_tempreserve)
#define arc_loaned_bytes	ARCSTAT(arcstat_loaned_bytes)
#define arc_dnode_limit	ARCSTAT(arcstat_dnode_limit) /* max size for dnodes */
#define arc_need_free	ARCSTAT(arcstat_need_free) /* waiting to be evicted */

hrtime_t arc_growtime;
list_t arc_prune_list;
kmutex_t arc_prune_mtx;
taskq_t *arc_prune_taskq;

#define GHOST_STATE(state) \
	((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \
	(state) == arc_l2c_only)

#define HDR_IN_HASH_TABLE(hdr)	((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE)
#define HDR_IO_IN_PROGRESS(hdr)	((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS)
#define HDR_IO_ERROR(hdr)	((hdr)->b_flags & ARC_FLAG_IO_ERROR)
#define HDR_PREFETCH(hdr)	((hdr)->b_flags & ARC_FLAG_PREFETCH)
#define HDR_PRESCIENT_PREFETCH(hdr) \
	((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH)
#define HDR_COMPRESSION_ENABLED(hdr) \
	((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC)

#define HDR_L2CACHE(hdr)	((hdr)->b_flags & ARC_FLAG_L2CACHE)
#define HDR_UNCACHED(hdr)	((hdr)->b_flags & ARC_FLAG_UNCACHED)
#define HDR_L2_READING(hdr) \
	(((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \
	((hdr)->b_flags & ARC_FLAG_HAS_L2HDR))
#define HDR_L2_WRITING(hdr)	((hdr)->b_flags & ARC_FLAG_L2_WRITING)
#define HDR_L2_EVICTED(hdr)	((hdr)->b_flags & ARC_FLAG_L2_EVICTED)
#define HDR_L2_WRITE_HEAD(hdr)	((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD)
#define HDR_PROTECTED(hdr)	((hdr)->b_flags & ARC_FLAG_PROTECTED)
#define HDR_NOAUTH(hdr)	((hdr)->b_flags & ARC_FLAG_NOAUTH)
#define HDR_SHARED_DATA(hdr)	((hdr)->b_flags & ARC_FLAG_SHARED_DATA)

#define HDR_ISTYPE_METADATA(hdr) \
	((hdr)->b_flags & ARC_FLAG_BUFC_METADATA)
#define HDR_ISTYPE_DATA(hdr)	(!HDR_ISTYPE_METADATA(hdr))

#define HDR_HAS_L1HDR(hdr)	((hdr)->b_flags & ARC_FLAG_HAS_L1HDR)
#define HDR_HAS_L2HDR(hdr)	((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)
#define HDR_HAS_RABD(hdr) \
	(HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) && \
	(hdr)->b_crypt_hdr.b_rabd != NULL)
#define HDR_ENCRYPTED(hdr) \
	(HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))
#define HDR_AUTHENTICATED(hdr) \
	(HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot))

/* For storing compression mode in b_flags */
#define HDR_COMPRESS_OFFSET	(highbit64(ARC_FLAG_COMPRESS_0) - 1)

#define HDR_GET_COMPRESS(hdr)	((enum zio_compress)BF32_GET((hdr)->b_flags, \
	HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS))
#define HDR_SET_COMPRESS(hdr, cmp)	BF32_SET((hdr)->b_flags, \
	HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp));

#define ARC_BUF_LAST(buf)	((buf)->b_next == NULL)
#define ARC_BUF_SHARED(buf)	((buf)->b_flags & ARC_BUF_FLAG_SHARED)
#define ARC_BUF_COMPRESSED(buf)	((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED)
#define ARC_BUF_ENCRYPTED(buf)	((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED)

/*
 * Other sizes
 */

#define HDR_FULL_CRYPT_SIZE	((int64_t)sizeof (arc_buf_hdr_t))
#define HDR_FULL_SIZE	((int64_t)offsetof(arc_buf_hdr_t, b_crypt_hdr))
#define HDR_L2ONLY_SIZE	((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr))

/*
 * Hash table routines
 */

#define BUF_LOCKS 2048
typedef struct buf_hash_table {
	uint64_t ht_mask;
	arc_buf_hdr_t **ht_table;
	kmutex_t ht_locks[BUF_LOCKS] ____cacheline_aligned;
} buf_hash_table_t;

static buf_hash_table_t buf_hash_table;
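
/*
 * Illustrative example for the lock array above (not part of the
 * implementation): BUF_HASH_LOCK() below masks the hash index with
 * (BUF_LOCKS - 1), so with BUF_LOCKS == 2048, headers hashing to
 * indices 5 and 2053 share the same mutex (2053 & 2047 == 5).
 */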
#define BUF_HASH_INDEX(spa, dva, birth) \
	(buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
#define BUF_HASH_LOCK(idx) (&buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
#define HDR_LOCK(hdr) \
	(BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))

uint64_t zfs_crc64_table[256];

/*
 * Level 2 ARC
 */

#define L2ARC_WRITE_SIZE	(8 * 1024 * 1024)	/* initial write max */
#define L2ARC_HEADROOM		2			/* num of writes */

/*
 * If we discover during ARC scan any buffers to be compressed, we boost
 * our headroom for the next scanning cycle by this percentage multiple.
 */
#define L2ARC_HEADROOM_BOOST	200
#define L2ARC_FEED_SECS		1		/* caching interval secs */
#define L2ARC_FEED_MIN_MS	200		/* min caching interval ms */

/*
 * We can feed L2ARC from two states of ARC buffers, mru and mfu,
 * and each of the states has two types: data and metadata.
 */
#define L2ARC_FEED_TYPES	4

/* L2ARC Performance Tunables */
uint64_t l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
uint64_t l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
uint64_t l2arc_headroom = L2ARC_HEADROOM;	/* # of dev writes */
uint64_t l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
uint64_t l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
uint64_t l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
int l2arc_noprefetch = B_TRUE;			/* don't cache prefetch bufs */
int l2arc_feed_again = B_TRUE;			/* turbo warmup */
int l2arc_norw = B_FALSE;			/* no reads during writes */
static uint_t l2arc_meta_percent = 33;		/* limit on headers size */

/*
 * L2ARC Internals
 */
static list_t L2ARC_dev_list;			/* device list */
static list_t *l2arc_dev_list;			/* device list pointer */
static kmutex_t l2arc_dev_mtx;			/* device list mutex */
static l2arc_dev_t *l2arc_dev_last;		/* last device used */
static list_t L2ARC_free_on_write;		/* free after write buf list */
static list_t *l2arc_free_on_write;		/* free after write list ptr */
static kmutex_t l2arc_free_on_write_mtx;	/* mutex for list */
static uint64_t l2arc_ndev;			/* number of devices */

typedef struct l2arc_read_callback {
	arc_buf_hdr_t *l2rcb_hdr;		/* read header */
	blkptr_t l2rcb_bp;			/* original blkptr */
	zbookmark_phys_t l2rcb_zb;		/* original bookmark */
	int l2rcb_flags;			/* original flags */
	abd_t *l2rcb_abd;			/* temporary buffer */
} l2arc_read_callback_t;

typedef struct l2arc_data_free {
	/* protected by l2arc_free_on_write_mtx */
	abd_t *l2df_abd;
	size_t l2df_size;
	arc_buf_contents_t l2df_type;
	list_node_t l2df_list_node;
} l2arc_data_free_t;

typedef enum arc_fill_flags {
	ARC_FILL_LOCKED		= 1 << 0, /* hdr lock is held */
	ARC_FILL_COMPRESSED	= 1 << 1, /* fill with compressed data */
	ARC_FILL_ENCRYPTED	= 1 << 2, /* fill with encrypted data */
	ARC_FILL_NOAUTH		= 1 << 3, /* don't attempt to authenticate */
	ARC_FILL_IN_PLACE	= 1 << 4  /* fill in place (special case) */
} arc_fill_flags_t;

typedef enum arc_ovf_level {
	ARC_OVF_NONE,		/* ARC within target size. */
	ARC_OVF_SOME,		/* ARC is slightly overflowed. */
	ARC_OVF_SEVERE		/* ARC is severely overflowed. */
} arc_ovf_level_t;

static kmutex_t l2arc_feed_thr_lock;
static kcondvar_t l2arc_feed_thr_cv;
static uint8_t l2arc_thread_exit;

static kmutex_t l2arc_rebuild_thr_lock;
static kcondvar_t l2arc_rebuild_thr_cv;

enum arc_hdr_alloc_flags {
	ARC_HDR_ALLOC_RDATA	= 0x1,
	ARC_HDR_USE_RESERVE	= 0x4,
	ARC_HDR_ALLOC_LINEAR	= 0x8,
};


static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, const void *, int);
static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, const void *);
static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, const void *, int);
static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t,
    const void *);
static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, const void *);
static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size,
    const void *tag);
static void arc_hdr_free_abd(arc_buf_hdr_t *, boolean_t);
static void arc_hdr_alloc_abd(arc_buf_hdr_t *, int);
static void arc_hdr_destroy(arc_buf_hdr_t *);
static void arc_access(arc_buf_hdr_t *, arc_flags_t, boolean_t);
static void arc_buf_watch(arc_buf_t *);
static void arc_change_state(arc_state_t *, arc_buf_hdr_t *);

static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
static uint32_t arc_bufc_to_flags(arc_buf_contents_t);
static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);

static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
static void l2arc_read_done(zio_t *);
static void l2arc_do_free_on_write(void);
static void l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr,
    boolean_t state_only);

#define l2arc_hdr_arcstats_increment(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_TRUE, B_FALSE)
#define l2arc_hdr_arcstats_decrement(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_FALSE, B_FALSE)
#define l2arc_hdr_arcstats_increment_state(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_TRUE, B_TRUE)
#define l2arc_hdr_arcstats_decrement_state(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_FALSE, B_TRUE)

/*
 * l2arc_exclude_special : A zfs module parameter that controls whether buffers
 *		present on special vdevs are eligible for caching in L2ARC. If
 *		set to 1, exclude dbufs on special vdevs from being cached to
 *		L2ARC.
 */
int l2arc_exclude_special = 0;

/*
 * l2arc_mfuonly : A ZFS module parameter that controls whether only MFU
 *		metadata and data are cached from ARC into L2ARC.
 */
static int l2arc_mfuonly = 0;

/*
 * L2ARC TRIM
 * l2arc_trim_ahead : A ZFS module parameter that controls how much ahead of
 *		the current write size (l2arc_write_max) we should TRIM if we
 *		have filled the device. It is defined as a percentage of the
 *		write size. If set to 100 we trim twice the space required to
 *		accommodate upcoming writes. A minimum of 64MB will be trimmed.
 *		It also enables TRIM of the whole L2ARC device upon creation or
 *		addition to an existing pool or if the header of the device is
 *		invalid upon importing a pool or onlining a cache device. The
 *		default is 0, which disables TRIM on L2ARC altogether as it can
 *		put significant stress on the underlying storage devices. This
 *		will vary depending on how well the specific device handles
 *		these commands.
 */
static uint64_t l2arc_trim_ahead = 0;

/*
 * Performance tuning of L2ARC persistence:
 *
 * l2arc_rebuild_enabled : A ZFS module parameter that controls whether adding
 *		an L2ARC device (either at pool import or later) will attempt
 *		to rebuild L2ARC buffer contents.
 * l2arc_rebuild_blocks_min_l2size : A ZFS module parameter that controls
 *		whether log blocks are written to the L2ARC device. If the
 *		L2ARC device is less than 1GB, the amount of data l2arc_evict()
 *		evicts is significant compared to the amount of restored L2ARC
 *		data. In this case do not write log blocks in L2ARC in order
 *		not to waste space.
 */
static int l2arc_rebuild_enabled = B_TRUE;
static uint64_t l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024;

/* L2ARC persistence rebuild control routines. */
void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen);
static __attribute__((noreturn)) void l2arc_dev_rebuild_thread(void *arg);
static int l2arc_rebuild(l2arc_dev_t *dev);

/* L2ARC persistence read I/O routines. */
static int l2arc_dev_hdr_read(l2arc_dev_t *dev);
static int l2arc_log_blk_read(l2arc_dev_t *dev,
    const l2arc_log_blkptr_t *this_lp, const l2arc_log_blkptr_t *next_lp,
    l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb,
    zio_t *this_io, zio_t **next_io);
static zio_t *l2arc_log_blk_fetch(vdev_t *vd,
    const l2arc_log_blkptr_t *lp, l2arc_log_blk_phys_t *lb);
static void l2arc_log_blk_fetch_abort(zio_t *zio);

/* L2ARC persistence block restoration routines. */
static void l2arc_log_blk_restore(l2arc_dev_t *dev,
    const l2arc_log_blk_phys_t *lb, uint64_t lb_asize);
static void l2arc_hdr_restore(const l2arc_log_ent_phys_t *le,
    l2arc_dev_t *dev);

/* L2ARC persistence write I/O routines. */
static void l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio,
    l2arc_write_callback_t *cb);

/* L2ARC persistence auxiliary routines. */
boolean_t l2arc_log_blkptr_valid(l2arc_dev_t *dev,
    const l2arc_log_blkptr_t *lbp);
static boolean_t l2arc_log_blk_insert(l2arc_dev_t *dev,
    const arc_buf_hdr_t *ab);
boolean_t l2arc_range_check_overlap(uint64_t bottom,
    uint64_t top, uint64_t check);
static void l2arc_blk_fetch_done(zio_t *zio);
static inline uint64_t
    l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev);

/*
 * We use Cityhash for this. It's fast, and has good hash properties without
 * requiring any large static buffers.
 */
static uint64_t
buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth)
{
	return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth));
}

#define HDR_EMPTY(hdr) \
	((hdr)->b_dva.dva_word[0] == 0 && \
	(hdr)->b_dva.dva_word[1] == 0)

#define HDR_EMPTY_OR_LOCKED(hdr) \
	(HDR_EMPTY(hdr) || MUTEX_HELD(HDR_LOCK(hdr)))

#define HDR_EQUAL(spa, dva, birth, hdr) \
	((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \
	((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \
	((hdr)->b_birth == birth) && ((hdr)->b_spa == spa)

static void
buf_discard_identity(arc_buf_hdr_t *hdr)
{
	hdr->b_dva.dva_word[0] = 0;
	hdr->b_dva.dva_word[1] = 0;
	hdr->b_birth = 0;
}

static arc_buf_hdr_t *
buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp)
{
	const dva_t *dva = BP_IDENTITY(bp);
	uint64_t birth = BP_PHYSICAL_BIRTH(bp);
	uint64_t idx = BUF_HASH_INDEX(spa, dva, birth);
	kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
	arc_buf_hdr_t *hdr;

	mutex_enter(hash_lock);
	for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL;
	    hdr = hdr->b_hash_next) {
		if (HDR_EQUAL(spa, dva, birth, hdr)) {
			*lockp = hash_lock;
			return (hdr);
		}
	}
	mutex_exit(hash_lock);
	*lockp = NULL;
	return (NULL);
}

/*
 * Insert an entry into the hash table.  If there is already an element
 * equal to elem in the hash table, then the already existing element
 * will be returned and the new element will not be inserted.
 * Otherwise returns NULL.
 * If lockp == NULL, the caller is assumed to already hold the hash lock.
 */
static arc_buf_hdr_t *
buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp)
{
	uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);
	kmutex_t *hash_lock = BUF_HASH_LOCK(idx);
	arc_buf_hdr_t *fhdr;
	uint32_t i;

	ASSERT(!DVA_IS_EMPTY(&hdr->b_dva));
	ASSERT(hdr->b_birth != 0);
	ASSERT(!HDR_IN_HASH_TABLE(hdr));

	if (lockp != NULL) {
		*lockp = hash_lock;
		mutex_enter(hash_lock);
	} else {
		ASSERT(MUTEX_HELD(hash_lock));
	}

	for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL;
	    fhdr = fhdr->b_hash_next, i++) {
		if (HDR_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr))
			return (fhdr);
	}

	hdr->b_hash_next = buf_hash_table.ht_table[idx];
	buf_hash_table.ht_table[idx] = hdr;
	arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE);

	/* collect some hash table performance data */
	if (i > 0) {
		ARCSTAT_BUMP(arcstat_hash_collisions);
		if (i == 1)
			ARCSTAT_BUMP(arcstat_hash_chains);

		ARCSTAT_MAX(arcstat_hash_chain_max, i);
	}
	uint64_t he = atomic_inc_64_nv(
	    &arc_stats.arcstat_hash_elements.value.ui64);
	ARCSTAT_MAX(arcstat_hash_elements_max, he);

	return (NULL);
}
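
/*
 * An illustrative sketch of the buf_hash_insert() contract above (not
 * part of the implementation): a caller that loses an insertion race is
 * handed the pre-existing header back, with the hash lock held either
 * way.
 *
 *	kmutex_t *hash_lock;
 *	arc_buf_hdr_t *exists = buf_hash_insert(hdr, &hash_lock);
 *	if (exists != NULL) {
 *		(an equal header was inserted first; use it instead)
 *	}
 *	mutex_exit(hash_lock);
 */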
static void
buf_hash_remove(arc_buf_hdr_t *hdr)
{
	arc_buf_hdr_t *fhdr, **hdrp;
	uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth);

	ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx)));
	ASSERT(HDR_IN_HASH_TABLE(hdr));

	hdrp = &buf_hash_table.ht_table[idx];
	while ((fhdr = *hdrp) != hdr) {
		ASSERT3P(fhdr, !=, NULL);
		hdrp = &fhdr->b_hash_next;
	}
	*hdrp = hdr->b_hash_next;
	hdr->b_hash_next = NULL;
	arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE);

	/* collect some hash table performance data */
	atomic_dec_64(&arc_stats.arcstat_hash_elements.value.ui64);

	if (buf_hash_table.ht_table[idx] &&
	    buf_hash_table.ht_table[idx]->b_hash_next == NULL)
		ARCSTAT_BUMPDOWN(arcstat_hash_chains);
}

/*
 * Global data structures and functions for the buf kmem cache.
 */

static kmem_cache_t *hdr_full_cache;
static kmem_cache_t *hdr_full_crypt_cache;
static kmem_cache_t *hdr_l2only_cache;
static kmem_cache_t *buf_cache;

static void
buf_fini(void)
{
#if defined(_KERNEL)
	/*
	 * Large allocations which do not require contiguous pages
	 * should be using vmem_free() in the linux kernel.
	 */
	vmem_free(buf_hash_table.ht_table,
	    (buf_hash_table.ht_mask + 1) * sizeof (void *));
#else
	kmem_free(buf_hash_table.ht_table,
	    (buf_hash_table.ht_mask + 1) * sizeof (void *));
#endif
	for (int i = 0; i < BUF_LOCKS; i++)
		mutex_destroy(BUF_HASH_LOCK(i));
	kmem_cache_destroy(hdr_full_cache);
	kmem_cache_destroy(hdr_full_crypt_cache);
	kmem_cache_destroy(hdr_l2only_cache);
	kmem_cache_destroy(buf_cache);
}

/*
 * Constructor callback - called when the cache is empty
 * and a new buf is requested.
 */
static int
hdr_full_cons(void *vbuf, void *unused, int kmflag)
{
	(void) unused, (void) kmflag;
	arc_buf_hdr_t *hdr = vbuf;

	memset(hdr, 0, HDR_FULL_SIZE);
	hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS;
	cv_init(&hdr->b_l1hdr.b_cv, NULL, CV_DEFAULT, NULL);
	zfs_refcount_create(&hdr->b_l1hdr.b_refcnt);
#ifdef ZFS_DEBUG
	mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL);
#endif
	multilist_link_init(&hdr->b_l1hdr.b_arc_node);
	list_link_init(&hdr->b_l2hdr.b_l2node);
	arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS);

	return (0);
}

static int
hdr_full_crypt_cons(void *vbuf, void *unused, int kmflag)
{
	(void) unused;
	arc_buf_hdr_t *hdr = vbuf;

	hdr_full_cons(vbuf, unused, kmflag);
	memset(&hdr->b_crypt_hdr, 0, sizeof (hdr->b_crypt_hdr));
	arc_space_consume(sizeof (hdr->b_crypt_hdr), ARC_SPACE_HDRS);

	return (0);
}

static int
hdr_l2only_cons(void *vbuf, void *unused, int kmflag)
{
	(void) unused, (void) kmflag;
	arc_buf_hdr_t *hdr = vbuf;

	memset(hdr, 0, HDR_L2ONLY_SIZE);
	arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);

	return (0);
}

static int
buf_cons(void *vbuf, void *unused, int kmflag)
{
	(void) unused, (void) kmflag;
	arc_buf_t *buf = vbuf;

	memset(buf, 0, sizeof (arc_buf_t));
	arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS);

	return (0);
}

/*
 * Destructor callback - called when a cached buf is
 * no longer required.
 */
static void
hdr_full_dest(void *vbuf, void *unused)
{
	(void) unused;
	arc_buf_hdr_t *hdr = vbuf;

	ASSERT(HDR_EMPTY(hdr));
	cv_destroy(&hdr->b_l1hdr.b_cv);
	zfs_refcount_destroy(&hdr->b_l1hdr.b_refcnt);
#ifdef ZFS_DEBUG
	mutex_destroy(&hdr->b_l1hdr.b_freeze_lock);
#endif
	ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node));
	arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS);
}

static void
hdr_full_crypt_dest(void *vbuf, void *unused)
{
	(void) vbuf, (void) unused;

	hdr_full_dest(vbuf, unused);
	arc_space_return(sizeof (((arc_buf_hdr_t *)NULL)->b_crypt_hdr),
	    ARC_SPACE_HDRS);
}

static void
hdr_l2only_dest(void *vbuf, void *unused)
{
	(void) unused;
	arc_buf_hdr_t *hdr = vbuf;

	ASSERT(HDR_EMPTY(hdr));
	arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS);
}

static void
buf_dest(void *vbuf, void *unused)
{
	(void) unused;
	(void) vbuf;

	arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS);
}

static void
buf_init(void)
{
	uint64_t *ct = NULL;
	uint64_t hsize = 1ULL << 12;
	int i, j;

	/*
	 * The hash table is big enough to fill all of physical memory
	 * with an average block size of zfs_arc_average_blocksize (default 8K).
	 * By default, the table will take up
	 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers).
	 */
	while (hsize * zfs_arc_average_blocksize < arc_all_memory())
		hsize <<= 1;
retry:
	buf_hash_table.ht_mask = hsize - 1;
#if defined(_KERNEL)
	/*
	 * Large allocations which do not require contiguous pages
	 * should be using vmem_alloc() in the linux kernel.
	 */
	buf_hash_table.ht_table =
	    vmem_zalloc(hsize * sizeof (void*), KM_SLEEP);
#else
	buf_hash_table.ht_table =
	    kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP);
#endif
	if (buf_hash_table.ht_table == NULL) {
		ASSERT(hsize > (1ULL << 8));
		hsize >>= 1;
		goto retry;
	}

	hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE,
	    0, hdr_full_cons, hdr_full_dest, NULL, NULL, NULL, 0);
	hdr_full_crypt_cache = kmem_cache_create("arc_buf_hdr_t_full_crypt",
	    HDR_FULL_CRYPT_SIZE, 0, hdr_full_crypt_cons, hdr_full_crypt_dest,
	    NULL, NULL, NULL, 0);
	hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only",
	    HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, NULL,
	    NULL, NULL, 0);
	buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t),
	    0, buf_cons, buf_dest, NULL, NULL, NULL, 0);

	for (i = 0; i < 256; i++)
		for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--)
			*ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY);

	for (i = 0; i < BUF_LOCKS; i++)
		mutex_init(BUF_HASH_LOCK(i), NULL, MUTEX_DEFAULT, NULL);
}

#define ARC_MINTIME	(hz>>4) /* 62 ms */
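
/*
 * Illustrative arithmetic for buf_init() above (an example, not part of
 * the implementation): on a 64 GiB machine with the default
 * zfs_arc_average_blocksize of 8 KiB, hsize doubles until it reaches
 * 64 GiB / 8 KiB = 8M buckets, i.e. 64 MiB of bucket pointers at
 * 8 bytes each -- the 1 MiB per GiB noted in the comment.
 */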
/*
 * This is the size that the buf occupies in memory. If the buf is compressed,
 * it will correspond to the compressed size. You should use this method of
 * getting the buf size unless you explicitly need the logical size.
 */
uint64_t
arc_buf_size(arc_buf_t *buf)
{
	return (ARC_BUF_COMPRESSED(buf) ?
	    HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr));
}

uint64_t
arc_buf_lsize(arc_buf_t *buf)
{
	return (HDR_GET_LSIZE(buf->b_hdr));
}

/*
 * This function will return B_TRUE if the buffer is encrypted in memory.
 * This buffer can be decrypted by calling arc_untransform().
 */
boolean_t
arc_is_encrypted(arc_buf_t *buf)
{
	return (ARC_BUF_ENCRYPTED(buf) != 0);
}

/*
 * Returns B_TRUE if the buffer represents data that has not had its MAC
 * verified yet.
 */
boolean_t
arc_is_unauthenticated(arc_buf_t *buf)
{
	return (HDR_NOAUTH(buf->b_hdr) != 0);
}

void
arc_get_raw_params(arc_buf_t *buf, boolean_t *byteorder, uint8_t *salt,
    uint8_t *iv, uint8_t *mac)
{
	arc_buf_hdr_t *hdr = buf->b_hdr;

	ASSERT(HDR_PROTECTED(hdr));

	memcpy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN);
	memcpy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN);
	memcpy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN);
	*byteorder = (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ?
	    ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER;
}

/*
 * Indicates how this buffer is compressed in memory. If it is not compressed
 * the value will be ZIO_COMPRESS_OFF. It can be made normally readable with
 * arc_untransform() as long as it is also unencrypted.
 */
enum zio_compress
arc_get_compression(arc_buf_t *buf)
{
	return (ARC_BUF_COMPRESSED(buf) ?
	    HDR_GET_COMPRESS(buf->b_hdr) : ZIO_COMPRESS_OFF);
}

/*
 * Return the compression algorithm used to store this data in the ARC. If ARC
 * compression is enabled or this is an encrypted block, this will be the same
 * as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF.
 */
static inline enum zio_compress
arc_hdr_get_compress(arc_buf_hdr_t *hdr)
{
	return (HDR_COMPRESSION_ENABLED(hdr) ?
	    HDR_GET_COMPRESS(hdr) : ZIO_COMPRESS_OFF);
}

uint8_t
arc_get_complevel(arc_buf_t *buf)
{
	return (buf->b_hdr->b_complevel);
}

static inline boolean_t
arc_buf_is_shared(arc_buf_t *buf)
{
	boolean_t shared = (buf->b_data != NULL &&
	    buf->b_hdr->b_l1hdr.b_pabd != NULL &&
	    abd_is_linear(buf->b_hdr->b_l1hdr.b_pabd) &&
	    buf->b_data == abd_to_buf(buf->b_hdr->b_l1hdr.b_pabd));
	IMPLY(shared, HDR_SHARED_DATA(buf->b_hdr));
	IMPLY(shared, ARC_BUF_SHARED(buf));
	IMPLY(shared, ARC_BUF_COMPRESSED(buf) || ARC_BUF_LAST(buf));

	/*
	 * It would be nice to assert arc_can_share() too, but the "hdr isn't
	 * already being shared" requirement prevents us from doing that.
	 */

	return (shared);
}

/*
 * Free the checksum associated with this header. If there is no checksum, this
 * is a no-op.
 */
static inline void
arc_cksum_free(arc_buf_hdr_t *hdr)
{
#ifdef ZFS_DEBUG
	ASSERT(HDR_HAS_L1HDR(hdr));

	mutex_enter(&hdr->b_l1hdr.b_freeze_lock);
	if (hdr->b_l1hdr.b_freeze_cksum != NULL) {
		kmem_free(hdr->b_l1hdr.b_freeze_cksum, sizeof (zio_cksum_t));
		hdr->b_l1hdr.b_freeze_cksum = NULL;
	}
	mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
#endif
}

/*
 * Return true iff at least one of the bufs on hdr is not compressed.
 * Encrypted buffers count as compressed.
 */
static boolean_t
arc_hdr_has_uncompressed_buf(arc_buf_hdr_t *hdr)
{
	ASSERT(hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY_OR_LOCKED(hdr));

	for (arc_buf_t *b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) {
		if (!ARC_BUF_COMPRESSED(b)) {
			return (B_TRUE);
		}
	}
	return (B_FALSE);
}


/*
 * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data
 * matches the checksum that is stored in the hdr. If there is no checksum,
 * or if the buf is compressed, this is a no-op.
 */
static void
arc_cksum_verify(arc_buf_t *buf)
{
#ifdef ZFS_DEBUG
	arc_buf_hdr_t *hdr = buf->b_hdr;
	zio_cksum_t zc;

	if (!(zfs_flags & ZFS_DEBUG_MODIFY))
		return;

	if (ARC_BUF_COMPRESSED(buf))
		return;

	ASSERT(HDR_HAS_L1HDR(hdr));

	mutex_enter(&hdr->b_l1hdr.b_freeze_lock);

	if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) {
		mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
		return;
	}

	fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc);
	if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc))
		panic("buffer modified while frozen!");
	mutex_exit(&hdr->b_l1hdr.b_freeze_lock);
#endif
}

/*
 * This function makes the assumption that data stored in the L2ARC
 * will be transformed exactly as it is in the main pool. Because of
 * this we can verify the checksum against the reading process's bp.
 */
static boolean_t
arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio)
{
	ASSERT(!BP_IS_EMBEDDED(zio->io_bp));
	VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr));

	/*
	 * Block pointers always store the checksum for the logical data.
	 * If the block pointer has the gang bit set, then the checksum
	 * it represents is for the reconstituted data and not for an
	 * individual gang member. The zio pipeline, however, must be able to
	 * determine the checksum of each of the gang constituents so it
	 * treats the checksum comparison differently than what we need
	 * for l2arc blocks. This prevents us from using the
	 * zio_checksum_error() interface directly. Instead we must call the
	 * zio_checksum_error_impl() so that we can ensure the checksum is
	 * generated using the correct checksum algorithm and accounts for the
	 * logical I/O size and not just a gang fragment.
	 */
	return (zio_checksum_error_impl(zio->io_spa, zio->io_bp,
	    BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size,
	    zio->io_offset, NULL) == 0);
}

/*
 * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a
 * checksum and attaches it to the buf's hdr so that we can ensure that the buf
 * isn't modified later on. If buf is compressed or there is already a checksum
 * on the hdr, this is a no-op (we only checksum uncompressed bufs).
 */
1514 */ 1515 static void 1516 arc_cksum_compute(arc_buf_t *buf) 1517 { 1518 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1519 return; 1520 1521 #ifdef ZFS_DEBUG 1522 arc_buf_hdr_t *hdr = buf->b_hdr; 1523 ASSERT(HDR_HAS_L1HDR(hdr)); 1524 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1525 if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) { 1526 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1527 return; 1528 } 1529 1530 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 1531 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1532 hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t), 1533 KM_SLEEP); 1534 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, 1535 hdr->b_l1hdr.b_freeze_cksum); 1536 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1537 #endif 1538 arc_buf_watch(buf); 1539 } 1540 1541 #ifndef _KERNEL 1542 void 1543 arc_buf_sigsegv(int sig, siginfo_t *si, void *unused) 1544 { 1545 (void) sig, (void) unused; 1546 panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr); 1547 } 1548 #endif 1549 1550 static void 1551 arc_buf_unwatch(arc_buf_t *buf) 1552 { 1553 #ifndef _KERNEL 1554 if (arc_watch) { 1555 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), 1556 PROT_READ | PROT_WRITE)); 1557 } 1558 #else 1559 (void) buf; 1560 #endif 1561 } 1562 1563 static void 1564 arc_buf_watch(arc_buf_t *buf) 1565 { 1566 #ifndef _KERNEL 1567 if (arc_watch) 1568 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), 1569 PROT_READ)); 1570 #else 1571 (void) buf; 1572 #endif 1573 } 1574 1575 static arc_buf_contents_t 1576 arc_buf_type(arc_buf_hdr_t *hdr) 1577 { 1578 arc_buf_contents_t type; 1579 if (HDR_ISTYPE_METADATA(hdr)) { 1580 type = ARC_BUFC_METADATA; 1581 } else { 1582 type = ARC_BUFC_DATA; 1583 } 1584 VERIFY3U(hdr->b_type, ==, type); 1585 return (type); 1586 } 1587 1588 boolean_t 1589 arc_is_metadata(arc_buf_t *buf) 1590 { 1591 return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0); 1592 } 1593 1594 static uint32_t 1595 arc_bufc_to_flags(arc_buf_contents_t type) 1596 { 1597 switch (type) { 1598 case ARC_BUFC_DATA: 1599 /* metadata field is 0 if buffer contains normal data */ 1600 return (0); 1601 case ARC_BUFC_METADATA: 1602 return (ARC_FLAG_BUFC_METADATA); 1603 default: 1604 break; 1605 } 1606 panic("undefined ARC buffer type!"); 1607 return ((uint32_t)-1); 1608 } 1609 1610 void 1611 arc_buf_thaw(arc_buf_t *buf) 1612 { 1613 arc_buf_hdr_t *hdr = buf->b_hdr; 1614 1615 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 1616 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 1617 1618 arc_cksum_verify(buf); 1619 1620 /* 1621 * Compressed buffers do not manipulate the b_freeze_cksum. 1622 */ 1623 if (ARC_BUF_COMPRESSED(buf)) 1624 return; 1625 1626 ASSERT(HDR_HAS_L1HDR(hdr)); 1627 arc_cksum_free(hdr); 1628 arc_buf_unwatch(buf); 1629 } 1630 1631 void 1632 arc_buf_freeze(arc_buf_t *buf) 1633 { 1634 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1635 return; 1636 1637 if (ARC_BUF_COMPRESSED(buf)) 1638 return; 1639 1640 ASSERT(HDR_HAS_L1HDR(buf->b_hdr)); 1641 arc_cksum_compute(buf); 1642 } 1643 1644 /* 1645 * The arc_buf_hdr_t's b_flags should never be modified directly. Instead, 1646 * the following functions should be used to ensure that the flags are 1647 * updated in a thread-safe way. When manipulating the flags either 1648 * the hash_lock must be held or the hdr must be undiscoverable. This 1649 * ensures that we're not racing with any other threads when updating 1650 * the flags. 
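 *
 * A minimal usage sketch (illustrative only), as seen in the error
 * paths of arc_buf_fill() below: with the hdr's hash lock held,
 *
 *	mutex_enter(hash_lock);
 *	arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
 *	mutex_exit(hash_lock);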
1651 */ 1652 static inline void 1653 arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) 1654 { 1655 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1656 hdr->b_flags |= flags; 1657 } 1658 1659 static inline void 1660 arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) 1661 { 1662 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1663 hdr->b_flags &= ~flags; 1664 } 1665 1666 /* 1667 * Setting the compression bits in the arc_buf_hdr_t's b_flags is 1668 * done in a special way since we have to clear and set bits 1669 * at the same time. Consumers that wish to set the compression bits 1670 * must use this function to ensure that the flags are updated in 1671 * a thread-safe manner. 1672 */ 1673 static void 1674 arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp) 1675 { 1676 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1677 1678 /* 1679 * Holes and embedded blocks will always have a psize = 0, so 1680 * we ignore the compression of the blkptr and simply mark 1681 * them as uncompressed. 1682 */ 1683 if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) { 1684 arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC); 1685 ASSERT(!HDR_COMPRESSION_ENABLED(hdr)); 1686 } else { 1687 arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC); 1688 ASSERT(HDR_COMPRESSION_ENABLED(hdr)); 1689 } 1690 1691 HDR_SET_COMPRESS(hdr, cmp); 1692 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp); 1693 } 1694 1695 /* 1696 * Looks for another buf on the same hdr which has the data decompressed, copies 1697 * from it, and returns true. If no such buf exists, returns false. 1698 */ 1699 static boolean_t 1700 arc_buf_try_copy_decompressed_data(arc_buf_t *buf) 1701 { 1702 arc_buf_hdr_t *hdr = buf->b_hdr; 1703 boolean_t copied = B_FALSE; 1704 1705 ASSERT(HDR_HAS_L1HDR(hdr)); 1706 ASSERT3P(buf->b_data, !=, NULL); 1707 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1708 1709 for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL; 1710 from = from->b_next) { 1711 /* can't use our own data buffer */ 1712 if (from == buf) { 1713 continue; 1714 } 1715 1716 if (!ARC_BUF_COMPRESSED(from)) { 1717 memcpy(buf->b_data, from->b_data, arc_buf_size(buf)); 1718 copied = B_TRUE; 1719 break; 1720 } 1721 } 1722 1723 #ifdef ZFS_DEBUG 1724 /* 1725 * There were no decompressed bufs, so there should not be a 1726 * checksum on the hdr either. 1727 */ 1728 if (zfs_flags & ZFS_DEBUG_MODIFY) 1729 EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL); 1730 #endif 1731 1732 return (copied); 1733 } 1734 1735 /* 1736 * Allocates an ARC buf header that's in an evicted & L2-cached state. 1737 * This is used during l2arc reconstruction to make empty ARC buffers 1738 * which circumvent the regular disk->arc->l2arc path and instead come 1739 * into being in the reverse order, i.e. l2arc->arc.
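 *
 * (Illustrative note: the persistent L2ARC rebuild code walks the log
 * blocks on the cache device and creates one such header per logged
 * buffer, calling arc_buf_alloc_l2only() with the size, compression,
 * and address fields recovered from each log entry.)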
1740 */ 1741 static arc_buf_hdr_t * 1742 arc_buf_alloc_l2only(size_t size, arc_buf_contents_t type, l2arc_dev_t *dev, 1743 dva_t dva, uint64_t daddr, int32_t psize, uint64_t birth, 1744 enum zio_compress compress, uint8_t complevel, boolean_t protected, 1745 boolean_t prefetch, arc_state_type_t arcs_state) 1746 { 1747 arc_buf_hdr_t *hdr; 1748 1749 ASSERT(size != 0); 1750 hdr = kmem_cache_alloc(hdr_l2only_cache, KM_SLEEP); 1751 hdr->b_birth = birth; 1752 hdr->b_type = type; 1753 hdr->b_flags = 0; 1754 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L2HDR); 1755 HDR_SET_LSIZE(hdr, size); 1756 HDR_SET_PSIZE(hdr, psize); 1757 arc_hdr_set_compress(hdr, compress); 1758 hdr->b_complevel = complevel; 1759 if (protected) 1760 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 1761 if (prefetch) 1762 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 1763 hdr->b_spa = spa_load_guid(dev->l2ad_vdev->vdev_spa); 1764 1765 hdr->b_dva = dva; 1766 1767 hdr->b_l2hdr.b_dev = dev; 1768 hdr->b_l2hdr.b_daddr = daddr; 1769 hdr->b_l2hdr.b_arcs_state = arcs_state; 1770 1771 return (hdr); 1772 } 1773 1774 /* 1775 * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t. 1776 */ 1777 static uint64_t 1778 arc_hdr_size(arc_buf_hdr_t *hdr) 1779 { 1780 uint64_t size; 1781 1782 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 1783 HDR_GET_PSIZE(hdr) > 0) { 1784 size = HDR_GET_PSIZE(hdr); 1785 } else { 1786 ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0); 1787 size = HDR_GET_LSIZE(hdr); 1788 } 1789 return (size); 1790 } 1791 1792 static int 1793 arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj) 1794 { 1795 int ret; 1796 uint64_t csize; 1797 uint64_t lsize = HDR_GET_LSIZE(hdr); 1798 uint64_t psize = HDR_GET_PSIZE(hdr); 1799 void *tmpbuf = NULL; 1800 abd_t *abd = hdr->b_l1hdr.b_pabd; 1801 1802 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1803 ASSERT(HDR_AUTHENTICATED(hdr)); 1804 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1805 1806 /* 1807 * The MAC is calculated on the compressed data that is stored on disk. 1808 * However, if compressed arc is disabled we will only have the 1809 * decompressed data available to us now. Compress it into a temporary 1810 * abd so we can verify the MAC. The performance overhead of this will 1811 * be relatively low, since most objects in an encrypted objset will 1812 * be encrypted (instead of authenticated) anyway. 1813 */ 1814 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1815 !HDR_COMPRESSION_ENABLED(hdr)) { 1816 1817 csize = zio_compress_data(HDR_GET_COMPRESS(hdr), 1818 hdr->b_l1hdr.b_pabd, &tmpbuf, lsize, hdr->b_complevel); 1819 ASSERT3P(tmpbuf, !=, NULL); 1820 ASSERT3U(csize, <=, psize); 1821 abd = abd_get_from_buf(tmpbuf, lsize); 1822 abd_take_ownership_of_buf(abd, B_TRUE); 1823 abd_zero_off(abd, csize, psize - csize); 1824 } 1825 1826 /* 1827 * Authentication is best effort. We authenticate whenever the key is 1828 * available. If we succeed we clear ARC_FLAG_NOAUTH. 
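 * (An ENOENT return from the MAC verification below means the key was
 * simply not loaded; in that case we keep ARC_FLAG_NOAUTH set and
 * still return success.)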
1829 */ 1830 if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) { 1831 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF); 1832 ASSERT3U(lsize, ==, psize); 1833 ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd, 1834 psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1835 } else { 1836 ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize, 1837 hdr->b_crypt_hdr.b_mac); 1838 } 1839 1840 if (ret == 0) 1841 arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH); 1842 else if (ret != ENOENT) 1843 goto error; 1844 1845 if (tmpbuf != NULL) 1846 abd_free(abd); 1847 1848 return (0); 1849 1850 error: 1851 if (tmpbuf != NULL) 1852 abd_free(abd); 1853 1854 return (ret); 1855 } 1856 1857 /* 1858 * This function will take a header that only has raw encrypted data in 1859 * b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in 1860 * b_l1hdr.b_pabd. If designated in the header flags, this function will 1861 * also decompress the data. 1862 */ 1863 static int 1864 arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb) 1865 { 1866 int ret; 1867 abd_t *cabd = NULL; 1868 void *tmp = NULL; 1869 boolean_t no_crypt = B_FALSE; 1870 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1871 1872 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1873 ASSERT(HDR_ENCRYPTED(hdr)); 1874 1875 arc_hdr_alloc_abd(hdr, 0); 1876 1877 ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot, 1878 B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv, 1879 hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd, 1880 hdr->b_crypt_hdr.b_rabd, &no_crypt); 1881 if (ret != 0) 1882 goto error; 1883 1884 if (no_crypt) { 1885 abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd, 1886 HDR_GET_PSIZE(hdr)); 1887 } 1888 1889 /* 1890 * If this header has disabled arc compression but the b_pabd is 1891 * compressed after decrypting it, we need to decompress the newly 1892 * decrypted data. 1893 */ 1894 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1895 !HDR_COMPRESSION_ENABLED(hdr)) { 1896 /* 1897 * We want to make sure that we are correctly honoring the 1898 * zfs_abd_scatter_enabled setting, so we allocate an abd here 1899 * and then loan a buffer from it, rather than allocating a 1900 * linear buffer and wrapping it in an abd later. 1901 */ 1902 cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 0); 1903 tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 1904 1905 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 1906 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 1907 HDR_GET_LSIZE(hdr), &hdr->b_complevel); 1908 if (ret != 0) { 1909 abd_return_buf(cabd, tmp, arc_hdr_size(hdr)); 1910 goto error; 1911 } 1912 1913 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 1914 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 1915 arc_hdr_size(hdr), hdr); 1916 hdr->b_l1hdr.b_pabd = cabd; 1917 } 1918 1919 return (0); 1920 1921 error: 1922 arc_hdr_free_abd(hdr, B_FALSE); 1923 if (cabd != NULL) 1924 arc_free_data_buf(hdr, cabd, arc_hdr_size(hdr), hdr); 1925 1926 return (ret); 1927 } 1928 1929 /* 1930 * This function is called during arc_buf_fill() to prepare the header's 1931 * abd plaintext pointer for use. This involves authenticating protected 1932 * data and decrypting encrypted data into the plaintext abd.
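 *
 * A hypothetical caller sketch (illustrative; it mirrors how
 * arc_buf_fill() below drives this function):
 *
 *	if (HDR_PROTECTED(hdr)) {
 *		error = arc_fill_hdr_crypt(hdr, hash_lock, spa, zb,
 *		    !!(flags & ARC_FILL_NOAUTH));
 *		if (error != 0)
 *			... mark the hdr with ARC_FLAG_IO_ERROR ...
 *	}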
1933 */ 1934 static int 1935 arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa, 1936 const zbookmark_phys_t *zb, boolean_t noauth) 1937 { 1938 int ret; 1939 1940 ASSERT(HDR_PROTECTED(hdr)); 1941 1942 if (hash_lock != NULL) 1943 mutex_enter(hash_lock); 1944 1945 if (HDR_NOAUTH(hdr) && !noauth) { 1946 /* 1947 * The caller requested authenticated data but our data has 1948 * not been authenticated yet. Verify the MAC now if we can. 1949 */ 1950 ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset); 1951 if (ret != 0) 1952 goto error; 1953 } else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) { 1954 /* 1955 * If we only have the encrypted version of the data, but the 1956 * unencrypted version was requested we take this opportunity 1957 * to store the decrypted version in the header for future use. 1958 */ 1959 ret = arc_hdr_decrypt(hdr, spa, zb); 1960 if (ret != 0) 1961 goto error; 1962 } 1963 1964 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1965 1966 if (hash_lock != NULL) 1967 mutex_exit(hash_lock); 1968 1969 return (0); 1970 1971 error: 1972 if (hash_lock != NULL) 1973 mutex_exit(hash_lock); 1974 1975 return (ret); 1976 } 1977 1978 /* 1979 * This function is used by the dbuf code to decrypt bonus buffers in place. 1980 * The dbuf code itself doesn't have any locking for decrypting a shared dnode 1981 * block, so we use the hash lock here to protect against concurrent calls to 1982 * arc_buf_fill(). 1983 */ 1984 static void 1985 arc_buf_untransform_in_place(arc_buf_t *buf) 1986 { 1987 arc_buf_hdr_t *hdr = buf->b_hdr; 1988 1989 ASSERT(HDR_ENCRYPTED(hdr)); 1990 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 1991 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1992 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1993 1994 zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data, 1995 arc_buf_size(buf)); 1996 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 1997 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 1998 hdr->b_crypt_hdr.b_ebufcnt -= 1; 1999 } 2000 2001 /* 2002 * Given a buf that has a data buffer attached to it, this function will 2003 * efficiently fill the buf with data of the specified compression setting from 2004 * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr 2005 * are already sharing a data buf, no copy is performed. 2006 * 2007 * If the buf is marked as compressed but uncompressed data was requested, this 2008 * will allocate a new data buffer for the buf, remove that flag, and fill the 2009 * buf with uncompressed data. You can't request a compressed buf on a hdr with 2010 * uncompressed data, and (since we haven't added support for it yet) if you 2011 * want compressed data your buf must already be marked as compressed and have 2012 * the correct-sized data buffer. 2013 */ 2014 static int 2015 arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 2016 arc_fill_flags_t flags) 2017 { 2018 int error = 0; 2019 arc_buf_hdr_t *hdr = buf->b_hdr; 2020 boolean_t hdr_compressed = 2021 (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 2022 boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0; 2023 boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0; 2024 dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap; 2025 kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? 
NULL : HDR_LOCK(hdr); 2026 2027 ASSERT3P(buf->b_data, !=, NULL); 2028 IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf)); 2029 IMPLY(compressed, ARC_BUF_COMPRESSED(buf)); 2030 IMPLY(encrypted, HDR_ENCRYPTED(hdr)); 2031 IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf)); 2032 IMPLY(encrypted, ARC_BUF_COMPRESSED(buf)); 2033 IMPLY(encrypted, !ARC_BUF_SHARED(buf)); 2034 2035 /* 2036 * If the caller wanted encrypted data we just need to copy it from 2037 * b_rabd and potentially byteswap it. We won't be able to do any 2038 * further transforms on it. 2039 */ 2040 if (encrypted) { 2041 ASSERT(HDR_HAS_RABD(hdr)); 2042 abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd, 2043 HDR_GET_PSIZE(hdr)); 2044 goto byteswap; 2045 } 2046 2047 /* 2048 * Adjust encrypted and authenticated headers to accommodate 2049 * the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are 2050 * allowed to fail decryption due to keys not being loaded 2051 * without being marked as an IO error. 2052 */ 2053 if (HDR_PROTECTED(hdr)) { 2054 error = arc_fill_hdr_crypt(hdr, hash_lock, spa, 2055 zb, !!(flags & ARC_FILL_NOAUTH)); 2056 if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) { 2057 return (error); 2058 } else if (error != 0) { 2059 if (hash_lock != NULL) 2060 mutex_enter(hash_lock); 2061 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2062 if (hash_lock != NULL) 2063 mutex_exit(hash_lock); 2064 return (error); 2065 } 2066 } 2067 2068 /* 2069 * There is a special case here for dnode blocks which are 2070 * decrypting their bonus buffers. These blocks may request to 2071 * be decrypted in-place. This is necessary because there may 2072 * be many dnodes pointing into this buffer and there is 2073 * currently no method to synchronize replacing the backing 2074 * b_data buffer and updating all of the pointers. Here we use 2075 * the hash lock to ensure there are no races. If the need 2076 * arises for other types to be decrypted in-place, they must 2077 * add handling here as well. 2078 */ 2079 if ((flags & ARC_FILL_IN_PLACE) != 0) { 2080 ASSERT(!hdr_compressed); 2081 ASSERT(!compressed); 2082 ASSERT(!encrypted); 2083 2084 if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) { 2085 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 2086 2087 if (hash_lock != NULL) 2088 mutex_enter(hash_lock); 2089 arc_buf_untransform_in_place(buf); 2090 if (hash_lock != NULL) 2091 mutex_exit(hash_lock); 2092 2093 /* Compute the hdr's checksum if necessary */ 2094 arc_cksum_compute(buf); 2095 } 2096 2097 return (0); 2098 } 2099 2100 if (hdr_compressed == compressed) { 2101 if (!arc_buf_is_shared(buf)) { 2102 abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd, 2103 arc_buf_size(buf)); 2104 } 2105 } else { 2106 ASSERT(hdr_compressed); 2107 ASSERT(!compressed); 2108 2109 /* 2110 * If the buf is sharing its data with the hdr, unlink it and 2111 * allocate a new data buffer for the buf. 
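 * (The hdr keeps its own b_pabd; only this buf gets a private
 * uncompressed copy, and arcstat_overhead_size grows accordingly.)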
2112 */ 2113 if (arc_buf_is_shared(buf)) { 2114 ASSERT(ARC_BUF_COMPRESSED(buf)); 2115 2116 /* We need to give the buf its own b_data */ 2117 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 2118 buf->b_data = 2119 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 2120 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 2121 2122 /* Previously overhead was 0; just add new overhead */ 2123 ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr)); 2124 } else if (ARC_BUF_COMPRESSED(buf)) { 2125 /* We need to reallocate the buf's b_data */ 2126 arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr), 2127 buf); 2128 buf->b_data = 2129 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 2130 2131 /* We increased the size of b_data; update overhead */ 2132 ARCSTAT_INCR(arcstat_overhead_size, 2133 HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr)); 2134 } 2135 2136 /* 2137 * Regardless of the buf's previous compression settings, it 2138 * should not be compressed at the end of this function. 2139 */ 2140 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 2141 2142 /* 2143 * Try copying the data from another buf which already has a 2144 * decompressed version. If that's not possible, it's time to 2145 * bite the bullet and decompress the data from the hdr. 2146 */ 2147 if (arc_buf_try_copy_decompressed_data(buf)) { 2148 /* Skip byteswapping and checksumming (already done) */ 2149 return (0); 2150 } else { 2151 error = zio_decompress_data(HDR_GET_COMPRESS(hdr), 2152 hdr->b_l1hdr.b_pabd, buf->b_data, 2153 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr), 2154 &hdr->b_complevel); 2155 2156 /* 2157 * Absent hardware errors or software bugs, this should 2158 * be impossible, but log it anyway so we can debug it. 2159 */ 2160 if (error != 0) { 2161 zfs_dbgmsg( 2162 "hdr %px, compress %d, psize %d, lsize %d", 2163 hdr, arc_hdr_get_compress(hdr), 2164 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr)); 2165 if (hash_lock != NULL) 2166 mutex_enter(hash_lock); 2167 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2168 if (hash_lock != NULL) 2169 mutex_exit(hash_lock); 2170 return (SET_ERROR(EIO)); 2171 } 2172 } 2173 } 2174 2175 byteswap: 2176 /* Byteswap the buf's data if necessary */ 2177 if (bswap != DMU_BSWAP_NUMFUNCS) { 2178 ASSERT(!HDR_SHARED_DATA(hdr)); 2179 ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS); 2180 dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr)); 2181 } 2182 2183 /* Compute the hdr's checksum if necessary */ 2184 arc_cksum_compute(buf); 2185 2186 return (0); 2187 } 2188 2189 /* 2190 * If this function is being called to decrypt an encrypted buffer or verify an 2191 * authenticated one, the key must be loaded and a mapping must be made 2192 * available in the keystore via spa_keystore_create_mapping() or one of its 2193 * callers. 2194 */ 2195 int 2196 arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 2197 boolean_t in_place) 2198 { 2199 int ret; 2200 arc_fill_flags_t flags = 0; 2201 2202 if (in_place) 2203 flags |= ARC_FILL_IN_PLACE; 2204 2205 ret = arc_buf_fill(buf, spa, zb, flags); 2206 if (ret == ECKSUM) { 2207 /* 2208 * Convert authentication and decryption errors to EIO 2209 * (and generate an ereport) before leaving the ARC. 2210 */ 2211 ret = SET_ERROR(EIO); 2212 spa_log_error(spa, zb, &buf->b_hdr->b_birth); 2213 (void) zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION, 2214 spa, NULL, zb, NULL, 0); 2215 } 2216 2217 return (ret); 2218 } 2219 2220 /* 2221 * Increment the amount of evictable space in the arc_state_t's refcount. 
2222 * We account for the space used by the hdr and the arc buf individually 2223 * so that we can add and remove them from the refcount individually. 2224 */ 2225 static void 2226 arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state) 2227 { 2228 arc_buf_contents_t type = arc_buf_type(hdr); 2229 2230 ASSERT(HDR_HAS_L1HDR(hdr)); 2231 2232 if (GHOST_STATE(state)) { 2233 ASSERT0(hdr->b_l1hdr.b_bufcnt); 2234 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2235 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2236 ASSERT(!HDR_HAS_RABD(hdr)); 2237 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2238 HDR_GET_LSIZE(hdr), hdr); 2239 return; 2240 } 2241 2242 if (hdr->b_l1hdr.b_pabd != NULL) { 2243 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2244 arc_hdr_size(hdr), hdr); 2245 } 2246 if (HDR_HAS_RABD(hdr)) { 2247 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2248 HDR_GET_PSIZE(hdr), hdr); 2249 } 2250 2251 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2252 buf = buf->b_next) { 2253 if (arc_buf_is_shared(buf)) 2254 continue; 2255 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2256 arc_buf_size(buf), buf); 2257 } 2258 } 2259 2260 /* 2261 * Decrement the amount of evictable space in the arc_state_t's refcount. 2262 * We account for the space used by the hdr and the arc buf individually 2263 * so that we can add and remove them from the refcount individually. 2264 */ 2265 static void 2266 arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state) 2267 { 2268 arc_buf_contents_t type = arc_buf_type(hdr); 2269 2270 ASSERT(HDR_HAS_L1HDR(hdr)); 2271 2272 if (GHOST_STATE(state)) { 2273 ASSERT0(hdr->b_l1hdr.b_bufcnt); 2274 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2275 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2276 ASSERT(!HDR_HAS_RABD(hdr)); 2277 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2278 HDR_GET_LSIZE(hdr), hdr); 2279 return; 2280 } 2281 2282 if (hdr->b_l1hdr.b_pabd != NULL) { 2283 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2284 arc_hdr_size(hdr), hdr); 2285 } 2286 if (HDR_HAS_RABD(hdr)) { 2287 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2288 HDR_GET_PSIZE(hdr), hdr); 2289 } 2290 2291 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2292 buf = buf->b_next) { 2293 if (arc_buf_is_shared(buf)) 2294 continue; 2295 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2296 arc_buf_size(buf), buf); 2297 } 2298 } 2299 2300 /* 2301 * Add a reference to this hdr indicating that someone is actively 2302 * referencing that memory. When the refcount transitions from 0 to 1, 2303 * we remove it from the respective arc_state_t list to indicate that 2304 * it is not evictable. 2305 */ 2306 static void 2307 add_reference(arc_buf_hdr_t *hdr, const void *tag) 2308 { 2309 arc_state_t *state = hdr->b_l1hdr.b_state; 2310 2311 ASSERT(HDR_HAS_L1HDR(hdr)); 2312 if (!HDR_EMPTY(hdr) && !MUTEX_HELD(HDR_LOCK(hdr))) { 2313 ASSERT(state == arc_anon); 2314 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2315 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2316 } 2317 2318 if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) && 2319 state != arc_anon && state != arc_l2c_only) { 2320 /* We don't use the L2-only state list. */ 2321 multilist_remove(&state->arcs_list[arc_buf_type(hdr)], hdr); 2322 arc_evictable_space_decrement(hdr, state); 2323 } 2324 } 2325 2326 /* 2327 * Remove a reference from this hdr. 
When the reference transitions from 2328 * 1 to 0 and we're not anonymous, we add this hdr to the arc_state_t's 2329 * list, making it eligible for eviction. 2330 */ 2331 static int 2332 remove_reference(arc_buf_hdr_t *hdr, const void *tag) 2333 { 2334 int cnt; 2335 arc_state_t *state = hdr->b_l1hdr.b_state; 2336 2337 ASSERT(HDR_HAS_L1HDR(hdr)); 2338 ASSERT(state == arc_anon || MUTEX_HELD(HDR_LOCK(hdr))); 2339 ASSERT(!GHOST_STATE(state)); /* arc_l2c_only counts as a ghost. */ 2340 2341 if ((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) != 0) 2342 return (cnt); 2343 2344 if (state == arc_anon) { 2345 arc_hdr_destroy(hdr); 2346 return (0); 2347 } 2348 if (state == arc_uncached && !HDR_PREFETCH(hdr)) { 2349 arc_change_state(arc_anon, hdr); 2350 arc_hdr_destroy(hdr); 2351 return (0); 2352 } 2353 multilist_insert(&state->arcs_list[arc_buf_type(hdr)], hdr); 2354 arc_evictable_space_increment(hdr, state); 2355 return (0); 2356 } 2357 2358 /* 2359 * Returns detailed information about a specific arc buffer. When the 2360 * state_index argument is set, the function will calculate the arc header 2361 * list position for its arc state. Since this requires a linear traversal, 2362 * callers are strongly encouraged not to do this. However, it can be helpful 2363 * for targeted analysis so the functionality is provided. 2364 */ 2365 void 2366 arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index) 2367 { 2368 (void) state_index; 2369 arc_buf_hdr_t *hdr = ab->b_hdr; 2370 l1arc_buf_hdr_t *l1hdr = NULL; 2371 l2arc_buf_hdr_t *l2hdr = NULL; 2372 arc_state_t *state = NULL; 2373 2374 memset(abi, 0, sizeof (arc_buf_info_t)); 2375 2376 if (hdr == NULL) 2377 return; 2378 2379 abi->abi_flags = hdr->b_flags; 2380 2381 if (HDR_HAS_L1HDR(hdr)) { 2382 l1hdr = &hdr->b_l1hdr; 2383 state = l1hdr->b_state; 2384 } 2385 if (HDR_HAS_L2HDR(hdr)) 2386 l2hdr = &hdr->b_l2hdr; 2387 2388 if (l1hdr) { 2389 abi->abi_bufcnt = l1hdr->b_bufcnt; 2390 abi->abi_access = l1hdr->b_arc_access; 2391 abi->abi_mru_hits = l1hdr->b_mru_hits; 2392 abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits; 2393 abi->abi_mfu_hits = l1hdr->b_mfu_hits; 2394 abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits; 2395 abi->abi_holds = zfs_refcount_count(&l1hdr->b_refcnt); 2396 } 2397 2398 if (l2hdr) { 2399 abi->abi_l2arc_dattr = l2hdr->b_daddr; 2400 abi->abi_l2arc_hits = l2hdr->b_hits; 2401 } 2402 2403 abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON; 2404 abi->abi_state_contents = arc_buf_type(hdr); 2405 abi->abi_size = arc_hdr_size(hdr); 2406 } 2407 2408 /* 2409 * Move the supplied buffer to the indicated state. The hash lock 2410 * for the buffer must be held by the caller. 2411 */ 2412 static void 2413 arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr) 2414 { 2415 arc_state_t *old_state; 2416 int64_t refcnt; 2417 uint32_t bufcnt; 2418 boolean_t update_old, update_new; 2419 arc_buf_contents_t type = arc_buf_type(hdr); 2420 2421 /* 2422 * We almost always have an L1 hdr here, since we call arc_hdr_realloc() 2423 * in arc_read() when bringing a buffer out of the L2ARC. However, the 2424 * L1 hdr doesn't always exist when we change state to arc_anon before 2425 * destroying a header, in which case reallocating to add the L1 hdr is 2426 * pointless.
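 *
 * (Typical transitions driven through here, for illustration: a first
 * read moves a header from arc_anon to arc_mru, a repeat access
 * promotes it from arc_mru to arc_mfu, and eviction demotes either
 * into its corresponding ghost state.)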
2427 */ 2428 if (HDR_HAS_L1HDR(hdr)) { 2429 old_state = hdr->b_l1hdr.b_state; 2430 refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt); 2431 bufcnt = hdr->b_l1hdr.b_bufcnt; 2432 update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL || 2433 HDR_HAS_RABD(hdr)); 2434 2435 IMPLY(GHOST_STATE(old_state), bufcnt == 0); 2436 IMPLY(GHOST_STATE(new_state), bufcnt == 0); 2437 IMPLY(GHOST_STATE(old_state), hdr->b_l1hdr.b_buf == NULL); 2438 IMPLY(GHOST_STATE(new_state), hdr->b_l1hdr.b_buf == NULL); 2439 IMPLY(old_state == arc_anon, bufcnt <= 1); 2440 } else { 2441 old_state = arc_l2c_only; 2442 refcnt = 0; 2443 bufcnt = 0; 2444 update_old = B_FALSE; 2445 } 2446 update_new = update_old; 2447 if (GHOST_STATE(old_state)) 2448 update_old = B_TRUE; 2449 if (GHOST_STATE(new_state)) 2450 update_new = B_TRUE; 2451 2452 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 2453 ASSERT3P(new_state, !=, old_state); 2454 2455 /* 2456 * If this buffer is evictable, transfer it from the 2457 * old state list to the new state list. 2458 */ 2459 if (refcnt == 0) { 2460 if (old_state != arc_anon && old_state != arc_l2c_only) { 2461 ASSERT(HDR_HAS_L1HDR(hdr)); 2462 /* remove_reference() saves on insert. */ 2463 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 2464 multilist_remove(&old_state->arcs_list[type], 2465 hdr); 2466 arc_evictable_space_decrement(hdr, old_state); 2467 } 2468 } 2469 if (new_state != arc_anon && new_state != arc_l2c_only) { 2470 /* 2471 * An L1 header always exists here, since if we're 2472 * moving to some L1-cached state (i.e. not l2c_only or 2473 * anonymous), we realloc the header to add an L1hdr 2474 * beforehand. 2475 */ 2476 ASSERT(HDR_HAS_L1HDR(hdr)); 2477 multilist_insert(&new_state->arcs_list[type], hdr); 2478 arc_evictable_space_increment(hdr, new_state); 2479 } 2480 } 2481 2482 ASSERT(!HDR_EMPTY(hdr)); 2483 if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr)) 2484 buf_hash_remove(hdr); 2485 2486 /* adjust state sizes (ignore arc_l2c_only) */ 2487 2488 if (update_new && new_state != arc_l2c_only) { 2489 ASSERT(HDR_HAS_L1HDR(hdr)); 2490 if (GHOST_STATE(new_state)) { 2491 ASSERT0(bufcnt); 2492 2493 /* 2494 * When moving a header to a ghost state, we first 2495 * remove all arc buffers. Thus, we'll have a 2496 * bufcnt of zero, and no arc buffer to use for 2497 * the reference. As a result, we use the arc 2498 * header pointer for the reference. 2499 */ 2500 (void) zfs_refcount_add_many( 2501 &new_state->arcs_size[type], 2502 HDR_GET_LSIZE(hdr), hdr); 2503 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2504 ASSERT(!HDR_HAS_RABD(hdr)); 2505 } else { 2506 uint32_t buffers = 0; 2507 2508 /* 2509 * Each individual buffer holds a unique reference, 2510 * thus we must remove each of these references one 2511 * at a time. 2512 */ 2513 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2514 buf = buf->b_next) { 2515 ASSERT3U(bufcnt, !=, 0); 2516 buffers++; 2517 2518 /* 2519 * When the arc_buf_t is sharing the data 2520 * block with the hdr, the owner of the 2521 * reference belongs to the hdr. Only 2522 * add to the refcount if the arc_buf_t is 2523 * not shared. 
2524 */ 2525 if (arc_buf_is_shared(buf)) 2526 continue; 2527 2528 (void) zfs_refcount_add_many( 2529 &new_state->arcs_size[type], 2530 arc_buf_size(buf), buf); 2531 } 2532 ASSERT3U(bufcnt, ==, buffers); 2533 2534 if (hdr->b_l1hdr.b_pabd != NULL) { 2535 (void) zfs_refcount_add_many( 2536 &new_state->arcs_size[type], 2537 arc_hdr_size(hdr), hdr); 2538 } 2539 2540 if (HDR_HAS_RABD(hdr)) { 2541 (void) zfs_refcount_add_many( 2542 &new_state->arcs_size[type], 2543 HDR_GET_PSIZE(hdr), hdr); 2544 } 2545 } 2546 } 2547 2548 if (update_old && old_state != arc_l2c_only) { 2549 ASSERT(HDR_HAS_L1HDR(hdr)); 2550 if (GHOST_STATE(old_state)) { 2551 ASSERT0(bufcnt); 2552 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2553 ASSERT(!HDR_HAS_RABD(hdr)); 2554 2555 /* 2556 * When moving a header off of a ghost state, 2557 * the header will not contain any arc buffers. 2558 * We use the arc header pointer for the reference 2559 * which is exactly what we did when we put the 2560 * header on the ghost state. 2561 */ 2562 2563 (void) zfs_refcount_remove_many( 2564 &old_state->arcs_size[type], 2565 HDR_GET_LSIZE(hdr), hdr); 2566 } else { 2567 uint32_t buffers = 0; 2568 2569 /* 2570 * Each individual buffer holds a unique reference, 2571 * thus we must remove each of these references one 2572 * at a time. 2573 */ 2574 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2575 buf = buf->b_next) { 2576 ASSERT3U(bufcnt, !=, 0); 2577 buffers++; 2578 2579 /* 2580 * When the arc_buf_t is sharing the data 2581 * block with the hdr, the owner of the 2582 * reference belongs to the hdr. Only 2583 * add to the refcount if the arc_buf_t is 2584 * not shared. 2585 */ 2586 if (arc_buf_is_shared(buf)) 2587 continue; 2588 2589 (void) zfs_refcount_remove_many( 2590 &old_state->arcs_size[type], 2591 arc_buf_size(buf), buf); 2592 } 2593 ASSERT3U(bufcnt, ==, buffers); 2594 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 2595 HDR_HAS_RABD(hdr)); 2596 2597 if (hdr->b_l1hdr.b_pabd != NULL) { 2598 (void) zfs_refcount_remove_many( 2599 &old_state->arcs_size[type], 2600 arc_hdr_size(hdr), hdr); 2601 } 2602 2603 if (HDR_HAS_RABD(hdr)) { 2604 (void) zfs_refcount_remove_many( 2605 &old_state->arcs_size[type], 2606 HDR_GET_PSIZE(hdr), hdr); 2607 } 2608 } 2609 } 2610 2611 if (HDR_HAS_L1HDR(hdr)) { 2612 hdr->b_l1hdr.b_state = new_state; 2613 2614 if (HDR_HAS_L2HDR(hdr) && new_state != arc_l2c_only) { 2615 l2arc_hdr_arcstats_decrement_state(hdr); 2616 hdr->b_l2hdr.b_arcs_state = new_state->arcs_state; 2617 l2arc_hdr_arcstats_increment_state(hdr); 2618 } 2619 } 2620 } 2621 2622 void 2623 arc_space_consume(uint64_t space, arc_space_type_t type) 2624 { 2625 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2626 2627 switch (type) { 2628 default: 2629 break; 2630 case ARC_SPACE_DATA: 2631 ARCSTAT_INCR(arcstat_data_size, space); 2632 break; 2633 case ARC_SPACE_META: 2634 ARCSTAT_INCR(arcstat_metadata_size, space); 2635 break; 2636 case ARC_SPACE_BONUS: 2637 ARCSTAT_INCR(arcstat_bonus_size, space); 2638 break; 2639 case ARC_SPACE_DNODE: 2640 ARCSTAT_INCR(arcstat_dnode_size, space); 2641 break; 2642 case ARC_SPACE_DBUF: 2643 ARCSTAT_INCR(arcstat_dbuf_size, space); 2644 break; 2645 case ARC_SPACE_HDRS: 2646 ARCSTAT_INCR(arcstat_hdr_size, space); 2647 break; 2648 case ARC_SPACE_L2HDRS: 2649 aggsum_add(&arc_sums.arcstat_l2_hdr_size, space); 2650 break; 2651 case ARC_SPACE_ABD_CHUNK_WASTE: 2652 /* 2653 * Note: this includes space wasted by all scatter ABD's, not 2654 * just those allocated by the ARC. 
But the vast majority of 2655 * scatter ABD's come from the ARC, because other users are 2656 * very short-lived. 2657 */ 2658 ARCSTAT_INCR(arcstat_abd_chunk_waste_size, space); 2659 break; 2660 } 2661 2662 if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) 2663 ARCSTAT_INCR(arcstat_meta_used, space); 2664 2665 aggsum_add(&arc_sums.arcstat_size, space); 2666 } 2667 2668 void 2669 arc_space_return(uint64_t space, arc_space_type_t type) 2670 { 2671 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2672 2673 switch (type) { 2674 default: 2675 break; 2676 case ARC_SPACE_DATA: 2677 ARCSTAT_INCR(arcstat_data_size, -space); 2678 break; 2679 case ARC_SPACE_META: 2680 ARCSTAT_INCR(arcstat_metadata_size, -space); 2681 break; 2682 case ARC_SPACE_BONUS: 2683 ARCSTAT_INCR(arcstat_bonus_size, -space); 2684 break; 2685 case ARC_SPACE_DNODE: 2686 ARCSTAT_INCR(arcstat_dnode_size, -space); 2687 break; 2688 case ARC_SPACE_DBUF: 2689 ARCSTAT_INCR(arcstat_dbuf_size, -space); 2690 break; 2691 case ARC_SPACE_HDRS: 2692 ARCSTAT_INCR(arcstat_hdr_size, -space); 2693 break; 2694 case ARC_SPACE_L2HDRS: 2695 aggsum_add(&arc_sums.arcstat_l2_hdr_size, -space); 2696 break; 2697 case ARC_SPACE_ABD_CHUNK_WASTE: 2698 ARCSTAT_INCR(arcstat_abd_chunk_waste_size, -space); 2699 break; 2700 } 2701 2702 if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) 2703 ARCSTAT_INCR(arcstat_meta_used, -space); 2704 2705 ASSERT(aggsum_compare(&arc_sums.arcstat_size, space) >= 0); 2706 aggsum_add(&arc_sums.arcstat_size, -space); 2707 } 2708 2709 /* 2710 * Given a hdr and a buf, returns whether that buf can share its b_data buffer 2711 * with the hdr's b_pabd. 2712 */ 2713 static boolean_t 2714 arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2715 { 2716 /* 2717 * The criteria for sharing a hdr's data are: 2718 * 1. the buffer is not encrypted 2719 * 2. the hdr's compression matches the buf's compression 2720 * 3. the hdr doesn't need to be byteswapped 2721 * 4. the hdr isn't already being shared 2722 * 5. the buf is either compressed or it is the last buf in the hdr list 2723 * 2724 * Criterion #5 maintains the invariant that shared uncompressed 2725 * bufs must be the final buf in the hdr's b_buf list. Reading this, you 2726 * might ask, "if a compressed buf is allocated first, won't that be the 2727 * last thing in the list?", but in that case it's impossible to create 2728 * a shared uncompressed buf anyway (because the hdr must be compressed 2729 * to have the compressed buf). You might also think that #3 is 2730 * sufficient to make this guarantee, however it's possible 2731 * (specifically in the rare L2ARC write race mentioned in 2732 * arc_buf_alloc_impl()) there will be an existing uncompressed buf that 2733 * is shareable, but wasn't at the time of its allocation. Rather than 2734 * allow a new shared uncompressed buf to be created and then shuffle 2735 * the list around to make it the last element, this simply disallows 2736 * sharing if the new buf isn't the first to be added. 2737 */ 2738 ASSERT3P(buf->b_hdr, ==, hdr); 2739 boolean_t hdr_compressed = 2740 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF; 2741 boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0; 2742 return (!ARC_BUF_ENCRYPTED(buf) && 2743 buf_compressed == hdr_compressed && 2744 hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS && 2745 !HDR_SHARED_DATA(hdr) && 2746 (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf))); 2747 } 2748 2749 /* 2750 * Allocate a buf for this hdr. 
If you care about the data that's in the hdr, 2751 * or if you want a compressed buffer, pass those flags in. Returns 0 if the 2752 * copy was made successfully, or an error code otherwise. 2753 */ 2754 static int 2755 arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb, 2756 const void *tag, boolean_t encrypted, boolean_t compressed, 2757 boolean_t noauth, boolean_t fill, arc_buf_t **ret) 2758 { 2759 arc_buf_t *buf; 2760 arc_fill_flags_t flags = ARC_FILL_LOCKED; 2761 2762 ASSERT(HDR_HAS_L1HDR(hdr)); 2763 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 2764 VERIFY(hdr->b_type == ARC_BUFC_DATA || 2765 hdr->b_type == ARC_BUFC_METADATA); 2766 ASSERT3P(ret, !=, NULL); 2767 ASSERT3P(*ret, ==, NULL); 2768 IMPLY(encrypted, compressed); 2769 2770 buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE); 2771 buf->b_hdr = hdr; 2772 buf->b_data = NULL; 2773 buf->b_next = hdr->b_l1hdr.b_buf; 2774 buf->b_flags = 0; 2775 2776 add_reference(hdr, tag); 2777 2778 /* 2779 * We're about to change the hdr's b_flags. We must either 2780 * hold the hash_lock or be undiscoverable. 2781 */ 2782 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2783 2784 /* 2785 * Only honor requests for compressed bufs if the hdr is actually 2786 * compressed. This must be overridden if the buffer is encrypted since 2787 * encrypted buffers cannot be decompressed. 2788 */ 2789 if (encrypted) { 2790 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2791 buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED; 2792 flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED; 2793 } else if (compressed && 2794 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 2795 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2796 flags |= ARC_FILL_COMPRESSED; 2797 } 2798 2799 if (noauth) { 2800 ASSERT0(encrypted); 2801 flags |= ARC_FILL_NOAUTH; 2802 } 2803 2804 /* 2805 * If the hdr's data can be shared then we share the data buffer and 2806 * set the appropriate bit in the hdr's b_flags to indicate the hdr is 2807 * sharing its b_pabd with the arc_buf_t. Otherwise, we allocate a new 2808 * buffer to store the buf's data. 2809 * 2810 * There are two additional restrictions here because we're sharing 2811 * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be 2812 * actively involved in an L2ARC write, because if this buf is used by 2813 * an arc_write() then the hdr's data buffer will be released when the 2814 * write completes, even though the L2ARC write might still be using it. 2815 * Second, the hdr's ABD must be linear so that the buf's user doesn't 2816 * need to be ABD-aware. It must be allocated via 2817 * zio_[data_]buf_alloc(), not as a page, because we need to be able 2818 * to call abd_release_ownership_of_buf(), which isn't allowed on "linear 2819 * page" buffers because the ABD code needs to handle freeing them 2820 * specially.
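 *
 * (Restated for clarity: sharing hdr -> buf additionally requires that
 * the hdr is not in the middle of an L2ARC write and that b_pabd is a
 * linear, non-page ABD -- which is exactly the can_share test computed
 * below.)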
2821 */ 2822 boolean_t can_share = arc_can_share(hdr, buf) && 2823 !HDR_L2_WRITING(hdr) && 2824 hdr->b_l1hdr.b_pabd != NULL && 2825 abd_is_linear(hdr->b_l1hdr.b_pabd) && 2826 !abd_is_linear_page(hdr->b_l1hdr.b_pabd); 2827 2828 /* Set up b_data and sharing */ 2829 if (can_share) { 2830 buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd); 2831 buf->b_flags |= ARC_BUF_FLAG_SHARED; 2832 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 2833 } else { 2834 buf->b_data = 2835 arc_get_data_buf(hdr, arc_buf_size(buf), buf); 2836 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 2837 } 2838 VERIFY3P(buf->b_data, !=, NULL); 2839 2840 hdr->b_l1hdr.b_buf = buf; 2841 hdr->b_l1hdr.b_bufcnt += 1; 2842 if (encrypted) 2843 hdr->b_crypt_hdr.b_ebufcnt += 1; 2844 2845 /* 2846 * If the user wants the data from the hdr, we need to either copy or 2847 * decompress the data. 2848 */ 2849 if (fill) { 2850 ASSERT3P(zb, !=, NULL); 2851 return (arc_buf_fill(buf, spa, zb, flags)); 2852 } 2853 2854 return (0); 2855 } 2856 2857 static const char *arc_onloan_tag = "onloan"; 2858 2859 static inline void 2860 arc_loaned_bytes_update(int64_t delta) 2861 { 2862 atomic_add_64(&arc_loaned_bytes, delta); 2863 2864 /* assert that it did not wrap around */ 2865 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 2866 } 2867 2868 /* 2869 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in 2870 * flight data by arc_tempreserve_space() until they are "returned". Loaned 2871 * buffers must be returned to the arc before they can be used by the DMU or 2872 * freed. 2873 */ 2874 arc_buf_t * 2875 arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size) 2876 { 2877 arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag, 2878 is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size); 2879 2880 arc_loaned_bytes_update(arc_buf_size(buf)); 2881 2882 return (buf); 2883 } 2884 2885 arc_buf_t * 2886 arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize, 2887 enum zio_compress compression_type, uint8_t complevel) 2888 { 2889 arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag, 2890 psize, lsize, compression_type, complevel); 2891 2892 arc_loaned_bytes_update(arc_buf_size(buf)); 2893 2894 return (buf); 2895 } 2896 2897 arc_buf_t * 2898 arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder, 2899 const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, 2900 dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 2901 enum zio_compress compression_type, uint8_t complevel) 2902 { 2903 arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj, 2904 byteorder, salt, iv, mac, ot, psize, lsize, compression_type, 2905 complevel); 2906 2907 atomic_add_64(&arc_loaned_bytes, psize); 2908 return (buf); 2909 } 2910 2911 2912 /* 2913 * Return a loaned arc buffer to the arc. 
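 *
 * A loan/return usage sketch (illustrative only; "tag" stands for
 * whatever pointer the new owner uses for refcounting):
 *
 *	arc_buf_t *buf = arc_loan_buf(spa, B_FALSE, size);
 *	... fill buf->b_data ...
 *	arc_return_buf(buf, tag);	swaps arc_onloan_tag for ours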
2914 */ 2915 void 2916 arc_return_buf(arc_buf_t *buf, const void *tag) 2917 { 2918 arc_buf_hdr_t *hdr = buf->b_hdr; 2919 2920 ASSERT3P(buf->b_data, !=, NULL); 2921 ASSERT(HDR_HAS_L1HDR(hdr)); 2922 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag); 2923 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2924 2925 arc_loaned_bytes_update(-arc_buf_size(buf)); 2926 } 2927 2928 /* Detach an arc_buf from a dbuf (tag) */ 2929 void 2930 arc_loan_inuse_buf(arc_buf_t *buf, const void *tag) 2931 { 2932 arc_buf_hdr_t *hdr = buf->b_hdr; 2933 2934 ASSERT3P(buf->b_data, !=, NULL); 2935 ASSERT(HDR_HAS_L1HDR(hdr)); 2936 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2937 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag); 2938 2939 arc_loaned_bytes_update(arc_buf_size(buf)); 2940 } 2941 2942 static void 2943 l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type) 2944 { 2945 l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP); 2946 2947 df->l2df_abd = abd; 2948 df->l2df_size = size; 2949 df->l2df_type = type; 2950 mutex_enter(&l2arc_free_on_write_mtx); 2951 list_insert_head(l2arc_free_on_write, df); 2952 mutex_exit(&l2arc_free_on_write_mtx); 2953 } 2954 2955 static void 2956 arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata) 2957 { 2958 arc_state_t *state = hdr->b_l1hdr.b_state; 2959 arc_buf_contents_t type = arc_buf_type(hdr); 2960 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 2961 2962 /* protected by hash lock, if in the hash table */ 2963 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 2964 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2965 ASSERT(state != arc_anon && state != arc_l2c_only); 2966 2967 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2968 size, hdr); 2969 } 2970 (void) zfs_refcount_remove_many(&state->arcs_size[type], size, hdr); 2971 if (type == ARC_BUFC_METADATA) { 2972 arc_space_return(size, ARC_SPACE_META); 2973 } else { 2974 ASSERT(type == ARC_BUFC_DATA); 2975 arc_space_return(size, ARC_SPACE_DATA); 2976 } 2977 2978 if (free_rdata) { 2979 l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type); 2980 } else { 2981 l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type); 2982 } 2983 } 2984 2985 /* 2986 * Share the arc_buf_t's data with the hdr. Whenever we are sharing the 2987 * data buffer, we transfer the refcount ownership to the hdr and update 2988 * the appropriate kstats. 2989 */ 2990 static void 2991 arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2992 { 2993 ASSERT(arc_can_share(hdr, buf)); 2994 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2995 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 2996 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2997 2998 /* 2999 * Start sharing the data buffer. We transfer the 3000 * refcount ownership to the hdr since it always owns 3001 * the refcount whenever an arc_buf_t is shared. 3002 */ 3003 zfs_refcount_transfer_ownership_many( 3004 &hdr->b_l1hdr.b_state->arcs_size[arc_buf_type(hdr)], 3005 arc_hdr_size(hdr), buf, hdr); 3006 hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf)); 3007 abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd, 3008 HDR_ISTYPE_METADATA(hdr)); 3009 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 3010 buf->b_flags |= ARC_BUF_FLAG_SHARED; 3011 3012 /* 3013 * Since we've transferred ownership to the hdr we need 3014 * to increment its compressed and uncompressed kstats and 3015 * decrement the overhead size. 
3016 */ 3017 ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr)); 3018 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 3019 ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf)); 3020 } 3021 3022 static void 3023 arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 3024 { 3025 ASSERT(arc_buf_is_shared(buf)); 3026 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3027 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3028 3029 /* 3030 * We are no longer sharing this buffer so we need 3031 * to transfer its ownership to the rightful owner. 3032 */ 3033 zfs_refcount_transfer_ownership_many( 3034 &hdr->b_l1hdr.b_state->arcs_size[arc_buf_type(hdr)], 3035 arc_hdr_size(hdr), hdr, buf); 3036 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 3037 abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd); 3038 abd_free(hdr->b_l1hdr.b_pabd); 3039 hdr->b_l1hdr.b_pabd = NULL; 3040 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 3041 3042 /* 3043 * Since the buffer is no longer shared between 3044 * the arc buf and the hdr, count it as overhead. 3045 */ 3046 ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr)); 3047 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3048 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 3049 } 3050 3051 /* 3052 * Remove an arc_buf_t from the hdr's buf list and return the last 3053 * arc_buf_t on the list. If no buffers remain on the list then return 3054 * NULL. 3055 */ 3056 static arc_buf_t * 3057 arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf) 3058 { 3059 ASSERT(HDR_HAS_L1HDR(hdr)); 3060 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3061 3062 arc_buf_t **bufp = &hdr->b_l1hdr.b_buf; 3063 arc_buf_t *lastbuf = NULL; 3064 3065 /* 3066 * Remove the buf from the hdr list and locate the last 3067 * remaining buffer on the list. 3068 */ 3069 while (*bufp != NULL) { 3070 if (*bufp == buf) 3071 *bufp = buf->b_next; 3072 3073 /* 3074 * If we've removed a buffer in the middle of 3075 * the list then update the lastbuf and update 3076 * bufp. 3077 */ 3078 if (*bufp != NULL) { 3079 lastbuf = *bufp; 3080 bufp = &(*bufp)->b_next; 3081 } 3082 } 3083 buf->b_next = NULL; 3084 ASSERT3P(lastbuf, !=, buf); 3085 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, lastbuf != NULL); 3086 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, hdr->b_l1hdr.b_buf != NULL); 3087 IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf)); 3088 3089 return (lastbuf); 3090 } 3091 3092 /* 3093 * Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's 3094 * list and free it. 3095 */ 3096 static void 3097 arc_buf_destroy_impl(arc_buf_t *buf) 3098 { 3099 arc_buf_hdr_t *hdr = buf->b_hdr; 3100 3101 /* 3102 * Free up the data associated with the buf but only if we're not 3103 * sharing this with the hdr. If we are sharing it with the hdr, the 3104 * hdr is responsible for doing the free. 3105 */ 3106 if (buf->b_data != NULL) { 3107 /* 3108 * We're about to change the hdr's b_flags. We must either 3109 * hold the hash_lock or be undiscoverable. 
3110 */ 3111 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3112 3113 arc_cksum_verify(buf); 3114 arc_buf_unwatch(buf); 3115 3116 if (arc_buf_is_shared(buf)) { 3117 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 3118 } else { 3119 uint64_t size = arc_buf_size(buf); 3120 arc_free_data_buf(hdr, buf->b_data, size, buf); 3121 ARCSTAT_INCR(arcstat_overhead_size, -size); 3122 } 3123 buf->b_data = NULL; 3124 3125 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 3126 hdr->b_l1hdr.b_bufcnt -= 1; 3127 3128 if (ARC_BUF_ENCRYPTED(buf)) { 3129 hdr->b_crypt_hdr.b_ebufcnt -= 1; 3130 3131 /* 3132 * If we have no more encrypted buffers and we've 3133 * already gotten a copy of the decrypted data we can 3134 * free b_rabd to save some space. 3135 */ 3136 if (hdr->b_crypt_hdr.b_ebufcnt == 0 && 3137 HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd != NULL && 3138 !HDR_IO_IN_PROGRESS(hdr)) { 3139 arc_hdr_free_abd(hdr, B_TRUE); 3140 } 3141 } 3142 } 3143 3144 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 3145 3146 if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) { 3147 /* 3148 * If the current arc_buf_t is sharing its data buffer with the 3149 * hdr, then reassign the hdr's b_pabd to share it with the new 3150 * buffer at the end of the list. The shared buffer is always 3151 * the last one on the hdr's buffer list. 3152 * 3153 * There is an equivalent case for compressed bufs, but since 3154 * they aren't guaranteed to be the last buf in the list and 3155 * that is an exceedingly rare case, we just allow that space be 3156 * wasted temporarily. We must also be careful not to share 3157 * encrypted buffers, since they cannot be shared. 3158 */ 3159 if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) { 3160 /* Only one buf can be shared at once */ 3161 VERIFY(!arc_buf_is_shared(lastbuf)); 3162 /* hdr is uncompressed so can't have compressed buf */ 3163 VERIFY(!ARC_BUF_COMPRESSED(lastbuf)); 3164 3165 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3166 arc_hdr_free_abd(hdr, B_FALSE); 3167 3168 /* 3169 * We must setup a new shared block between the 3170 * last buffer and the hdr. The data would have 3171 * been allocated by the arc buf so we need to transfer 3172 * ownership to the hdr since it's now being shared. 3173 */ 3174 arc_share_buf(hdr, lastbuf); 3175 } 3176 } else if (HDR_SHARED_DATA(hdr)) { 3177 /* 3178 * Uncompressed shared buffers are always at the end 3179 * of the list. Compressed buffers don't have the 3180 * same requirements. This makes it hard to 3181 * simply assert that the lastbuf is shared so 3182 * we rely on the hdr's compression flags to determine 3183 * if we have a compressed, shared buffer. 3184 */ 3185 ASSERT3P(lastbuf, !=, NULL); 3186 ASSERT(arc_buf_is_shared(lastbuf) || 3187 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 3188 } 3189 3190 /* 3191 * Free the checksum if we're removing the last uncompressed buf from 3192 * this hdr. 
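 * (b_freeze_cksum only ever covers uncompressed data; see
 * arc_cksum_compute() above.)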
3193 */ 3194 if (!arc_hdr_has_uncompressed_buf(hdr)) { 3195 arc_cksum_free(hdr); 3196 } 3197 3198 /* clean up the buf */ 3199 buf->b_hdr = NULL; 3200 kmem_cache_free(buf_cache, buf); 3201 } 3202 3203 static void 3204 arc_hdr_alloc_abd(arc_buf_hdr_t *hdr, int alloc_flags) 3205 { 3206 uint64_t size; 3207 boolean_t alloc_rdata = ((alloc_flags & ARC_HDR_ALLOC_RDATA) != 0); 3208 3209 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 3210 ASSERT(HDR_HAS_L1HDR(hdr)); 3211 ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata); 3212 IMPLY(alloc_rdata, HDR_PROTECTED(hdr)); 3213 3214 if (alloc_rdata) { 3215 size = HDR_GET_PSIZE(hdr); 3216 ASSERT3P(hdr->b_crypt_hdr.b_rabd, ==, NULL); 3217 hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr, 3218 alloc_flags); 3219 ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL); 3220 ARCSTAT_INCR(arcstat_raw_size, size); 3221 } else { 3222 size = arc_hdr_size(hdr); 3223 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3224 hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr, 3225 alloc_flags); 3226 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3227 } 3228 3229 ARCSTAT_INCR(arcstat_compressed_size, size); 3230 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 3231 } 3232 3233 static void 3234 arc_hdr_free_abd(arc_buf_hdr_t *hdr, boolean_t free_rdata) 3235 { 3236 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 3237 3238 ASSERT(HDR_HAS_L1HDR(hdr)); 3239 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 3240 IMPLY(free_rdata, HDR_HAS_RABD(hdr)); 3241 3242 /* 3243 * If the hdr is currently being written to the l2arc then 3244 * we defer freeing the data by adding it to the l2arc_free_on_write 3245 * list. The l2arc will free the data once it's finished 3246 * writing it to the l2arc device. 3247 */ 3248 if (HDR_L2_WRITING(hdr)) { 3249 arc_hdr_free_on_write(hdr, free_rdata); 3250 ARCSTAT_BUMP(arcstat_l2_free_on_write); 3251 } else if (free_rdata) { 3252 arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr); 3253 } else { 3254 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, size, hdr); 3255 } 3256 3257 if (free_rdata) { 3258 hdr->b_crypt_hdr.b_rabd = NULL; 3259 ARCSTAT_INCR(arcstat_raw_size, -size); 3260 } else { 3261 hdr->b_l1hdr.b_pabd = NULL; 3262 } 3263 3264 if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr)) 3265 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 3266 3267 ARCSTAT_INCR(arcstat_compressed_size, -size); 3268 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3269 } 3270 3271 /* 3272 * Allocate empty anonymous ARC header. The header will get its identity 3273 * assigned and buffers attached later as part of read or write operations. 3274 * 3275 * In case of read, arc_read() assigns the header its identity (b_dva + b_birth), 3276 * inserts it into the ARC hash to become globally visible, and allocates a 3277 * physical (b_pabd) or raw (b_rabd) ABD buffer to read into from disk. On disk read 3278 * completion arc_read_done() allocates ARC buffer(s) as needed, potentially 3279 * sharing one of them with the physical ABD buffer. 3280 * 3281 * In case of write, arc_alloc_buf() allocates an ARC buffer to be filled with 3282 * data. Then after compression and/or encryption arc_write_ready() allocates 3283 * and fills (or potentially shares) the physical (b_pabd) or raw (b_rabd) ABD 3284 * buffer. On disk write completion arc_write_done() assigns the header its 3285 * new identity (b_dva + b_birth) and inserts it into the ARC hash. 3286 * 3287 * In case of partial overwrite, the old data is read first as described.
Then 3288 * arc_release() either allocates new anonymous ARC header and moves the ARC 3289 * buffer to it, or reuses the old ARC header by discarding its identity and 3290 * removing it from ARC hash. After buffer modification normal write process 3291 * follows as described. 3292 */ 3293 static arc_buf_hdr_t * 3294 arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize, 3295 boolean_t protected, enum zio_compress compression_type, uint8_t complevel, 3296 arc_buf_contents_t type) 3297 { 3298 arc_buf_hdr_t *hdr; 3299 3300 VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA); 3301 if (protected) { 3302 hdr = kmem_cache_alloc(hdr_full_crypt_cache, KM_PUSHPAGE); 3303 } else { 3304 hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE); 3305 } 3306 3307 ASSERT(HDR_EMPTY(hdr)); 3308 #ifdef ZFS_DEBUG 3309 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3310 #endif 3311 HDR_SET_PSIZE(hdr, psize); 3312 HDR_SET_LSIZE(hdr, lsize); 3313 hdr->b_spa = spa; 3314 hdr->b_type = type; 3315 hdr->b_flags = 0; 3316 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR); 3317 arc_hdr_set_compress(hdr, compression_type); 3318 hdr->b_complevel = complevel; 3319 if (protected) 3320 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 3321 3322 hdr->b_l1hdr.b_state = arc_anon; 3323 hdr->b_l1hdr.b_arc_access = 0; 3324 hdr->b_l1hdr.b_mru_hits = 0; 3325 hdr->b_l1hdr.b_mru_ghost_hits = 0; 3326 hdr->b_l1hdr.b_mfu_hits = 0; 3327 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 3328 hdr->b_l1hdr.b_bufcnt = 0; 3329 hdr->b_l1hdr.b_buf = NULL; 3330 3331 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3332 3333 return (hdr); 3334 } 3335 3336 /* 3337 * Transition between the two allocation states for the arc_buf_hdr struct. 3338 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without 3339 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller 3340 * version is used when a cache buffer is only in the L2ARC in order to reduce 3341 * memory usage. 3342 */ 3343 static arc_buf_hdr_t * 3344 arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new) 3345 { 3346 ASSERT(HDR_HAS_L2HDR(hdr)); 3347 3348 arc_buf_hdr_t *nhdr; 3349 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3350 3351 ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) || 3352 (old == hdr_l2only_cache && new == hdr_full_cache)); 3353 3354 /* 3355 * if the caller wanted a new full header and the header is to be 3356 * encrypted we will actually allocate the header from the full crypt 3357 * cache instead. The same applies to freeing from the old cache. 3358 */ 3359 if (HDR_PROTECTED(hdr) && new == hdr_full_cache) 3360 new = hdr_full_crypt_cache; 3361 if (HDR_PROTECTED(hdr) && old == hdr_full_cache) 3362 old = hdr_full_crypt_cache; 3363 3364 nhdr = kmem_cache_alloc(new, KM_PUSHPAGE); 3365 3366 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 3367 buf_hash_remove(hdr); 3368 3369 memcpy(nhdr, hdr, HDR_L2ONLY_SIZE); 3370 3371 if (new == hdr_full_cache || new == hdr_full_crypt_cache) { 3372 arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3373 /* 3374 * arc_access and arc_change_state need to be aware that a 3375 * header has just come out of L2ARC, so we set its state to 3376 * l2c_only even though it's about to change. 
3377 */ 3378 nhdr->b_l1hdr.b_state = arc_l2c_only; 3379 3380 /* Verify previous threads set these fields to NULL before freeing */ 3381 ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL); 3382 ASSERT(!HDR_HAS_RABD(hdr)); 3383 } else { 3384 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3385 ASSERT0(hdr->b_l1hdr.b_bufcnt); 3386 #ifdef ZFS_DEBUG 3387 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3388 #endif 3389 3390 /* 3391 * If we've reached here, we must have been called from 3392 * arc_evict_hdr(); as such, we should have already been 3393 * removed from any ghost list we were previously on 3394 * (which protects us from racing with arc_evict_state), 3395 * thus no locking is needed during this check. 3396 */ 3397 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3398 3399 /* 3400 * A buffer must not be moved into the arc_l2c_only 3401 * state if it's not finished being written out to the 3402 * l2arc device. Otherwise, the b_l1hdr.b_pabd field 3403 * might be accessed even though it has been removed. 3404 */ 3405 VERIFY(!HDR_L2_WRITING(hdr)); 3406 VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3407 ASSERT(!HDR_HAS_RABD(hdr)); 3408 3409 arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3410 } 3411 /* 3412 * The header has been reallocated, so we need to re-insert it into any 3413 * lists it was on. 3414 */ 3415 (void) buf_hash_insert(nhdr, NULL); 3416 3417 ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node)); 3418 3419 mutex_enter(&dev->l2ad_mtx); 3420 3421 /* 3422 * We must place the realloc'ed header back into the list at 3423 * the same spot. Otherwise, if it's placed earlier in the list, 3424 * l2arc_write_buffers() could find it during its write phase, 3425 * and try to write it out to the l2arc. 3426 */ 3427 list_insert_after(&dev->l2ad_buflist, hdr, nhdr); 3428 list_remove(&dev->l2ad_buflist, hdr); 3429 3430 mutex_exit(&dev->l2ad_mtx); 3431 3432 /* 3433 * Since we're using the pointer address as the tag when 3434 * incrementing and decrementing the l2ad_alloc refcount, we 3435 * must remove the old pointer (that we're about to destroy) and 3436 * add the new pointer to the refcount. Otherwise we'd remove 3437 * the wrong pointer address when calling arc_hdr_destroy() later. 3438 */ 3439 3440 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, 3441 arc_hdr_size(hdr), hdr); 3442 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 3443 arc_hdr_size(nhdr), nhdr); 3444 3445 buf_discard_identity(hdr); 3446 kmem_cache_free(old, hdr); 3447 3448 return (nhdr); 3449 } 3450 3451 /* 3452 * This function allows an L1 header to be reallocated as a crypt 3453 * header and vice versa. If we are going to a crypt header, the 3454 * new fields will be zeroed out. 3455 */ 3456 static arc_buf_hdr_t * 3457 arc_hdr_realloc_crypt(arc_buf_hdr_t *hdr, boolean_t need_crypt) 3458 { 3459 arc_buf_hdr_t *nhdr; 3460 arc_buf_t *buf; 3461 kmem_cache_t *ncache, *ocache; 3462 3463 /* 3464 * This function requires that hdr is in the arc_anon state. 3465 * Therefore it won't have any L2ARC data for us to worry 3466 * about copying.
3467 */ 3468 ASSERT(HDR_HAS_L1HDR(hdr)); 3469 ASSERT(!HDR_HAS_L2HDR(hdr)); 3470 ASSERT3U(!!HDR_PROTECTED(hdr), !=, need_crypt); 3471 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3472 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3473 ASSERT(!list_link_active(&hdr->b_l2hdr.b_l2node)); 3474 ASSERT3P(hdr->b_hash_next, ==, NULL); 3475 3476 if (need_crypt) { 3477 ncache = hdr_full_crypt_cache; 3478 ocache = hdr_full_cache; 3479 } else { 3480 ncache = hdr_full_cache; 3481 ocache = hdr_full_crypt_cache; 3482 } 3483 3484 nhdr = kmem_cache_alloc(ncache, KM_PUSHPAGE); 3485 3486 /* 3487 * Copy all members that aren't locks or condvars to the new header. 3488 * No lists are pointing to us (as we asserted above), so we don't 3489 * need to worry about the list nodes. 3490 */ 3491 nhdr->b_dva = hdr->b_dva; 3492 nhdr->b_birth = hdr->b_birth; 3493 nhdr->b_type = hdr->b_type; 3494 nhdr->b_flags = hdr->b_flags; 3495 nhdr->b_psize = hdr->b_psize; 3496 nhdr->b_lsize = hdr->b_lsize; 3497 nhdr->b_spa = hdr->b_spa; 3498 #ifdef ZFS_DEBUG 3499 nhdr->b_l1hdr.b_freeze_cksum = hdr->b_l1hdr.b_freeze_cksum; 3500 #endif 3501 nhdr->b_l1hdr.b_bufcnt = hdr->b_l1hdr.b_bufcnt; 3502 nhdr->b_l1hdr.b_byteswap = hdr->b_l1hdr.b_byteswap; 3503 nhdr->b_l1hdr.b_state = hdr->b_l1hdr.b_state; 3504 nhdr->b_l1hdr.b_arc_access = hdr->b_l1hdr.b_arc_access; 3505 nhdr->b_l1hdr.b_mru_hits = hdr->b_l1hdr.b_mru_hits; 3506 nhdr->b_l1hdr.b_mru_ghost_hits = hdr->b_l1hdr.b_mru_ghost_hits; 3507 nhdr->b_l1hdr.b_mfu_hits = hdr->b_l1hdr.b_mfu_hits; 3508 nhdr->b_l1hdr.b_mfu_ghost_hits = hdr->b_l1hdr.b_mfu_ghost_hits; 3509 nhdr->b_l1hdr.b_acb = hdr->b_l1hdr.b_acb; 3510 nhdr->b_l1hdr.b_pabd = hdr->b_l1hdr.b_pabd; 3511 3512 /* 3513 * This zfs_refcount_add() exists only to ensure that the individual 3514 * arc buffers always point to a header that is referenced, avoiding 3515 * a small race condition that could trigger ASSERTs. 
3516 */ 3517 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, FTAG); 3518 nhdr->b_l1hdr.b_buf = hdr->b_l1hdr.b_buf; 3519 for (buf = nhdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) 3520 buf->b_hdr = nhdr; 3521 3522 zfs_refcount_transfer(&nhdr->b_l1hdr.b_refcnt, &hdr->b_l1hdr.b_refcnt); 3523 (void) zfs_refcount_remove(&nhdr->b_l1hdr.b_refcnt, FTAG); 3524 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3525 3526 if (need_crypt) { 3527 arc_hdr_set_flags(nhdr, ARC_FLAG_PROTECTED); 3528 } else { 3529 arc_hdr_clear_flags(nhdr, ARC_FLAG_PROTECTED); 3530 } 3531 3532 /* unset all members of the original hdr */ 3533 memset(&hdr->b_dva, 0, sizeof (dva_t)); 3534 hdr->b_birth = 0; 3535 hdr->b_type = 0; 3536 hdr->b_flags = 0; 3537 hdr->b_psize = 0; 3538 hdr->b_lsize = 0; 3539 hdr->b_spa = 0; 3540 #ifdef ZFS_DEBUG 3541 hdr->b_l1hdr.b_freeze_cksum = NULL; 3542 #endif 3543 hdr->b_l1hdr.b_buf = NULL; 3544 hdr->b_l1hdr.b_bufcnt = 0; 3545 hdr->b_l1hdr.b_byteswap = 0; 3546 hdr->b_l1hdr.b_state = NULL; 3547 hdr->b_l1hdr.b_arc_access = 0; 3548 hdr->b_l1hdr.b_mru_hits = 0; 3549 hdr->b_l1hdr.b_mru_ghost_hits = 0; 3550 hdr->b_l1hdr.b_mfu_hits = 0; 3551 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 3552 hdr->b_l1hdr.b_acb = NULL; 3553 hdr->b_l1hdr.b_pabd = NULL; 3554 3555 if (ocache == hdr_full_crypt_cache) { 3556 ASSERT(!HDR_HAS_RABD(hdr)); 3557 hdr->b_crypt_hdr.b_ot = DMU_OT_NONE; 3558 hdr->b_crypt_hdr.b_ebufcnt = 0; 3559 hdr->b_crypt_hdr.b_dsobj = 0; 3560 memset(hdr->b_crypt_hdr.b_salt, 0, ZIO_DATA_SALT_LEN); 3561 memset(hdr->b_crypt_hdr.b_iv, 0, ZIO_DATA_IV_LEN); 3562 memset(hdr->b_crypt_hdr.b_mac, 0, ZIO_DATA_MAC_LEN); 3563 } 3564 3565 buf_discard_identity(hdr); 3566 kmem_cache_free(ocache, hdr); 3567 3568 return (nhdr); 3569 } 3570 3571 /* 3572 * This function is used by the send / receive code to convert a newly 3573 * allocated arc_buf_t to one that is suitable for a raw encrypted write. It 3574 * is also used to allow the root objset block to be updated without altering 3575 * its embedded MACs. Both block types will always be uncompressed so we do not 3576 * have to worry about compression type or psize. 3577 */ 3578 void 3579 arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder, 3580 dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv, 3581 const uint8_t *mac) 3582 { 3583 arc_buf_hdr_t *hdr = buf->b_hdr; 3584 3585 ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET); 3586 ASSERT(HDR_HAS_L1HDR(hdr)); 3587 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3588 3589 buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED); 3590 if (!HDR_PROTECTED(hdr)) 3591 hdr = arc_hdr_realloc_crypt(hdr, B_TRUE); 3592 hdr->b_crypt_hdr.b_dsobj = dsobj; 3593 hdr->b_crypt_hdr.b_ot = ot; 3594 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3595 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3596 if (!arc_hdr_has_uncompressed_buf(hdr)) 3597 arc_cksum_free(hdr); 3598 3599 if (salt != NULL) 3600 memcpy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); 3601 if (iv != NULL) 3602 memcpy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); 3603 if (mac != NULL) 3604 memcpy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); 3605 } 3606 3607 /* 3608 * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller. 3609 * The buf is returned thawed since we expect the consumer to modify it. 
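 *
 * A minimal usage sketch (a hypothetical caller; "spa" and "src" are
 * assumed to exist, and error handling is omitted):
 *
 *	arc_buf_t *buf = arc_alloc_buf(spa, FTAG, ARC_BUFC_DATA, 4096);
 *	memcpy(buf->b_data, src, 4096);
 *	arc_buf_destroy(buf, FTAG);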
3610 */ 3611 arc_buf_t * 3612 arc_alloc_buf(spa_t *spa, const void *tag, arc_buf_contents_t type, 3613 int32_t size) 3614 { 3615 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size, 3616 B_FALSE, ZIO_COMPRESS_OFF, 0, type); 3617 3618 arc_buf_t *buf = NULL; 3619 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE, 3620 B_FALSE, B_FALSE, &buf)); 3621 arc_buf_thaw(buf); 3622 3623 return (buf); 3624 } 3625 3626 /* 3627 * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this 3628 * for bufs containing metadata. 3629 */ 3630 arc_buf_t * 3631 arc_alloc_compressed_buf(spa_t *spa, const void *tag, uint64_t psize, 3632 uint64_t lsize, enum zio_compress compression_type, uint8_t complevel) 3633 { 3634 ASSERT3U(lsize, >, 0); 3635 ASSERT3U(lsize, >=, psize); 3636 ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF); 3637 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3638 3639 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 3640 B_FALSE, compression_type, complevel, ARC_BUFC_DATA); 3641 3642 arc_buf_t *buf = NULL; 3643 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, 3644 B_TRUE, B_FALSE, B_FALSE, &buf)); 3645 arc_buf_thaw(buf); 3646 3647 /* 3648 * To ensure that the hdr has the correct data in it if we call 3649 * arc_untransform() on this buf before it's been written to disk, 3650 * it's easiest if we just set up sharing between the buf and the hdr. 3651 */ 3652 arc_share_buf(hdr, buf); 3653 3654 return (buf); 3655 } 3656 3657 arc_buf_t * 3658 arc_alloc_raw_buf(spa_t *spa, const void *tag, uint64_t dsobj, 3659 boolean_t byteorder, const uint8_t *salt, const uint8_t *iv, 3660 const uint8_t *mac, dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 3661 enum zio_compress compression_type, uint8_t complevel) 3662 { 3663 arc_buf_hdr_t *hdr; 3664 arc_buf_t *buf; 3665 arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ? 3666 ARC_BUFC_METADATA : ARC_BUFC_DATA; 3667 3668 ASSERT3U(lsize, >, 0); 3669 ASSERT3U(lsize, >=, psize); 3670 ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF); 3671 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3672 3673 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE, 3674 compression_type, complevel, type); 3675 3676 hdr->b_crypt_hdr.b_dsobj = dsobj; 3677 hdr->b_crypt_hdr.b_ot = ot; 3678 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3679 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3680 memcpy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); 3681 memcpy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); 3682 memcpy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); 3683 3684 /* 3685 * This buffer will be considered encrypted even if the ot is not an 3686 * encrypted type. It will become authenticated instead in 3687 * arc_write_ready(). 
3688 */ 3689 buf = NULL; 3690 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE, 3691 B_FALSE, B_FALSE, &buf)); 3692 arc_buf_thaw(buf); 3693 3694 return (buf); 3695 } 3696 3697 static void 3698 l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr, 3699 boolean_t state_only) 3700 { 3701 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3702 l2arc_dev_t *dev = l2hdr->b_dev; 3703 uint64_t lsize = HDR_GET_LSIZE(hdr); 3704 uint64_t psize = HDR_GET_PSIZE(hdr); 3705 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3706 arc_buf_contents_t type = hdr->b_type; 3707 int64_t lsize_s; 3708 int64_t psize_s; 3709 int64_t asize_s; 3710 3711 if (incr) { 3712 lsize_s = lsize; 3713 psize_s = psize; 3714 asize_s = asize; 3715 } else { 3716 lsize_s = -lsize; 3717 psize_s = -psize; 3718 asize_s = -asize; 3719 } 3720 3721 /* If the buffer is a prefetch, count it as such. */ 3722 if (HDR_PREFETCH(hdr)) { 3723 ARCSTAT_INCR(arcstat_l2_prefetch_asize, asize_s); 3724 } else { 3725 /* 3726 * We use the value stored in the L2 header upon initial 3727 * caching in L2ARC. This value will be updated in case 3728 * an MRU/MRU_ghost buffer transitions to MFU but the L2ARC 3729 * metadata (log entry) cannot currently be updated. Having 3730 * the ARC state in the L2 header solves the problem of a 3731 * possibly absent L1 header (apparent in buffers restored 3732 * from persistent L2ARC). 3733 */ 3734 switch (hdr->b_l2hdr.b_arcs_state) { 3735 case ARC_STATE_MRU_GHOST: 3736 case ARC_STATE_MRU: 3737 ARCSTAT_INCR(arcstat_l2_mru_asize, asize_s); 3738 break; 3739 case ARC_STATE_MFU_GHOST: 3740 case ARC_STATE_MFU: 3741 ARCSTAT_INCR(arcstat_l2_mfu_asize, asize_s); 3742 break; 3743 default: 3744 break; 3745 } 3746 } 3747 3748 if (state_only) 3749 return; 3750 3751 ARCSTAT_INCR(arcstat_l2_psize, psize_s); 3752 ARCSTAT_INCR(arcstat_l2_lsize, lsize_s); 3753 3754 switch (type) { 3755 case ARC_BUFC_DATA: 3756 ARCSTAT_INCR(arcstat_l2_bufc_data_asize, asize_s); 3757 break; 3758 case ARC_BUFC_METADATA: 3759 ARCSTAT_INCR(arcstat_l2_bufc_metadata_asize, asize_s); 3760 break; 3761 default: 3762 break; 3763 } 3764 } 3765 3766 3767 static void 3768 arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr) 3769 { 3770 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3771 l2arc_dev_t *dev = l2hdr->b_dev; 3772 uint64_t psize = HDR_GET_PSIZE(hdr); 3773 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3774 3775 ASSERT(MUTEX_HELD(&dev->l2ad_mtx)); 3776 ASSERT(HDR_HAS_L2HDR(hdr)); 3777 3778 list_remove(&dev->l2ad_buflist, hdr); 3779 3780 l2arc_hdr_arcstats_decrement(hdr); 3781 vdev_space_update(dev->l2ad_vdev, -asize, 0, 0); 3782 3783 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), 3784 hdr); 3785 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 3786 } 3787 3788 static void 3789 arc_hdr_destroy(arc_buf_hdr_t *hdr) 3790 { 3791 if (HDR_HAS_L1HDR(hdr)) { 3792 ASSERT(hdr->b_l1hdr.b_buf == NULL || 3793 hdr->b_l1hdr.b_bufcnt > 0); 3794 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3795 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3796 } 3797 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3798 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 3799 3800 if (HDR_HAS_L2HDR(hdr)) { 3801 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3802 boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx); 3803 3804 if (!buflist_held) 3805 mutex_enter(&dev->l2ad_mtx); 3806 3807 /* 3808 * Even though we checked this conditional above, we 3809 * need to check this again now that we have the 3810 * l2ad_mtx. 
This is because we could be racing with 3811 * another thread calling l2arc_evict(), which might have 3812 * destroyed this header's L2 portion as we were waiting 3813 * to acquire the l2ad_mtx. If that happens, we don't 3814 * want to re-destroy the header's L2 portion. 3815 */ 3816 if (HDR_HAS_L2HDR(hdr)) { 3817 3818 if (!HDR_EMPTY(hdr)) 3819 buf_discard_identity(hdr); 3820 3821 arc_hdr_l2hdr_destroy(hdr); 3822 } 3823 3824 if (!buflist_held) 3825 mutex_exit(&dev->l2ad_mtx); 3826 } 3827 3828 /* 3829 * The header's identity can only be safely discarded once it is no 3830 * longer discoverable. This requires removing it from the hash table 3831 * and the l2arc header list. After this point the hash lock cannot 3832 * be used to protect the header. 3833 */ 3834 if (!HDR_EMPTY(hdr)) 3835 buf_discard_identity(hdr); 3836 3837 if (HDR_HAS_L1HDR(hdr)) { 3838 arc_cksum_free(hdr); 3839 3840 while (hdr->b_l1hdr.b_buf != NULL) 3841 arc_buf_destroy_impl(hdr->b_l1hdr.b_buf); 3842 3843 if (hdr->b_l1hdr.b_pabd != NULL) 3844 arc_hdr_free_abd(hdr, B_FALSE); 3845 3846 if (HDR_HAS_RABD(hdr)) 3847 arc_hdr_free_abd(hdr, B_TRUE); 3848 } 3849 3850 ASSERT3P(hdr->b_hash_next, ==, NULL); 3851 if (HDR_HAS_L1HDR(hdr)) { 3852 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3853 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 3854 #ifdef ZFS_DEBUG 3855 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3856 #endif 3857 3858 if (!HDR_PROTECTED(hdr)) { 3859 kmem_cache_free(hdr_full_cache, hdr); 3860 } else { 3861 kmem_cache_free(hdr_full_crypt_cache, hdr); 3862 } 3863 } else { 3864 kmem_cache_free(hdr_l2only_cache, hdr); 3865 } 3866 } 3867 3868 void 3869 arc_buf_destroy(arc_buf_t *buf, const void *tag) 3870 { 3871 arc_buf_hdr_t *hdr = buf->b_hdr; 3872 3873 if (hdr->b_l1hdr.b_state == arc_anon) { 3874 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 3875 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3876 VERIFY0(remove_reference(hdr, tag)); 3877 return; 3878 } 3879 3880 kmutex_t *hash_lock = HDR_LOCK(hdr); 3881 mutex_enter(hash_lock); 3882 3883 ASSERT3P(hdr, ==, buf->b_hdr); 3884 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 3885 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 3886 ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon); 3887 ASSERT3P(buf->b_data, !=, NULL); 3888 3889 arc_buf_destroy_impl(buf); 3890 (void) remove_reference(hdr, tag); 3891 mutex_exit(hash_lock); 3892 } 3893 3894 /* 3895 * Evict the arc_buf_hdr that is provided as a parameter. The resultant 3896 * state of the header depends on its state prior to entering this 3897 * function. The following transitions are possible: 3898 * 3899 * - arc_mru -> arc_mru_ghost 3900 * - arc_mfu -> arc_mfu_ghost 3901 * - arc_mru_ghost -> arc_l2c_only 3902 * - arc_mru_ghost -> deleted 3903 * - arc_mfu_ghost -> arc_l2c_only 3904 * - arc_mfu_ghost -> deleted 3905 * - arc_uncached -> deleted 3906 * 3907 * Return the total size of evicted data buffers for eviction progress 3908 * tracking. When evicting from ghost states, return the logical buffer size 3909 * to make eviction progress at the same (or at least a comparable) rate as 3910 * from non-ghost states. 3911 * Return *real_evicted for the actual ARC size reduction, to wake up threads 3912 * waiting for it. For non-ghost states it includes the size of evicted data 3913 * buffers (the headers are not freed there). For ghost states it includes 3914 * only the evicted headers' size.
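 *
 * As a worked illustration (numbers are only an example): evicting a
 * 128K ghost header adds 128K (its HDR_GET_LSIZE) to the returned
 * progress, while *real_evicted grows only by the freed header size,
 * since ghost headers hold no data buffers.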
3915 */ 3916 static int64_t 3917 arc_evict_hdr(arc_buf_hdr_t *hdr, uint64_t *real_evicted) 3918 { 3919 arc_state_t *evicted_state, *state; 3920 int64_t bytes_evicted = 0; 3921 uint_t min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ? 3922 arc_min_prescient_prefetch_ms : arc_min_prefetch_ms; 3923 3924 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 3925 ASSERT(HDR_HAS_L1HDR(hdr)); 3926 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3927 ASSERT0(hdr->b_l1hdr.b_bufcnt); 3928 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3929 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3930 3931 *real_evicted = 0; 3932 state = hdr->b_l1hdr.b_state; 3933 if (GHOST_STATE(state)) { 3934 3935 /* 3936 * l2arc_write_buffers() relies on a header's L1 portion 3937 * (i.e. its b_pabd field) during its write phase. 3938 * Thus, we cannot push a header onto the arc_l2c_only 3939 * state (removing its L1 piece) until the header is 3940 * done being written to the l2arc. 3941 */ 3942 if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) { 3943 ARCSTAT_BUMP(arcstat_evict_l2_skip); 3944 return (bytes_evicted); 3945 } 3946 3947 ARCSTAT_BUMP(arcstat_deleted); 3948 bytes_evicted += HDR_GET_LSIZE(hdr); 3949 3950 DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr); 3951 3952 if (HDR_HAS_L2HDR(hdr)) { 3953 ASSERT(hdr->b_l1hdr.b_pabd == NULL); 3954 ASSERT(!HDR_HAS_RABD(hdr)); 3955 /* 3956 * This buffer is cached on the 2nd Level ARC; 3957 * don't destroy the header. 3958 */ 3959 arc_change_state(arc_l2c_only, hdr); 3960 /* 3961 * Dropping from L1+L2 cached to L2-only; 3962 * realloc to remove the L1 header. 3963 */ 3964 (void) arc_hdr_realloc(hdr, hdr_full_cache, 3965 hdr_l2only_cache); 3966 *real_evicted += HDR_FULL_SIZE - HDR_L2ONLY_SIZE; 3967 } else { 3968 arc_change_state(arc_anon, hdr); 3969 arc_hdr_destroy(hdr); 3970 *real_evicted += HDR_FULL_SIZE; 3971 } 3972 return (bytes_evicted); 3973 } 3974 3975 ASSERT(state == arc_mru || state == arc_mfu || state == arc_uncached); 3976 evicted_state = (state == arc_uncached) ? arc_anon : 3977 ((state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost); 3978 3979 /* prefetch buffers have a minimum lifespan */ 3980 if ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) && 3981 ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access < 3982 MSEC_TO_TICK(min_lifetime)) { 3983 ARCSTAT_BUMP(arcstat_evict_skip); 3984 return (bytes_evicted); 3985 } 3986 3987 if (HDR_HAS_L2HDR(hdr)) { 3988 ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr)); 3989 } else { 3990 if (l2arc_write_eligible(hdr->b_spa, hdr)) { 3991 ARCSTAT_INCR(arcstat_evict_l2_eligible, 3992 HDR_GET_LSIZE(hdr)); 3993 3994 switch (state->arcs_state) { 3995 case ARC_STATE_MRU: 3996 ARCSTAT_INCR( 3997 arcstat_evict_l2_eligible_mru, 3998 HDR_GET_LSIZE(hdr)); 3999 break; 4000 case ARC_STATE_MFU: 4001 ARCSTAT_INCR( 4002 arcstat_evict_l2_eligible_mfu, 4003 HDR_GET_LSIZE(hdr)); 4004 break; 4005 default: 4006 break; 4007 } 4008 } else { 4009 ARCSTAT_INCR(arcstat_evict_l2_ineligible, 4010 HDR_GET_LSIZE(hdr)); 4011 } 4012 } 4013 4014 bytes_evicted += arc_hdr_size(hdr); 4015 *real_evicted += arc_hdr_size(hdr); 4016 4017 /* 4018 * If this hdr is being evicted and has a compressed buffer, then we 4019 * discard it here before we change states. This ensures that the 4020 * accounting is updated correctly in arc_free_data_impl().
4021 */ 4022 if (hdr->b_l1hdr.b_pabd != NULL) 4023 arc_hdr_free_abd(hdr, B_FALSE); 4024 4025 if (HDR_HAS_RABD(hdr)) 4026 arc_hdr_free_abd(hdr, B_TRUE); 4027 4028 arc_change_state(evicted_state, hdr); 4029 DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr); 4030 if (evicted_state == arc_anon) { 4031 arc_hdr_destroy(hdr); 4032 *real_evicted += HDR_FULL_SIZE; 4033 } else { 4034 ASSERT(HDR_IN_HASH_TABLE(hdr)); 4035 } 4036 4037 return (bytes_evicted); 4038 } 4039 4040 static void 4041 arc_set_need_free(void) 4042 { 4043 ASSERT(MUTEX_HELD(&arc_evict_lock)); 4044 int64_t remaining = arc_free_memory() - arc_sys_free / 2; 4045 arc_evict_waiter_t *aw = list_tail(&arc_evict_waiters); 4046 if (aw == NULL) { 4047 arc_need_free = MAX(-remaining, 0); 4048 } else { 4049 arc_need_free = 4050 MAX(-remaining, (int64_t)(aw->aew_count - arc_evict_count)); 4051 } 4052 } 4053 4054 static uint64_t 4055 arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker, 4056 uint64_t spa, uint64_t bytes) 4057 { 4058 multilist_sublist_t *mls; 4059 uint64_t bytes_evicted = 0, real_evicted = 0; 4060 arc_buf_hdr_t *hdr; 4061 kmutex_t *hash_lock; 4062 uint_t evict_count = zfs_arc_evict_batch_limit; 4063 4064 ASSERT3P(marker, !=, NULL); 4065 4066 mls = multilist_sublist_lock(ml, idx); 4067 4068 for (hdr = multilist_sublist_prev(mls, marker); likely(hdr != NULL); 4069 hdr = multilist_sublist_prev(mls, marker)) { 4070 if ((evict_count == 0) || (bytes_evicted >= bytes)) 4071 break; 4072 4073 /* 4074 * To keep our iteration location, move the marker 4075 * forward. Since we're not holding hdr's hash lock, we 4076 * must be very careful and not remove 'hdr' from the 4077 * sublist. Otherwise, other consumers might mistake the 4078 * 'hdr' as not being on a sublist when they call the 4079 * multilist_link_active() function (they all rely on 4080 * the hash lock protecting concurrent insertions and 4081 * removals). multilist_sublist_move_forward() was 4082 * specifically implemented to ensure this is the case 4083 * (only 'marker' will be removed and re-inserted). 4084 */ 4085 multilist_sublist_move_forward(mls, marker); 4086 4087 /* 4088 * The only case where the b_spa field should ever be 4089 * zero is in the marker headers inserted by 4090 * arc_evict_state(). It's possible for multiple threads 4091 * to be calling arc_evict_state() concurrently (e.g. 4092 * dsl_pool_close() and zio_inject_fault()), so we must 4093 * skip any markers we see from these other threads. 4094 */ 4095 if (hdr->b_spa == 0) 4096 continue; 4097 4098 /* we're only interested in evicting buffers of a certain spa */ 4099 if (spa != 0 && hdr->b_spa != spa) { 4100 ARCSTAT_BUMP(arcstat_evict_skip); 4101 continue; 4102 } 4103 4104 hash_lock = HDR_LOCK(hdr); 4105 4106 /* 4107 * We aren't calling this function from any code path 4108 * that would already be holding a hash lock, so we're 4109 * asserting on this assumption to be defensive in case 4110 * this ever changes. Without this check, it would be 4111 * possible to incorrectly increment arcstat_mutex_miss 4112 * below (e.g. if the code changed such that we called 4113 * this function with a hash lock held).
4114 */ 4115 ASSERT(!MUTEX_HELD(hash_lock)); 4116 4117 if (mutex_tryenter(hash_lock)) { 4118 uint64_t revicted; 4119 uint64_t evicted = arc_evict_hdr(hdr, &revicted); 4120 mutex_exit(hash_lock); 4121 4122 bytes_evicted += evicted; 4123 real_evicted += revicted; 4124 4125 /* 4126 * If evicted is zero, arc_evict_hdr() must have 4127 * decided to skip this header, so don't increment 4128 * evict_count in this case. 4129 */ 4130 if (evicted != 0) 4131 evict_count--; 4132 4133 } else { 4134 ARCSTAT_BUMP(arcstat_mutex_miss); 4135 } 4136 } 4137 4138 multilist_sublist_unlock(mls); 4139 4140 /* 4141 * Increment the count of evicted bytes, and wake up any threads that 4142 * are waiting for the count to reach this value. Since the list is 4143 * ordered by ascending aew_count, we pop off the beginning of the 4144 * list until we reach the end, or a waiter that's past the current 4145 * "count". Doing this outside the loop reduces the number of times 4146 * we need to acquire the global arc_evict_lock. 4147 * 4148 * Only wake when there's sufficient free memory in the system 4149 * (specifically, arc_sys_free/2, which by default is a bit more than 4150 * 1/64th of RAM). See the comments in arc_wait_for_eviction(). 4151 */ 4152 mutex_enter(&arc_evict_lock); 4153 arc_evict_count += real_evicted; 4154 4155 if (arc_free_memory() > arc_sys_free / 2) { 4156 arc_evict_waiter_t *aw; 4157 while ((aw = list_head(&arc_evict_waiters)) != NULL && 4158 aw->aew_count <= arc_evict_count) { 4159 list_remove(&arc_evict_waiters, aw); 4160 cv_broadcast(&aw->aew_cv); 4161 } 4162 } 4163 arc_set_need_free(); 4164 mutex_exit(&arc_evict_lock); 4165 4166 /* 4167 * If the ARC size is reduced from arc_c_max to arc_c_min (especially 4168 * if the average cached block is small), eviction can be on-CPU for 4169 * many seconds. To ensure that other threads that may be bound to 4170 * this CPU are able to make progress, make a voluntary preemption 4171 * call here. 4172 */ 4173 kpreempt(KPREEMPT_SYNC); 4174 4175 return (bytes_evicted); 4176 } 4177 4178 /* 4179 * Allocate an array of buffer headers used as placeholders during arc state 4180 * eviction. 4181 */ 4182 static arc_buf_hdr_t ** 4183 arc_state_alloc_markers(int count) 4184 { 4185 arc_buf_hdr_t **markers; 4186 4187 markers = kmem_zalloc(sizeof (*markers) * count, KM_SLEEP); 4188 for (int i = 0; i < count; i++) { 4189 markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP); 4190 4191 /* 4192 * A b_spa of 0 is used to indicate that this header is 4193 * a marker. This fact is used in arc_evict_state_impl(). 4194 */ 4195 markers[i]->b_spa = 0; 4196 4197 } 4198 return (markers); 4199 } 4200 4201 static void 4202 arc_state_free_markers(arc_buf_hdr_t **markers, int count) 4203 { 4204 for (int i = 0; i < count; i++) 4205 kmem_cache_free(hdr_full_cache, markers[i]); 4206 kmem_free(markers, sizeof (*markers) * count); 4207 } 4208 4209 /* 4210 * Evict buffers from the given arc state, until we've removed the 4211 * specified number of bytes. Move the removed buffers to the 4212 * appropriate evict state. 4213 * 4214 * This function makes a "best effort". It skips over any buffers 4215 * it can't get a hash_lock on, and so may not catch all candidates. 4216 * It may also return without evicting as much space as requested. 4217 * 4218 * If bytes is specified using the special value ARC_EVICT_ALL, this 4219 * will evict all available (i.e. unlocked and evictable) buffers from 4220 * the given arc state; this is used by arc_flush().
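 *
 * A usage sketch, mirroring what arc_flush_state() below effectively
 * does (a spa of 0 means "any pool"):
 *
 *	(void) arc_evict_state(arc_mru, ARC_BUFC_DATA, 0, ARC_EVICT_ALL);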
4221 */ 4222 static uint64_t 4223 arc_evict_state(arc_state_t *state, arc_buf_contents_t type, uint64_t spa, 4224 uint64_t bytes) 4225 { 4226 uint64_t total_evicted = 0; 4227 multilist_t *ml = &state->arcs_list[type]; 4228 int num_sublists; 4229 arc_buf_hdr_t **markers; 4230 4231 num_sublists = multilist_get_num_sublists(ml); 4232 4233 /* 4234 * If we've tried to evict from each sublist, made some 4235 * progress, but still have not hit the target number of bytes 4236 * to evict, we want to keep trying. The markers allow us to 4237 * pick up where we left off for each individual sublist, rather 4238 * than starting from the tail each time. 4239 */ 4240 if (zthr_iscurthread(arc_evict_zthr)) { 4241 markers = arc_state_evict_markers; 4242 ASSERT3S(num_sublists, <=, arc_state_evict_marker_count); 4243 } else { 4244 markers = arc_state_alloc_markers(num_sublists); 4245 } 4246 for (int i = 0; i < num_sublists; i++) { 4247 multilist_sublist_t *mls; 4248 4249 mls = multilist_sublist_lock(ml, i); 4250 multilist_sublist_insert_tail(mls, markers[i]); 4251 multilist_sublist_unlock(mls); 4252 } 4253 4254 /* 4255 * Loop while we haven't hit our target number of bytes to evict, or 4256 * while we're evicting all available buffers. 4257 */ 4258 while (total_evicted < bytes) { 4259 int sublist_idx = multilist_get_random_index(ml); 4260 uint64_t scan_evicted = 0; 4261 4262 /* 4263 * Start eviction using a randomly selected sublist; 4264 * this is to try to evenly balance eviction across all 4265 * sublists. Always starting at the same sublist 4266 * (e.g. index 0) would cause evictions to favor certain 4267 * sublists over others. 4268 */ 4269 for (int i = 0; i < num_sublists; i++) { 4270 uint64_t bytes_remaining; 4271 uint64_t bytes_evicted; 4272 4273 if (total_evicted < bytes) 4274 bytes_remaining = bytes - total_evicted; 4275 else 4276 break; 4277 4278 bytes_evicted = arc_evict_state_impl(ml, sublist_idx, 4279 markers[sublist_idx], spa, bytes_remaining); 4280 4281 scan_evicted += bytes_evicted; 4282 total_evicted += bytes_evicted; 4283 4284 /* we've reached the end, wrap to the beginning */ 4285 if (++sublist_idx >= num_sublists) 4286 sublist_idx = 0; 4287 } 4288 4289 /* 4290 * If we didn't evict anything during this scan, we have 4291 * no reason to believe we'll evict more during another 4292 * scan, so break the loop. 4293 */ 4294 if (scan_evicted == 0) { 4295 /* This isn't possible; let's make that obvious */ 4296 ASSERT3S(bytes, !=, 0); 4297 4298 /* 4299 * When bytes is ARC_EVICT_ALL, the only way to 4300 * break the loop is when scan_evicted is zero. 4301 * In that case, we actually have evicted enough, 4302 * so we don't want to increment the kstat. 4303 */ 4304 if (bytes != ARC_EVICT_ALL) { 4305 ASSERT3S(total_evicted, <, bytes); 4306 ARCSTAT_BUMP(arcstat_evict_not_enough); 4307 } 4308 4309 break; 4310 } 4311 } 4312 4313 for (int i = 0; i < num_sublists; i++) { 4314 multilist_sublist_t *mls = multilist_sublist_lock(ml, i); 4315 multilist_sublist_remove(mls, markers[i]); 4316 multilist_sublist_unlock(mls); 4317 } 4318 if (markers != arc_state_evict_markers) 4319 arc_state_free_markers(markers, num_sublists); 4320 4321 return (total_evicted); 4322 } 4323 4324 /* 4325 * Flush all "evictable" data of the given type from the arc state 4326 * specified. This will not evict any "active" buffers (i.e. referenced). 4327 * 4328 * When 'retry' is set to B_FALSE, the function will make a single pass 4329 * over the state and evict any buffers that it can.
Since it doesn't 4330 * continually retry the eviction, it might end up leaving some buffers 4331 * in the ARC due to lock misses. 4332 * 4333 * When 'retry' is set to B_TRUE, the function will continually retry the 4334 * eviction until *all* evictable buffers have been removed from the 4335 * state. As a result, if concurrent insertions into the state are 4336 * allowed (e.g. if the ARC isn't shutting down), this function might 4337 * wind up in an infinite loop, continually trying to evict buffers. 4338 */ 4339 static uint64_t 4340 arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type, 4341 boolean_t retry) 4342 { 4343 uint64_t evicted = 0; 4344 4345 while (zfs_refcount_count(&state->arcs_esize[type]) != 0) { 4346 evicted += arc_evict_state(state, type, spa, ARC_EVICT_ALL); 4347 4348 if (!retry) 4349 break; 4350 } 4351 4352 return (evicted); 4353 } 4354 4355 /* 4356 * Evict the specified number of bytes from the state specified. This 4357 * function prevents us from trying to evict more from a state's list 4358 * than is "evictable", and skips evicting altogether when passed a 4359 * negative value for "bytes". In contrast, arc_evict_state() will 4360 * evict everything it can when passed a negative value for "bytes". 4361 */ 4362 static uint64_t 4363 arc_evict_impl(arc_state_t *state, arc_buf_contents_t type, int64_t bytes) 4364 { 4365 uint64_t delta; 4366 4367 if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) { 4368 delta = MIN(zfs_refcount_count(&state->arcs_esize[type]), 4369 bytes); 4370 return (arc_evict_state(state, type, 0, delta)); 4371 } 4372 4373 return (0); 4374 } 4375 4376 /* 4377 * Adjust the specified fraction, taking into account the initial size of the 4378 * ghost state(s), the ghost hit bytes counting towards increasing the 4379 * fraction, the ghost hit bytes counting towards decreasing it, and a balance 4380 * factor controlling the decrease rate, which is used to balance metadata vs. 4381 * data. 4382 */ 4383 static uint64_t 4384 arc_evict_adj(uint64_t frac, uint64_t total, uint64_t up, uint64_t down, 4385 uint_t balance) 4386 { 4387 if (total < 8 || up + down == 0) 4388 return (frac); 4389 4390 /* 4391 * We should not have more ghost hits than ghost size, but they 4392 * may get close. Restrict the maximum adjustment in that case. 4393 */ 4394 if (up + down >= total / 4) { 4395 uint64_t scale = (up + down) / (total / 8); 4396 up /= scale; 4397 down /= scale; 4398 } 4399 4400 /* Get the maximal dynamic range by choosing optimal shifts. */ 4401 int s = highbit64(total); 4402 s = MIN(64 - s, 32); 4403 4404 uint64_t ofrac = (1ULL << 32) - frac; 4405 4406 if (frac >= 4 * ofrac) 4407 up /= frac / (2 * ofrac + 1); 4408 up = (up << s) / (total >> (32 - s)); 4409 if (ofrac >= 4 * frac) 4410 down /= ofrac / (2 * frac + 1); 4411 down = (down << s) / (total >> (32 - s)); 4412 down = down * 100 / balance; 4413 4414 return (frac + up - down); 4415 } 4416 4417 /* 4418 * Evict buffers from the cache, such that arcstat_size is capped by arc_c. 4419 */ 4420 static uint64_t 4421 arc_evict(void) 4422 { 4423 uint64_t asize, bytes, total_evicted = 0; 4424 int64_t e, mrud, mrum, mfud, mfum, w; 4425 static uint64_t ogrd, ogrm, ogfd, ogfm; 4426 static uint64_t gsrd, gsrm, gsfd, gsfm; 4427 uint64_t ngrd, ngrm, ngfd, ngfm; 4428 /* Get the current size of the ARC states we can evict from.
*/ 4429 mrud = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_DATA]) + 4430 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_DATA]); 4431 mrum = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_METADATA]) + 4432 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 4433 mfud = zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 4434 mfum = zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 4435 uint64_t d = mrud + mfud; 4436 uint64_t m = mrum + mfum; 4437 uint64_t t = d + m; 4438 4439 /* Get the ARC ghost hits since the last eviction. */ 4440 ngrd = wmsum_value(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA]); 4441 uint64_t grd = ngrd - ogrd; 4442 ogrd = ngrd; 4443 ngrm = wmsum_value(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA]); 4444 uint64_t grm = ngrm - ogrm; 4445 ogrm = ngrm; 4446 ngfd = wmsum_value(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA]); 4447 uint64_t gfd = ngfd - ogfd; 4448 ogfd = ngfd; 4449 ngfm = wmsum_value(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA]); 4450 uint64_t gfm = ngfm - ogfm; 4451 ogfm = ngfm; 4452 4453 /* Adjust the ARC state balance based on ghost hits. */ 4454 arc_meta = arc_evict_adj(arc_meta, gsrd + gsrm + gsfd + gsfm, 4455 grm + gfm, grd + gfd, zfs_arc_meta_balance); 4456 arc_pd = arc_evict_adj(arc_pd, gsrd + gsfd, grd, gfd, 100); 4457 arc_pm = arc_evict_adj(arc_pm, gsrm + gsfm, grm, gfm, 100); 4458 4459 asize = aggsum_value(&arc_sums.arcstat_size); 4460 int64_t wt = t - (asize - arc_c); 4461 4462 /* 4463 * Try to reduce pinned dnodes if more than 3/4 of the wanted metadata 4464 * target is not evictable, or if the dnode size goes over 4465 * arc_dnode_limit. 4466 */ 4467 int64_t prune = 0; 4468 int64_t dn = wmsum_value(&arc_sums.arcstat_dnode_size); 4468 w = wt * (arc_meta >> 16) >> 16; 4469 if (zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_METADATA]) + 4470 zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_METADATA]) - 4471 zfs_refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA]) - 4472 zfs_refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]) > 4473 w * 3 / 4) { 4474 prune = dn / sizeof (dnode_t) * 4475 zfs_arc_dnode_reduce_percent / 100; 4476 } else if (dn > arc_dnode_limit) { 4477 prune = (dn - arc_dnode_limit) / sizeof (dnode_t) * 4478 zfs_arc_dnode_reduce_percent / 100; 4479 } 4480 if (prune > 0) 4481 arc_prune_async(prune); 4482 4483 /* Evict MRU metadata. */ 4484 w = wt * (arc_meta * arc_pm >> 48) >> 16; 4485 e = MIN((int64_t)(asize - arc_c), (int64_t)(mrum - w)); 4486 bytes = arc_evict_impl(arc_mru, ARC_BUFC_METADATA, e); 4487 total_evicted += bytes; 4488 mrum -= bytes; 4489 asize -= bytes; 4490 4491 /* Evict MFU metadata. */ 4492 w = wt * (arc_meta >> 16) >> 16; 4493 e = MIN((int64_t)(asize - arc_c), (int64_t)(m - w)); 4494 bytes = arc_evict_impl(arc_mfu, ARC_BUFC_METADATA, e); 4495 total_evicted += bytes; 4496 mfum -= bytes; 4497 asize -= bytes; 4498 4499 /* Evict MRU data. */ 4500 wt -= m - total_evicted; 4501 w = wt * (arc_pd >> 16) >> 16; 4502 e = MIN((int64_t)(asize - arc_c), (int64_t)(mrud - w)); 4503 bytes = arc_evict_impl(arc_mru, ARC_BUFC_DATA, e); 4504 total_evicted += bytes; 4505 mrud -= bytes; 4506 asize -= bytes; 4507 4508 /* Evict MFU data. */ 4509 e = asize - arc_c; 4510 bytes = arc_evict_impl(arc_mfu, ARC_BUFC_DATA, e); 4511 mfud -= bytes; 4512 total_evicted += bytes; 4513 4514 /* 4515 * Evict ghost lists 4516 * 4517 * The size of each state's ghost list represents how much that state 4518 * may grow by shrinking the other states.
Were it to shrink the 4519 * other states to zero (which is unlikely), its ghost size would 4520 * equal the sum of the other three states' sizes. But an excessive 4521 * ghost size may produce false ghost hits (references too far back) that 4522 * would never become real cache hits while several states are competing. 4523 * So we choose a somewhat arbitrary point of 1/2 of the other states' sizes. 4524 */ 4525 gsrd = (mrum + mfud + mfum) / 2; 4526 e = zfs_refcount_count(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]) - 4527 gsrd; 4528 (void) arc_evict_impl(arc_mru_ghost, ARC_BUFC_DATA, e); 4529 4530 gsrm = (mrud + mfud + mfum) / 2; 4531 e = zfs_refcount_count(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]) - 4532 gsrm; 4533 (void) arc_evict_impl(arc_mru_ghost, ARC_BUFC_METADATA, e); 4534 4535 gsfd = (mrud + mrum + mfum) / 2; 4536 e = zfs_refcount_count(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]) - 4537 gsfd; 4538 (void) arc_evict_impl(arc_mfu_ghost, ARC_BUFC_DATA, e); 4539 4540 gsfm = (mrud + mrum + mfud) / 2; 4541 e = zfs_refcount_count(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]) - 4542 gsfm; 4543 (void) arc_evict_impl(arc_mfu_ghost, ARC_BUFC_METADATA, e); 4544 4545 return (total_evicted); 4546 } 4547 4548 void 4549 arc_flush(spa_t *spa, boolean_t retry) 4550 { 4551 uint64_t guid = 0; 4552 4553 /* 4554 * If retry is B_TRUE, a spa must not be specified since we have 4555 * no good way to determine if all of a spa's buffers have been 4556 * evicted from an arc state. 4557 */ 4558 ASSERT(!retry || spa == NULL); 4559 4560 if (spa != NULL) 4561 guid = spa_load_guid(spa); 4562 4563 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry); 4564 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry); 4565 4566 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry); 4567 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry); 4568 4569 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry); 4570 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry); 4571 4572 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry); 4573 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry); 4574 4575 (void) arc_flush_state(arc_uncached, guid, ARC_BUFC_DATA, retry); 4576 (void) arc_flush_state(arc_uncached, guid, ARC_BUFC_METADATA, retry); 4577 } 4578 4579 void 4580 arc_reduce_target_size(int64_t to_free) 4581 { 4582 uint64_t c = arc_c; 4583 4584 if (c <= arc_c_min) 4585 return; 4586 4587 /* 4588 * All callers want the ARC to actually evict (at least) this much 4589 * memory. Therefore we reduce from the lower of the current size and 4590 * the target size. This way, even if arc_c is much higher than 4591 * arc_size (as can be the case after many calls to arc_freed()), we 4592 * will immediately have arc_c < arc_size and therefore the 4593 * arc_evict_zthr will evict. 4594 */ 4595 uint64_t asize = aggsum_value(&arc_sums.arcstat_size); 4596 if (asize < c) 4597 to_free += c - asize; 4598 arc_c = MAX((int64_t)c - to_free, (int64_t)arc_c_min); 4599 4600 /* See comment in arc_evict_cb_check() on why lock+flag */ 4601 mutex_enter(&arc_evict_lock); 4602 arc_evict_needed = B_TRUE; 4603 mutex_exit(&arc_evict_lock); 4604 zthr_wakeup(arc_evict_zthr); 4605 } 4606 4607 /* 4608 * Determine if the system is under memory pressure and is asking 4609 * to reclaim memory. A return value of B_TRUE indicates that the system 4610 * is under memory pressure and that the arc should adjust accordingly.
4611 */ 4612 boolean_t 4613 arc_reclaim_needed(void) 4614 { 4615 return (arc_available_memory() < 0); 4616 } 4617 4618 void 4619 arc_kmem_reap_soon(void) 4620 { 4621 size_t i; 4622 kmem_cache_t *prev_cache = NULL; 4623 kmem_cache_t *prev_data_cache = NULL; 4624 4625 #ifdef _KERNEL 4626 #if defined(_ILP32) 4627 /* 4628 * Reclaim unused memory from all kmem caches. 4629 */ 4630 kmem_reap(); 4631 #endif 4632 #endif 4633 4634 for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) { 4635 #if defined(_ILP32) 4636 /* reach upper limit of cache size on 32-bit */ 4637 if (zio_buf_cache[i] == NULL) 4638 break; 4639 #endif 4640 if (zio_buf_cache[i] != prev_cache) { 4641 prev_cache = zio_buf_cache[i]; 4642 kmem_cache_reap_now(zio_buf_cache[i]); 4643 } 4644 if (zio_data_buf_cache[i] != prev_data_cache) { 4645 prev_data_cache = zio_data_buf_cache[i]; 4646 kmem_cache_reap_now(zio_data_buf_cache[i]); 4647 } 4648 } 4649 kmem_cache_reap_now(buf_cache); 4650 kmem_cache_reap_now(hdr_full_cache); 4651 kmem_cache_reap_now(hdr_l2only_cache); 4652 kmem_cache_reap_now(zfs_btree_leaf_cache); 4653 abd_cache_reap_now(); 4654 } 4655 4656 static boolean_t 4657 arc_evict_cb_check(void *arg, zthr_t *zthr) 4658 { 4659 (void) arg, (void) zthr; 4660 4661 #ifdef ZFS_DEBUG 4662 /* 4663 * This is necessary in order to keep the kstat information 4664 * up to date for tools that display kstat data such as the 4665 * mdb ::arc dcmd and the Linux crash utility. These tools 4666 * typically do not call kstat's update function, but simply 4667 * dump out stats from the most recent update. Without 4668 * this call, these commands may show stale stats for the 4669 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even 4670 * with this call, the data might be out of date if the 4671 * evict thread hasn't been woken recently; but that should 4672 * suffice. The arc_state_t structures can be queried 4673 * directly if more accurate information is needed. 4674 */ 4675 if (arc_ksp != NULL) 4676 arc_ksp->ks_update(arc_ksp, KSTAT_READ); 4677 #endif 4678 4679 /* 4680 * We have to rely on arc_wait_for_eviction() to tell us when to 4681 * evict, rather than checking if we are overflowing here, so that we 4682 * are sure to not leave arc_wait_for_eviction() waiting on aew_cv. 4683 * If we have become "not overflowing" since arc_wait_for_eviction() 4684 * checked, we need to wake it up. We could broadcast the CV here, 4685 * but arc_wait_for_eviction() may have not yet gone to sleep. We 4686 * would need to use a mutex to ensure that this function doesn't 4687 * broadcast until arc_wait_for_eviction() has gone to sleep (e.g. 4688 * the arc_evict_lock). However, the lock ordering of such a lock 4689 * would necessarily be incorrect with respect to the zthr_lock, 4690 * which is held before this function is called, and is held by 4691 * arc_wait_for_eviction() when it calls zthr_wakeup(). 4692 */ 4693 if (arc_evict_needed) 4694 return (B_TRUE); 4695 4696 /* 4697 * If we have buffers in uncached state, evict them periodically. 4698 */ 4699 return ((zfs_refcount_count(&arc_uncached->arcs_esize[ARC_BUFC_DATA]) + 4700 zfs_refcount_count(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]) && 4701 ddi_get_lbolt() - arc_last_uncached_flush > 4702 MSEC_TO_TICK(arc_min_prefetch_ms / 2))); 4703 } 4704 4705 /* 4706 * Keep arc_size under arc_c by running arc_evict which evicts data 4707 * from the ARC. 
4708 */ 4709 static void 4710 arc_evict_cb(void *arg, zthr_t *zthr) 4711 { 4712 (void) arg, (void) zthr; 4713 4714 uint64_t evicted = 0; 4715 fstrans_cookie_t cookie = spl_fstrans_mark(); 4716 4717 /* Always try to evict from uncached state. */ 4718 arc_last_uncached_flush = ddi_get_lbolt(); 4719 evicted += arc_flush_state(arc_uncached, 0, ARC_BUFC_DATA, B_FALSE); 4720 evicted += arc_flush_state(arc_uncached, 0, ARC_BUFC_METADATA, B_FALSE); 4721 4722 /* Evict from other states only if told to. */ 4723 if (arc_evict_needed) 4724 evicted += arc_evict(); 4725 4726 /* 4727 * If evicted is zero, we couldn't evict anything 4728 * via arc_evict(). This could be due to hash lock 4729 * collisions, but more likely due to the majority of 4730 * arc buffers being unevictable. Therefore, even if 4731 * arc_size is above arc_c, another pass is unlikely to 4732 * be helpful and could potentially cause us to enter an 4733 * infinite loop. Additionally, zthr_iscancelled() is 4734 * checked here so that if the arc is shutting down, the 4735 * broadcast will wake any remaining arc evict waiters. 4736 */ 4737 mutex_enter(&arc_evict_lock); 4738 arc_evict_needed = !zthr_iscancelled(arc_evict_zthr) && 4739 evicted > 0 && aggsum_compare(&arc_sums.arcstat_size, arc_c) > 0; 4740 if (!arc_evict_needed) { 4741 /* 4742 * We're either no longer overflowing, or we 4743 * can't evict anything more, so we should wake 4744 * arc_get_data_impl() sooner. 4745 */ 4746 arc_evict_waiter_t *aw; 4747 while ((aw = list_remove_head(&arc_evict_waiters)) != NULL) { 4748 cv_broadcast(&aw->aew_cv); 4749 } 4750 arc_set_need_free(); 4751 } 4752 mutex_exit(&arc_evict_lock); 4753 spl_fstrans_unmark(cookie); 4754 } 4755 4756 static boolean_t 4757 arc_reap_cb_check(void *arg, zthr_t *zthr) 4758 { 4759 (void) arg, (void) zthr; 4760 4761 int64_t free_memory = arc_available_memory(); 4762 static int reap_cb_check_counter = 0; 4763 4764 /* 4765 * If a kmem reap is already active, don't schedule more. We must 4766 * check for this because kmem_cache_reap_soon() won't actually 4767 * block on the cache being reaped (this is to prevent callers from 4768 * becoming implicitly blocked by a system-wide kmem reap -- which, 4769 * on a system with many, many full magazines, can take minutes). 4770 */ 4771 if (!kmem_cache_reap_active() && free_memory < 0) { 4772 4773 arc_no_grow = B_TRUE; 4774 arc_warm = B_TRUE; 4775 /* 4776 * Wait at least zfs_grow_retry (default 5) seconds 4777 * before considering growing. 4778 */ 4779 arc_growtime = gethrtime() + SEC2NSEC(arc_grow_retry); 4780 return (B_TRUE); 4781 } else if (free_memory < arc_c >> arc_no_grow_shift) { 4782 arc_no_grow = B_TRUE; 4783 } else if (gethrtime() >= arc_growtime) { 4784 arc_no_grow = B_FALSE; 4785 } 4786 4787 /* 4788 * Called unconditionally every 60 seconds to reclaim unused 4789 * zstd compression and decompression context. This is done 4790 * here to avoid the need for an independent thread. 4791 */ 4792 if (!((reap_cb_check_counter++) % 60)) 4793 zfs_zstd_cache_reap_now(); 4794 4795 return (B_FALSE); 4796 } 4797 4798 /* 4799 * Keep enough free memory in the system by reaping the ARC's kmem 4800 * caches. To cause more slabs to be reapable, we may reduce the 4801 * target size of the cache (arc_c), causing the arc_evict_cb() 4802 * to free more buffers. 
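 *
 * A rough illustration with hypothetical values: with arc_c = 4 GiB,
 * arc_c_min = 1 GiB and arc_shrink_shift = 7, a pass that finds no free
 * memory may reduce the target by up to (4 GiB - 1 GiB) >> 7 = 24 MiB,
 * plus any measured memory deficit.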
4803 */ 4804 static void 4805 arc_reap_cb(void *arg, zthr_t *zthr) 4806 { 4807 (void) arg, (void) zthr; 4808 4809 int64_t free_memory; 4810 fstrans_cookie_t cookie = spl_fstrans_mark(); 4811 4812 /* 4813 * Kick off asynchronous kmem_reap()'s of all our caches. 4814 */ 4815 arc_kmem_reap_soon(); 4816 4817 /* 4818 * Wait at least arc_kmem_cache_reap_retry_ms between 4819 * arc_kmem_reap_soon() calls. Without this check it is possible to 4820 * end up in a situation where we spend lots of time reaping 4821 * caches, while we're near arc_c_min. Waiting here also gives the 4822 * subsequent free memory check a chance of finding that the 4823 * asynchronous reap has already freed enough memory, and we don't 4824 * need to call arc_reduce_target_size(). 4825 */ 4826 delay((hz * arc_kmem_cache_reap_retry_ms + 999) / 1000); 4827 4828 /* 4829 * Reduce the target size as needed to maintain the amount of free 4830 * memory in the system at a fraction of the arc_size (1/128th by 4831 * default). If oversubscribed (free_memory < 0) then reduce the 4832 * target arc_size by the deficit amount plus the fractional 4833 * amount. If free memory is positive but less than the fractional 4834 * amount, reduce by what is needed to hit the fractional amount. 4835 */ 4836 free_memory = arc_available_memory(); 4837 4838 int64_t can_free = arc_c - arc_c_min; 4839 if (can_free > 0) { 4840 int64_t to_free = (can_free >> arc_shrink_shift) - free_memory; 4841 if (to_free > 0) 4842 arc_reduce_target_size(to_free); 4843 } 4844 spl_fstrans_unmark(cookie); 4845 } 4846 4847 #ifdef _KERNEL 4848 /* 4849 * Determine the amount of memory eligible for eviction contained in the 4850 * ARC. All clean data reported by the ghost lists can always be safely 4851 * evicted. Due to arc_c_min, the same does not hold for all clean data 4852 * contained by the regular mru and mfu lists. 4853 * 4854 * In the case of the regular mru and mfu lists, we need to report as 4855 * much clean data as possible, such that evicting that same reported 4856 * data will not bring arc_size below arc_c_min. Thus, in certain 4857 * circumstances, the total amount of clean data in the mru and mfu 4858 * lists might not actually be evictable. 4859 * 4860 * The following two distinct cases are accounted for: 4861 * 4862 * 1. The sum of the amount of dirty data contained by both the mru and 4863 * mfu lists, plus the ARC's other accounting (e.g. the anon list), 4864 * is greater than or equal to arc_c_min. 4865 * (i.e. amount of dirty data >= arc_c_min) 4866 * 4867 * This is the easy case; all clean data contained by the mru and mfu 4868 * lists is evictable. Evicting all clean data can only drop arc_size 4869 * to the amount of dirty data, which is greater than arc_c_min. 4870 * 4871 * 2. The sum of the amount of dirty data contained by both the mru and 4872 * mfu lists, plus the ARC's other accounting (e.g. the anon list), 4873 * is less than arc_c_min. 4874 * (i.e. arc_c_min > amount of dirty data) 4875 * 4876 * 2.1. arc_size is greater than or equal arc_c_min. 4877 * (i.e. arc_size >= arc_c_min > amount of dirty data) 4878 * 4879 * In this case, not all clean data from the regular mru and mfu 4880 * lists is actually evictable; we must leave enough clean data 4881 * to keep arc_size above arc_c_min. Thus, the maximum amount of 4882 * evictable data from the two lists combined, is exactly the 4883 * difference between arc_size and arc_c_min. 4884 * 4885 * 2.2. arc_size is less than arc_c_min 4886 * (i.e. 
arc_c_min > arc_size > amount of dirty data) 4887 * 4888 * In this case, none of the data contained in the mru and mfu 4889 * lists is evictable, even if it's clean. Since arc_size is 4890 * already below arc_c_min, evicting any more would only 4891 * increase this negative difference. 4892 */ 4893 4894 #endif /* _KERNEL */ 4895 4896 /* 4897 * Adapt the ARC target size given the number of bytes we are trying 4898 * to add. This function is only called 4899 * when we are adding new content to the cache. 4900 */ 4901 static void 4902 arc_adapt(uint64_t bytes) 4903 { 4904 /* 4905 * Wake the reap thread if we do not have any available memory. 4906 */ 4907 if (arc_reclaim_needed()) { 4908 zthr_wakeup(arc_reap_zthr); 4909 return; 4910 } 4911 4912 if (arc_no_grow) 4913 return; 4914 4915 if (arc_c >= arc_c_max) 4916 return; 4917 4918 /* 4919 * If we're within (2 * maxblocksize) bytes of the target 4920 * cache size, increment the target cache size. 4921 */ 4922 if (aggsum_upper_bound(&arc_sums.arcstat_size) + 4923 2 * SPA_MAXBLOCKSIZE >= arc_c) { 4924 uint64_t dc = MAX(bytes, SPA_OLD_MAXBLOCKSIZE); 4925 if (atomic_add_64_nv(&arc_c, dc) > arc_c_max) 4926 arc_c = arc_c_max; 4927 } 4928 } 4929 4930 /* 4931 * Check if arc_size has grown past our upper threshold, determined by 4932 * zfs_arc_overflow_shift. 4933 */ 4934 static arc_ovf_level_t 4935 arc_is_overflowing(boolean_t use_reserve) 4936 { 4937 /* Always allow at least one block of overflow */ 4938 int64_t overflow = MAX(SPA_MAXBLOCKSIZE, 4939 arc_c >> zfs_arc_overflow_shift); 4940 4941 /* 4942 * We just compare the lower bound here for performance reasons. Our 4943 * primary goals are to make sure that the arc never grows without 4944 * bound, and that it can reach its maximum size. This check 4945 * accomplishes both goals. The maximum amount we could run over by is 4946 * 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block 4947 * in the ARC. In practice, that's in the tens of MB, which is low 4948 * enough to be safe. 4949 */ 4950 int64_t over = aggsum_lower_bound(&arc_sums.arcstat_size) - 4951 arc_c - overflow / 2; 4952 if (!use_reserve) 4953 overflow /= 2; 4954 return (over < 0 ? ARC_OVF_NONE : 4955 over < overflow ? ARC_OVF_SOME : ARC_OVF_SEVERE); 4956 } 4957 4958 static abd_t * 4959 arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, const void *tag, 4960 int alloc_flags) 4961 { 4962 arc_buf_contents_t type = arc_buf_type(hdr); 4963 4964 arc_get_data_impl(hdr, size, tag, alloc_flags); 4965 if (alloc_flags & ARC_HDR_ALLOC_LINEAR) 4966 return (abd_alloc_linear(size, type == ARC_BUFC_METADATA)); 4967 else 4968 return (abd_alloc(size, type == ARC_BUFC_METADATA)); 4969 } 4970 4971 static void * 4972 arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, const void *tag) 4973 { 4974 arc_buf_contents_t type = arc_buf_type(hdr); 4975 4976 arc_get_data_impl(hdr, size, tag, 0); 4977 if (type == ARC_BUFC_METADATA) { 4978 return (zio_buf_alloc(size)); 4979 } else { 4980 ASSERT(type == ARC_BUFC_DATA); 4981 return (zio_data_buf_alloc(size)); 4982 } 4983 } 4984 4985 /* 4986 * Wait for the specified amount of data (in bytes) to be evicted from the 4987 * ARC, and for there to be sufficient free memory in the system. Waiting for 4988 * eviction ensures that the memory used by the ARC decreases. Waiting for 4989 * free memory ensures that the system won't run out of free pages, regardless 4990 * of ARC behavior and settings. See arc_lowmem_init().
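 *
 * Illustrative numbers (hypothetical): with arc_c = 8 GiB and
 * zfs_arc_overflow_shift = 8, arc_is_overflowing() above tolerates
 * roughly MAX(SPA_MAXBLOCKSIZE, 32 MiB) of overshoot: a modest
 * overshoot yields ARC_OVF_SOME (an advisory wakeup of the evict
 * thread), while a larger one yields ARC_OVF_SEVERE and the caller
 * blocks below until enough eviction progress has been made.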
4991 */ 4992 void 4993 arc_wait_for_eviction(uint64_t amount, boolean_t use_reserve) 4994 { 4995 switch (arc_is_overflowing(use_reserve)) { 4996 case ARC_OVF_NONE: 4997 return; 4998 case ARC_OVF_SOME: 4999 /* 5000 * This is a bit racy without taking arc_evict_lock, but the 5001 * worst that can happen is that we either call zthr_wakeup() an 5002 * extra time due to a race with another thread here, or the set 5003 * flag gets cleared by arc_evict_cb(), which is unlikely due to 5004 * the big hysteresis, but also not important since at this level 5005 * of overflow the eviction is purely advisory. At the same time, 5006 * taking the global lock here every time without waiting for 5007 * the actual eviction would create significant lock contention. 5008 */ 5009 if (!arc_evict_needed) { 5010 arc_evict_needed = B_TRUE; 5011 zthr_wakeup(arc_evict_zthr); 5012 } 5013 return; 5014 case ARC_OVF_SEVERE: 5015 default: 5016 { 5017 arc_evict_waiter_t aw; 5018 list_link_init(&aw.aew_node); 5019 cv_init(&aw.aew_cv, NULL, CV_DEFAULT, NULL); 5020 5021 uint64_t last_count = 0; 5022 mutex_enter(&arc_evict_lock); 5023 if (!list_is_empty(&arc_evict_waiters)) { 5024 arc_evict_waiter_t *last = 5025 list_tail(&arc_evict_waiters); 5026 last_count = last->aew_count; 5027 } else if (!arc_evict_needed) { 5028 arc_evict_needed = B_TRUE; 5029 zthr_wakeup(arc_evict_zthr); 5030 } 5031 /* 5032 * Note that the last waiter's count may be less than 5033 * arc_evict_count if we are low on memory, in which 5034 * case arc_evict_state_impl() may have deferred 5035 * wakeups (but still incremented arc_evict_count). 5036 */ 5037 aw.aew_count = MAX(last_count, arc_evict_count) + amount; 5038 5039 list_insert_tail(&arc_evict_waiters, &aw); 5040 5041 arc_set_need_free(); 5042 5043 DTRACE_PROBE3(arc__wait__for__eviction, 5044 uint64_t, amount, 5045 uint64_t, arc_evict_count, 5046 uint64_t, aw.aew_count); 5047 5048 /* 5049 * We will be woken up either when arc_evict_count reaches 5050 * aew_count, or when the ARC is no longer overflowing and 5051 * eviction completes. 5052 * In the case of a "false" wakeup, we will still be on the list. 5053 */ 5054 do { 5055 cv_wait(&aw.aew_cv, &arc_evict_lock); 5056 } while (list_link_active(&aw.aew_node)); 5057 mutex_exit(&arc_evict_lock); 5058 5059 cv_destroy(&aw.aew_cv); 5060 } 5061 } 5062 } 5063 5064 /* 5065 * Allocate a block and return it to the caller. If we are hitting the 5066 * hard limit for the cache size, we must sleep, waiting for the eviction 5067 * thread to catch up. If we're past the target size but below the hard 5068 * limit, we'll only signal the reclaim thread and continue on. 5069 */ 5070 static void 5071 arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, const void *tag, 5072 int alloc_flags) 5073 { 5074 arc_adapt(size); 5075 5076 /* 5077 * If arc_size is currently overflowing, we must be adding data 5078 * faster than we are evicting. To ensure we don't compound the 5079 * problem by adding more data and forcing arc_size to grow even 5080 * further past its target size, we wait for the eviction thread to 5081 * make some progress. We also wait for there to be sufficient free 5082 * memory in the system, as measured by arc_free_memory(). 5083 * 5084 * Specifically, we wait for zfs_arc_eviction_pct percent of the 5085 * requested size to be evicted. This should be more than 100%, to 5086 * ensure that progress is also made towards getting arc_size 5087 * under arc_c. See the comment above zfs_arc_eviction_pct.
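 *
 * For instance (illustrative, assuming the default
 * zfs_arc_eviction_pct of 200): a 128 KiB allocation would wait for
 * 256 KiB of eviction, so each allocating thread makes net progress
 * toward shrinking arc_size rather than merely replacing the data
 * it adds.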
5088 */ 5089 arc_wait_for_eviction(size * zfs_arc_eviction_pct / 100, 5090 alloc_flags & ARC_HDR_USE_RESERVE); 5091 5092 arc_buf_contents_t type = arc_buf_type(hdr); 5093 if (type == ARC_BUFC_METADATA) { 5094 arc_space_consume(size, ARC_SPACE_META); 5095 } else { 5096 arc_space_consume(size, ARC_SPACE_DATA); 5097 } 5098 5099 /* 5100 * Update the state size. Note that ghost states have a 5101 * "ghost size" and so don't need to be updated. 5102 */ 5103 arc_state_t *state = hdr->b_l1hdr.b_state; 5104 if (!GHOST_STATE(state)) { 5105 5106 (void) zfs_refcount_add_many(&state->arcs_size[type], size, 5107 tag); 5108 5109 /* 5110 * If this is reached via arc_read, the link is 5111 * protected by the hash lock. If reached via 5112 * arc_buf_alloc, the header should not be accessed by 5113 * any other thread. And, if reached via arc_read_done, 5114 * the hash lock will protect it if it's found in the 5115 * hash table; otherwise no other thread should be 5116 * trying to [add|remove]_reference it. 5117 */ 5118 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 5119 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 5120 (void) zfs_refcount_add_many(&state->arcs_esize[type], 5121 size, tag); 5122 } 5123 } 5124 } 5125 5126 static void 5127 arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, 5128 const void *tag) 5129 { 5130 arc_free_data_impl(hdr, size, tag); 5131 abd_free(abd); 5132 } 5133 5134 static void 5135 arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, const void *tag) 5136 { 5137 arc_buf_contents_t type = arc_buf_type(hdr); 5138 5139 arc_free_data_impl(hdr, size, tag); 5140 if (type == ARC_BUFC_METADATA) { 5141 zio_buf_free(buf, size); 5142 } else { 5143 ASSERT(type == ARC_BUFC_DATA); 5144 zio_data_buf_free(buf, size); 5145 } 5146 } 5147 5148 /* 5149 * Free the arc data buffer. 5150 */ 5151 static void 5152 arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, const void *tag) 5153 { 5154 arc_state_t *state = hdr->b_l1hdr.b_state; 5155 arc_buf_contents_t type = arc_buf_type(hdr); 5156 5157 /* protected by hash lock, if in the hash table */ 5158 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 5159 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 5160 ASSERT(state != arc_anon && state != arc_l2c_only); 5161 5162 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 5163 size, tag); 5164 } 5165 (void) zfs_refcount_remove_many(&state->arcs_size[type], size, tag); 5166 5167 VERIFY3U(hdr->b_type, ==, type); 5168 if (type == ARC_BUFC_METADATA) { 5169 arc_space_return(size, ARC_SPACE_META); 5170 } else { 5171 ASSERT(type == ARC_BUFC_DATA); 5172 arc_space_return(size, ARC_SPACE_DATA); 5173 } 5174 } 5175 5176 /* 5177 * This routine is called whenever a buffer is accessed. 5178 */ 5179 static void 5180 arc_access(arc_buf_hdr_t *hdr, arc_flags_t arc_flags, boolean_t hit) 5181 { 5182 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 5183 ASSERT(HDR_HAS_L1HDR(hdr)); 5184 5185 /* 5186 * Update buffer prefetch status. 
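 * For example, a demand read that finds a block previously brought
 * in by prefetch clears ARC_FLAG_PREFETCH (and
 * ARC_FLAG_PRESCIENT_PREFETCH) below, and the access is counted
 * against the appropriate prescient/predictive prefetch hit or
 * iohit statistic.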
5187 */ 5188 boolean_t was_prefetch = HDR_PREFETCH(hdr); 5189 boolean_t now_prefetch = arc_flags & ARC_FLAG_PREFETCH; 5190 if (was_prefetch != now_prefetch) { 5191 if (was_prefetch) { 5192 ARCSTAT_CONDSTAT(hit, demand_hit, demand_iohit, 5193 HDR_PRESCIENT_PREFETCH(hdr), prescient, predictive, 5194 prefetch); 5195 } 5196 if (HDR_HAS_L2HDR(hdr)) 5197 l2arc_hdr_arcstats_decrement_state(hdr); 5198 if (was_prefetch) { 5199 arc_hdr_clear_flags(hdr, 5200 ARC_FLAG_PREFETCH | ARC_FLAG_PRESCIENT_PREFETCH); 5201 } else { 5202 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 5203 } 5204 if (HDR_HAS_L2HDR(hdr)) 5205 l2arc_hdr_arcstats_increment_state(hdr); 5206 } 5207 if (now_prefetch) { 5208 if (arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) { 5209 arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); 5210 ARCSTAT_BUMP(arcstat_prescient_prefetch); 5211 } else { 5212 ARCSTAT_BUMP(arcstat_predictive_prefetch); 5213 } 5214 } 5215 if (arc_flags & ARC_FLAG_L2CACHE) 5216 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 5217 5218 clock_t now = ddi_get_lbolt(); 5219 if (hdr->b_l1hdr.b_state == arc_anon) { 5220 arc_state_t *new_state; 5221 /* 5222 * This buffer is not in the cache, and does not appear in 5223 * our "ghost" lists. Add it to the MRU or uncached state. 5224 */ 5225 ASSERT0(hdr->b_l1hdr.b_arc_access); 5226 hdr->b_l1hdr.b_arc_access = now; 5227 if (HDR_UNCACHED(hdr)) { 5228 new_state = arc_uncached; 5229 DTRACE_PROBE1(new_state__uncached, arc_buf_hdr_t *, 5230 hdr); 5231 } else { 5232 new_state = arc_mru; 5233 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5234 } 5235 arc_change_state(new_state, hdr); 5236 } else if (hdr->b_l1hdr.b_state == arc_mru) { 5237 /* 5238 * This buffer has been accessed once recently and either 5239 * its read is still in progress or it is in the cache. 5240 */ 5241 if (HDR_IO_IN_PROGRESS(hdr)) { 5242 hdr->b_l1hdr.b_arc_access = now; 5243 return; 5244 } 5245 hdr->b_l1hdr.b_mru_hits++; 5246 ARCSTAT_BUMP(arcstat_mru_hits); 5247 5248 /* 5249 * If the previous access was a prefetch, then it already 5250 * handled possible promotion, so nothing more to do for now. 5251 */ 5252 if (was_prefetch) { 5253 hdr->b_l1hdr.b_arc_access = now; 5254 return; 5255 } 5256 5257 /* 5258 * If more than ARC_MINTIME has passed since the previous 5259 * hit, promote the buffer to the MFU state. 5260 */ 5261 if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access + 5262 ARC_MINTIME)) { 5263 hdr->b_l1hdr.b_arc_access = now; 5264 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5265 arc_change_state(arc_mfu, hdr); 5266 } 5267 } else if (hdr->b_l1hdr.b_state == arc_mru_ghost) { 5268 arc_state_t *new_state; 5269 /* 5270 * This buffer has been accessed once recently, but was 5271 * evicted from the cache. Had the MRU been larger, this 5272 * would have been an MRU hit, so handle it the same way, 5273 * except that we don't need to check the previous access time. 5274 */ 5275 hdr->b_l1hdr.b_mru_ghost_hits++; 5276 ARCSTAT_BUMP(arcstat_mru_ghost_hits); 5277 hdr->b_l1hdr.b_arc_access = now; 5278 wmsum_add(&arc_mru_ghost->arcs_hits[arc_buf_type(hdr)], 5279 arc_hdr_size(hdr)); 5280 if (was_prefetch) { 5281 new_state = arc_mru; 5282 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5283 } else { 5284 new_state = arc_mfu; 5285 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5286 } 5287 arc_change_state(new_state, hdr); 5288 } else if (hdr->b_l1hdr.b_state == arc_mfu) { 5289 /* 5290 * This buffer has been accessed more than once and is either 5291 * still in the cache or being restored from one of the ghost lists.
5292 */ 5293 if (!HDR_IO_IN_PROGRESS(hdr)) { 5294 hdr->b_l1hdr.b_mfu_hits++; 5295 ARCSTAT_BUMP(arcstat_mfu_hits); 5296 } 5297 hdr->b_l1hdr.b_arc_access = now; 5298 } else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) { 5299 /* 5300 * This buffer has been accessed more than once recently, but 5301 * has been evicted from the cache. Had the MFU been larger, 5302 * it would have stayed in the cache, so move it back to the MFU state. 5303 */ 5304 hdr->b_l1hdr.b_mfu_ghost_hits++; 5305 ARCSTAT_BUMP(arcstat_mfu_ghost_hits); 5306 hdr->b_l1hdr.b_arc_access = now; 5307 wmsum_add(&arc_mfu_ghost->arcs_hits[arc_buf_type(hdr)], 5308 arc_hdr_size(hdr)); 5309 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5310 arc_change_state(arc_mfu, hdr); 5311 } else if (hdr->b_l1hdr.b_state == arc_uncached) { 5312 /* 5313 * This buffer is uncacheable, but we got a hit. Probably 5314 * a demand read after prefetch. Nothing more to do here. 5315 */ 5316 if (!HDR_IO_IN_PROGRESS(hdr)) 5317 ARCSTAT_BUMP(arcstat_uncached_hits); 5318 hdr->b_l1hdr.b_arc_access = now; 5319 } else if (hdr->b_l1hdr.b_state == arc_l2c_only) { 5320 /* 5321 * This buffer is on the 2nd Level ARC and was not accessed 5322 * for a long time, so treat it as new and put it into the MRU state. 5323 */ 5324 hdr->b_l1hdr.b_arc_access = now; 5325 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5326 arc_change_state(arc_mru, hdr); 5327 } else { 5328 cmn_err(CE_PANIC, "invalid arc state 0x%p", 5329 hdr->b_l1hdr.b_state); 5330 } 5331 } 5332 5333 /* 5334 * This routine is called by dbuf_hold() to update the arc_access() state 5335 * which otherwise would be skipped for entries in the dbuf cache. 5336 */ 5337 void 5338 arc_buf_access(arc_buf_t *buf) 5339 { 5340 arc_buf_hdr_t *hdr = buf->b_hdr; 5341 5342 /* 5343 * Avoid taking the hash_lock when possible as an optimization. 5344 * The header must be checked again under the hash_lock in order 5345 * to handle the case where it is concurrently being released.
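 * This is a double-checked locking pattern: the unlocked test
 * below cheaply filters out the common anonymous/empty cases, and
 * the same test is then repeated under the hash_lock before the
 * header state is actually updated.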
5346 */ 5347 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) 5348 return; 5349 5350 kmutex_t *hash_lock = HDR_LOCK(hdr); 5351 mutex_enter(hash_lock); 5352 5353 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { 5354 mutex_exit(hash_lock); 5355 ARCSTAT_BUMP(arcstat_access_skip); 5356 return; 5357 } 5358 5359 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5360 hdr->b_l1hdr.b_state == arc_mfu || 5361 hdr->b_l1hdr.b_state == arc_uncached); 5362 5363 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5364 arc_access(hdr, 0, B_TRUE); 5365 mutex_exit(hash_lock); 5366 5367 ARCSTAT_BUMP(arcstat_hits); 5368 ARCSTAT_CONDSTAT(B_TRUE /* demand */, demand, prefetch, 5369 !HDR_ISTYPE_METADATA(hdr), data, metadata, hits); 5370 } 5371 5372 /* a generic arc_read_done_func_t which you can use */ 5373 void 5374 arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5375 arc_buf_t *buf, void *arg) 5376 { 5377 (void) zio, (void) zb, (void) bp; 5378 5379 if (buf == NULL) 5380 return; 5381 5382 memcpy(arg, buf->b_data, arc_buf_size(buf)); 5383 arc_buf_destroy(buf, arg); 5384 } 5385 5386 /* a generic arc_read_done_func_t */ 5387 void 5388 arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5389 arc_buf_t *buf, void *arg) 5390 { 5391 (void) zb, (void) bp; 5392 arc_buf_t **bufp = arg; 5393 5394 if (buf == NULL) { 5395 ASSERT(zio == NULL || zio->io_error != 0); 5396 *bufp = NULL; 5397 } else { 5398 ASSERT(zio == NULL || zio->io_error == 0); 5399 *bufp = buf; 5400 ASSERT(buf->b_data != NULL); 5401 } 5402 } 5403 5404 static void 5405 arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp) 5406 { 5407 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 5408 ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0); 5409 ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF); 5410 } else { 5411 if (HDR_COMPRESSION_ENABLED(hdr)) { 5412 ASSERT3U(arc_hdr_get_compress(hdr), ==, 5413 BP_GET_COMPRESS(bp)); 5414 } 5415 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 5416 ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp)); 5417 ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp)); 5418 } 5419 } 5420 5421 static void 5422 arc_read_done(zio_t *zio) 5423 { 5424 blkptr_t *bp = zio->io_bp; 5425 arc_buf_hdr_t *hdr = zio->io_private; 5426 kmutex_t *hash_lock = NULL; 5427 arc_callback_t *callback_list; 5428 arc_callback_t *acb; 5429 5430 /* 5431 * The hdr was inserted into hash-table and removed from lists 5432 * prior to starting I/O. We should find this header, since 5433 * it's in the hash table, and it should be legit since it's 5434 * not possible to evict it during the I/O. The only possible 5435 * reason for it not to be found is if we were freed during the 5436 * read. 
5437 */ 5438 if (HDR_IN_HASH_TABLE(hdr)) { 5439 arc_buf_hdr_t *found; 5440 5441 ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp)); 5442 ASSERT3U(hdr->b_dva.dva_word[0], ==, 5443 BP_IDENTITY(zio->io_bp)->dva_word[0]); 5444 ASSERT3U(hdr->b_dva.dva_word[1], ==, 5445 BP_IDENTITY(zio->io_bp)->dva_word[1]); 5446 5447 found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock); 5448 5449 ASSERT((found == hdr && 5450 DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) || 5451 (found == hdr && HDR_L2_READING(hdr))); 5452 ASSERT3P(hash_lock, !=, NULL); 5453 } 5454 5455 if (BP_IS_PROTECTED(bp)) { 5456 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 5457 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 5458 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 5459 hdr->b_crypt_hdr.b_iv); 5460 5461 if (zio->io_error == 0) { 5462 if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) { 5463 void *tmpbuf; 5464 5465 tmpbuf = abd_borrow_buf_copy(zio->io_abd, 5466 sizeof (zil_chain_t)); 5467 zio_crypt_decode_mac_zil(tmpbuf, 5468 hdr->b_crypt_hdr.b_mac); 5469 abd_return_buf(zio->io_abd, tmpbuf, 5470 sizeof (zil_chain_t)); 5471 } else { 5472 zio_crypt_decode_mac_bp(bp, 5473 hdr->b_crypt_hdr.b_mac); 5474 } 5475 } 5476 } 5477 5478 if (zio->io_error == 0) { 5479 /* byteswap if necessary */ 5480 if (BP_SHOULD_BYTESWAP(zio->io_bp)) { 5481 if (BP_GET_LEVEL(zio->io_bp) > 0) { 5482 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 5483 } else { 5484 hdr->b_l1hdr.b_byteswap = 5485 DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp)); 5486 } 5487 } else { 5488 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 5489 } 5490 if (!HDR_L2_READING(hdr)) { 5491 hdr->b_complevel = zio->io_prop.zp_complevel; 5492 } 5493 } 5494 5495 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED); 5496 if (l2arc_noprefetch && HDR_PREFETCH(hdr)) 5497 arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE); 5498 5499 callback_list = hdr->b_l1hdr.b_acb; 5500 ASSERT3P(callback_list, !=, NULL); 5501 hdr->b_l1hdr.b_acb = NULL; 5502 5503 /* 5504 * If a read request has a callback (i.e. acb_done is not NULL), then we 5505 * make a buf containing the data according to the parameters which were 5506 * passed in. The implementation of arc_buf_alloc_impl() ensures that we 5507 * aren't needlessly decompressing the data multiple times. 5508 */ 5509 int callback_cnt = 0; 5510 for (acb = callback_list; acb != NULL; acb = acb->acb_next) { 5511 5512 /* We need the last one to call below in original order. */ 5513 callback_list = acb; 5514 5515 if (!acb->acb_done || acb->acb_nobuf) 5516 continue; 5517 5518 callback_cnt++; 5519 5520 if (zio->io_error != 0) 5521 continue; 5522 5523 int error = arc_buf_alloc_impl(hdr, zio->io_spa, 5524 &acb->acb_zb, acb->acb_private, acb->acb_encrypted, 5525 acb->acb_compressed, acb->acb_noauth, B_TRUE, 5526 &acb->acb_buf); 5527 5528 /* 5529 * Assert non-speculative zios didn't fail because an 5530 * encryption key wasn't loaded 5531 */ 5532 ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) || 5533 error != EACCES); 5534 5535 /* 5536 * If we failed to decrypt, report an error now (as the zio 5537 * layer would have done if it had done the transforms). 5538 */ 5539 if (error == ECKSUM) { 5540 ASSERT(BP_IS_PROTECTED(bp)); 5541 error = SET_ERROR(EIO); 5542 if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5543 spa_log_error(zio->io_spa, &acb->acb_zb, 5544 &zio->io_bp->blk_birth); 5545 (void) zfs_ereport_post( 5546 FM_EREPORT_ZFS_AUTHENTICATION, 5547 zio->io_spa, NULL, &acb->acb_zb, zio, 0); 5548 } 5549 } 5550 5551 if (error != 0) { 5552 /* 5553 * Decompression or decryption failed. 
Set 5554 * io_error so that when we call acb_done 5555 * (below), we will indicate that the read 5556 * failed. Note that in the unusual case 5557 * where one callback is compressed and another 5558 * uncompressed, we will mark all of them 5559 * as failed, even though the uncompressed 5560 * one can't actually fail. In this case, 5561 * the hdr will not be anonymous, because 5562 * if there are multiple callbacks, it's 5563 * because multiple threads found the same 5564 * arc buf in the hash table. 5565 */ 5566 zio->io_error = error; 5567 } 5568 } 5569 5570 /* 5571 * If there are multiple callbacks, we must have the hash lock, 5572 * because the only way for multiple threads to find this hdr is 5573 * in the hash table. This ensures that if there are multiple 5574 * callbacks, the hdr is not anonymous. If it were anonymous, 5575 * we couldn't use arc_buf_destroy() in the error case below. 5576 */ 5577 ASSERT(callback_cnt < 2 || hash_lock != NULL); 5578 5579 if (zio->io_error == 0) { 5580 arc_hdr_verify(hdr, zio->io_bp); 5581 } else { 5582 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 5583 if (hdr->b_l1hdr.b_state != arc_anon) 5584 arc_change_state(arc_anon, hdr); 5585 if (HDR_IN_HASH_TABLE(hdr)) 5586 buf_hash_remove(hdr); 5587 } 5588 5589 /* 5590 * Broadcast before we drop the hash_lock to avoid the possibility 5591 * that the hdr (and hence the cv) might be freed before we get to 5592 * the cv_broadcast(). 5593 */ 5594 cv_broadcast(&hdr->b_l1hdr.b_cv); 5595 5596 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5597 (void) remove_reference(hdr, hdr); 5598 5599 if (hash_lock != NULL) 5600 mutex_exit(hash_lock); 5601 5602 /* execute each callback and free its structure */ 5603 while ((acb = callback_list) != NULL) { 5604 if (acb->acb_done != NULL) { 5605 if (zio->io_error != 0 && acb->acb_buf != NULL) { 5606 /* 5607 * If arc_buf_alloc_impl() fails during 5608 * decompression, the buf will still be 5609 * allocated, and needs to be freed here. 5610 */ 5611 arc_buf_destroy(acb->acb_buf, 5612 acb->acb_private); 5613 acb->acb_buf = NULL; 5614 } 5615 acb->acb_done(zio, &zio->io_bookmark, zio->io_bp, 5616 acb->acb_buf, acb->acb_private); 5617 } 5618 5619 if (acb->acb_zio_dummy != NULL) { 5620 acb->acb_zio_dummy->io_error = zio->io_error; 5621 zio_nowait(acb->acb_zio_dummy); 5622 } 5623 5624 callback_list = acb->acb_prev; 5625 if (acb->acb_wait) { 5626 mutex_enter(&acb->acb_wait_lock); 5627 acb->acb_wait_error = zio->io_error; 5628 acb->acb_wait = B_FALSE; 5629 cv_signal(&acb->acb_wait_cv); 5630 mutex_exit(&acb->acb_wait_lock); 5631 /* acb will be freed by the waiting thread. */ 5632 } else { 5633 kmem_free(acb, sizeof (arc_callback_t)); 5634 } 5635 } 5636 } 5637 5638 /* 5639 * "Read" the block at the specified DVA (in bp) via the 5640 * cache. If the block is found in the cache, invoke the provided 5641 * callback immediately and return. Note that the `zio' parameter 5642 * in the callback will be NULL in this case, since no IO was 5643 * required. If the block is not in the cache pass the read request 5644 * on to the spa with a substitute callback function, so that the 5645 * requested block will be added to the cache. 5646 * 5647 * If a read request arrives for a block that has a read in-progress, 5648 * either wait for the in-progress read to complete (and return the 5649 * results); or, if this is a read with a "done" func, add a record 5650 * to the read to invoke the "done" func when the read completes, 5651 * and return; or just return. 
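 *
 * A minimal synchronous read looks roughly like the sketch below
 * (illustrative only; it elides error handling and assumes the
 * caller already has valid spa, bp and zb pointers in scope):
 *
 *	arc_flags_t aflags = ARC_FLAG_WAIT;
 *	arc_buf_t *abuf = NULL;
 *	int err = arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
 *	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, zb);
 *	if (err == 0 && abuf != NULL) {
 *		... consume abuf->b_data ...
 *		arc_buf_destroy(abuf, &abuf);
 *	}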
5652 * 5653 * arc_read_done() will invoke all the requested "done" functions 5654 * for readers of this block. 5655 */ 5656 int 5657 arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, 5658 arc_read_done_func_t *done, void *private, zio_priority_t priority, 5659 int zio_flags, arc_flags_t *arc_flags, const zbookmark_phys_t *zb) 5660 { 5661 arc_buf_hdr_t *hdr = NULL; 5662 kmutex_t *hash_lock = NULL; 5663 zio_t *rzio; 5664 uint64_t guid = spa_load_guid(spa); 5665 boolean_t compressed_read = (zio_flags & ZIO_FLAG_RAW_COMPRESS) != 0; 5666 boolean_t encrypted_read = BP_IS_ENCRYPTED(bp) && 5667 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5668 boolean_t noauth_read = BP_IS_AUTHENTICATED(bp) && 5669 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5670 boolean_t embedded_bp = !!BP_IS_EMBEDDED(bp); 5671 boolean_t no_buf = *arc_flags & ARC_FLAG_NO_BUF; 5672 arc_buf_t *buf = NULL; 5673 int rc = 0; 5674 5675 ASSERT(!embedded_bp || 5676 BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA); 5677 ASSERT(!BP_IS_HOLE(bp)); 5678 ASSERT(!BP_IS_REDACTED(bp)); 5679 5680 /* 5681 * Normally SPL_FSTRANS will already be set since kernel threads which 5682 * expect to call the DMU interfaces will set it when created. System 5683 * calls are similarly handled by setting/clearing the bit in the 5684 * registered callback (module/os/.../zfs/zpl_*). 5685 * 5686 * External consumers such as Lustre which call the exported DMU 5687 * interfaces may not have set SPL_FSTRANS. To avoid a deadlock 5688 * on the hash_lock, always set and clear the bit. 5689 */ 5690 fstrans_cookie_t cookie = spl_fstrans_mark(); 5691 top: 5692 /* 5693 * Verify the block pointer contents are reasonable. This should 5694 * always be the case since the blkptr is protected by a checksum. 5695 * However, if there is damage it's desirable to detect this early 5696 * and treat it as a checksum error. This allows an alternate blkptr 5697 * to be tried when one is available (e.g. ditto blocks). 5698 */ 5699 if (!zfs_blkptr_verify(spa, bp, zio_flags & ZIO_FLAG_CONFIG_WRITER, 5700 BLK_VERIFY_LOG)) { 5701 rc = SET_ERROR(ECKSUM); 5702 goto done; 5703 } 5704 5705 if (!embedded_bp) { 5706 /* 5707 * Embedded BP's have no DVA and require no I/O to "read"; 5708 * skip the hash lookup for them, and create an anonymous arc buf to back them below. 5709 */ 5710 hdr = buf_hash_find(guid, bp, &hash_lock); 5711 } 5712 5713 /* 5714 * Determine if we have an L1 cache hit or a cache miss. For simplicity 5715 * we maintain encrypted data separately from compressed / uncompressed 5716 * data. If the user is requesting raw encrypted data and we don't have 5717 * that in the header we will read from disk to guarantee that we can 5718 * get it even if the encryption keys aren't loaded. 5719 */ 5720 if (hdr != NULL && HDR_HAS_L1HDR(hdr) && (HDR_HAS_RABD(hdr) || 5721 (hdr->b_l1hdr.b_pabd != NULL && !encrypted_read))) { 5722 boolean_t is_data = !HDR_ISTYPE_METADATA(hdr); 5723 5724 if (HDR_IO_IN_PROGRESS(hdr)) { 5725 if (*arc_flags & ARC_FLAG_CACHED_ONLY) { 5726 mutex_exit(hash_lock); 5727 ARCSTAT_BUMP(arcstat_cached_only_in_progress); 5728 rc = SET_ERROR(ENOENT); 5729 goto done; 5730 } 5731 5732 zio_t *head_zio = hdr->b_l1hdr.b_acb->acb_zio_head; 5733 ASSERT3P(head_zio, !=, NULL); 5734 if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) && 5735 priority == ZIO_PRIORITY_SYNC_READ) { 5736 /* 5737 * This is a sync read that needs to wait for 5738 * an in-flight async read. Request that the 5739 * zio have its priority upgraded.
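 * (For example, a scrub may already have this block's read in
 * flight at ZIO_PRIORITY_SCRUB; a demand read arriving now should
 * not queue behind the async traffic, so the in-flight zio is
 * promoted rather than a duplicate I/O being issued.)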
5740 */ 5741 zio_change_priority(head_zio, priority); 5742 DTRACE_PROBE1(arc__async__upgrade__sync, 5743 arc_buf_hdr_t *, hdr); 5744 ARCSTAT_BUMP(arcstat_async_upgrade_sync); 5745 } 5746 5747 DTRACE_PROBE1(arc__iohit, arc_buf_hdr_t *, hdr); 5748 arc_access(hdr, *arc_flags, B_FALSE); 5749 5750 /* 5751 * If there are multiple threads reading the same block 5752 * and that block is not yet in the ARC, then only one 5753 * thread will do the physical I/O and all other 5754 * threads will wait until that I/O completes. 5755 * Synchronous reads use the acb_wait_cv whereas nowait 5756 * reads register a callback. Both are signalled/called 5757 * in arc_read_done. 5758 * 5759 * Errors of the physical I/O may need to be propagated. 5760 * Synchronous read errors are returned here from 5761 * arc_read_done via acb_wait_error. Nowait reads 5762 * attach the acb_zio_dummy zio to pio and 5763 * arc_read_done propagates the physical I/O's io_error 5764 * to acb_zio_dummy, and thereby to pio. 5765 */ 5766 arc_callback_t *acb = NULL; 5767 if (done || pio || *arc_flags & ARC_FLAG_WAIT) { 5768 acb = kmem_zalloc(sizeof (arc_callback_t), 5769 KM_SLEEP); 5770 acb->acb_done = done; 5771 acb->acb_private = private; 5772 acb->acb_compressed = compressed_read; 5773 acb->acb_encrypted = encrypted_read; 5774 acb->acb_noauth = noauth_read; 5775 acb->acb_nobuf = no_buf; 5776 if (*arc_flags & ARC_FLAG_WAIT) { 5777 acb->acb_wait = B_TRUE; 5778 mutex_init(&acb->acb_wait_lock, NULL, 5779 MUTEX_DEFAULT, NULL); 5780 cv_init(&acb->acb_wait_cv, NULL, 5781 CV_DEFAULT, NULL); 5782 } 5783 acb->acb_zb = *zb; 5784 if (pio != NULL) { 5785 acb->acb_zio_dummy = zio_null(pio, 5786 spa, NULL, NULL, NULL, zio_flags); 5787 } 5788 acb->acb_zio_head = head_zio; 5789 acb->acb_next = hdr->b_l1hdr.b_acb; 5790 if (hdr->b_l1hdr.b_acb) 5791 hdr->b_l1hdr.b_acb->acb_prev = acb; 5792 hdr->b_l1hdr.b_acb = acb; 5793 } 5794 mutex_exit(hash_lock); 5795 5796 ARCSTAT_BUMP(arcstat_iohits); 5797 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 5798 demand, prefetch, is_data, data, metadata, iohits); 5799 5800 if (*arc_flags & ARC_FLAG_WAIT) { 5801 mutex_enter(&acb->acb_wait_lock); 5802 while (acb->acb_wait) { 5803 cv_wait(&acb->acb_wait_cv, 5804 &acb->acb_wait_lock); 5805 } 5806 rc = acb->acb_wait_error; 5807 mutex_exit(&acb->acb_wait_lock); 5808 mutex_destroy(&acb->acb_wait_lock); 5809 cv_destroy(&acb->acb_wait_cv); 5810 kmem_free(acb, sizeof (arc_callback_t)); 5811 } 5812 goto out; 5813 } 5814 5815 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5816 hdr->b_l1hdr.b_state == arc_mfu || 5817 hdr->b_l1hdr.b_state == arc_uncached); 5818 5819 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5820 arc_access(hdr, *arc_flags, B_TRUE); 5821 5822 if (done && !no_buf) { 5823 ASSERT(!embedded_bp || !BP_IS_HOLE(bp)); 5824 5825 /* Get a buf with the desired data in it. */ 5826 rc = arc_buf_alloc_impl(hdr, spa, zb, private, 5827 encrypted_read, compressed_read, noauth_read, 5828 B_TRUE, &buf); 5829 if (rc == ECKSUM) { 5830 /* 5831 * Convert authentication and decryption errors 5832 * to EIO (and generate an ereport if needed) 5833 * before leaving the ARC. 
5834 */ 5835 rc = SET_ERROR(EIO); 5836 if ((zio_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5837 spa_log_error(spa, zb, &hdr->b_birth); 5838 (void) zfs_ereport_post( 5839 FM_EREPORT_ZFS_AUTHENTICATION, 5840 spa, NULL, zb, NULL, 0); 5841 } 5842 } 5843 if (rc != 0) { 5844 arc_buf_destroy_impl(buf); 5845 buf = NULL; 5846 (void) remove_reference(hdr, private); 5847 } 5848 5849 /* assert any errors weren't due to unloaded keys */ 5850 ASSERT((zio_flags & ZIO_FLAG_SPECULATIVE) || 5851 rc != EACCES); 5852 } 5853 mutex_exit(hash_lock); 5854 ARCSTAT_BUMP(arcstat_hits); 5855 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 5856 demand, prefetch, is_data, data, metadata, hits); 5857 *arc_flags |= ARC_FLAG_CACHED; 5858 goto done; 5859 } else { 5860 uint64_t lsize = BP_GET_LSIZE(bp); 5861 uint64_t psize = BP_GET_PSIZE(bp); 5862 arc_callback_t *acb; 5863 vdev_t *vd = NULL; 5864 uint64_t addr = 0; 5865 boolean_t devw = B_FALSE; 5866 uint64_t size; 5867 abd_t *hdr_abd; 5868 int alloc_flags = encrypted_read ? ARC_HDR_ALLOC_RDATA : 0; 5869 arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp); 5870 5871 if (*arc_flags & ARC_FLAG_CACHED_ONLY) { 5872 if (hash_lock != NULL) 5873 mutex_exit(hash_lock); 5874 rc = SET_ERROR(ENOENT); 5875 goto done; 5876 } 5877 5878 if (hdr == NULL) { 5879 /* 5880 * This block is not in the cache or it has 5881 * embedded data. 5882 */ 5883 arc_buf_hdr_t *exists = NULL; 5884 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 5885 BP_IS_PROTECTED(bp), BP_GET_COMPRESS(bp), 0, type); 5886 5887 if (!embedded_bp) { 5888 hdr->b_dva = *BP_IDENTITY(bp); 5889 hdr->b_birth = BP_PHYSICAL_BIRTH(bp); 5890 exists = buf_hash_insert(hdr, &hash_lock); 5891 } 5892 if (exists != NULL) { 5893 /* somebody beat us to the hash insert */ 5894 mutex_exit(hash_lock); 5895 buf_discard_identity(hdr); 5896 arc_hdr_destroy(hdr); 5897 goto top; /* restart the IO request */ 5898 } 5899 } else { 5900 /* 5901 * This block is in the ghost cache or encrypted data 5902 * was requested and we didn't have it. If it was 5903 * L2-only (and thus didn't have an L1 hdr), 5904 * we realloc the header to add an L1 hdr. 5905 */ 5906 if (!HDR_HAS_L1HDR(hdr)) { 5907 hdr = arc_hdr_realloc(hdr, hdr_l2only_cache, 5908 hdr_full_cache); 5909 } 5910 5911 if (GHOST_STATE(hdr->b_l1hdr.b_state)) { 5912 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 5913 ASSERT(!HDR_HAS_RABD(hdr)); 5914 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 5915 ASSERT0(zfs_refcount_count( 5916 &hdr->b_l1hdr.b_refcnt)); 5917 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 5918 #ifdef ZFS_DEBUG 5919 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 5920 #endif 5921 } else if (HDR_IO_IN_PROGRESS(hdr)) { 5922 /* 5923 * If this header already had an IO in progress 5924 * and we are performing another IO to fetch 5925 * encrypted data we must wait until the first 5926 * IO completes so as not to confuse 5927 * arc_read_done(). This should be very rare 5928 * and so the performance impact shouldn't 5929 * matter. 5930 */ 5931 cv_wait(&hdr->b_l1hdr.b_cv, hash_lock); 5932 mutex_exit(hash_lock); 5933 goto top; 5934 } 5935 } 5936 if (*arc_flags & ARC_FLAG_UNCACHED) { 5937 arc_hdr_set_flags(hdr, ARC_FLAG_UNCACHED); 5938 if (!encrypted_read) 5939 alloc_flags |= ARC_HDR_ALLOC_LINEAR; 5940 } 5941 5942 /* 5943 * Take an additional reference for IO_IN_PROGRESS. It stops 5944 * arc_access() from putting this header, which has no buffers 5945 * and so no other references (but is obviously non-evictable), 5946 * onto the evictable list of the MRU or MFU state.
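 * The matching remove_reference() is performed by arc_read_done()
 * once it clears ARC_FLAG_IO_IN_PROGRESS.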
5947 */ 5948 add_reference(hdr, hdr); 5949 if (!embedded_bp) 5950 arc_access(hdr, *arc_flags, B_FALSE); 5951 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5952 arc_hdr_alloc_abd(hdr, alloc_flags); 5953 if (encrypted_read) { 5954 ASSERT(HDR_HAS_RABD(hdr)); 5955 size = HDR_GET_PSIZE(hdr); 5956 hdr_abd = hdr->b_crypt_hdr.b_rabd; 5957 zio_flags |= ZIO_FLAG_RAW; 5958 } else { 5959 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 5960 size = arc_hdr_size(hdr); 5961 hdr_abd = hdr->b_l1hdr.b_pabd; 5962 5963 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 5964 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 5965 } 5966 5967 /* 5968 * For authenticated bp's, we do not ask the ZIO layer 5969 * to authenticate them since this will cause the entire 5970 * IO to fail if the key isn't loaded. Instead, we 5971 * defer authentication until arc_buf_fill(), which will 5972 * verify the data when the key is available. 5973 */ 5974 if (BP_IS_AUTHENTICATED(bp)) 5975 zio_flags |= ZIO_FLAG_RAW_ENCRYPT; 5976 } 5977 5978 if (BP_IS_AUTHENTICATED(bp)) 5979 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 5980 if (BP_GET_LEVEL(bp) > 0) 5981 arc_hdr_set_flags(hdr, ARC_FLAG_INDIRECT); 5982 ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state)); 5983 5984 acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP); 5985 acb->acb_done = done; 5986 acb->acb_private = private; 5987 acb->acb_compressed = compressed_read; 5988 acb->acb_encrypted = encrypted_read; 5989 acb->acb_noauth = noauth_read; 5990 acb->acb_zb = *zb; 5991 5992 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 5993 hdr->b_l1hdr.b_acb = acb; 5994 5995 if (HDR_HAS_L2HDR(hdr) && 5996 (vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) { 5997 devw = hdr->b_l2hdr.b_dev->l2ad_writing; 5998 addr = hdr->b_l2hdr.b_daddr; 5999 /* 6000 * Lock out L2ARC device removal. 6001 */ 6002 if (vdev_is_dead(vd) || 6003 !spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER)) 6004 vd = NULL; 6005 } 6006 6007 /* 6008 * We count both async reads and scrub IOs as asynchronous so 6009 * that both can be upgraded in the event of a cache hit while 6010 * the read IO is still in-flight. 6011 */ 6012 if (priority == ZIO_PRIORITY_ASYNC_READ || 6013 priority == ZIO_PRIORITY_SCRUB) 6014 arc_hdr_set_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 6015 else 6016 arc_hdr_clear_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 6017 6018 /* 6019 * At this point, we have a level 1 cache miss or a blkptr 6020 * with embedded data. Try again in L2ARC if possible. 6021 */ 6022 ASSERT3U(HDR_GET_LSIZE(hdr), ==, lsize); 6023 6024 /* 6025 * Skip ARC stat bump for block pointers with embedded 6026 * data. The data are read from the blkptr itself via 6027 * decode_embedded_bp_compressed(). 6028 */ 6029 if (!embedded_bp) { 6030 DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr, 6031 blkptr_t *, bp, uint64_t, lsize, 6032 zbookmark_phys_t *, zb); 6033 ARCSTAT_BUMP(arcstat_misses); 6034 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 6035 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, 6036 metadata, misses); 6037 zfs_racct_read(size, 1); 6038 } 6039 6040 /* Check if the spa even has l2 configured */ 6041 const boolean_t spa_has_l2 = l2arc_ndev != 0 && 6042 spa->spa_l2cache.sav_count > 0; 6043 6044 if (vd != NULL && spa_has_l2 && !(l2arc_norw && devw)) { 6045 /* 6046 * Read from the L2ARC if the following are true: 6047 * 1. The L2ARC vdev was previously cached. 6048 * 2. This buffer still has L2ARC metadata. 6049 * 3. This buffer isn't currently writing to the L2ARC. 6050 * 4. The L2ARC entry wasn't evicted, which may 6051 * also have invalidated the vdev. 6052 * 5. 
This isn't prefetch or l2arc_noprefetch is 0. 6053 */ 6054 if (HDR_HAS_L2HDR(hdr) && 6055 !HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr) && 6056 !(l2arc_noprefetch && 6057 (*arc_flags & ARC_FLAG_PREFETCH))) { 6058 l2arc_read_callback_t *cb; 6059 abd_t *abd; 6060 uint64_t asize; 6061 6062 DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr); 6063 ARCSTAT_BUMP(arcstat_l2_hits); 6064 hdr->b_l2hdr.b_hits++; 6065 6066 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), 6067 KM_SLEEP); 6068 cb->l2rcb_hdr = hdr; 6069 cb->l2rcb_bp = *bp; 6070 cb->l2rcb_zb = *zb; 6071 cb->l2rcb_flags = zio_flags; 6072 6073 /* 6074 * When Compressed ARC is disabled, but the 6075 * L2ARC block is compressed, arc_hdr_size() 6076 * will have returned LSIZE rather than PSIZE. 6077 */ 6078 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 6079 !HDR_COMPRESSION_ENABLED(hdr) && 6080 HDR_GET_PSIZE(hdr) != 0) { 6081 size = HDR_GET_PSIZE(hdr); 6082 } 6083 6084 asize = vdev_psize_to_asize(vd, size); 6085 if (asize != size) { 6086 abd = abd_alloc_for_io(asize, 6087 HDR_ISTYPE_METADATA(hdr)); 6088 cb->l2rcb_abd = abd; 6089 } else { 6090 abd = hdr_abd; 6091 } 6092 6093 ASSERT(addr >= VDEV_LABEL_START_SIZE && 6094 addr + asize <= vd->vdev_psize - 6095 VDEV_LABEL_END_SIZE); 6096 6097 /* 6098 * l2arc read. The SCL_L2ARC lock will be 6099 * released by l2arc_read_done(). 6100 * Issue a null zio if the underlying buffer 6101 * was squashed to zero size by compression. 6102 */ 6103 ASSERT3U(arc_hdr_get_compress(hdr), !=, 6104 ZIO_COMPRESS_EMPTY); 6105 rzio = zio_read_phys(pio, vd, addr, 6106 asize, abd, 6107 ZIO_CHECKSUM_OFF, 6108 l2arc_read_done, cb, priority, 6109 zio_flags | ZIO_FLAG_DONT_CACHE | 6110 ZIO_FLAG_CANFAIL | 6111 ZIO_FLAG_DONT_PROPAGATE | 6112 ZIO_FLAG_DONT_RETRY, B_FALSE); 6113 acb->acb_zio_head = rzio; 6114 6115 if (hash_lock != NULL) 6116 mutex_exit(hash_lock); 6117 6118 DTRACE_PROBE2(l2arc__read, vdev_t *, vd, 6119 zio_t *, rzio); 6120 ARCSTAT_INCR(arcstat_l2_read_bytes, 6121 HDR_GET_PSIZE(hdr)); 6122 6123 if (*arc_flags & ARC_FLAG_NOWAIT) { 6124 zio_nowait(rzio); 6125 goto out; 6126 } 6127 6128 ASSERT(*arc_flags & ARC_FLAG_WAIT); 6129 if (zio_wait(rzio) == 0) 6130 goto out; 6131 6132 /* l2arc read error; goto zio_read() */ 6133 if (hash_lock != NULL) 6134 mutex_enter(hash_lock); 6135 } else { 6136 DTRACE_PROBE1(l2arc__miss, 6137 arc_buf_hdr_t *, hdr); 6138 ARCSTAT_BUMP(arcstat_l2_misses); 6139 if (HDR_L2_WRITING(hdr)) 6140 ARCSTAT_BUMP(arcstat_l2_rw_clash); 6141 spa_config_exit(spa, SCL_L2ARC, vd); 6142 } 6143 } else { 6144 if (vd != NULL) 6145 spa_config_exit(spa, SCL_L2ARC, vd); 6146 6147 /* 6148 * Only a spa with l2 should contribute to l2 6149 * miss stats. (Including the case of having a 6150 * faulted cache device - that's also a miss.) 6151 */ 6152 if (spa_has_l2) { 6153 /* 6154 * Skip ARC stat bump for block pointers with 6155 * embedded data. The data are read from the 6156 * blkptr itself via 6157 * decode_embedded_bp_compressed(). 
6158 */ 6159 if (!embedded_bp) { 6160 DTRACE_PROBE1(l2arc__miss, 6161 arc_buf_hdr_t *, hdr); 6162 ARCSTAT_BUMP(arcstat_l2_misses); 6163 } 6164 } 6165 } 6166 6167 rzio = zio_read(pio, spa, bp, hdr_abd, size, 6168 arc_read_done, hdr, priority, zio_flags, zb); 6169 acb->acb_zio_head = rzio; 6170 6171 if (hash_lock != NULL) 6172 mutex_exit(hash_lock); 6173 6174 if (*arc_flags & ARC_FLAG_WAIT) { 6175 rc = zio_wait(rzio); 6176 goto out; 6177 } 6178 6179 ASSERT(*arc_flags & ARC_FLAG_NOWAIT); 6180 zio_nowait(rzio); 6181 } 6182 6183 out: 6184 /* embedded bps don't actually go to disk */ 6185 if (!embedded_bp) 6186 spa_read_history_add(spa, zb, *arc_flags); 6187 spl_fstrans_unmark(cookie); 6188 return (rc); 6189 6190 done: 6191 if (done) 6192 done(NULL, zb, bp, buf, private); 6193 if (pio && rc != 0) { 6194 zio_t *zio = zio_null(pio, spa, NULL, NULL, NULL, zio_flags); 6195 zio->io_error = rc; 6196 zio_nowait(zio); 6197 } 6198 goto out; 6199 } 6200 6201 arc_prune_t * 6202 arc_add_prune_callback(arc_prune_func_t *func, void *private) 6203 { 6204 arc_prune_t *p; 6205 6206 p = kmem_alloc(sizeof (*p), KM_SLEEP); 6207 p->p_pfunc = func; 6208 p->p_private = private; 6209 list_link_init(&p->p_node); 6210 zfs_refcount_create(&p->p_refcnt); 6211 6212 mutex_enter(&arc_prune_mtx); 6213 zfs_refcount_add(&p->p_refcnt, &arc_prune_list); 6214 list_insert_head(&arc_prune_list, p); 6215 mutex_exit(&arc_prune_mtx); 6216 6217 return (p); 6218 } 6219 6220 void 6221 arc_remove_prune_callback(arc_prune_t *p) 6222 { 6223 boolean_t wait = B_FALSE; 6224 mutex_enter(&arc_prune_mtx); 6225 list_remove(&arc_prune_list, p); 6226 if (zfs_refcount_remove(&p->p_refcnt, &arc_prune_list) > 0) 6227 wait = B_TRUE; 6228 mutex_exit(&arc_prune_mtx); 6229 6230 /* wait for arc_prune_task to finish */ 6231 if (wait) 6232 taskq_wait_outstanding(arc_prune_taskq, 0); 6233 ASSERT0(zfs_refcount_count(&p->p_refcnt)); 6234 zfs_refcount_destroy(&p->p_refcnt); 6235 kmem_free(p, sizeof (*p)); 6236 } 6237 6238 /* 6239 * Notify the arc that a block was freed, and thus will never be used again. 6240 */ 6241 void 6242 arc_freed(spa_t *spa, const blkptr_t *bp) 6243 { 6244 arc_buf_hdr_t *hdr; 6245 kmutex_t *hash_lock; 6246 uint64_t guid = spa_load_guid(spa); 6247 6248 ASSERT(!BP_IS_EMBEDDED(bp)); 6249 6250 hdr = buf_hash_find(guid, bp, &hash_lock); 6251 if (hdr == NULL) 6252 return; 6253 6254 /* 6255 * We might be trying to free a block that is still doing I/O 6256 * (i.e. prefetch) or has some other reference (i.e. a dedup-ed, 6257 * dmu_sync-ed block). A block may also have a reference if it is 6258 * part of a dedup-ed, dmu_synced write. The dmu_sync() function would 6259 * have written the new block to its final resting place on disk but 6260 * without the dedup flag set. This would have left the hdr in the MRU 6261 * state and discoverable. When the txg finally syncs it detects that 6262 * the block was overridden in open context and issues an override I/O. 6263 * Since this is a dedup block, the override I/O will determine if the 6264 * block is already in the DDT. If so, then it will replace the io_bp 6265 * with the bp from the DDT and allow the I/O to finish. When the I/O 6266 * reaches the done callback, dbuf_write_override_done, it will 6267 * check to see if the io_bp and io_bp_override are identical. 6268 * If they are not, then it indicates that the bp was replaced with 6269 * the bp in the DDT and the override bp is freed. This allows 6270 * us to arrive here with a reference on a block that is being 6271 * freed. 
So if we have an I/O in progress, or a reference to 6272 * this hdr, then we don't destroy the hdr. 6273 */ 6274 if (!HDR_HAS_L1HDR(hdr) || 6275 zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6276 arc_change_state(arc_anon, hdr); 6277 arc_hdr_destroy(hdr); 6278 mutex_exit(hash_lock); 6279 } else { 6280 mutex_exit(hash_lock); 6281 } 6282 6283 } 6284 6285 /* 6286 * Release this buffer from the cache, making it an anonymous buffer. This 6287 * must be done after a read and prior to modifying the buffer contents. 6288 * If the buffer has more than one reference, we must make 6289 * a new hdr for the buffer. 6290 */ 6291 void 6292 arc_release(arc_buf_t *buf, const void *tag) 6293 { 6294 arc_buf_hdr_t *hdr = buf->b_hdr; 6295 6296 /* 6297 * It would be nice to assert that if it's DMU metadata (level > 6298 * 0 || it's the dnode file), then it must be syncing context. 6299 * But we don't know that information at this level. 6300 */ 6301 6302 ASSERT(HDR_HAS_L1HDR(hdr)); 6303 6304 /* 6305 * We don't grab the hash lock prior to this check, because if 6306 * the buffer's header is in the arc_anon state, it won't be 6307 * linked into the hash table. 6308 */ 6309 if (hdr->b_l1hdr.b_state == arc_anon) { 6310 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6311 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 6312 ASSERT(!HDR_HAS_L2HDR(hdr)); 6313 6314 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 6315 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1); 6316 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 6317 6318 hdr->b_l1hdr.b_arc_access = 0; 6319 6320 /* 6321 * If the buf is being overridden then it may already 6322 * have a hdr that is not empty. 6323 */ 6324 buf_discard_identity(hdr); 6325 arc_buf_thaw(buf); 6326 6327 return; 6328 } 6329 6330 kmutex_t *hash_lock = HDR_LOCK(hdr); 6331 mutex_enter(hash_lock); 6332 6333 /* 6334 * This assignment is only valid as long as the hash_lock is 6335 * held; we must be careful not to reference state or the 6336 * b_state field after dropping the lock. 6337 */ 6338 arc_state_t *state = hdr->b_l1hdr.b_state; 6339 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 6340 ASSERT3P(state, !=, arc_anon); 6341 6342 /* this buffer is not on any list */ 6343 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0); 6344 6345 if (HDR_HAS_L2HDR(hdr)) { 6346 mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6347 6348 /* 6349 * We have to recheck this conditional now that 6350 * we're holding the l2ad_mtx to prevent a race with 6351 * another thread which might be concurrently calling 6352 * l2arc_evict(). In that case, l2arc_evict() might have 6353 * destroyed the header's L2 portion as we were waiting 6354 * to acquire the l2ad_mtx. 6355 */ 6356 if (HDR_HAS_L2HDR(hdr)) 6357 arc_hdr_l2hdr_destroy(hdr); 6358 6359 mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6360 } 6361 6362 /* 6363 * Do we have more than one buf?
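 * If so, the hdr stays with the remaining bufs and the buf being
 * released is re-homed below onto a freshly allocated anonymous
 * hdr; otherwise the existing hdr itself is simply moved to the
 * anonymous state.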
6364 */ 6365 if (hdr->b_l1hdr.b_bufcnt > 1) { 6366 arc_buf_hdr_t *nhdr; 6367 uint64_t spa = hdr->b_spa; 6368 uint64_t psize = HDR_GET_PSIZE(hdr); 6369 uint64_t lsize = HDR_GET_LSIZE(hdr); 6370 boolean_t protected = HDR_PROTECTED(hdr); 6371 enum zio_compress compress = arc_hdr_get_compress(hdr); 6372 arc_buf_contents_t type = arc_buf_type(hdr); 6373 VERIFY3U(hdr->b_type, ==, type); 6374 6375 ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL); 6376 VERIFY3S(remove_reference(hdr, tag), >, 0); 6377 6378 if (arc_buf_is_shared(buf) && !ARC_BUF_COMPRESSED(buf)) { 6379 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6380 ASSERT(ARC_BUF_LAST(buf)); 6381 } 6382 6383 /* 6384 * Pull the data off of this hdr and attach it to 6385 * a new anonymous hdr. Also find the last buffer 6386 * in the hdr's buffer list. 6387 */ 6388 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 6389 ASSERT3P(lastbuf, !=, NULL); 6390 6391 /* 6392 * If the current arc_buf_t and the hdr are sharing their data 6393 * buffer, then we must stop sharing that block. 6394 */ 6395 if (arc_buf_is_shared(buf)) { 6396 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6397 VERIFY(!arc_buf_is_shared(lastbuf)); 6398 6399 /* 6400 * First, sever the block sharing relationship between 6401 * buf and the arc_buf_hdr_t. 6402 */ 6403 arc_unshare_buf(hdr, buf); 6404 6405 /* 6406 * Now we need to recreate the hdr's b_pabd. Since we 6407 * have lastbuf handy, we try to share with it, but if 6408 * we can't then we allocate a new b_pabd and copy the 6409 * data from buf into it. 6410 */ 6411 if (arc_can_share(hdr, lastbuf)) { 6412 arc_share_buf(hdr, lastbuf); 6413 } else { 6414 arc_hdr_alloc_abd(hdr, 0); 6415 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, 6416 buf->b_data, psize); 6417 } 6418 VERIFY3P(lastbuf->b_data, !=, NULL); 6419 } else if (HDR_SHARED_DATA(hdr)) { 6420 /* 6421 * Uncompressed shared buffers are always at the end 6422 * of the list. Compressed buffers don't have the 6423 * same requirements. This makes it hard to 6424 * simply assert that the lastbuf is shared so 6425 * we rely on the hdr's compression flags to determine 6426 * if we have a compressed, shared buffer. 
6427 */ 6428 ASSERT(arc_buf_is_shared(lastbuf) || 6429 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 6430 ASSERT(!ARC_BUF_SHARED(buf)); 6431 } 6432 6433 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 6434 ASSERT3P(state, !=, arc_l2c_only); 6435 6436 (void) zfs_refcount_remove_many(&state->arcs_size[type], 6437 arc_buf_size(buf), buf); 6438 6439 if (zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6440 ASSERT3P(state, !=, arc_l2c_only); 6441 (void) zfs_refcount_remove_many( 6442 &state->arcs_esize[type], 6443 arc_buf_size(buf), buf); 6444 } 6445 6446 hdr->b_l1hdr.b_bufcnt -= 1; 6447 if (ARC_BUF_ENCRYPTED(buf)) 6448 hdr->b_crypt_hdr.b_ebufcnt -= 1; 6449 6450 arc_cksum_verify(buf); 6451 arc_buf_unwatch(buf); 6452 6453 /* if this is the last uncompressed buf, free the checksum */ 6454 if (!arc_hdr_has_uncompressed_buf(hdr)) 6455 arc_cksum_free(hdr); 6456 6457 mutex_exit(hash_lock); 6458 6459 nhdr = arc_hdr_alloc(spa, psize, lsize, protected, 6460 compress, hdr->b_complevel, type); 6461 ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL); 6462 ASSERT0(nhdr->b_l1hdr.b_bufcnt); 6463 ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt)); 6464 VERIFY3U(nhdr->b_type, ==, type); 6465 ASSERT(!HDR_SHARED_DATA(nhdr)); 6466 6467 nhdr->b_l1hdr.b_buf = buf; 6468 nhdr->b_l1hdr.b_bufcnt = 1; 6469 if (ARC_BUF_ENCRYPTED(buf)) 6470 nhdr->b_crypt_hdr.b_ebufcnt = 1; 6471 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag); 6472 buf->b_hdr = nhdr; 6473 6474 (void) zfs_refcount_add_many(&arc_anon->arcs_size[type], 6475 arc_buf_size(buf), buf); 6476 } else { 6477 ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1); 6478 /* protected by hash lock, or hdr is on arc_anon */ 6479 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 6480 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6481 hdr->b_l1hdr.b_mru_hits = 0; 6482 hdr->b_l1hdr.b_mru_ghost_hits = 0; 6483 hdr->b_l1hdr.b_mfu_hits = 0; 6484 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 6485 arc_change_state(arc_anon, hdr); 6486 hdr->b_l1hdr.b_arc_access = 0; 6487 6488 mutex_exit(hash_lock); 6489 buf_discard_identity(hdr); 6490 arc_buf_thaw(buf); 6491 } 6492 } 6493 6494 int 6495 arc_released(arc_buf_t *buf) 6496 { 6497 return (buf->b_data != NULL && 6498 buf->b_hdr->b_l1hdr.b_state == arc_anon); 6499 } 6500 6501 #ifdef ZFS_DEBUG 6502 int 6503 arc_referenced(arc_buf_t *buf) 6504 { 6505 return (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt)); 6506 } 6507 #endif 6508 6509 static void 6510 arc_write_ready(zio_t *zio) 6511 { 6512 arc_write_callback_t *callback = zio->io_private; 6513 arc_buf_t *buf = callback->awcb_buf; 6514 arc_buf_hdr_t *hdr = buf->b_hdr; 6515 blkptr_t *bp = zio->io_bp; 6516 uint64_t psize = BP_IS_HOLE(bp) ? 0 : BP_GET_PSIZE(bp); 6517 fstrans_cookie_t cookie = spl_fstrans_mark(); 6518 6519 ASSERT(HDR_HAS_L1HDR(hdr)); 6520 ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt)); 6521 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 6522 6523 /* 6524 * If we're reexecuting this zio because the pool suspended, then 6525 * clean up any state that was previously set the first time the 6526 * callback was invoked.
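 * (That is, a reexecuted zio reaches this callback a second time
 * with ZIO_FLAG_REEXECUTED set; any shared or allocated b_pabd and
 * any raw b_rabd left over from the first pass are released below
 * so the function can run again from a clean slate.)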
6527 */ 6528 if (zio->io_flags & ZIO_FLAG_REEXECUTED) { 6529 arc_cksum_free(hdr); 6530 arc_buf_unwatch(buf); 6531 if (hdr->b_l1hdr.b_pabd != NULL) { 6532 if (arc_buf_is_shared(buf)) { 6533 arc_unshare_buf(hdr, buf); 6534 } else { 6535 arc_hdr_free_abd(hdr, B_FALSE); 6536 } 6537 } 6538 6539 if (HDR_HAS_RABD(hdr)) 6540 arc_hdr_free_abd(hdr, B_TRUE); 6541 } 6542 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6543 ASSERT(!HDR_HAS_RABD(hdr)); 6544 ASSERT(!HDR_SHARED_DATA(hdr)); 6545 ASSERT(!arc_buf_is_shared(buf)); 6546 6547 callback->awcb_ready(zio, buf, callback->awcb_private); 6548 6549 if (HDR_IO_IN_PROGRESS(hdr)) { 6550 ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED); 6551 } else { 6552 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6553 add_reference(hdr, hdr); /* For IO_IN_PROGRESS. */ 6554 } 6555 6556 if (BP_IS_PROTECTED(bp) != !!HDR_PROTECTED(hdr)) 6557 hdr = arc_hdr_realloc_crypt(hdr, BP_IS_PROTECTED(bp)); 6558 6559 if (BP_IS_PROTECTED(bp)) { 6560 /* ZIL blocks are written through zio_rewrite */ 6561 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 6562 ASSERT(HDR_PROTECTED(hdr)); 6563 6564 if (BP_SHOULD_BYTESWAP(bp)) { 6565 if (BP_GET_LEVEL(bp) > 0) { 6566 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 6567 } else { 6568 hdr->b_l1hdr.b_byteswap = 6569 DMU_OT_BYTESWAP(BP_GET_TYPE(bp)); 6570 } 6571 } else { 6572 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 6573 } 6574 6575 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 6576 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 6577 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 6578 hdr->b_crypt_hdr.b_iv); 6579 zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); 6580 } 6581 6582 /* 6583 * If this block was written for raw encryption but the zio layer 6584 * ended up only authenticating it, adjust the buffer flags now. 6585 */ 6586 if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) { 6587 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 6588 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6589 if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF) 6590 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6591 } else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) { 6592 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6593 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6594 } 6595 6596 /* this must be done after the buffer flags are adjusted */ 6597 arc_cksum_compute(buf); 6598 6599 enum zio_compress compress; 6600 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 6601 compress = ZIO_COMPRESS_OFF; 6602 } else { 6603 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 6604 compress = BP_GET_COMPRESS(bp); 6605 } 6606 HDR_SET_PSIZE(hdr, psize); 6607 arc_hdr_set_compress(hdr, compress); 6608 hdr->b_complevel = zio->io_prop.zp_complevel; 6609 6610 if (zio->io_error != 0 || psize == 0) 6611 goto out; 6612 6613 /* 6614 * Fill the hdr with data. If the buffer is encrypted we have no choice 6615 * but to copy the data into b_rabd. If the hdr is compressed, the data 6616 * we want is available from the zio; otherwise we can take it from 6617 * the buf. 6618 * 6619 * We might be able to share the buf's data with the hdr here. However, 6620 * doing so would cause the ARC to be full of linear ABDs if we write a 6621 * lot of shareable data. As a compromise, we check whether scattered 6622 * ABDs are allowed, and assume that if they are then the user wants 6623 * the ARC to be primarily filled with them regardless of the data being 6624 * written. Therefore, if they're allowed then we allocate one and copy 6625 * the data into it; otherwise, we share the data directly if we can.
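 *
 * In short, a summary of the branches below: encrypted bufs copy
 * zio->io_abd into b_rabd; data compressed in the hdr but not in
 * the buf copies zio->io_abd into b_pabd; when scatter ABDs are
 * preferred or sharing is impossible we copy from the buf; only
 * otherwise do we share the buf's data with the hdr.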
6626 */ 6627 if (ARC_BUF_ENCRYPTED(buf)) { 6628 ASSERT3U(psize, >, 0); 6629 ASSERT(ARC_BUF_COMPRESSED(buf)); 6630 arc_hdr_alloc_abd(hdr, ARC_HDR_ALLOC_RDATA | 6631 ARC_HDR_USE_RESERVE); 6632 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6633 } else if (!(HDR_UNCACHED(hdr) || 6634 abd_size_alloc_linear(arc_buf_size(buf))) || 6635 !arc_can_share(hdr, buf)) { 6636 /* 6637 * Ideally, we would always copy the io_abd into b_pabd, but the 6638 * user may have disabled compressed ARC, thus we must check the 6639 * hdr's compression setting rather than the io_bp's. 6640 */ 6641 if (BP_IS_ENCRYPTED(bp)) { 6642 ASSERT3U(psize, >, 0); 6643 arc_hdr_alloc_abd(hdr, ARC_HDR_ALLOC_RDATA | 6644 ARC_HDR_USE_RESERVE); 6645 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6646 } else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 6647 !ARC_BUF_COMPRESSED(buf)) { 6648 ASSERT3U(psize, >, 0); 6649 arc_hdr_alloc_abd(hdr, ARC_HDR_USE_RESERVE); 6650 abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize); 6651 } else { 6652 ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr)); 6653 arc_hdr_alloc_abd(hdr, ARC_HDR_USE_RESERVE); 6654 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data, 6655 arc_buf_size(buf)); 6656 } 6657 } else { 6658 ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd)); 6659 ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf)); 6660 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 6661 6662 arc_share_buf(hdr, buf); 6663 } 6664 6665 out: 6666 arc_hdr_verify(hdr, bp); 6667 spl_fstrans_unmark(cookie); 6668 } 6669 6670 static void 6671 arc_write_children_ready(zio_t *zio) 6672 { 6673 arc_write_callback_t *callback = zio->io_private; 6674 arc_buf_t *buf = callback->awcb_buf; 6675 6676 callback->awcb_children_ready(zio, buf, callback->awcb_private); 6677 } 6678 6679 /* 6680 * The SPA calls this callback for each physical write that happens on behalf 6681 * of a logical write. See the comment in dbuf_write_physdone() for details. 6682 */ 6683 static void 6684 arc_write_physdone(zio_t *zio) 6685 { 6686 arc_write_callback_t *cb = zio->io_private; 6687 if (cb->awcb_physdone != NULL) 6688 cb->awcb_physdone(zio, cb->awcb_buf, cb->awcb_private); 6689 } 6690 6691 static void 6692 arc_write_done(zio_t *zio) 6693 { 6694 arc_write_callback_t *callback = zio->io_private; 6695 arc_buf_t *buf = callback->awcb_buf; 6696 arc_buf_hdr_t *hdr = buf->b_hdr; 6697 6698 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6699 6700 if (zio->io_error == 0) { 6701 arc_hdr_verify(hdr, zio->io_bp); 6702 6703 if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) { 6704 buf_discard_identity(hdr); 6705 } else { 6706 hdr->b_dva = *BP_IDENTITY(zio->io_bp); 6707 hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp); 6708 } 6709 } else { 6710 ASSERT(HDR_EMPTY(hdr)); 6711 } 6712 6713 /* 6714 * If the block to be written was all-zero or compressed enough to be 6715 * embedded in the BP, no write was performed so there will be no 6716 * dva/birth/checksum. The buffer must therefore remain anonymous 6717 * (and uncached). 6718 */ 6719 if (!HDR_EMPTY(hdr)) { 6720 arc_buf_hdr_t *exists; 6721 kmutex_t *hash_lock; 6722 6723 ASSERT3U(zio->io_error, ==, 0); 6724 6725 arc_cksum_verify(buf); 6726 6727 exists = buf_hash_insert(hdr, &hash_lock); 6728 if (exists != NULL) { 6729 /* 6730 * This can only happen if we overwrite for 6731 * sync-to-convergence, because we remove 6732 * buffers from the hash table when we arc_free(). 
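 * (In the ZIO_FLAG_IO_REWRITE case below, the pre-existing hdr
 * with the same identity is made anonymous and destroyed, and the
 * hash insert is then retried.)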
6733 */ 6734 if (zio->io_flags & ZIO_FLAG_IO_REWRITE) { 6735 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 6736 panic("bad overwrite, hdr=%p exists=%p", 6737 (void *)hdr, (void *)exists); 6738 ASSERT(zfs_refcount_is_zero( 6739 &exists->b_l1hdr.b_refcnt)); 6740 arc_change_state(arc_anon, exists); 6741 arc_hdr_destroy(exists); 6742 mutex_exit(hash_lock); 6743 exists = buf_hash_insert(hdr, &hash_lock); 6744 ASSERT3P(exists, ==, NULL); 6745 } else if (zio->io_flags & ZIO_FLAG_NOPWRITE) { 6746 /* nopwrite */ 6747 ASSERT(zio->io_prop.zp_nopwrite); 6748 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 6749 panic("bad nopwrite, hdr=%p exists=%p", 6750 (void *)hdr, (void *)exists); 6751 } else { 6752 /* Dedup */ 6753 ASSERT(hdr->b_l1hdr.b_bufcnt == 1); 6754 ASSERT(hdr->b_l1hdr.b_state == arc_anon); 6755 ASSERT(BP_GET_DEDUP(zio->io_bp)); 6756 ASSERT(BP_GET_LEVEL(zio->io_bp) == 0); 6757 } 6758 } 6759 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6760 VERIFY3S(remove_reference(hdr, hdr), >, 0); 6761 /* if it's not anon, we are doing a scrub */ 6762 if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon) 6763 arc_access(hdr, 0, B_FALSE); 6764 mutex_exit(hash_lock); 6765 } else { 6766 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6767 VERIFY3S(remove_reference(hdr, hdr), >, 0); 6768 } 6769 6770 callback->awcb_done(zio, buf, callback->awcb_private); 6771 6772 abd_free(zio->io_abd); 6773 kmem_free(callback, sizeof (arc_write_callback_t)); 6774 } 6775 6776 zio_t * 6777 arc_write(zio_t *pio, spa_t *spa, uint64_t txg, 6778 blkptr_t *bp, arc_buf_t *buf, boolean_t uncached, boolean_t l2arc, 6779 const zio_prop_t *zp, arc_write_done_func_t *ready, 6780 arc_write_done_func_t *children_ready, arc_write_done_func_t *physdone, 6781 arc_write_done_func_t *done, void *private, zio_priority_t priority, 6782 int zio_flags, const zbookmark_phys_t *zb) 6783 { 6784 arc_buf_hdr_t *hdr = buf->b_hdr; 6785 arc_write_callback_t *callback; 6786 zio_t *zio; 6787 zio_prop_t localprop = *zp; 6788 6789 ASSERT3P(ready, !=, NULL); 6790 ASSERT3P(done, !=, NULL); 6791 ASSERT(!HDR_IO_ERROR(hdr)); 6792 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6793 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6794 ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0); 6795 if (uncached) 6796 arc_hdr_set_flags(hdr, ARC_FLAG_UNCACHED); 6797 else if (l2arc) 6798 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 6799 6800 if (ARC_BUF_ENCRYPTED(buf)) { 6801 ASSERT(ARC_BUF_COMPRESSED(buf)); 6802 localprop.zp_encrypt = B_TRUE; 6803 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 6804 localprop.zp_complevel = hdr->b_complevel; 6805 localprop.zp_byteorder = 6806 (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? 
6807 ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER;
6808 memcpy(localprop.zp_salt, hdr->b_crypt_hdr.b_salt,
6809 ZIO_DATA_SALT_LEN);
6810 memcpy(localprop.zp_iv, hdr->b_crypt_hdr.b_iv,
6811 ZIO_DATA_IV_LEN);
6812 memcpy(localprop.zp_mac, hdr->b_crypt_hdr.b_mac,
6813 ZIO_DATA_MAC_LEN);
6814 if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) {
6815 localprop.zp_nopwrite = B_FALSE;
6816 localprop.zp_copies =
6817 MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1);
6818 }
6819 zio_flags |= ZIO_FLAG_RAW;
6820 } else if (ARC_BUF_COMPRESSED(buf)) {
6821 ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf));
6822 localprop.zp_compress = HDR_GET_COMPRESS(hdr);
6823 localprop.zp_complevel = hdr->b_complevel;
6824 zio_flags |= ZIO_FLAG_RAW_COMPRESS;
6825 }
6826 callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP);
6827 callback->awcb_ready = ready;
6828 callback->awcb_children_ready = children_ready;
6829 callback->awcb_physdone = physdone;
6830 callback->awcb_done = done;
6831 callback->awcb_private = private;
6832 callback->awcb_buf = buf;
6833
6834 /*
6835 * The hdr's b_pabd is now stale; free it now. A new data block
6836 * will be allocated when the zio pipeline calls arc_write_ready().
6837 */
6838 if (hdr->b_l1hdr.b_pabd != NULL) {
6839 /*
6840 * If the buf is currently sharing the data block with
6841 * the hdr then we need to break that relationship here.
6842 * The hdr will remain with a NULL data pointer and the
6843 * buf will take sole ownership of the block.
6844 */
6845 if (arc_buf_is_shared(buf)) {
6846 arc_unshare_buf(hdr, buf);
6847 } else {
6848 arc_hdr_free_abd(hdr, B_FALSE);
6849 }
6850 VERIFY3P(buf->b_data, !=, NULL);
6851 }
6852
6853 if (HDR_HAS_RABD(hdr))
6854 arc_hdr_free_abd(hdr, B_TRUE);
6855
6856 if (!(zio_flags & ZIO_FLAG_RAW))
6857 arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF);
6858
6859 ASSERT(!arc_buf_is_shared(buf));
6860 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL);
6861
6862 zio = zio_write(pio, spa, txg, bp,
6863 abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)),
6864 HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready,
6865 (children_ready != NULL) ? arc_write_children_ready : NULL,
6866 arc_write_physdone, arc_write_done, callback,
6867 priority, zio_flags, zb);
6868
6869 return (zio);
6870 }
6871
6872 void
6873 arc_tempreserve_clear(uint64_t reserve)
6874 {
6875 atomic_add_64(&arc_tempreserve, -reserve);
6876 ASSERT((int64_t)arc_tempreserve >= 0);
6877 }
6878
6879 int
6880 arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg)
6881 {
6882 int error;
6883 uint64_t anon_size;
6884
6885 if (!arc_no_grow &&
6886 reserve > arc_c / 4 &&
6887 reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT))
6888 arc_c = MIN(arc_c_max, reserve * 4);
6889
6890 /*
6891 * Throttle when the calculated memory footprint for the TXG
6892 * exceeds the target ARC size.
6893 */
6894 if (reserve > arc_c) {
6895 DMU_TX_STAT_BUMP(dmu_tx_memory_reserve);
6896 return (SET_ERROR(ERESTART));
6897 }
6898
6899 /*
6900 * Don't count loaned bufs as in-flight dirty data to prevent long
6901 * network delays from blocking transactions that are ready to be
6902 * assigned to a txg.
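 * (Loaned buffers are accounted in arc_loaned_bytes and are
 * subtracted from the anonymous totals below.)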
6903 */
6904
6905 /* assert that it has not wrapped around */
6906 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0);
6907
6908 anon_size = MAX((int64_t)
6909 (zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_DATA]) +
6910 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_METADATA]) -
6911 arc_loaned_bytes), 0);
6912
6913 /*
6914 * Writes will almost always require additional memory allocations
6915 * in order to compress/encrypt/etc. the data. We therefore need to
6916 * make sure that there is sufficient available memory for this.
6917 */
6918 error = arc_memory_throttle(spa, reserve, txg);
6919 if (error != 0)
6920 return (error);
6921
6922 /*
6923 * Throttle writes when the amount of dirty data in the cache
6924 * gets too large. We try to keep the cache less than half full
6925 * of dirty blocks so that our sync times don't grow too large.
6926 *
6927 * In the case of one pool being built on another pool, we want
6928 * to make sure we don't end up throttling the lower (backing)
6929 * pool when the upper pool is the majority contributor to dirty
6930 * data. To ensure we make forward progress during throttling, we
6931 * also check the current pool's net dirty data and only throttle
6932 * if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty
6933 * data in the cache.
6934 *
6935 * Note: if two requests come in concurrently, we might let them
6936 * both succeed, when one of them should fail. Not a huge deal.
6937 */
6938 uint64_t total_dirty = reserve + arc_tempreserve + anon_size;
6939 uint64_t spa_dirty_anon = spa_dirty_data(spa);
6940 uint64_t rarc_c = arc_warm ? arc_c : arc_c_max;
6941 if (total_dirty > rarc_c * zfs_arc_dirty_limit_percent / 100 &&
6942 anon_size > rarc_c * zfs_arc_anon_limit_percent / 100 &&
6943 spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) {
6944 #ifdef ZFS_DEBUG
6945 uint64_t meta_esize = zfs_refcount_count(
6946 &arc_anon->arcs_esize[ARC_BUFC_METADATA]);
6947 uint64_t data_esize =
6948 zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]);
6949 dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK "
6950 "anon_data=%lluK tempreserve=%lluK rarc_c=%lluK\n",
6951 (u_longlong_t)arc_tempreserve >> 10,
6952 (u_longlong_t)meta_esize >> 10,
6953 (u_longlong_t)data_esize >> 10,
6954 (u_longlong_t)reserve >> 10,
6955 (u_longlong_t)rarc_c >> 10);
6956 #endif
6957 DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle);
6958 return (SET_ERROR(ERESTART));
6959 }
6960 atomic_add_64(&arc_tempreserve, reserve);
6961 return (0);
6962 }
6963
6964 static void
6965 arc_kstat_update_state(arc_state_t *state, kstat_named_t *size,
6966 kstat_named_t *data, kstat_named_t *metadata,
6967 kstat_named_t *evict_data, kstat_named_t *evict_metadata)
6968 {
6969 data->value.ui64 =
6970 zfs_refcount_count(&state->arcs_size[ARC_BUFC_DATA]);
6971 metadata->value.ui64 =
6972 zfs_refcount_count(&state->arcs_size[ARC_BUFC_METADATA]);
6973 size->value.ui64 = data->value.ui64 + metadata->value.ui64;
6974 evict_data->value.ui64 =
6975 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]);
6976 evict_metadata->value.ui64 =
6977 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]);
6978 }
6979
6980 static int
6981 arc_kstat_update(kstat_t *ksp, int rw)
6982 {
6983 arc_stats_t *as = ksp->ks_data;
6984
6985 if (rw == KSTAT_WRITE)
6986 return (SET_ERROR(EACCES));
6987
6988 as->arcstat_hits.value.ui64 =
6989 wmsum_value(&arc_sums.arcstat_hits);
6990 as->arcstat_iohits.value.ui64 =
6991 wmsum_value(&arc_sums.arcstat_iohits);
6992 as->arcstat_misses.value.ui64 =
6993
wmsum_value(&arc_sums.arcstat_misses); 6994 as->arcstat_demand_data_hits.value.ui64 = 6995 wmsum_value(&arc_sums.arcstat_demand_data_hits); 6996 as->arcstat_demand_data_iohits.value.ui64 = 6997 wmsum_value(&arc_sums.arcstat_demand_data_iohits); 6998 as->arcstat_demand_data_misses.value.ui64 = 6999 wmsum_value(&arc_sums.arcstat_demand_data_misses); 7000 as->arcstat_demand_metadata_hits.value.ui64 = 7001 wmsum_value(&arc_sums.arcstat_demand_metadata_hits); 7002 as->arcstat_demand_metadata_iohits.value.ui64 = 7003 wmsum_value(&arc_sums.arcstat_demand_metadata_iohits); 7004 as->arcstat_demand_metadata_misses.value.ui64 = 7005 wmsum_value(&arc_sums.arcstat_demand_metadata_misses); 7006 as->arcstat_prefetch_data_hits.value.ui64 = 7007 wmsum_value(&arc_sums.arcstat_prefetch_data_hits); 7008 as->arcstat_prefetch_data_iohits.value.ui64 = 7009 wmsum_value(&arc_sums.arcstat_prefetch_data_iohits); 7010 as->arcstat_prefetch_data_misses.value.ui64 = 7011 wmsum_value(&arc_sums.arcstat_prefetch_data_misses); 7012 as->arcstat_prefetch_metadata_hits.value.ui64 = 7013 wmsum_value(&arc_sums.arcstat_prefetch_metadata_hits); 7014 as->arcstat_prefetch_metadata_iohits.value.ui64 = 7015 wmsum_value(&arc_sums.arcstat_prefetch_metadata_iohits); 7016 as->arcstat_prefetch_metadata_misses.value.ui64 = 7017 wmsum_value(&arc_sums.arcstat_prefetch_metadata_misses); 7018 as->arcstat_mru_hits.value.ui64 = 7019 wmsum_value(&arc_sums.arcstat_mru_hits); 7020 as->arcstat_mru_ghost_hits.value.ui64 = 7021 wmsum_value(&arc_sums.arcstat_mru_ghost_hits); 7022 as->arcstat_mfu_hits.value.ui64 = 7023 wmsum_value(&arc_sums.arcstat_mfu_hits); 7024 as->arcstat_mfu_ghost_hits.value.ui64 = 7025 wmsum_value(&arc_sums.arcstat_mfu_ghost_hits); 7026 as->arcstat_uncached_hits.value.ui64 = 7027 wmsum_value(&arc_sums.arcstat_uncached_hits); 7028 as->arcstat_deleted.value.ui64 = 7029 wmsum_value(&arc_sums.arcstat_deleted); 7030 as->arcstat_mutex_miss.value.ui64 = 7031 wmsum_value(&arc_sums.arcstat_mutex_miss); 7032 as->arcstat_access_skip.value.ui64 = 7033 wmsum_value(&arc_sums.arcstat_access_skip); 7034 as->arcstat_evict_skip.value.ui64 = 7035 wmsum_value(&arc_sums.arcstat_evict_skip); 7036 as->arcstat_evict_not_enough.value.ui64 = 7037 wmsum_value(&arc_sums.arcstat_evict_not_enough); 7038 as->arcstat_evict_l2_cached.value.ui64 = 7039 wmsum_value(&arc_sums.arcstat_evict_l2_cached); 7040 as->arcstat_evict_l2_eligible.value.ui64 = 7041 wmsum_value(&arc_sums.arcstat_evict_l2_eligible); 7042 as->arcstat_evict_l2_eligible_mfu.value.ui64 = 7043 wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mfu); 7044 as->arcstat_evict_l2_eligible_mru.value.ui64 = 7045 wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mru); 7046 as->arcstat_evict_l2_ineligible.value.ui64 = 7047 wmsum_value(&arc_sums.arcstat_evict_l2_ineligible); 7048 as->arcstat_evict_l2_skip.value.ui64 = 7049 wmsum_value(&arc_sums.arcstat_evict_l2_skip); 7050 as->arcstat_hash_collisions.value.ui64 = 7051 wmsum_value(&arc_sums.arcstat_hash_collisions); 7052 as->arcstat_hash_chains.value.ui64 = 7053 wmsum_value(&arc_sums.arcstat_hash_chains); 7054 as->arcstat_size.value.ui64 = 7055 aggsum_value(&arc_sums.arcstat_size); 7056 as->arcstat_compressed_size.value.ui64 = 7057 wmsum_value(&arc_sums.arcstat_compressed_size); 7058 as->arcstat_uncompressed_size.value.ui64 = 7059 wmsum_value(&arc_sums.arcstat_uncompressed_size); 7060 as->arcstat_overhead_size.value.ui64 = 7061 wmsum_value(&arc_sums.arcstat_overhead_size); 7062 as->arcstat_hdr_size.value.ui64 = 7063 wmsum_value(&arc_sums.arcstat_hdr_size); 7064 
as->arcstat_data_size.value.ui64 = 7065 wmsum_value(&arc_sums.arcstat_data_size); 7066 as->arcstat_metadata_size.value.ui64 = 7067 wmsum_value(&arc_sums.arcstat_metadata_size); 7068 as->arcstat_dbuf_size.value.ui64 = 7069 wmsum_value(&arc_sums.arcstat_dbuf_size); 7070 #if defined(COMPAT_FREEBSD11) 7071 as->arcstat_other_size.value.ui64 = 7072 wmsum_value(&arc_sums.arcstat_bonus_size) + 7073 wmsum_value(&arc_sums.arcstat_dnode_size) + 7074 wmsum_value(&arc_sums.arcstat_dbuf_size); 7075 #endif 7076 7077 arc_kstat_update_state(arc_anon, 7078 &as->arcstat_anon_size, 7079 &as->arcstat_anon_data, 7080 &as->arcstat_anon_metadata, 7081 &as->arcstat_anon_evictable_data, 7082 &as->arcstat_anon_evictable_metadata); 7083 arc_kstat_update_state(arc_mru, 7084 &as->arcstat_mru_size, 7085 &as->arcstat_mru_data, 7086 &as->arcstat_mru_metadata, 7087 &as->arcstat_mru_evictable_data, 7088 &as->arcstat_mru_evictable_metadata); 7089 arc_kstat_update_state(arc_mru_ghost, 7090 &as->arcstat_mru_ghost_size, 7091 &as->arcstat_mru_ghost_data, 7092 &as->arcstat_mru_ghost_metadata, 7093 &as->arcstat_mru_ghost_evictable_data, 7094 &as->arcstat_mru_ghost_evictable_metadata); 7095 arc_kstat_update_state(arc_mfu, 7096 &as->arcstat_mfu_size, 7097 &as->arcstat_mfu_data, 7098 &as->arcstat_mfu_metadata, 7099 &as->arcstat_mfu_evictable_data, 7100 &as->arcstat_mfu_evictable_metadata); 7101 arc_kstat_update_state(arc_mfu_ghost, 7102 &as->arcstat_mfu_ghost_size, 7103 &as->arcstat_mfu_ghost_data, 7104 &as->arcstat_mfu_ghost_metadata, 7105 &as->arcstat_mfu_ghost_evictable_data, 7106 &as->arcstat_mfu_ghost_evictable_metadata); 7107 arc_kstat_update_state(arc_uncached, 7108 &as->arcstat_uncached_size, 7109 &as->arcstat_uncached_data, 7110 &as->arcstat_uncached_metadata, 7111 &as->arcstat_uncached_evictable_data, 7112 &as->arcstat_uncached_evictable_metadata); 7113 7114 as->arcstat_dnode_size.value.ui64 = 7115 wmsum_value(&arc_sums.arcstat_dnode_size); 7116 as->arcstat_bonus_size.value.ui64 = 7117 wmsum_value(&arc_sums.arcstat_bonus_size); 7118 as->arcstat_l2_hits.value.ui64 = 7119 wmsum_value(&arc_sums.arcstat_l2_hits); 7120 as->arcstat_l2_misses.value.ui64 = 7121 wmsum_value(&arc_sums.arcstat_l2_misses); 7122 as->arcstat_l2_prefetch_asize.value.ui64 = 7123 wmsum_value(&arc_sums.arcstat_l2_prefetch_asize); 7124 as->arcstat_l2_mru_asize.value.ui64 = 7125 wmsum_value(&arc_sums.arcstat_l2_mru_asize); 7126 as->arcstat_l2_mfu_asize.value.ui64 = 7127 wmsum_value(&arc_sums.arcstat_l2_mfu_asize); 7128 as->arcstat_l2_bufc_data_asize.value.ui64 = 7129 wmsum_value(&arc_sums.arcstat_l2_bufc_data_asize); 7130 as->arcstat_l2_bufc_metadata_asize.value.ui64 = 7131 wmsum_value(&arc_sums.arcstat_l2_bufc_metadata_asize); 7132 as->arcstat_l2_feeds.value.ui64 = 7133 wmsum_value(&arc_sums.arcstat_l2_feeds); 7134 as->arcstat_l2_rw_clash.value.ui64 = 7135 wmsum_value(&arc_sums.arcstat_l2_rw_clash); 7136 as->arcstat_l2_read_bytes.value.ui64 = 7137 wmsum_value(&arc_sums.arcstat_l2_read_bytes); 7138 as->arcstat_l2_write_bytes.value.ui64 = 7139 wmsum_value(&arc_sums.arcstat_l2_write_bytes); 7140 as->arcstat_l2_writes_sent.value.ui64 = 7141 wmsum_value(&arc_sums.arcstat_l2_writes_sent); 7142 as->arcstat_l2_writes_done.value.ui64 = 7143 wmsum_value(&arc_sums.arcstat_l2_writes_done); 7144 as->arcstat_l2_writes_error.value.ui64 = 7145 wmsum_value(&arc_sums.arcstat_l2_writes_error); 7146 as->arcstat_l2_writes_lock_retry.value.ui64 = 7147 wmsum_value(&arc_sums.arcstat_l2_writes_lock_retry); 7148 as->arcstat_l2_evict_lock_retry.value.ui64 = 7149 
wmsum_value(&arc_sums.arcstat_l2_evict_lock_retry); 7150 as->arcstat_l2_evict_reading.value.ui64 = 7151 wmsum_value(&arc_sums.arcstat_l2_evict_reading); 7152 as->arcstat_l2_evict_l1cached.value.ui64 = 7153 wmsum_value(&arc_sums.arcstat_l2_evict_l1cached); 7154 as->arcstat_l2_free_on_write.value.ui64 = 7155 wmsum_value(&arc_sums.arcstat_l2_free_on_write); 7156 as->arcstat_l2_abort_lowmem.value.ui64 = 7157 wmsum_value(&arc_sums.arcstat_l2_abort_lowmem); 7158 as->arcstat_l2_cksum_bad.value.ui64 = 7159 wmsum_value(&arc_sums.arcstat_l2_cksum_bad); 7160 as->arcstat_l2_io_error.value.ui64 = 7161 wmsum_value(&arc_sums.arcstat_l2_io_error); 7162 as->arcstat_l2_lsize.value.ui64 = 7163 wmsum_value(&arc_sums.arcstat_l2_lsize); 7164 as->arcstat_l2_psize.value.ui64 = 7165 wmsum_value(&arc_sums.arcstat_l2_psize); 7166 as->arcstat_l2_hdr_size.value.ui64 = 7167 aggsum_value(&arc_sums.arcstat_l2_hdr_size); 7168 as->arcstat_l2_log_blk_writes.value.ui64 = 7169 wmsum_value(&arc_sums.arcstat_l2_log_blk_writes); 7170 as->arcstat_l2_log_blk_asize.value.ui64 = 7171 wmsum_value(&arc_sums.arcstat_l2_log_blk_asize); 7172 as->arcstat_l2_log_blk_count.value.ui64 = 7173 wmsum_value(&arc_sums.arcstat_l2_log_blk_count); 7174 as->arcstat_l2_rebuild_success.value.ui64 = 7175 wmsum_value(&arc_sums.arcstat_l2_rebuild_success); 7176 as->arcstat_l2_rebuild_abort_unsupported.value.ui64 = 7177 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_unsupported); 7178 as->arcstat_l2_rebuild_abort_io_errors.value.ui64 = 7179 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_io_errors); 7180 as->arcstat_l2_rebuild_abort_dh_errors.value.ui64 = 7181 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_dh_errors); 7182 as->arcstat_l2_rebuild_abort_cksum_lb_errors.value.ui64 = 7183 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors); 7184 as->arcstat_l2_rebuild_abort_lowmem.value.ui64 = 7185 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_lowmem); 7186 as->arcstat_l2_rebuild_size.value.ui64 = 7187 wmsum_value(&arc_sums.arcstat_l2_rebuild_size); 7188 as->arcstat_l2_rebuild_asize.value.ui64 = 7189 wmsum_value(&arc_sums.arcstat_l2_rebuild_asize); 7190 as->arcstat_l2_rebuild_bufs.value.ui64 = 7191 wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs); 7192 as->arcstat_l2_rebuild_bufs_precached.value.ui64 = 7193 wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs_precached); 7194 as->arcstat_l2_rebuild_log_blks.value.ui64 = 7195 wmsum_value(&arc_sums.arcstat_l2_rebuild_log_blks); 7196 as->arcstat_memory_throttle_count.value.ui64 = 7197 wmsum_value(&arc_sums.arcstat_memory_throttle_count); 7198 as->arcstat_memory_direct_count.value.ui64 = 7199 wmsum_value(&arc_sums.arcstat_memory_direct_count); 7200 as->arcstat_memory_indirect_count.value.ui64 = 7201 wmsum_value(&arc_sums.arcstat_memory_indirect_count); 7202 7203 as->arcstat_memory_all_bytes.value.ui64 = 7204 arc_all_memory(); 7205 as->arcstat_memory_free_bytes.value.ui64 = 7206 arc_free_memory(); 7207 as->arcstat_memory_available_bytes.value.i64 = 7208 arc_available_memory(); 7209 7210 as->arcstat_prune.value.ui64 = 7211 wmsum_value(&arc_sums.arcstat_prune); 7212 as->arcstat_meta_used.value.ui64 = 7213 wmsum_value(&arc_sums.arcstat_meta_used); 7214 as->arcstat_async_upgrade_sync.value.ui64 = 7215 wmsum_value(&arc_sums.arcstat_async_upgrade_sync); 7216 as->arcstat_predictive_prefetch.value.ui64 = 7217 wmsum_value(&arc_sums.arcstat_predictive_prefetch); 7218 as->arcstat_demand_hit_predictive_prefetch.value.ui64 = 7219 wmsum_value(&arc_sums.arcstat_demand_hit_predictive_prefetch); 7220 
as->arcstat_demand_iohit_predictive_prefetch.value.ui64 =
7221 wmsum_value(&arc_sums.arcstat_demand_iohit_predictive_prefetch);
7222 as->arcstat_prescient_prefetch.value.ui64 =
7223 wmsum_value(&arc_sums.arcstat_prescient_prefetch);
7224 as->arcstat_demand_hit_prescient_prefetch.value.ui64 =
7225 wmsum_value(&arc_sums.arcstat_demand_hit_prescient_prefetch);
7226 as->arcstat_demand_iohit_prescient_prefetch.value.ui64 =
7227 wmsum_value(&arc_sums.arcstat_demand_iohit_prescient_prefetch);
7228 as->arcstat_raw_size.value.ui64 =
7229 wmsum_value(&arc_sums.arcstat_raw_size);
7230 as->arcstat_cached_only_in_progress.value.ui64 =
7231 wmsum_value(&arc_sums.arcstat_cached_only_in_progress);
7232 as->arcstat_abd_chunk_waste_size.value.ui64 =
7233 wmsum_value(&arc_sums.arcstat_abd_chunk_waste_size);
7234
7235 return (0);
7236 }
7237
7238 /*
7239 * This function *must* return indices evenly distributed between all
7240 * sublists of the multilist. This is needed due to how the ARC eviction
7241 * code is laid out; arc_evict_state() assumes ARC buffers are evenly
7242 * distributed between all sublists and uses this assumption when
7243 * deciding which sublist to evict from and how much to evict from it.
7244 */
7245 static unsigned int
7246 arc_state_multilist_index_func(multilist_t *ml, void *obj)
7247 {
7248 arc_buf_hdr_t *hdr = obj;
7249
7250 /*
7251 * We rely on b_dva to generate evenly distributed index
7252 * numbers using buf_hash below. So, as an added precaution,
7253 * let's make sure we never add empty buffers to the arc lists.
7254 */
7255 ASSERT(!HDR_EMPTY(hdr));
7256
7257 /*
7258 * The assumption here is that the hash value for a given
7259 * arc_buf_hdr_t will remain constant throughout its lifetime
7260 * (i.e. its b_spa, b_dva, and b_birth fields don't change).
7261 * Thus, we don't need to store the header's sublist index
7262 * on insertion, as this index can be recalculated on removal.
7263 *
7264 * Also, the low-order bits of the hash value are thought to be
7265 * distributed evenly. Otherwise, in the case that the multilist
7266 * has a power of two number of sublists, each sublist's usage
7267 * would not be evenly distributed. In this context full 64-bit
7268 * division would be a waste of time, so we limit it to 32 bits.
7269 */
7270 return ((unsigned int)buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) %
7271 multilist_get_num_sublists(ml));
7272 }
7273
7274 static unsigned int
7275 arc_state_l2c_multilist_index_func(multilist_t *ml, void *obj)
7276 {
7277 panic("Header %p insert into arc_l2c_only %p", obj, ml);
7278 }
7279
7280 #define WARN_IF_TUNING_IGNORED(tuning, value, do_warn) do { \
7281 if ((do_warn) && (tuning) && ((tuning) != (value))) { \
7282 cmn_err(CE_WARN, \
7283 "ignoring tunable %s (using %llu instead)", \
7284 (#tuning), (u_longlong_t)(value)); \
7285 } \
7286 } while (0)
7287
7288 /*
7289 * Called during module initialization and periodically thereafter to
7290 * apply reasonable changes to the exposed performance tunings. Can also be
7291 * called explicitly by param_set_arc_*() functions when ARC tunables are
7292 * updated manually. Non-zero zfs_* values which differ from the currently set
7293 * values will be applied.
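 * (Most of these tunables treat a value of zero as "unset" and leave
 * the current setting unchanged.)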
7294 */
7295 void
7296 arc_tuning_update(boolean_t verbose)
7297 {
7298 uint64_t allmem = arc_all_memory();
7299
7300 /* Valid range: 32M - <arc_c_max> */
7301 if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) &&
7302 (zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) &&
7303 (zfs_arc_min <= arc_c_max)) {
7304 arc_c_min = zfs_arc_min;
7305 arc_c = MAX(arc_c, arc_c_min);
7306 }
7307 WARN_IF_TUNING_IGNORED(zfs_arc_min, arc_c_min, verbose);
7308
7309 /* Valid range: 64M - <all physical memory> */
7310 if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) &&
7311 (zfs_arc_max >= MIN_ARC_MAX) && (zfs_arc_max < allmem) &&
7312 (zfs_arc_max > arc_c_min)) {
7313 arc_c_max = zfs_arc_max;
7314 arc_c = MIN(arc_c, arc_c_max);
7315 if (arc_dnode_limit > arc_c_max)
7316 arc_dnode_limit = arc_c_max;
7317 }
7318 WARN_IF_TUNING_IGNORED(zfs_arc_max, arc_c_max, verbose);
7319
7320 /* Valid range: 0 - <all physical memory> */
7321 arc_dnode_limit = zfs_arc_dnode_limit ? zfs_arc_dnode_limit :
7322 MIN(zfs_arc_dnode_limit_percent, 100) * arc_c_max / 100;
7323 WARN_IF_TUNING_IGNORED(zfs_arc_dnode_limit, arc_dnode_limit, verbose);
7324
7325 /* Valid range: 1 - N */
7326 if (zfs_arc_grow_retry)
7327 arc_grow_retry = zfs_arc_grow_retry;
7328
7329 /* Valid range: 1 - N */
7330 if (zfs_arc_shrink_shift) {
7331 arc_shrink_shift = zfs_arc_shrink_shift;
7332 arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift - 1);
7333 }
7334
7335 /* Valid range: 1 - N ms */
7336 if (zfs_arc_min_prefetch_ms)
7337 arc_min_prefetch_ms = zfs_arc_min_prefetch_ms;
7338
7339 /* Valid range: 1 - N ms */
7340 if (zfs_arc_min_prescient_prefetch_ms) {
7341 arc_min_prescient_prefetch_ms =
7342 zfs_arc_min_prescient_prefetch_ms;
7343 }
7344
7345 /* Valid range: 0 - 100 */
7346 if (zfs_arc_lotsfree_percent <= 100)
7347 arc_lotsfree_percent = zfs_arc_lotsfree_percent;
7348 WARN_IF_TUNING_IGNORED(zfs_arc_lotsfree_percent, arc_lotsfree_percent,
7349 verbose);
7350
7351 /* Valid range: 0 - <all physical memory> */
7352 if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free))
7353 arc_sys_free = MIN(zfs_arc_sys_free, allmem);
7354 WARN_IF_TUNING_IGNORED(zfs_arc_sys_free, arc_sys_free, verbose);
7355 }
7356
7357 static void
7358 arc_state_multilist_init(multilist_t *ml,
7359 multilist_sublist_index_func_t *index_func, int *maxcountp)
7360 {
7361 multilist_create(ml, sizeof (arc_buf_hdr_t),
7362 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), index_func);
7363 *maxcountp = MAX(*maxcountp, multilist_get_num_sublists(ml));
7364 }
7365
7366 static void
7367 arc_state_init(void)
7368 {
7369 int num_sublists = 0;
7370
7371 arc_state_multilist_init(&arc_mru->arcs_list[ARC_BUFC_METADATA],
7372 arc_state_multilist_index_func, &num_sublists);
7373 arc_state_multilist_init(&arc_mru->arcs_list[ARC_BUFC_DATA],
7374 arc_state_multilist_index_func, &num_sublists);
7375 arc_state_multilist_init(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA],
7376 arc_state_multilist_index_func, &num_sublists);
7377 arc_state_multilist_init(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA],
7378 arc_state_multilist_index_func, &num_sublists);
7379 arc_state_multilist_init(&arc_mfu->arcs_list[ARC_BUFC_METADATA],
7380 arc_state_multilist_index_func, &num_sublists);
7381 arc_state_multilist_init(&arc_mfu->arcs_list[ARC_BUFC_DATA],
7382 arc_state_multilist_index_func, &num_sublists);
7383 arc_state_multilist_init(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA],
7384 arc_state_multilist_index_func, &num_sublists);
7385 arc_state_multilist_init(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA],
7386
arc_state_multilist_index_func, &num_sublists); 7387 arc_state_multilist_init(&arc_uncached->arcs_list[ARC_BUFC_METADATA], 7388 arc_state_multilist_index_func, &num_sublists); 7389 arc_state_multilist_init(&arc_uncached->arcs_list[ARC_BUFC_DATA], 7390 arc_state_multilist_index_func, &num_sublists); 7391 7392 /* 7393 * L2 headers should never be on the L2 state list since they don't 7394 * have L1 headers allocated. Special index function asserts that. 7395 */ 7396 arc_state_multilist_init(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA], 7397 arc_state_l2c_multilist_index_func, &num_sublists); 7398 arc_state_multilist_init(&arc_l2c_only->arcs_list[ARC_BUFC_DATA], 7399 arc_state_l2c_multilist_index_func, &num_sublists); 7400 7401 /* 7402 * Keep track of the number of markers needed to reclaim buffers from 7403 * any ARC state. The markers will be pre-allocated so as to minimize 7404 * the number of memory allocations performed by the eviction thread. 7405 */ 7406 arc_state_evict_marker_count = num_sublists; 7407 7408 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7409 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7410 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 7411 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 7412 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 7413 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 7414 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 7415 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 7416 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 7417 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 7418 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 7419 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 7420 zfs_refcount_create(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]); 7421 zfs_refcount_create(&arc_uncached->arcs_esize[ARC_BUFC_DATA]); 7422 7423 zfs_refcount_create(&arc_anon->arcs_size[ARC_BUFC_DATA]); 7424 zfs_refcount_create(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 7425 zfs_refcount_create(&arc_mru->arcs_size[ARC_BUFC_DATA]); 7426 zfs_refcount_create(&arc_mru->arcs_size[ARC_BUFC_METADATA]); 7427 zfs_refcount_create(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]); 7428 zfs_refcount_create(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]); 7429 zfs_refcount_create(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 7430 zfs_refcount_create(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 7431 zfs_refcount_create(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]); 7432 zfs_refcount_create(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]); 7433 zfs_refcount_create(&arc_l2c_only->arcs_size[ARC_BUFC_DATA]); 7434 zfs_refcount_create(&arc_l2c_only->arcs_size[ARC_BUFC_METADATA]); 7435 zfs_refcount_create(&arc_uncached->arcs_size[ARC_BUFC_DATA]); 7436 zfs_refcount_create(&arc_uncached->arcs_size[ARC_BUFC_METADATA]); 7437 7438 wmsum_init(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA], 0); 7439 wmsum_init(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA], 0); 7440 wmsum_init(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA], 0); 7441 wmsum_init(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA], 0); 7442 7443 wmsum_init(&arc_sums.arcstat_hits, 0); 7444 wmsum_init(&arc_sums.arcstat_iohits, 0); 7445 wmsum_init(&arc_sums.arcstat_misses, 0); 7446 wmsum_init(&arc_sums.arcstat_demand_data_hits, 0); 7447 wmsum_init(&arc_sums.arcstat_demand_data_iohits, 0); 7448 wmsum_init(&arc_sums.arcstat_demand_data_misses, 0); 7449 wmsum_init(&arc_sums.arcstat_demand_metadata_hits, 
0); 7450 wmsum_init(&arc_sums.arcstat_demand_metadata_iohits, 0); 7451 wmsum_init(&arc_sums.arcstat_demand_metadata_misses, 0); 7452 wmsum_init(&arc_sums.arcstat_prefetch_data_hits, 0); 7453 wmsum_init(&arc_sums.arcstat_prefetch_data_iohits, 0); 7454 wmsum_init(&arc_sums.arcstat_prefetch_data_misses, 0); 7455 wmsum_init(&arc_sums.arcstat_prefetch_metadata_hits, 0); 7456 wmsum_init(&arc_sums.arcstat_prefetch_metadata_iohits, 0); 7457 wmsum_init(&arc_sums.arcstat_prefetch_metadata_misses, 0); 7458 wmsum_init(&arc_sums.arcstat_mru_hits, 0); 7459 wmsum_init(&arc_sums.arcstat_mru_ghost_hits, 0); 7460 wmsum_init(&arc_sums.arcstat_mfu_hits, 0); 7461 wmsum_init(&arc_sums.arcstat_mfu_ghost_hits, 0); 7462 wmsum_init(&arc_sums.arcstat_uncached_hits, 0); 7463 wmsum_init(&arc_sums.arcstat_deleted, 0); 7464 wmsum_init(&arc_sums.arcstat_mutex_miss, 0); 7465 wmsum_init(&arc_sums.arcstat_access_skip, 0); 7466 wmsum_init(&arc_sums.arcstat_evict_skip, 0); 7467 wmsum_init(&arc_sums.arcstat_evict_not_enough, 0); 7468 wmsum_init(&arc_sums.arcstat_evict_l2_cached, 0); 7469 wmsum_init(&arc_sums.arcstat_evict_l2_eligible, 0); 7470 wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mfu, 0); 7471 wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mru, 0); 7472 wmsum_init(&arc_sums.arcstat_evict_l2_ineligible, 0); 7473 wmsum_init(&arc_sums.arcstat_evict_l2_skip, 0); 7474 wmsum_init(&arc_sums.arcstat_hash_collisions, 0); 7475 wmsum_init(&arc_sums.arcstat_hash_chains, 0); 7476 aggsum_init(&arc_sums.arcstat_size, 0); 7477 wmsum_init(&arc_sums.arcstat_compressed_size, 0); 7478 wmsum_init(&arc_sums.arcstat_uncompressed_size, 0); 7479 wmsum_init(&arc_sums.arcstat_overhead_size, 0); 7480 wmsum_init(&arc_sums.arcstat_hdr_size, 0); 7481 wmsum_init(&arc_sums.arcstat_data_size, 0); 7482 wmsum_init(&arc_sums.arcstat_metadata_size, 0); 7483 wmsum_init(&arc_sums.arcstat_dbuf_size, 0); 7484 wmsum_init(&arc_sums.arcstat_dnode_size, 0); 7485 wmsum_init(&arc_sums.arcstat_bonus_size, 0); 7486 wmsum_init(&arc_sums.arcstat_l2_hits, 0); 7487 wmsum_init(&arc_sums.arcstat_l2_misses, 0); 7488 wmsum_init(&arc_sums.arcstat_l2_prefetch_asize, 0); 7489 wmsum_init(&arc_sums.arcstat_l2_mru_asize, 0); 7490 wmsum_init(&arc_sums.arcstat_l2_mfu_asize, 0); 7491 wmsum_init(&arc_sums.arcstat_l2_bufc_data_asize, 0); 7492 wmsum_init(&arc_sums.arcstat_l2_bufc_metadata_asize, 0); 7493 wmsum_init(&arc_sums.arcstat_l2_feeds, 0); 7494 wmsum_init(&arc_sums.arcstat_l2_rw_clash, 0); 7495 wmsum_init(&arc_sums.arcstat_l2_read_bytes, 0); 7496 wmsum_init(&arc_sums.arcstat_l2_write_bytes, 0); 7497 wmsum_init(&arc_sums.arcstat_l2_writes_sent, 0); 7498 wmsum_init(&arc_sums.arcstat_l2_writes_done, 0); 7499 wmsum_init(&arc_sums.arcstat_l2_writes_error, 0); 7500 wmsum_init(&arc_sums.arcstat_l2_writes_lock_retry, 0); 7501 wmsum_init(&arc_sums.arcstat_l2_evict_lock_retry, 0); 7502 wmsum_init(&arc_sums.arcstat_l2_evict_reading, 0); 7503 wmsum_init(&arc_sums.arcstat_l2_evict_l1cached, 0); 7504 wmsum_init(&arc_sums.arcstat_l2_free_on_write, 0); 7505 wmsum_init(&arc_sums.arcstat_l2_abort_lowmem, 0); 7506 wmsum_init(&arc_sums.arcstat_l2_cksum_bad, 0); 7507 wmsum_init(&arc_sums.arcstat_l2_io_error, 0); 7508 wmsum_init(&arc_sums.arcstat_l2_lsize, 0); 7509 wmsum_init(&arc_sums.arcstat_l2_psize, 0); 7510 aggsum_init(&arc_sums.arcstat_l2_hdr_size, 0); 7511 wmsum_init(&arc_sums.arcstat_l2_log_blk_writes, 0); 7512 wmsum_init(&arc_sums.arcstat_l2_log_blk_asize, 0); 7513 wmsum_init(&arc_sums.arcstat_l2_log_blk_count, 0); 7514 wmsum_init(&arc_sums.arcstat_l2_rebuild_success, 0); 7515 
wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_unsupported, 0); 7516 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_io_errors, 0); 7517 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_dh_errors, 0); 7518 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors, 0); 7519 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_lowmem, 0); 7520 wmsum_init(&arc_sums.arcstat_l2_rebuild_size, 0); 7521 wmsum_init(&arc_sums.arcstat_l2_rebuild_asize, 0); 7522 wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs, 0); 7523 wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs_precached, 0); 7524 wmsum_init(&arc_sums.arcstat_l2_rebuild_log_blks, 0); 7525 wmsum_init(&arc_sums.arcstat_memory_throttle_count, 0); 7526 wmsum_init(&arc_sums.arcstat_memory_direct_count, 0); 7527 wmsum_init(&arc_sums.arcstat_memory_indirect_count, 0); 7528 wmsum_init(&arc_sums.arcstat_prune, 0); 7529 wmsum_init(&arc_sums.arcstat_meta_used, 0); 7530 wmsum_init(&arc_sums.arcstat_async_upgrade_sync, 0); 7531 wmsum_init(&arc_sums.arcstat_predictive_prefetch, 0); 7532 wmsum_init(&arc_sums.arcstat_demand_hit_predictive_prefetch, 0); 7533 wmsum_init(&arc_sums.arcstat_demand_iohit_predictive_prefetch, 0); 7534 wmsum_init(&arc_sums.arcstat_prescient_prefetch, 0); 7535 wmsum_init(&arc_sums.arcstat_demand_hit_prescient_prefetch, 0); 7536 wmsum_init(&arc_sums.arcstat_demand_iohit_prescient_prefetch, 0); 7537 wmsum_init(&arc_sums.arcstat_raw_size, 0); 7538 wmsum_init(&arc_sums.arcstat_cached_only_in_progress, 0); 7539 wmsum_init(&arc_sums.arcstat_abd_chunk_waste_size, 0); 7540 7541 arc_anon->arcs_state = ARC_STATE_ANON; 7542 arc_mru->arcs_state = ARC_STATE_MRU; 7543 arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST; 7544 arc_mfu->arcs_state = ARC_STATE_MFU; 7545 arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST; 7546 arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY; 7547 arc_uncached->arcs_state = ARC_STATE_UNCACHED; 7548 } 7549 7550 static void 7551 arc_state_fini(void) 7552 { 7553 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7554 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7555 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 7556 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 7557 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 7558 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 7559 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 7560 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 7561 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 7562 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 7563 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 7564 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 7565 zfs_refcount_destroy(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]); 7566 zfs_refcount_destroy(&arc_uncached->arcs_esize[ARC_BUFC_DATA]); 7567 7568 zfs_refcount_destroy(&arc_anon->arcs_size[ARC_BUFC_DATA]); 7569 zfs_refcount_destroy(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 7570 zfs_refcount_destroy(&arc_mru->arcs_size[ARC_BUFC_DATA]); 7571 zfs_refcount_destroy(&arc_mru->arcs_size[ARC_BUFC_METADATA]); 7572 zfs_refcount_destroy(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]); 7573 zfs_refcount_destroy(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]); 7574 zfs_refcount_destroy(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 7575 zfs_refcount_destroy(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 7576 zfs_refcount_destroy(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]); 7577 
zfs_refcount_destroy(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]); 7578 zfs_refcount_destroy(&arc_l2c_only->arcs_size[ARC_BUFC_DATA]); 7579 zfs_refcount_destroy(&arc_l2c_only->arcs_size[ARC_BUFC_METADATA]); 7580 zfs_refcount_destroy(&arc_uncached->arcs_size[ARC_BUFC_DATA]); 7581 zfs_refcount_destroy(&arc_uncached->arcs_size[ARC_BUFC_METADATA]); 7582 7583 multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_METADATA]); 7584 multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]); 7585 multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_METADATA]); 7586 multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]); 7587 multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_DATA]); 7588 multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA]); 7589 multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_DATA]); 7590 multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]); 7591 multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA]); 7592 multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_DATA]); 7593 multilist_destroy(&arc_uncached->arcs_list[ARC_BUFC_METADATA]); 7594 multilist_destroy(&arc_uncached->arcs_list[ARC_BUFC_DATA]); 7595 7596 wmsum_fini(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA]); 7597 wmsum_fini(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA]); 7598 wmsum_fini(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA]); 7599 wmsum_fini(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA]); 7600 7601 wmsum_fini(&arc_sums.arcstat_hits); 7602 wmsum_fini(&arc_sums.arcstat_iohits); 7603 wmsum_fini(&arc_sums.arcstat_misses); 7604 wmsum_fini(&arc_sums.arcstat_demand_data_hits); 7605 wmsum_fini(&arc_sums.arcstat_demand_data_iohits); 7606 wmsum_fini(&arc_sums.arcstat_demand_data_misses); 7607 wmsum_fini(&arc_sums.arcstat_demand_metadata_hits); 7608 wmsum_fini(&arc_sums.arcstat_demand_metadata_iohits); 7609 wmsum_fini(&arc_sums.arcstat_demand_metadata_misses); 7610 wmsum_fini(&arc_sums.arcstat_prefetch_data_hits); 7611 wmsum_fini(&arc_sums.arcstat_prefetch_data_iohits); 7612 wmsum_fini(&arc_sums.arcstat_prefetch_data_misses); 7613 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_hits); 7614 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_iohits); 7615 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_misses); 7616 wmsum_fini(&arc_sums.arcstat_mru_hits); 7617 wmsum_fini(&arc_sums.arcstat_mru_ghost_hits); 7618 wmsum_fini(&arc_sums.arcstat_mfu_hits); 7619 wmsum_fini(&arc_sums.arcstat_mfu_ghost_hits); 7620 wmsum_fini(&arc_sums.arcstat_uncached_hits); 7621 wmsum_fini(&arc_sums.arcstat_deleted); 7622 wmsum_fini(&arc_sums.arcstat_mutex_miss); 7623 wmsum_fini(&arc_sums.arcstat_access_skip); 7624 wmsum_fini(&arc_sums.arcstat_evict_skip); 7625 wmsum_fini(&arc_sums.arcstat_evict_not_enough); 7626 wmsum_fini(&arc_sums.arcstat_evict_l2_cached); 7627 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible); 7628 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mfu); 7629 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mru); 7630 wmsum_fini(&arc_sums.arcstat_evict_l2_ineligible); 7631 wmsum_fini(&arc_sums.arcstat_evict_l2_skip); 7632 wmsum_fini(&arc_sums.arcstat_hash_collisions); 7633 wmsum_fini(&arc_sums.arcstat_hash_chains); 7634 aggsum_fini(&arc_sums.arcstat_size); 7635 wmsum_fini(&arc_sums.arcstat_compressed_size); 7636 wmsum_fini(&arc_sums.arcstat_uncompressed_size); 7637 wmsum_fini(&arc_sums.arcstat_overhead_size); 7638 wmsum_fini(&arc_sums.arcstat_hdr_size); 7639 wmsum_fini(&arc_sums.arcstat_data_size); 7640 wmsum_fini(&arc_sums.arcstat_metadata_size); 7641 wmsum_fini(&arc_sums.arcstat_dbuf_size); 7642 wmsum_fini(&arc_sums.arcstat_dnode_size); 
7643 wmsum_fini(&arc_sums.arcstat_bonus_size); 7644 wmsum_fini(&arc_sums.arcstat_l2_hits); 7645 wmsum_fini(&arc_sums.arcstat_l2_misses); 7646 wmsum_fini(&arc_sums.arcstat_l2_prefetch_asize); 7647 wmsum_fini(&arc_sums.arcstat_l2_mru_asize); 7648 wmsum_fini(&arc_sums.arcstat_l2_mfu_asize); 7649 wmsum_fini(&arc_sums.arcstat_l2_bufc_data_asize); 7650 wmsum_fini(&arc_sums.arcstat_l2_bufc_metadata_asize); 7651 wmsum_fini(&arc_sums.arcstat_l2_feeds); 7652 wmsum_fini(&arc_sums.arcstat_l2_rw_clash); 7653 wmsum_fini(&arc_sums.arcstat_l2_read_bytes); 7654 wmsum_fini(&arc_sums.arcstat_l2_write_bytes); 7655 wmsum_fini(&arc_sums.arcstat_l2_writes_sent); 7656 wmsum_fini(&arc_sums.arcstat_l2_writes_done); 7657 wmsum_fini(&arc_sums.arcstat_l2_writes_error); 7658 wmsum_fini(&arc_sums.arcstat_l2_writes_lock_retry); 7659 wmsum_fini(&arc_sums.arcstat_l2_evict_lock_retry); 7660 wmsum_fini(&arc_sums.arcstat_l2_evict_reading); 7661 wmsum_fini(&arc_sums.arcstat_l2_evict_l1cached); 7662 wmsum_fini(&arc_sums.arcstat_l2_free_on_write); 7663 wmsum_fini(&arc_sums.arcstat_l2_abort_lowmem); 7664 wmsum_fini(&arc_sums.arcstat_l2_cksum_bad); 7665 wmsum_fini(&arc_sums.arcstat_l2_io_error); 7666 wmsum_fini(&arc_sums.arcstat_l2_lsize); 7667 wmsum_fini(&arc_sums.arcstat_l2_psize); 7668 aggsum_fini(&arc_sums.arcstat_l2_hdr_size); 7669 wmsum_fini(&arc_sums.arcstat_l2_log_blk_writes); 7670 wmsum_fini(&arc_sums.arcstat_l2_log_blk_asize); 7671 wmsum_fini(&arc_sums.arcstat_l2_log_blk_count); 7672 wmsum_fini(&arc_sums.arcstat_l2_rebuild_success); 7673 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_unsupported); 7674 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_io_errors); 7675 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_dh_errors); 7676 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors); 7677 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_lowmem); 7678 wmsum_fini(&arc_sums.arcstat_l2_rebuild_size); 7679 wmsum_fini(&arc_sums.arcstat_l2_rebuild_asize); 7680 wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs); 7681 wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs_precached); 7682 wmsum_fini(&arc_sums.arcstat_l2_rebuild_log_blks); 7683 wmsum_fini(&arc_sums.arcstat_memory_throttle_count); 7684 wmsum_fini(&arc_sums.arcstat_memory_direct_count); 7685 wmsum_fini(&arc_sums.arcstat_memory_indirect_count); 7686 wmsum_fini(&arc_sums.arcstat_prune); 7687 wmsum_fini(&arc_sums.arcstat_meta_used); 7688 wmsum_fini(&arc_sums.arcstat_async_upgrade_sync); 7689 wmsum_fini(&arc_sums.arcstat_predictive_prefetch); 7690 wmsum_fini(&arc_sums.arcstat_demand_hit_predictive_prefetch); 7691 wmsum_fini(&arc_sums.arcstat_demand_iohit_predictive_prefetch); 7692 wmsum_fini(&arc_sums.arcstat_prescient_prefetch); 7693 wmsum_fini(&arc_sums.arcstat_demand_hit_prescient_prefetch); 7694 wmsum_fini(&arc_sums.arcstat_demand_iohit_prescient_prefetch); 7695 wmsum_fini(&arc_sums.arcstat_raw_size); 7696 wmsum_fini(&arc_sums.arcstat_cached_only_in_progress); 7697 wmsum_fini(&arc_sums.arcstat_abd_chunk_waste_size); 7698 } 7699 7700 uint64_t 7701 arc_target_bytes(void) 7702 { 7703 return (arc_c); 7704 } 7705 7706 void 7707 arc_set_limits(uint64_t allmem) 7708 { 7709 /* Set min cache to 1/32 of all memory, or 32MB, whichever is more. */ 7710 arc_c_min = MAX(allmem / 32, 2ULL << SPA_MAXBLOCKSHIFT); 7711 7712 /* How to set default max varies by platform. 
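 * (Each port supplies its own arc_default_max(); the Linux port, for
 * example, defaults to roughly half of all memory.)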
*/
7713 arc_c_max = arc_default_max(arc_c_min, allmem);
7714 }
7715 void
7716 arc_init(void)
7717 {
7718 uint64_t percent, allmem = arc_all_memory();
7719 mutex_init(&arc_evict_lock, NULL, MUTEX_DEFAULT, NULL);
7720 list_create(&arc_evict_waiters, sizeof (arc_evict_waiter_t),
7721 offsetof(arc_evict_waiter_t, aew_node));
7722
7723 arc_min_prefetch_ms = 1000;
7724 arc_min_prescient_prefetch_ms = 6000;
7725
7726 #if defined(_KERNEL)
7727 arc_lowmem_init();
7728 #endif
7729
7730 arc_set_limits(allmem);
7731
7732 #ifdef _KERNEL
7733 /*
7734 * If zfs_arc_max is non-zero at init, meaning it was set in the kernel
7735 * environment before the module was loaded, don't refuse to set the
7736 * maximum just because it is less than arc_c_min; instead, reset
7737 * arc_c_min to a lower value.
7738 * zfs_arc_min will be handled by arc_tuning_update().
7739 */
7740 if (zfs_arc_max != 0 && zfs_arc_max >= MIN_ARC_MAX &&
7741 zfs_arc_max < allmem) {
7742 arc_c_max = zfs_arc_max;
7743 if (arc_c_min >= arc_c_max) {
7744 arc_c_min = MAX(zfs_arc_max / 2,
7745 2ULL << SPA_MAXBLOCKSHIFT);
7746 }
7747 }
7748 #else
7749 /*
7750 * In userland, there's only the memory pressure that we artificially
7751 * create (see arc_available_memory()). Don't let arc_c get too
7752 * small, because it can cause transactions to be larger than
7753 * arc_c, causing arc_tempreserve_space() to fail.
7754 */
7755 arc_c_min = MAX(arc_c_max / 2, 2ULL << SPA_MAXBLOCKSHIFT);
7756 #endif
7757
7758 arc_c = arc_c_min;
7759 /*
7760 * 32-bit fixed-point fractions: metadata as a share of total ARC size,
7761 * MRU data as a share of all data, and MRU metadata as a share of all metadata.
7762 */
7763 arc_meta = (1ULL << 32) / 4; /* Metadata is 25% of arc_c. */
7764 arc_pd = (1ULL << 32) / 2; /* Data MRU is 50% of data. */
7765 arc_pm = (1ULL << 32) / 2; /* Metadata MRU is 50% of metadata. */
7766
7767 percent = MIN(zfs_arc_dnode_limit_percent, 100);
7768 arc_dnode_limit = arc_c_max * percent / 100;
7769
7770 /* Apply user-specified tunings */
7771 arc_tuning_update(B_TRUE);
7772
7773 /* if kmem_flags are set, let's try to use less memory */
7774 if (kmem_debugging())
7775 arc_c = arc_c / 2;
7776 if (arc_c < arc_c_min)
7777 arc_c = arc_c_min;
7778
7779 arc_register_hotplug();
7780
7781 arc_state_init();
7782
7783 buf_init();
7784
7785 list_create(&arc_prune_list, sizeof (arc_prune_t),
7786 offsetof(arc_prune_t, p_node));
7787 mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL);
7788
7789 arc_prune_taskq = taskq_create("arc_prune", zfs_arc_prune_task_threads,
7790 defclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE | TASKQ_DYNAMIC);
7791
7792 arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED,
7793 sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL);
7794
7795 if (arc_ksp != NULL) {
7796 arc_ksp->ks_data = &arc_stats;
7797 arc_ksp->ks_update = arc_kstat_update;
7798 kstat_install(arc_ksp);
7799 }
7800
7801 arc_state_evict_markers =
7802 arc_state_alloc_markers(arc_state_evict_marker_count);
7803 arc_evict_zthr = zthr_create_timer("arc_evict",
7804 arc_evict_cb_check, arc_evict_cb, NULL, SEC2NSEC(1), defclsyspri);
7805 arc_reap_zthr = zthr_create_timer("arc_reap",
7806 arc_reap_cb_check, arc_reap_cb, NULL, SEC2NSEC(1), minclsyspri);
7807
7808 arc_warm = B_FALSE;
7809
7810 /*
7811 * Calculate maximum amount of dirty data per pool.
7812 *
7813 * If it has been set by a module parameter, take that.
7814 * Otherwise, use a percentage of physical memory defined by
7815 * zfs_dirty_data_max_percent (default 10%) with a cap at
7816 * zfs_dirty_data_max_max (default 4G or 25% of physical memory).
7817 */
7818 #ifdef __LP64__
7819 if (zfs_dirty_data_max_max == 0)
7820 zfs_dirty_data_max_max = MIN(4ULL * 1024 * 1024 * 1024,
7821 allmem * zfs_dirty_data_max_max_percent / 100);
7822 #else
7823 if (zfs_dirty_data_max_max == 0)
7824 zfs_dirty_data_max_max = MIN(1ULL * 1024 * 1024 * 1024,
7825 allmem * zfs_dirty_data_max_max_percent / 100);
7826 #endif
7827
7828 if (zfs_dirty_data_max == 0) {
7829 zfs_dirty_data_max = allmem *
7830 zfs_dirty_data_max_percent / 100;
7831 zfs_dirty_data_max = MIN(zfs_dirty_data_max,
7832 zfs_dirty_data_max_max);
7833 }
7834
7835 if (zfs_wrlog_data_max == 0) {
7836
7837 /*
7838 * dp_wrlog_total is reduced for each txg at the end of
7839 * spa_sync(). However, dp_dirty_total is reduced every time
7840 * a block is written out. Thus under normal operation,
7841 * dp_wrlog_total could grow 2 times as big as
7842 * zfs_dirty_data_max.
7843 */
7844 zfs_wrlog_data_max = zfs_dirty_data_max * 2;
7845 }
7846 }
7847
7848 void
7849 arc_fini(void)
7850 {
7851 arc_prune_t *p;
7852
7853 #ifdef _KERNEL
7854 arc_lowmem_fini();
7855 #endif /* _KERNEL */
7856
7857 /* Use B_TRUE to ensure *all* buffers are evicted */
7858 arc_flush(NULL, B_TRUE);
7859
7860 if (arc_ksp != NULL) {
7861 kstat_delete(arc_ksp);
7862 arc_ksp = NULL;
7863 }
7864
7865 taskq_wait(arc_prune_taskq);
7866 taskq_destroy(arc_prune_taskq);
7867
7868 mutex_enter(&arc_prune_mtx);
7869 while ((p = list_head(&arc_prune_list)) != NULL) {
7870 list_remove(&arc_prune_list, p);
7871 zfs_refcount_remove(&p->p_refcnt, &arc_prune_list);
7872 zfs_refcount_destroy(&p->p_refcnt);
7873 kmem_free(p, sizeof (*p));
7874 }
7875 mutex_exit(&arc_prune_mtx);
7876
7877 list_destroy(&arc_prune_list);
7878 mutex_destroy(&arc_prune_mtx);
7879
7880 (void) zthr_cancel(arc_evict_zthr);
7881 (void) zthr_cancel(arc_reap_zthr);
7882 arc_state_free_markers(arc_state_evict_markers,
7883 arc_state_evict_marker_count);
7884
7885 mutex_destroy(&arc_evict_lock);
7886 list_destroy(&arc_evict_waiters);
7887
7888 /*
7889 * Free any buffers that were tagged for destruction. This needs
7890 * to occur before arc_state_fini() runs and destroys the aggsum
7891 * values which are updated when freeing scatter ABDs.
7892 */
7893 l2arc_do_free_on_write();
7894
7895 /*
7896 * buf_fini() must precede arc_state_fini() because buf_fini() may
7897 * trigger the release of kmem magazines, which can call back to
7898 * arc_space_return(), which accesses aggsums freed in arc_state_fini().
7899 */
7900 buf_fini();
7901 arc_state_fini();
7902
7903 arc_unregister_hotplug();
7904
7905 /*
7906 * We destroy the zthrs after all the ARC state has been
7907 * torn down to avoid the case of them receiving any
7908 * wakeup() signals after they are destroyed.
7909 */
7910 zthr_destroy(arc_evict_zthr);
7911 zthr_destroy(arc_reap_zthr);
7912
7913 ASSERT0(arc_loaned_bytes);
7914 }
7915
7916 /*
7917 * Level 2 ARC
7918 *
7919 * The level 2 ARC (L2ARC) is a cache layer in-between main memory and disk.
7920 * It uses dedicated storage devices to hold cached data, which are populated
7921 * using large infrequent writes. The main role of this cache is to boost
7922 * the performance of random read workloads. The intended L2ARC devices
7923 * include short-stroked disks, solid state disks, and other media with
7924 * substantially faster read latency than disk.
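 *
 * The diagram below illustrates how data flows between the ARC, the
 * L2ARC devices, and the pool's primary storage: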
7925 *
7926 *               +-----------------------+
7927 *               |         ARC           |
7928 *               +-----------------------+
7929 *                  |         ^     ^
7930 *                  |         |     |
7931 *    l2arc_feed_thread()    arc_read()
7932 *                  |         |     |
7933 *                  |  l2arc read   |
7934 *                  V         |     |
7935 *             +---------------+    |
7936 *             |     L2ARC     |    |
7937 *             +---------------+    |
7938 *                 |    ^           |
7939 *        l2arc_write() |           |
7940 *                 |    |           |
7941 *                 V    |           |
7942 *               +-------+      +-------+
7943 *               | vdev  |      | vdev  |
7944 *               | cache |      | cache |
7945 *               +-------+      +-------+
7946 *               +=========+     .-----.
7947 *               :  L2ARC  :    |-_____-|
7948 *               : devices :    | Disks |
7949 *               +=========+    `-_____-'
7950 *
7951 * Read requests are satisfied from the following sources, in order:
7952 *
7953 * 1) ARC
7954 * 2) vdev cache of L2ARC devices
7955 * 3) L2ARC devices
7956 * 4) vdev cache of disks
7957 * 5) disks
7958 *
7959 * Some L2ARC device types exhibit extremely slow write performance.
7960 * To accommodate this there are some significant differences between
7961 * the L2ARC and traditional cache design:
7962 *
7963 * 1. There is no eviction path from the ARC to the L2ARC. Evictions from
7964 * the ARC behave as usual, freeing buffers and placing headers on ghost
7965 * lists. The ARC does not send buffers to the L2ARC during eviction as
7966 * this would add inflated write latencies for all ARC memory pressure.
7967 *
7968 * 2. The L2ARC attempts to cache data from the ARC before it is evicted.
7969 * It does this by periodically scanning buffers from the eviction-end of
7970 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are
7971 * not already there. It scans until a headroom of buffers is satisfied,
7972 * which itself is a buffer for ARC eviction. If a compressible buffer is
7973 * found during scanning and selected for writing to an L2ARC device, we
7974 * temporarily boost scanning headroom during the next scan cycle to make
7975 * sure we adapt to compression effects (which might significantly reduce
7976 * the data volume we write to L2ARC). The thread that does this is
7977 * l2arc_feed_thread(), illustrated below; example sizes are included to
7978 * provide a better sense of ratio than this diagram:
7979 *
7980 *            head -->                        tail
7981 *             +---------------------+----------+
7982 *     ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->.   # already on L2ARC
7983 *             +---------------------+----------+   |   o L2ARC eligible
7984 *     ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->|   :   ARC buffer
7985 *             +---------------------+----------+   |
7986 *                  15.9 Gbytes      ^ 32 Mbytes    |
7987 *                                headroom          |
7988 *                                          l2arc_feed_thread()
7989 *                                                  |
7990 *                      l2arc write hand <--[oooo]--'
7991 *                             |           8 Mbyte
7992 *                             |          write max
7993 *                             V
7994 *              +==============================+
7995 *    L2ARC dev |####|#|###|###|    |####| ... |
7996 *              +==============================+
7997 *                          32 Gbytes
7998 *
7999 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of
8000 * evicted, then the L2ARC has cached a buffer much sooner than it probably
8001 * needed to, potentially wasting L2ARC device bandwidth and storage. It is
8002 * safe to say that this is an uncommon case, since buffers at the end of
8003 * the ARC lists have moved there due to inactivity.
8004 *
8005 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom,
8006 * then the L2ARC simply misses copying some buffers. This serves as a
8007 * pressure valve to prevent heavy read workloads from both stalling the ARC
8008 * with waits and clogging the L2ARC with writes.
This also helps prevent
8009 * the potential for the L2ARC to churn if it attempts to cache content too
8010 * quickly, such as during backups of the entire pool.
8011 *
8012 * 5. After system boot and before the ARC has filled main memory, there are
8013 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru
8014 * lists can remain mostly static. Instead of searching from the tail of
8015 * these lists as pictured, the l2arc_feed_thread() will search from the
8016 * list heads for eligible buffers, greatly increasing its chance of finding them.
8017 *
8018 * The L2ARC device write speed is also boosted during this time so that
8019 * the L2ARC warms up faster. Since there have been no ARC evictions yet,
8020 * there are no L2ARC reads, and no fear of degrading read performance
8021 * through increased writes.
8022 *
8023 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that
8024 * the vdev queue can aggregate them into larger and fewer writes. Each
8025 * device is written to in a rotor fashion, sweeping writes through
8026 * available space then repeating.
8027 *
8028 * 7. The L2ARC does not store dirty content. It never needs to flush
8029 * write buffers back to disk-based storage.
8030 *
8031 * 8. If an ARC buffer is written (and dirtied) which also exists in the
8032 * L2ARC, the now stale L2ARC buffer is immediately dropped.
8033 *
8034 * The performance of the L2ARC can be tweaked by a number of tunables, which
8035 * may be necessary for different workloads:
8036 *
8037 * l2arc_write_max         max write bytes per interval
8038 * l2arc_write_boost       extra write bytes during device warmup
8039 * l2arc_noprefetch        skip caching prefetched buffers
8040 * l2arc_headroom          number of max device writes to precache
8041 * l2arc_headroom_boost    when we find compressed buffers during ARC
8042 *                         scanning, we multiply headroom by this
8043 *                         percentage factor for the next scan cycle,
8044 *                         since more compressed buffers are likely to
8045 *                         be present
8046 * l2arc_feed_secs         seconds between L2ARC writing
8047 *
8048 * Tunables may be removed or added as future performance improvements are
8049 * integrated, and also may become zpool properties.
8050 *
8051 * There are three key functions that control how the L2ARC warms up:
8052 *
8053 * l2arc_write_eligible()  check if a buffer is eligible to cache
8054 * l2arc_write_size()      calculate how much to write
8055 * l2arc_write_interval()  calculate sleep delay between writes
8056 *
8057 * These three functions determine what to write, how much, and how quickly
8058 * to send writes.
8059 *
8060 * L2ARC persistence:
8061 *
8062 * When writing buffers to L2ARC, we periodically add some metadata to
8063 * make sure we can pick them up after reboot, thus dramatically reducing
8064 * the impact that any downtime has on the performance of storage systems
8065 * with large caches.
8066 *
8067 * The implementation works fairly simply by integrating the following two
8068 * modifications:
8069 *
8070 * *) When writing to the L2ARC, we occasionally write an "l2arc log block",
8071 * which is an additional piece of metadata which describes what's been
8072 * written. This allows us to rebuild the arc_buf_hdr_t structures of the
8073 * main ARC buffers. There are two linked lists of log blocks headed by
8074 * dh_start_lbps[2]. We alternate which chain we append to, so they are
8075 * time-wise and offset-wise interleaved, but that is an optimization rather
8076 * than a correctness requirement.
The log block also includes a pointer to the
8077 * previous block in its chain.
8078 *
8079 * *) We reserve SPA_MINBLOCKSIZE of space at the start of each L2ARC device
8080 * for our header bookkeeping purposes. This contains a device header,
8081 * which contains our top-level reference structures. We update it each
8082 * time we write a new log block, so that we're able to locate it in the
8083 * L2ARC device. If this write results in an inconsistent device header
8084 * (e.g. due to power failure), we detect this by verifying the header's
8085 * checksum and simply fail to reconstruct the L2ARC after reboot.
8086 *
8087 * Implementation diagram:
8088 *
8089 * +=== L2ARC device (not to scale) ======================================+
8090 * |       ___two newest log block pointers__.__________                  |
8091 * |      /                                   \dh_start_lbps[1]           |
8092 * |     /                                     \         \dh_start_lbps[0]|
8093 * |.___/__.                                    V         V               |
8094 * ||L2 dev|....|lb |bufs |lb |bufs |lb |bufs |lb |bufs |lb |---(empty)---|
8095 * ||   hdr|      ^         /^       /^        /         /                |
8096 * |+------+  ...--\-------/  \-----/--\------/         /                 |
8097 * |                \--------------/    \--------------/                  |
8098 * +======================================================================+
8099 *
8100 * As can be seen in the diagram, rather than using a simple linked
8101 * list, we use a pair of linked lists with alternating elements.
8102 * This is a performance enhancement: the address of the next log
8103 * block is only discovered once the current block has been
8104 * completely read in, so a single list would keep the device's
8105 * I/O queue only one operation deep, incurring a large amount of
8106 * I/O round-trip latency. Having two lists allows us to fetch two
8107 * log blocks ahead of where we are currently rebuilding L2ARC
8108 * buffers.
8109 *
8110 * On-device data structures:
8111 *
8112 * L2ARC device header: l2arc_dev_hdr_phys_t
8113 * L2ARC log block: l2arc_log_blk_phys_t
8114 *
8115 * L2ARC reconstruction:
8116 *
8117 * When writing data, we simply write in the standard rotary fashion,
8118 * evicting buffers as we go and simply writing new data over them (writing
8119 * a new log block every now and then). This obviously means that once we
8120 * loop around the end of the device, we will start cutting into an already
8121 * committed log block (and its referenced data buffers), like so:
8122 *
8123 * current write head__       __old tail
8124 *        \                  /
8125 *        V                 V
8126 * <--|bufs |lb |bufs |lb |    |bufs |lb |bufs |lb |-->
8127 *                         ^    ^^^^^^^^^___________________________________
8128 *                         |                                                \
8129 *                       <<nextwrite>> may overwrite this blk and/or its bufs --'
8130 *
8131 * When importing the pool, we detect this situation and use it to stop
8132 * our scanning process (see l2arc_rebuild).
8133 *
8134 * There is one significant caveat to consider when rebuilding ARC contents
8135 * from an L2ARC device: what about invalidated buffers? Given the above
8136 * construction, we cannot update blocks which we've already written to amend
8137 * them to remove buffers which were invalidated. Thus, during reconstruction,
8138 * we might be populating the cache with buffers for data that's not on the
8139 * main pool anymore, or may have been overwritten!
8140 *
8141 * As it turns out, this isn't a problem. Every arc_read request includes
8142 * both the DVA and, crucially, the birth TXG of the BP the caller is
8143 * looking for.
So even if the cache were populated by completely rotten
8144 * blocks for data that had been long deleted and/or overwritten, we'll
8145 * never actually return bad data from the cache, since the DVA together
8146 * with the birth TXG uniquely identifies a block in space and time -
8147 * once created, a block is immutable on disk. The worst we can do is
8148 * waste some time and memory at l2arc rebuild reconstructing outdated
8149 * ARC entries that will get dropped from the l2arc as it is being
8150 * updated with new blocks.
8151 *
8152 * L2ARC buffers that have been evicted by l2arc_evict() ahead of the write
8153 * hand are not restored. This is done by saving the offset (in bytes)
8154 * l2arc_evict() has evicted to in the L2ARC device header and taking it
8155 * into account when restoring buffers.
8156 */
8157
8158 static boolean_t
8159 l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr)
8160 {
8161 /*
8162 * A buffer is *not* eligible for the L2ARC if it:
8163 * 1. belongs to a different spa.
8164 * 2. is already cached on the L2ARC.
8165 * 3. has an I/O in progress (it may be an incomplete read).
8166 * 4. is flagged not eligible (zfs property).
8167 */
8168 if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) ||
8169 HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr))
8170 return (B_FALSE);
8171
8172 return (B_TRUE);
8173 }
8174
8175 static uint64_t
8176 l2arc_write_size(l2arc_dev_t *dev)
8177 {
8178 uint64_t size, dev_size, tsize;
8179
8180 /*
8181 * Make sure our globals have meaningful values in case the user
8182 * altered them.
8183 */
8184 size = l2arc_write_max;
8185 if (size == 0) {
8186 cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must "
8187 "be greater than zero, resetting it to the default (%d)",
8188 L2ARC_WRITE_SIZE);
8189 size = l2arc_write_max = L2ARC_WRITE_SIZE;
8190 }
8191
8192 if (arc_warm == B_FALSE)
8193 size += l2arc_write_boost;
8194
8195 /*
8196 * Make sure the write size does not exceed the size of the cache
8197 * device. This is important in l2arc_evict(), otherwise infinite
8198 * iteration can occur.
8199 */
8200 dev_size = dev->l2ad_end - dev->l2ad_start;
8201 tsize = size + l2arc_log_blk_overhead(size, dev);
8202 if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0)
8203 tsize += MAX(64 * 1024 * 1024,
8204 (tsize * l2arc_trim_ahead) / 100);
8205
8206 if (tsize >= dev_size) {
8207 cmn_err(CE_NOTE, "l2arc_write_max or l2arc_write_boost "
8208 "plus the overhead of log blocks (persistent L2ARC, "
8209 "%llu bytes) exceeds the size of the cache device "
8210 "(guid %llu), resetting them to the default (%d)",
8211 (u_longlong_t)l2arc_log_blk_overhead(size, dev),
8212 (u_longlong_t)dev->l2ad_vdev->vdev_guid, L2ARC_WRITE_SIZE);
8213 size = l2arc_write_max = l2arc_write_boost = L2ARC_WRITE_SIZE;
8214
8215 if (arc_warm == B_FALSE)
8216 size += l2arc_write_boost;
8217 }
8218
8219 return (size);
8220
8221 }
8222
8223 static clock_t
8224 l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote)
8225 {
8226 clock_t interval, next, now;
8227
8228 /*
8229 * If the ARC lists are busy, increase our write rate; if the
8230 * lists are stale, idle back. This is achieved by checking
8231 * how much we previously wrote - if it was more than half of
8232 * what we wanted, schedule the next write much sooner.
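	 *
	 * As a hypothetical illustration (the actual values depend on the
	 * module parameters and on hz): with l2arc_feed_secs = 1,
	 * l2arc_feed_min_ms = 200 and hz = 1000, a busy cycle schedules the
	 * next write (1000 * 200) / 1000 = 200 ticks (200 ms) out, while an
	 * idle cycle schedules it 1000 * 1 = 1000 ticks (1 s) out.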
8233 */ 8234 if (l2arc_feed_again && wrote > (wanted / 2)) 8235 interval = (hz * l2arc_feed_min_ms) / 1000; 8236 else 8237 interval = hz * l2arc_feed_secs; 8238 8239 now = ddi_get_lbolt(); 8240 next = MAX(now, MIN(now + interval, began + interval)); 8241 8242 return (next); 8243 } 8244 8245 /* 8246 * Cycle through L2ARC devices. This is how L2ARC load balances. 8247 * If a device is returned, this also returns holding the spa config lock. 8248 */ 8249 static l2arc_dev_t * 8250 l2arc_dev_get_next(void) 8251 { 8252 l2arc_dev_t *first, *next = NULL; 8253 8254 /* 8255 * Lock out the removal of spas (spa_namespace_lock), then removal 8256 * of cache devices (l2arc_dev_mtx). Once a device has been selected, 8257 * both locks will be dropped and a spa config lock held instead. 8258 */ 8259 mutex_enter(&spa_namespace_lock); 8260 mutex_enter(&l2arc_dev_mtx); 8261 8262 /* if there are no vdevs, there is nothing to do */ 8263 if (l2arc_ndev == 0) 8264 goto out; 8265 8266 first = NULL; 8267 next = l2arc_dev_last; 8268 do { 8269 /* loop around the list looking for a non-faulted vdev */ 8270 if (next == NULL) { 8271 next = list_head(l2arc_dev_list); 8272 } else { 8273 next = list_next(l2arc_dev_list, next); 8274 if (next == NULL) 8275 next = list_head(l2arc_dev_list); 8276 } 8277 8278 /* if we have come back to the start, bail out */ 8279 if (first == NULL) 8280 first = next; 8281 else if (next == first) 8282 break; 8283 8284 ASSERT3P(next, !=, NULL); 8285 } while (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || 8286 next->l2ad_trim_all); 8287 8288 /* if we were unable to find any usable vdevs, return NULL */ 8289 if (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || 8290 next->l2ad_trim_all) 8291 next = NULL; 8292 8293 l2arc_dev_last = next; 8294 8295 out: 8296 mutex_exit(&l2arc_dev_mtx); 8297 8298 /* 8299 * Grab the config lock to prevent the 'next' device from being 8300 * removed while we are writing to it. 8301 */ 8302 if (next != NULL) 8303 spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER); 8304 mutex_exit(&spa_namespace_lock); 8305 8306 return (next); 8307 } 8308 8309 /* 8310 * Free buffers that were tagged for destruction. 8311 */ 8312 static void 8313 l2arc_do_free_on_write(void) 8314 { 8315 list_t *buflist; 8316 l2arc_data_free_t *df, *df_prev; 8317 8318 mutex_enter(&l2arc_free_on_write_mtx); 8319 buflist = l2arc_free_on_write; 8320 8321 for (df = list_tail(buflist); df; df = df_prev) { 8322 df_prev = list_prev(buflist, df); 8323 ASSERT3P(df->l2df_abd, !=, NULL); 8324 abd_free(df->l2df_abd); 8325 list_remove(buflist, df); 8326 kmem_free(df, sizeof (l2arc_data_free_t)); 8327 } 8328 8329 mutex_exit(&l2arc_free_on_write_mtx); 8330 } 8331 8332 /* 8333 * A write to a cache device has completed. Update all headers to allow 8334 * reads from these buffers to begin. 
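 *
 * Note on lock ordering: this callback takes l2ad_mtx before each header's
 * hash lock - the reverse of the order used by l2arc_write_buffers() - which
 * is why the loop below acquires the hash lock with mutex_tryenter() and
 * retries instead of blocking.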
8335 */
8336 static void
8337 l2arc_write_done(zio_t *zio)
8338 {
8339 l2arc_write_callback_t *cb;
8340 l2arc_lb_abd_buf_t *abd_buf;
8341 l2arc_lb_ptr_buf_t *lb_ptr_buf;
8342 l2arc_dev_t *dev;
8343 l2arc_dev_hdr_phys_t *l2dhdr;
8344 list_t *buflist;
8345 arc_buf_hdr_t *head, *hdr, *hdr_prev;
8346 kmutex_t *hash_lock;
8347 int64_t bytes_dropped = 0;
8348
8349 cb = zio->io_private;
8350 ASSERT3P(cb, !=, NULL);
8351 dev = cb->l2wcb_dev;
8352 l2dhdr = dev->l2ad_dev_hdr;
8353 ASSERT3P(dev, !=, NULL);
8354 head = cb->l2wcb_head;
8355 ASSERT3P(head, !=, NULL);
8356 buflist = &dev->l2ad_buflist;
8357 ASSERT3P(buflist, !=, NULL);
8358 DTRACE_PROBE2(l2arc__iodone, zio_t *, zio,
8359 l2arc_write_callback_t *, cb);
8360
8361 /*
8362 * All writes completed, or an error was hit.
8363 */
8364 top:
8365 mutex_enter(&dev->l2ad_mtx);
8366 for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) {
8367 hdr_prev = list_prev(buflist, hdr);
8368
8369 hash_lock = HDR_LOCK(hdr);
8370
8371 /*
8372 * We cannot use mutex_enter or else we can deadlock
8373 * with l2arc_write_buffers (due to swapping the order
8374 * the hash lock and l2ad_mtx are taken).
8375 */
8376 if (!mutex_tryenter(hash_lock)) {
8377 /*
8378 * Missed the hash lock. We must retry so we
8379 * don't leave the ARC_FLAG_L2_WRITING bit set.
8380 */
8381 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry);
8382
8383 /*
8384 * We don't want to rescan the headers we've
8385 * already marked as having been written out, so
8386 * we reinsert the head node so we can pick up
8387 * where we left off.
8388 */
8389 list_remove(buflist, head);
8390 list_insert_after(buflist, hdr, head);
8391
8392 mutex_exit(&dev->l2ad_mtx);
8393
8394 /*
8395 * We wait for the hash lock to become available
8396 * to try and prevent busy waiting, and increase
8397 * the chance we'll be able to acquire the lock
8398 * the next time around.
8399 */
8400 mutex_enter(hash_lock);
8401 mutex_exit(hash_lock);
8402 goto top;
8403 }
8404
8405 /*
8406 * We could not have been moved into the arc_l2c_only
8407 * state while in-flight due to our ARC_FLAG_L2_WRITING
8408 * bit being set. Let's just ensure that's being enforced.
8409 */
8410 ASSERT(HDR_HAS_L1HDR(hdr));
8411
8412 /*
8413 * If the write failed, drop this header's L2ARC entry
8414 * and reclaim its space accounting.
8415 */
8416 if (zio->io_error != 0) {
8417 /*
8418 * Error - drop L2ARC entry.
8419 */
8420 list_remove(buflist, hdr);
8421 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR);
8422
8423 uint64_t psize = HDR_GET_PSIZE(hdr);
8424 l2arc_hdr_arcstats_decrement(hdr);
8425
8426 bytes_dropped +=
8427 vdev_psize_to_asize(dev->l2ad_vdev, psize);
8428 (void) zfs_refcount_remove_many(&dev->l2ad_alloc,
8429 arc_hdr_size(hdr), hdr);
8430 }
8431
8432 /*
8433 * Allow ARC to begin reads and ghost list evictions to
8434 * this L2ARC entry.
8435 */
8436 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_WRITING);
8437
8438 mutex_exit(hash_lock);
8439 }
8440
8441 /*
8442 * Free the allocated abd buffers for writing the log blocks.
8443 * If the zio failed, reclaim the allocated space and remove the
8444 * pointers to these log blocks from the log block pointer list
8445 * of the L2ARC device.
8446 */
8447 while ((abd_buf = list_remove_tail(&cb->l2wcb_abd_list)) != NULL) {
8448 abd_free(abd_buf->abd);
8449 zio_buf_free(abd_buf, sizeof (*abd_buf));
8450 if (zio->io_error != 0) {
8451 lb_ptr_buf = list_remove_head(&dev->l2ad_lbptr_list);
8452 /*
8453 * L2BLK_GET_PSIZE returns aligned size for log
8454 * blocks.
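			 * This aligned size is the unit used for the
			 * device's space accounting (cf. the matching
			 * vdev_space_update() calls in l2arc_evict() and
			 * l2arc_rebuild()), so it is what we accumulate
			 * into bytes_dropped here.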
8455 */
8456 uint64_t asize =
8457 L2BLK_GET_PSIZE((lb_ptr_buf->lb_ptr)->lbp_prop);
8458 bytes_dropped += asize;
8459 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize);
8460 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count);
8461 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize,
8462 lb_ptr_buf);
8463 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf);
8464 kmem_free(lb_ptr_buf->lb_ptr,
8465 sizeof (l2arc_log_blkptr_t));
8466 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t));
8467 }
8468 }
8469 list_destroy(&cb->l2wcb_abd_list);
8470
8471 if (zio->io_error != 0) {
8472 ARCSTAT_BUMP(arcstat_l2_writes_error);
8473
8474 /*
8475 * Restore the lbps array in the header to its previous state.
8476 * If the list of log block pointers is empty, zero out the
8477 * log block pointers in the device header.
8478 */
8479 lb_ptr_buf = list_head(&dev->l2ad_lbptr_list);
8480 for (int i = 0; i < 2; i++) {
8481 if (lb_ptr_buf == NULL) {
8482 /*
8483 * If the list is empty, zero out the device
8484 * header. Otherwise zero out the second log
8485 * block pointer in the header.
8486 */
8487 if (i == 0) {
8488 memset(l2dhdr, 0,
8489 dev->l2ad_dev_hdr_asize);
8490 } else {
8491 memset(&l2dhdr->dh_start_lbps[i], 0,
8492 sizeof (l2arc_log_blkptr_t));
8493 }
8494 break;
8495 }
8496 memcpy(&l2dhdr->dh_start_lbps[i], lb_ptr_buf->lb_ptr,
8497 sizeof (l2arc_log_blkptr_t));
8498 lb_ptr_buf = list_next(&dev->l2ad_lbptr_list,
8499 lb_ptr_buf);
8500 }
8501 }
8502
8503 ARCSTAT_BUMP(arcstat_l2_writes_done);
8504 list_remove(buflist, head);
8505 ASSERT(!HDR_HAS_L1HDR(head));
8506 kmem_cache_free(hdr_l2only_cache, head);
8507 mutex_exit(&dev->l2ad_mtx);
8508
8509 ASSERT(dev->l2ad_vdev != NULL);
8510 vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0);
8511
8512 l2arc_do_free_on_write();
8513
8514 kmem_free(cb, sizeof (l2arc_write_callback_t));
8515 }
8516
8517 static int
8518 l2arc_untransform(zio_t *zio, l2arc_read_callback_t *cb)
8519 {
8520 int ret;
8521 spa_t *spa = zio->io_spa;
8522 arc_buf_hdr_t *hdr = cb->l2rcb_hdr;
8523 blkptr_t *bp = zio->io_bp;
8524 uint8_t salt[ZIO_DATA_SALT_LEN];
8525 uint8_t iv[ZIO_DATA_IV_LEN];
8526 uint8_t mac[ZIO_DATA_MAC_LEN];
8527 boolean_t no_crypt = B_FALSE;
8528
8529 /*
8530 * ZIL data is never written to the L2ARC, so we don't need
8531 * special handling for its unique MAC storage.
8532 */
8533 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG);
8534 ASSERT(MUTEX_HELD(HDR_LOCK(hdr)));
8535 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL);
8536
8537 /*
8538 * If the data was encrypted, decrypt it now. Note that
8539 * we must check the bp here and not the hdr, since the
8540 * hdr does not have its encryption parameters updated
8541 * until arc_read_done().
8542 */
8543 if (BP_IS_ENCRYPTED(bp)) {
8544 abd_t *eabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr,
8545 ARC_HDR_USE_RESERVE);
8546
8547 zio_crypt_decode_params_bp(bp, salt, iv);
8548 zio_crypt_decode_mac_bp(bp, mac);
8549
8550 ret = spa_do_crypt_abd(B_FALSE, spa, &cb->l2rcb_zb,
8551 BP_GET_TYPE(bp), BP_GET_DEDUP(bp), BP_SHOULD_BYTESWAP(bp),
8552 salt, iv, mac, HDR_GET_PSIZE(hdr), eabd,
8553 hdr->b_l1hdr.b_pabd, &no_crypt);
8554 if (ret != 0) {
8555 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr);
8556 goto error;
8557 }
8558
8559 /*
8560 * If we actually performed decryption, replace b_pabd
8561 * with the decrypted data. Otherwise we can just throw
8562 * our decryption buffer away.
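		 * (no_crypt set by spa_do_crypt_abd() means the block
		 * turned out not to require decryption, so b_pabd
		 * already holds the usable data.)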
8563 */ 8564 if (!no_crypt) { 8565 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 8566 arc_hdr_size(hdr), hdr); 8567 hdr->b_l1hdr.b_pabd = eabd; 8568 zio->io_abd = eabd; 8569 } else { 8570 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 8571 } 8572 } 8573 8574 /* 8575 * If the L2ARC block was compressed, but ARC compression 8576 * is disabled we decompress the data into a new buffer and 8577 * replace the existing data. 8578 */ 8579 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 8580 !HDR_COMPRESSION_ENABLED(hdr)) { 8581 abd_t *cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 8582 ARC_HDR_USE_RESERVE); 8583 void *tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 8584 8585 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 8586 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 8587 HDR_GET_LSIZE(hdr), &hdr->b_complevel); 8588 if (ret != 0) { 8589 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 8590 arc_free_data_abd(hdr, cabd, arc_hdr_size(hdr), hdr); 8591 goto error; 8592 } 8593 8594 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 8595 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 8596 arc_hdr_size(hdr), hdr); 8597 hdr->b_l1hdr.b_pabd = cabd; 8598 zio->io_abd = cabd; 8599 zio->io_size = HDR_GET_LSIZE(hdr); 8600 } 8601 8602 return (0); 8603 8604 error: 8605 return (ret); 8606 } 8607 8608 8609 /* 8610 * A read to a cache device completed. Validate buffer contents before 8611 * handing over to the regular ARC routines. 8612 */ 8613 static void 8614 l2arc_read_done(zio_t *zio) 8615 { 8616 int tfm_error = 0; 8617 l2arc_read_callback_t *cb = zio->io_private; 8618 arc_buf_hdr_t *hdr; 8619 kmutex_t *hash_lock; 8620 boolean_t valid_cksum; 8621 boolean_t using_rdata = (BP_IS_ENCRYPTED(&cb->l2rcb_bp) && 8622 (cb->l2rcb_flags & ZIO_FLAG_RAW_ENCRYPT)); 8623 8624 ASSERT3P(zio->io_vd, !=, NULL); 8625 ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE); 8626 8627 spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd); 8628 8629 ASSERT3P(cb, !=, NULL); 8630 hdr = cb->l2rcb_hdr; 8631 ASSERT3P(hdr, !=, NULL); 8632 8633 hash_lock = HDR_LOCK(hdr); 8634 mutex_enter(hash_lock); 8635 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 8636 8637 /* 8638 * If the data was read into a temporary buffer, 8639 * move it and free the buffer. 8640 */ 8641 if (cb->l2rcb_abd != NULL) { 8642 ASSERT3U(arc_hdr_size(hdr), <, zio->io_size); 8643 if (zio->io_error == 0) { 8644 if (using_rdata) { 8645 abd_copy(hdr->b_crypt_hdr.b_rabd, 8646 cb->l2rcb_abd, arc_hdr_size(hdr)); 8647 } else { 8648 abd_copy(hdr->b_l1hdr.b_pabd, 8649 cb->l2rcb_abd, arc_hdr_size(hdr)); 8650 } 8651 } 8652 8653 /* 8654 * The following must be done regardless of whether 8655 * there was an error: 8656 * - free the temporary buffer 8657 * - point zio to the real ARC buffer 8658 * - set zio size accordingly 8659 * These are required because zio is either re-used for 8660 * an I/O of the block in the case of the error 8661 * or the zio is passed to arc_read_done() and it 8662 * needs real data. 8663 */ 8664 abd_free(cb->l2rcb_abd); 8665 zio->io_size = zio->io_orig_size = arc_hdr_size(hdr); 8666 8667 if (using_rdata) { 8668 ASSERT(HDR_HAS_RABD(hdr)); 8669 zio->io_abd = zio->io_orig_abd = 8670 hdr->b_crypt_hdr.b_rabd; 8671 } else { 8672 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 8673 zio->io_abd = zio->io_orig_abd = hdr->b_l1hdr.b_pabd; 8674 } 8675 } 8676 8677 ASSERT3P(zio->io_abd, !=, NULL); 8678 8679 /* 8680 * Check this survived the L2ARC journey. 
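	 * Surviving means: the zio references the buffer that backs the
	 * header, the checksum verifies, any decryption or decompression in
	 * l2arc_untransform() succeeds, and the header was not evicted while
	 * the read was in flight. Anything else falls back to rereading the
	 * block from the main pool below.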
8681 */
8682 ASSERT(zio->io_abd == hdr->b_l1hdr.b_pabd ||
8683 (HDR_HAS_RABD(hdr) && zio->io_abd == hdr->b_crypt_hdr.b_rabd));
8684 zio->io_bp_copy = cb->l2rcb_bp; /* XXX fix in L2ARC 2.0 */
8685 zio->io_bp = &zio->io_bp_copy; /* XXX fix in L2ARC 2.0 */
8686 zio->io_prop.zp_complevel = hdr->b_complevel;
8687
8688 valid_cksum = arc_cksum_is_equal(hdr, zio);
8689
8690 /*
8691 * b_rabd will always match the data as it exists on disk if it is
8692 * being used. Therefore, if we are reading into b_rabd, we do not
8693 * attempt to untransform the data.
8694 */
8695 if (valid_cksum && !using_rdata)
8696 tfm_error = l2arc_untransform(zio, cb);
8697
8698 if (valid_cksum && tfm_error == 0 && zio->io_error == 0 &&
8699 !HDR_L2_EVICTED(hdr)) {
8700 mutex_exit(hash_lock);
8701 zio->io_private = hdr;
8702 arc_read_done(zio);
8703 } else {
8704 /*
8705 * Buffer didn't survive caching. Increment stats and
8706 * reissue to the original storage device.
8707 */
8708 if (zio->io_error != 0) {
8709 ARCSTAT_BUMP(arcstat_l2_io_error);
8710 } else {
8711 zio->io_error = SET_ERROR(EIO);
8712 }
8713 if (!valid_cksum || tfm_error != 0)
8714 ARCSTAT_BUMP(arcstat_l2_cksum_bad);
8715
8716 /*
8717 * If there's no waiter, issue an async i/o to the primary
8718 * storage now. If there *is* a waiter, the caller must
8719 * issue the i/o in a context where it's OK to block.
8720 */
8721 if (zio->io_waiter == NULL) {
8722 zio_t *pio = zio_unique_parent(zio);
8723 void *abd = (using_rdata) ?
8724 hdr->b_crypt_hdr.b_rabd : hdr->b_l1hdr.b_pabd;
8725
8726 ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL);
8727
8728 zio = zio_read(pio, zio->io_spa, zio->io_bp,
8729 abd, zio->io_size, arc_read_done,
8730 hdr, zio->io_priority, cb->l2rcb_flags,
8731 &cb->l2rcb_zb);
8732
8733 /*
8734 * Original ZIO will be freed, so we need to update
8735 * ARC header with the new ZIO pointer to be used
8736 * by zio_change_priority() in arc_read().
8737 */
8738 for (struct arc_callback *acb = hdr->b_l1hdr.b_acb;
8739 acb != NULL; acb = acb->acb_next)
8740 acb->acb_zio_head = zio;
8741
8742 mutex_exit(hash_lock);
8743 zio_nowait(zio);
8744 } else {
8745 mutex_exit(hash_lock);
8746 }
8747 }
8748
8749 kmem_free(cb, sizeof (l2arc_read_callback_t));
8750 }
8751
8752 /*
8753 * This is the list priority from which the L2ARC will search for pages to
8754 * cache. This is used within loops (0..3) to cycle through lists in the
8755 * desired order. This order can have a significant effect on cache
8756 * performance.
8757 *
8758 * Currently the metadata lists are hit first, MFU then MRU, followed by
8759 * the data lists. This function returns a locked sublist, which the
8760 * caller is responsible for unlocking via multilist_sublist_unlock().
8761 */
8762 static multilist_sublist_t *
8763 l2arc_sublist_lock(int list_num)
8764 {
8765 multilist_t *ml = NULL;
8766 unsigned int idx;
8767
8768 ASSERT(list_num >= 0 && list_num < L2ARC_FEED_TYPES);
8769
8770 switch (list_num) {
8771 case 0:
8772 ml = &arc_mfu->arcs_list[ARC_BUFC_METADATA];
8773 break;
8774 case 1:
8775 ml = &arc_mru->arcs_list[ARC_BUFC_METADATA];
8776 break;
8777 case 2:
8778 ml = &arc_mfu->arcs_list[ARC_BUFC_DATA];
8779 break;
8780 case 3:
8781 ml = &arc_mru->arcs_list[ARC_BUFC_DATA];
8782 break;
8783 default:
8784 return (NULL);
8785 }
8786
8787 /*
8788 * Return a randomly-selected sublist. This is acceptable
8789 * because the caller feeds only a little bit of data for each
8790 * call (8MB). Subsequent calls will result in different
8791 * sublists being selected.
8792 */
8793 idx = multilist_get_random_index(ml);
8794 return (multilist_sublist_lock(ml, idx));
8795 }
8796
8797 /*
8798 * Calculates the maximum overhead of L2ARC metadata log blocks for a given
8799 * L2ARC write size. l2arc_evict and l2arc_write_size need to include this
8800 * overhead in processing to make sure there is enough headroom available
8801 * when writing buffers.
8802 */
8803 static inline uint64_t
8804 l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev)
8805 {
8806 if (dev->l2ad_log_entries == 0) {
8807 return (0);
8808 } else {
8809 uint64_t log_entries = write_sz >> SPA_MINBLOCKSHIFT;
8810
8811 uint64_t log_blocks = (log_entries +
8812 dev->l2ad_log_entries - 1) /
8813 dev->l2ad_log_entries;
8814
8815 return (vdev_psize_to_asize(dev->l2ad_vdev,
8816 sizeof (l2arc_log_blk_phys_t)) * log_blocks);
8817 }
8818 }
8819
8820 /*
8821 * Evict buffers from the device write hand to the distance specified in
8822 * bytes. This distance may span populated buffers or it may span nothing
8823 * at all; either way it clears a region on the L2ARC device for writing.
8824 * If the 'all' boolean is set, every buffer is evicted.
8825 */
8826 static void
8827 l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all)
8828 {
8829 list_t *buflist;
8830 arc_buf_hdr_t *hdr, *hdr_prev;
8831 kmutex_t *hash_lock;
8832 uint64_t taddr;
8833 l2arc_lb_ptr_buf_t *lb_ptr_buf, *lb_ptr_buf_prev;
8834 vdev_t *vd = dev->l2ad_vdev;
8835 boolean_t rerun;
8836
8837 buflist = &dev->l2ad_buflist;
8838
8839 /*
8840 * We need to add in the worst case scenario of log block overhead.
8841 */
8842 distance += l2arc_log_blk_overhead(distance, dev);
8843 if (vd->vdev_has_trim && l2arc_trim_ahead > 0) {
8844 /*
8845 * Trim ahead of the write size by 64MB or (l2arc_trim_ahead/100)
8846 * times the write size, whichever is greater.
8847 */
8848 distance += MAX(64 * 1024 * 1024,
8849 (distance * l2arc_trim_ahead) / 100);
8850 }
8851
8852 top:
8853 rerun = B_FALSE;
8854 if (dev->l2ad_hand >= (dev->l2ad_end - distance)) {
8855 /*
8856 * When there is no space to accommodate upcoming writes,
8857 * evict to the end. Then bump the write and evict hands
8858 * to the start and iterate. This iteration does not
8859 * happen indefinitely as we make sure in
8860 * l2arc_write_size() that when the write hand is reset,
8861 * the write size does not exceed the end of the device.
8862 */
8863 rerun = B_TRUE;
8864 taddr = dev->l2ad_end;
8865 } else {
8866 taddr = dev->l2ad_hand + distance;
8867 }
8868 DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist,
8869 uint64_t, taddr, boolean_t, all);
8870
8871 if (!all) {
8872 /*
8873 * This check has to be placed after deciding whether to
8874 * iterate (rerun).
8875 */
8876 if (dev->l2ad_first) {
8877 /*
8878 * This is the first sweep through the device. There is
8879 * nothing to evict. We have already trimmed the
8880 * whole device.
8881 */
8882 goto out;
8883 } else {
8884 /*
8885 * Trim the space to be evicted.
8886 */
8887 if (vd->vdev_has_trim && dev->l2ad_evict < taddr &&
8888 l2arc_trim_ahead > 0) {
8889 /*
8890 * We have to drop the spa_config lock because
8891 * vdev_trim_range() will acquire it.
8892 * l2ad_evict already accounts for the label
8893 * size. To prevent vdev_trim_ranges() from
8894 * adding it again, we subtract it from
8895 * l2ad_evict.
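				 * Hence the trim below starts at
				 * l2ad_evict - VDEV_LABEL_START_SIZE
				 * and covers taddr - l2ad_evict bytes.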
8896 */ 8897 spa_config_exit(dev->l2ad_spa, SCL_L2ARC, dev); 8898 vdev_trim_simple(vd, 8899 dev->l2ad_evict - VDEV_LABEL_START_SIZE, 8900 taddr - dev->l2ad_evict); 8901 spa_config_enter(dev->l2ad_spa, SCL_L2ARC, dev, 8902 RW_READER); 8903 } 8904 8905 /* 8906 * When rebuilding L2ARC we retrieve the evict hand 8907 * from the header of the device. Of note, l2arc_evict() 8908 * does not actually delete buffers from the cache 8909 * device, but trimming may do so depending on the 8910 * hardware implementation. Thus keeping track of the 8911 * evict hand is useful. 8912 */ 8913 dev->l2ad_evict = MAX(dev->l2ad_evict, taddr); 8914 } 8915 } 8916 8917 retry: 8918 mutex_enter(&dev->l2ad_mtx); 8919 /* 8920 * We have to account for evicted log blocks. Run vdev_space_update() 8921 * on log blocks whose offset (in bytes) is before the evicted offset 8922 * (in bytes) by searching in the list of pointers to log blocks 8923 * present in the L2ARC device. 8924 */ 8925 for (lb_ptr_buf = list_tail(&dev->l2ad_lbptr_list); lb_ptr_buf; 8926 lb_ptr_buf = lb_ptr_buf_prev) { 8927 8928 lb_ptr_buf_prev = list_prev(&dev->l2ad_lbptr_list, lb_ptr_buf); 8929 8930 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 8931 uint64_t asize = L2BLK_GET_PSIZE( 8932 (lb_ptr_buf->lb_ptr)->lbp_prop); 8933 8934 /* 8935 * We don't worry about log blocks left behind (ie 8936 * lbp_payload_start < l2ad_hand) because l2arc_write_buffers() 8937 * will never write more than l2arc_evict() evicts. 8938 */ 8939 if (!all && l2arc_log_blkptr_valid(dev, lb_ptr_buf->lb_ptr)) { 8940 break; 8941 } else { 8942 vdev_space_update(vd, -asize, 0, 0); 8943 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 8944 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 8945 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 8946 lb_ptr_buf); 8947 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 8948 list_remove(&dev->l2ad_lbptr_list, lb_ptr_buf); 8949 kmem_free(lb_ptr_buf->lb_ptr, 8950 sizeof (l2arc_log_blkptr_t)); 8951 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 8952 } 8953 } 8954 8955 for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) { 8956 hdr_prev = list_prev(buflist, hdr); 8957 8958 ASSERT(!HDR_EMPTY(hdr)); 8959 hash_lock = HDR_LOCK(hdr); 8960 8961 /* 8962 * We cannot use mutex_enter or else we can deadlock 8963 * with l2arc_write_buffers (due to swapping the order 8964 * the hash lock and l2ad_mtx are taken). 8965 */ 8966 if (!mutex_tryenter(hash_lock)) { 8967 /* 8968 * Missed the hash lock. Retry. 8969 */ 8970 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry); 8971 mutex_exit(&dev->l2ad_mtx); 8972 mutex_enter(hash_lock); 8973 mutex_exit(hash_lock); 8974 goto retry; 8975 } 8976 8977 /* 8978 * A header can't be on this list if it doesn't have L2 header. 8979 */ 8980 ASSERT(HDR_HAS_L2HDR(hdr)); 8981 8982 /* Ensure this header has finished being written. */ 8983 ASSERT(!HDR_L2_WRITING(hdr)); 8984 ASSERT(!HDR_L2_WRITE_HEAD(hdr)); 8985 8986 if (!all && (hdr->b_l2hdr.b_daddr >= dev->l2ad_evict || 8987 hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) { 8988 /* 8989 * We've evicted to the target address, 8990 * or the end of the device. 8991 */ 8992 mutex_exit(hash_lock); 8993 break; 8994 } 8995 8996 if (!HDR_HAS_L1HDR(hdr)) { 8997 ASSERT(!HDR_L2_READING(hdr)); 8998 /* 8999 * This doesn't exist in the ARC. Destroy. 9000 * arc_hdr_destroy() will call list_remove() 9001 * and decrement arcstat_l2_lsize. 
9002 */
9003 arc_change_state(arc_anon, hdr);
9004 arc_hdr_destroy(hdr);
9005 } else {
9006 ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only);
9007 ARCSTAT_BUMP(arcstat_l2_evict_l1cached);
9008 /*
9009 * Invalidate issued or about to be issued
9010 * reads, since we may be about to write
9011 * over this location.
9012 */
9013 if (HDR_L2_READING(hdr)) {
9014 ARCSTAT_BUMP(arcstat_l2_evict_reading);
9015 arc_hdr_set_flags(hdr, ARC_FLAG_L2_EVICTED);
9016 }
9017
9018 arc_hdr_l2hdr_destroy(hdr);
9019 }
9020 mutex_exit(hash_lock);
9021 }
9022 mutex_exit(&dev->l2ad_mtx);
9023
9024 out:
9025 /*
9026 * Unless we are evicting all buffers, check whether another pass is
9027 * needed; iterating again when 'all' is set would be unnecessary.
9028 */
9029 if (!all && rerun) {
9030 /*
9031 * Bump device hand to the device start if it is approaching the
9032 * end. l2arc_evict() has already evicted ahead for this case.
9033 */
9034 dev->l2ad_hand = dev->l2ad_start;
9035 dev->l2ad_evict = dev->l2ad_start;
9036 dev->l2ad_first = B_FALSE;
9037 goto top;
9038 }
9039
9040 if (!all) {
9041 /*
9042 * In case of cache device removal (all) the following
9043 * assertions may be violated without functional consequences
9044 * as the device is about to be removed.
9045 */
9046 ASSERT3U(dev->l2ad_hand + distance, <, dev->l2ad_end);
9047 if (!dev->l2ad_first)
9048 ASSERT3U(dev->l2ad_hand, <, dev->l2ad_evict);
9049 }
9050 }
9051
9052 /*
9053 * Handle any abd transforms that might be required for writing to the L2ARC.
9054 * If successful, this function will always return an abd with the data
9055 * transformed as it is on disk in a new abd of asize bytes.
9056 */
9057 static int
9058 l2arc_apply_transforms(spa_t *spa, arc_buf_hdr_t *hdr, uint64_t asize,
9059 abd_t **abd_out)
9060 {
9061 int ret;
9062 void *tmp = NULL;
9063 abd_t *cabd = NULL, *eabd = NULL, *to_write = hdr->b_l1hdr.b_pabd;
9064 enum zio_compress compress = HDR_GET_COMPRESS(hdr);
9065 uint64_t psize = HDR_GET_PSIZE(hdr);
9066 uint64_t size = arc_hdr_size(hdr);
9067 boolean_t ismd = HDR_ISTYPE_METADATA(hdr);
9068 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS);
9069 dsl_crypto_key_t *dck = NULL;
9070 uint8_t mac[ZIO_DATA_MAC_LEN] = { 0 };
9071 boolean_t no_crypt = B_FALSE;
9072
9073 ASSERT((HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
9074 !HDR_COMPRESSION_ENABLED(hdr)) ||
9075 HDR_ENCRYPTED(hdr) || HDR_SHARED_DATA(hdr) || psize != asize);
9076 ASSERT3U(psize, <=, asize);
9077
9078 /*
9079 * If this data simply needs its own buffer, we allocate it
9080 * here and copy the data. This may be done to eliminate a dependency
9081 * on a shared buffer or to reallocate the buffer to match asize.
9082 */
9083 if (HDR_HAS_RABD(hdr) && asize != psize) {
9084 ASSERT3U(asize, >=, psize);
9085 to_write = abd_alloc_for_io(asize, ismd);
9086 abd_copy(to_write, hdr->b_crypt_hdr.b_rabd, psize);
9087 if (psize != asize)
9088 abd_zero_off(to_write, psize, asize - psize);
9089 goto out;
9090 }
9091
9092 if ((compress == ZIO_COMPRESS_OFF || HDR_COMPRESSION_ENABLED(hdr)) &&
9093 !HDR_ENCRYPTED(hdr)) {
9094 ASSERT3U(size, ==, psize);
9095 to_write = abd_alloc_for_io(asize, ismd);
9096 abd_copy(to_write, hdr->b_l1hdr.b_pabd, size);
9097 if (size != asize)
9098 abd_zero_off(to_write, size, asize - size);
9099 goto out;
9100 }
9101
9102 if (compress != ZIO_COMPRESS_OFF && !HDR_COMPRESSION_ENABLED(hdr)) {
9103 /*
9104 * In some cases, we can wind up with size > asize, so
9105 * we need to opt for the larger allocation option here.
9106 *
9107 * (We also need abd_return_buf_copy in all cases because
9108 * abd_return_buf() ASSERTs that the borrowed buffer was not
9109 * modified, and in nearly every case the compressors write
9110 * to the buffer before deciding to fail compression, so a
9111 * plain return would trip that assertion.)
9112 */
9113 cabd = abd_alloc_for_io(size, ismd);
9114 tmp = abd_borrow_buf(cabd, size);
9115
9116 psize = zio_compress_data(compress, to_write, &tmp, size,
9117 hdr->b_complevel);
9118
9119 if (psize >= asize) {
9120 psize = HDR_GET_PSIZE(hdr);
9121 abd_return_buf_copy(cabd, tmp, size);
9122 HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_OFF);
9123 to_write = cabd;
9124 abd_copy(to_write, hdr->b_l1hdr.b_pabd, psize);
9125 if (psize != asize)
9126 abd_zero_off(to_write, psize, asize - psize);
9127 goto encrypt;
9128 }
9129 ASSERT3U(psize, <=, HDR_GET_PSIZE(hdr));
9130 if (psize < asize)
9131 memset((char *)tmp + psize, 0, asize - psize);
9132 psize = HDR_GET_PSIZE(hdr);
9133 abd_return_buf_copy(cabd, tmp, size);
9134 to_write = cabd;
9135 }
9136
9137 encrypt:
9138 if (HDR_ENCRYPTED(hdr)) {
9139 eabd = abd_alloc_for_io(asize, ismd);
9140
9141 /*
9142 * If the dataset was disowned before the buffer
9143 * made it to this point, the key to re-encrypt
9144 * it won't be available. In this case we simply
9145 * won't write the buffer to the L2ARC.
9146 */
9147 ret = spa_keystore_lookup_key(spa, hdr->b_crypt_hdr.b_dsobj,
9148 FTAG, &dck);
9149 if (ret != 0)
9150 goto error;
9151
9152 ret = zio_do_crypt_abd(B_TRUE, &dck->dck_key,
9153 hdr->b_crypt_hdr.b_ot, bswap, hdr->b_crypt_hdr.b_salt,
9154 hdr->b_crypt_hdr.b_iv, mac, psize, to_write, eabd,
9155 &no_crypt);
9156 if (ret != 0)
9157 goto error;
9158
9159 if (no_crypt)
9160 abd_copy(eabd, to_write, psize);
9161
9162 if (psize != asize)
9163 abd_zero_off(eabd, psize, asize - psize);
9164
9165 /* assert that the MAC we got here matches the one we saved */
9166 ASSERT0(memcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN));
9167 spa_keystore_dsl_key_rele(spa, dck, FTAG);
9168
9169 if (to_write == cabd)
9170 abd_free(cabd);
9171
9172 to_write = eabd;
9173 }
9174
9175 out:
9176 ASSERT3P(to_write, !=, hdr->b_l1hdr.b_pabd);
9177 *abd_out = to_write;
9178 return (0);
9179
9180 error:
9181 if (dck != NULL)
9182 spa_keystore_dsl_key_rele(spa, dck, FTAG);
9183 if (cabd != NULL)
9184 abd_free(cabd);
9185 if (eabd != NULL)
9186 abd_free(eabd);
9187
9188 *abd_out = NULL;
9189 return (ret);
9190 }
9191
9192 static void
9193 l2arc_blk_fetch_done(zio_t *zio)
9194 {
9195 l2arc_read_callback_t *cb;
9196
9197 cb = zio->io_private;
9198 if (cb->l2rcb_abd != NULL)
9199 abd_free(cb->l2rcb_abd);
9200 kmem_free(cb, sizeof (l2arc_read_callback_t));
9201 }
9202
9203 /*
9204 * Find and write ARC buffers to the L2ARC device.
9205 *
9206 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid
9207 * for reading until they have completed writing.
9208 * How far down the ARC lists each pass searches is bounded by the
9209 * l2arc_headroom (and l2arc_headroom_boost) tunables.
9210 *
9211 * Returns the number of bytes actually written (which may be smaller than
9212 * the delta by which the device hand has changed due to alignment and the
9213 * writing of log blocks).
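 *
 * As a rough, hypothetical illustration (assuming the 8MB default feed
 * mentioned in l2arc_sublist_lock() above and a default l2arc_headroom of
 * 2): one invocation scans at most 16MB (lsize, before any
 * l2arc_headroom_boost adjustment) from the tail - or the head, while the
 * ARC is still warming up - of each selected sublist, and writes at most
 * 8MB (asize) of eligible buffers to the device, plus the associated log
 * blocks.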
9214 */ 9215 static uint64_t 9216 l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz) 9217 { 9218 arc_buf_hdr_t *hdr, *hdr_prev, *head; 9219 uint64_t write_asize, write_psize, write_lsize, headroom; 9220 boolean_t full; 9221 l2arc_write_callback_t *cb = NULL; 9222 zio_t *pio, *wzio; 9223 uint64_t guid = spa_load_guid(spa); 9224 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9225 9226 ASSERT3P(dev->l2ad_vdev, !=, NULL); 9227 9228 pio = NULL; 9229 write_lsize = write_asize = write_psize = 0; 9230 full = B_FALSE; 9231 head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE); 9232 arc_hdr_set_flags(head, ARC_FLAG_L2_WRITE_HEAD | ARC_FLAG_HAS_L2HDR); 9233 9234 /* 9235 * Copy buffers for L2ARC writing. 9236 */ 9237 for (int pass = 0; pass < L2ARC_FEED_TYPES; pass++) { 9238 /* 9239 * If pass == 1 or 3, we cache MRU metadata and data 9240 * respectively. 9241 */ 9242 if (l2arc_mfuonly) { 9243 if (pass == 1 || pass == 3) 9244 continue; 9245 } 9246 9247 multilist_sublist_t *mls = l2arc_sublist_lock(pass); 9248 uint64_t passed_sz = 0; 9249 9250 VERIFY3P(mls, !=, NULL); 9251 9252 /* 9253 * L2ARC fast warmup. 9254 * 9255 * Until the ARC is warm and starts to evict, read from the 9256 * head of the ARC lists rather than the tail. 9257 */ 9258 if (arc_warm == B_FALSE) 9259 hdr = multilist_sublist_head(mls); 9260 else 9261 hdr = multilist_sublist_tail(mls); 9262 9263 headroom = target_sz * l2arc_headroom; 9264 if (zfs_compressed_arc_enabled) 9265 headroom = (headroom * l2arc_headroom_boost) / 100; 9266 9267 for (; hdr; hdr = hdr_prev) { 9268 kmutex_t *hash_lock; 9269 abd_t *to_write = NULL; 9270 9271 if (arc_warm == B_FALSE) 9272 hdr_prev = multilist_sublist_next(mls, hdr); 9273 else 9274 hdr_prev = multilist_sublist_prev(mls, hdr); 9275 9276 hash_lock = HDR_LOCK(hdr); 9277 if (!mutex_tryenter(hash_lock)) { 9278 /* 9279 * Skip this buffer rather than waiting. 9280 */ 9281 continue; 9282 } 9283 9284 passed_sz += HDR_GET_LSIZE(hdr); 9285 if (l2arc_headroom != 0 && passed_sz > headroom) { 9286 /* 9287 * Searched too far. 9288 */ 9289 mutex_exit(hash_lock); 9290 break; 9291 } 9292 9293 if (!l2arc_write_eligible(guid, hdr)) { 9294 mutex_exit(hash_lock); 9295 continue; 9296 } 9297 9298 ASSERT(HDR_HAS_L1HDR(hdr)); 9299 9300 ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); 9301 ASSERT3U(arc_hdr_size(hdr), >, 0); 9302 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 9303 HDR_HAS_RABD(hdr)); 9304 uint64_t psize = HDR_GET_PSIZE(hdr); 9305 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, 9306 psize); 9307 9308 if ((write_asize + asize) > target_sz) { 9309 full = B_TRUE; 9310 mutex_exit(hash_lock); 9311 break; 9312 } 9313 9314 /* 9315 * We rely on the L1 portion of the header below, so 9316 * it's invalid for this header to have been evicted out 9317 * of the ghost cache, prior to being written out. The 9318 * ARC_FLAG_L2_WRITING bit ensures this won't happen. 9319 */ 9320 arc_hdr_set_flags(hdr, ARC_FLAG_L2_WRITING); 9321 9322 /* 9323 * If this header has b_rabd, we can use this since it 9324 * must always match the data exactly as it exists on 9325 * disk. Otherwise, the L2ARC can normally use the 9326 * hdr's data, but if we're sharing data between the 9327 * hdr and one of its bufs, L2ARC needs its own copy of 9328 * the data so that the ZIO below can't race with the 9329 * buf consumer. To ensure that this copy will be 9330 * available for the lifetime of the ZIO and be cleaned 9331 * up afterwards, we add it to the l2arc_free_on_write 9332 * queue. 
If we need to apply any transforms to the 9333 * data (compression, encryption) we will also need the 9334 * extra buffer. 9335 */ 9336 if (HDR_HAS_RABD(hdr) && psize == asize) { 9337 to_write = hdr->b_crypt_hdr.b_rabd; 9338 } else if ((HDR_COMPRESSION_ENABLED(hdr) || 9339 HDR_GET_COMPRESS(hdr) == ZIO_COMPRESS_OFF) && 9340 !HDR_ENCRYPTED(hdr) && !HDR_SHARED_DATA(hdr) && 9341 psize == asize) { 9342 to_write = hdr->b_l1hdr.b_pabd; 9343 } else { 9344 int ret; 9345 arc_buf_contents_t type = arc_buf_type(hdr); 9346 9347 ret = l2arc_apply_transforms(spa, hdr, asize, 9348 &to_write); 9349 if (ret != 0) { 9350 arc_hdr_clear_flags(hdr, 9351 ARC_FLAG_L2_WRITING); 9352 mutex_exit(hash_lock); 9353 continue; 9354 } 9355 9356 l2arc_free_abd_on_write(to_write, asize, type); 9357 } 9358 9359 if (pio == NULL) { 9360 /* 9361 * Insert a dummy header on the buflist so 9362 * l2arc_write_done() can find where the 9363 * write buffers begin without searching. 9364 */ 9365 mutex_enter(&dev->l2ad_mtx); 9366 list_insert_head(&dev->l2ad_buflist, head); 9367 mutex_exit(&dev->l2ad_mtx); 9368 9369 cb = kmem_alloc( 9370 sizeof (l2arc_write_callback_t), KM_SLEEP); 9371 cb->l2wcb_dev = dev; 9372 cb->l2wcb_head = head; 9373 /* 9374 * Create a list to save allocated abd buffers 9375 * for l2arc_log_blk_commit(). 9376 */ 9377 list_create(&cb->l2wcb_abd_list, 9378 sizeof (l2arc_lb_abd_buf_t), 9379 offsetof(l2arc_lb_abd_buf_t, node)); 9380 pio = zio_root(spa, l2arc_write_done, cb, 9381 ZIO_FLAG_CANFAIL); 9382 } 9383 9384 hdr->b_l2hdr.b_dev = dev; 9385 hdr->b_l2hdr.b_hits = 0; 9386 9387 hdr->b_l2hdr.b_daddr = dev->l2ad_hand; 9388 hdr->b_l2hdr.b_arcs_state = 9389 hdr->b_l1hdr.b_state->arcs_state; 9390 arc_hdr_set_flags(hdr, ARC_FLAG_HAS_L2HDR); 9391 9392 mutex_enter(&dev->l2ad_mtx); 9393 list_insert_head(&dev->l2ad_buflist, hdr); 9394 mutex_exit(&dev->l2ad_mtx); 9395 9396 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 9397 arc_hdr_size(hdr), hdr); 9398 9399 wzio = zio_write_phys(pio, dev->l2ad_vdev, 9400 hdr->b_l2hdr.b_daddr, asize, to_write, 9401 ZIO_CHECKSUM_OFF, NULL, hdr, 9402 ZIO_PRIORITY_ASYNC_WRITE, 9403 ZIO_FLAG_CANFAIL, B_FALSE); 9404 9405 write_lsize += HDR_GET_LSIZE(hdr); 9406 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, 9407 zio_t *, wzio); 9408 9409 write_psize += psize; 9410 write_asize += asize; 9411 dev->l2ad_hand += asize; 9412 l2arc_hdr_arcstats_increment(hdr); 9413 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 9414 9415 mutex_exit(hash_lock); 9416 9417 /* 9418 * Append buf info to current log and commit if full. 9419 * arcstat_l2_{size,asize} kstats are updated 9420 * internally. 9421 */ 9422 if (l2arc_log_blk_insert(dev, hdr)) 9423 l2arc_log_blk_commit(dev, pio, cb); 9424 9425 zio_nowait(wzio); 9426 } 9427 9428 multilist_sublist_unlock(mls); 9429 9430 if (full == B_TRUE) 9431 break; 9432 } 9433 9434 /* No buffers selected for writing? */ 9435 if (pio == NULL) { 9436 ASSERT0(write_lsize); 9437 ASSERT(!HDR_HAS_L1HDR(head)); 9438 kmem_cache_free(hdr_l2only_cache, head); 9439 9440 /* 9441 * Although we did not write any buffers l2ad_evict may 9442 * have advanced. 
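		 * (l2arc_evict() runs before this function in the feed
		 * cycle and can advance dev->l2ad_evict even when nothing
		 * is subsequently written.)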
9443 */ 9444 if (dev->l2ad_evict != l2dhdr->dh_evict) 9445 l2arc_dev_hdr_update(dev); 9446 9447 return (0); 9448 } 9449 9450 if (!dev->l2ad_first) 9451 ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict); 9452 9453 ASSERT3U(write_asize, <=, target_sz); 9454 ARCSTAT_BUMP(arcstat_l2_writes_sent); 9455 ARCSTAT_INCR(arcstat_l2_write_bytes, write_psize); 9456 9457 dev->l2ad_writing = B_TRUE; 9458 (void) zio_wait(pio); 9459 dev->l2ad_writing = B_FALSE; 9460 9461 /* 9462 * Update the device header after the zio completes as 9463 * l2arc_write_done() may have updated the memory holding the log block 9464 * pointers in the device header. 9465 */ 9466 l2arc_dev_hdr_update(dev); 9467 9468 return (write_asize); 9469 } 9470 9471 static boolean_t 9472 l2arc_hdr_limit_reached(void) 9473 { 9474 int64_t s = aggsum_upper_bound(&arc_sums.arcstat_l2_hdr_size); 9475 9476 return (arc_reclaim_needed() || 9477 (s > (arc_warm ? arc_c : arc_c_max) * l2arc_meta_percent / 100)); 9478 } 9479 9480 /* 9481 * This thread feeds the L2ARC at regular intervals. This is the beating 9482 * heart of the L2ARC. 9483 */ 9484 static __attribute__((noreturn)) void 9485 l2arc_feed_thread(void *unused) 9486 { 9487 (void) unused; 9488 callb_cpr_t cpr; 9489 l2arc_dev_t *dev; 9490 spa_t *spa; 9491 uint64_t size, wrote; 9492 clock_t begin, next = ddi_get_lbolt(); 9493 fstrans_cookie_t cookie; 9494 9495 CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG); 9496 9497 mutex_enter(&l2arc_feed_thr_lock); 9498 9499 cookie = spl_fstrans_mark(); 9500 while (l2arc_thread_exit == 0) { 9501 CALLB_CPR_SAFE_BEGIN(&cpr); 9502 (void) cv_timedwait_idle(&l2arc_feed_thr_cv, 9503 &l2arc_feed_thr_lock, next); 9504 CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock); 9505 next = ddi_get_lbolt() + hz; 9506 9507 /* 9508 * Quick check for L2ARC devices. 9509 */ 9510 mutex_enter(&l2arc_dev_mtx); 9511 if (l2arc_ndev == 0) { 9512 mutex_exit(&l2arc_dev_mtx); 9513 continue; 9514 } 9515 mutex_exit(&l2arc_dev_mtx); 9516 begin = ddi_get_lbolt(); 9517 9518 /* 9519 * This selects the next l2arc device to write to, and in 9520 * doing so the next spa to feed from: dev->l2ad_spa. This 9521 * will return NULL if there are now no l2arc devices or if 9522 * they are all faulted. 9523 * 9524 * If a device is returned, its spa's config lock is also 9525 * held to prevent device removal. l2arc_dev_get_next() 9526 * will grab and release l2arc_dev_mtx. 9527 */ 9528 if ((dev = l2arc_dev_get_next()) == NULL) 9529 continue; 9530 9531 spa = dev->l2ad_spa; 9532 ASSERT3P(spa, !=, NULL); 9533 9534 /* 9535 * If the pool is read-only then force the feed thread to 9536 * sleep a little longer. 9537 */ 9538 if (!spa_writeable(spa)) { 9539 next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz; 9540 spa_config_exit(spa, SCL_L2ARC, dev); 9541 continue; 9542 } 9543 9544 /* 9545 * Avoid contributing to memory pressure. 9546 */ 9547 if (l2arc_hdr_limit_reached()) { 9548 ARCSTAT_BUMP(arcstat_l2_abort_lowmem); 9549 spa_config_exit(spa, SCL_L2ARC, dev); 9550 continue; 9551 } 9552 9553 ARCSTAT_BUMP(arcstat_l2_feeds); 9554 9555 size = l2arc_write_size(dev); 9556 9557 /* 9558 * Evict L2ARC buffers that will be overwritten. 9559 */ 9560 l2arc_evict(dev, size, B_FALSE); 9561 9562 /* 9563 * Write ARC buffers. 9564 */ 9565 wrote = l2arc_write_buffers(spa, dev, size); 9566 9567 /* 9568 * Calculate interval between writes. 
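		 * (When l2arc_feed_again is set, busy cycles - where more
		 * than half of the requested bytes were written -
		 * reschedule after l2arc_feed_min_ms; idle cycles wait the
		 * full l2arc_feed_secs. See l2arc_write_interval() above.)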
9569 */
9570 next = l2arc_write_interval(begin, size, wrote);
9571 spa_config_exit(spa, SCL_L2ARC, dev);
9572 }
9573 spl_fstrans_unmark(cookie);
9574
9575 l2arc_thread_exit = 0;
9576 cv_broadcast(&l2arc_feed_thr_cv);
9577 CALLB_CPR_EXIT(&cpr); /* drops l2arc_feed_thr_lock */
9578 thread_exit();
9579 }
9580
9581 boolean_t
9582 l2arc_vdev_present(vdev_t *vd)
9583 {
9584 return (l2arc_vdev_get(vd) != NULL);
9585 }
9586
9587 /*
9588 * Returns the l2arc_dev_t associated with a particular vdev_t or NULL if
9589 * the vdev_t isn't an L2ARC device.
9590 */
9591 l2arc_dev_t *
9592 l2arc_vdev_get(vdev_t *vd)
9593 {
9594 l2arc_dev_t *dev;
9595
9596 mutex_enter(&l2arc_dev_mtx);
9597 for (dev = list_head(l2arc_dev_list); dev != NULL;
9598 dev = list_next(l2arc_dev_list, dev)) {
9599 if (dev->l2ad_vdev == vd)
9600 break;
9601 }
9602 mutex_exit(&l2arc_dev_mtx);
9603
9604 return (dev);
9605 }
9606
9607 static void
9608 l2arc_rebuild_dev(l2arc_dev_t *dev, boolean_t reopen)
9609 {
9610 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
9611 uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize;
9612 spa_t *spa = dev->l2ad_spa;
9613
9614 /*
9615 * The L2ARC has to hold at least the payload of one log block for
9616 * log blocks to be restored (persistent L2ARC). The payload of a log
9617 * block depends on the number of its log entries. We always write log
9618 * blocks with 1022 entries. How many of them are committed or restored
9619 * depends on the size of the L2ARC device. Thus the maximum payload of
9620 * one log block is 1022 * SPA_MAXBLOCKSIZE = 16GB. If the L2ARC device
9621 * is smaller than that, we reduce the number of committed and restored
9622 * log entries per block so as to enable persistence.
9623 */
9624 if (dev->l2ad_end < l2arc_rebuild_blocks_min_l2size) {
9625 dev->l2ad_log_entries = 0;
9626 } else {
9627 dev->l2ad_log_entries = MIN((dev->l2ad_end -
9628 dev->l2ad_start) >> SPA_MAXBLOCKSHIFT,
9629 L2ARC_LOG_BLK_MAX_ENTRIES);
9630 }
9631
9632 /*
9633 * Read the device header; if an error is returned, do not rebuild L2ARC.
9634 */
9635 if (l2arc_dev_hdr_read(dev) == 0 && dev->l2ad_log_entries > 0) {
9636 /*
9637 * If we are onlining a cache device (vdev_reopen) that was
9638 * still present (l2arc_vdev_present()) and rebuild is enabled,
9639 * we should evict all ARC buffers and pointers to log blocks
9640 * and reclaim their space before restoring its contents to
9641 * L2ARC.
9642 */
9643 if (reopen) {
9644 if (!l2arc_rebuild_enabled) {
9645 return;
9646 } else {
9647 l2arc_evict(dev, 0, B_TRUE);
9648 /* start a new log block */
9649 dev->l2ad_log_ent_idx = 0;
9650 dev->l2ad_log_blk_payload_asize = 0;
9651 dev->l2ad_log_blk_payload_start = 0;
9652 }
9653 }
9654 /*
9655 * Just mark the device as pending for a rebuild. We won't
9656 * be starting a rebuild in line here as it would block pool
9657 * import. Instead spa_load_impl will hand that off to an
9658 * async task which will call l2arc_spa_rebuild_start.
9659 */
9660 dev->l2ad_rebuild = B_TRUE;
9661 } else if (spa_writeable(spa)) {
9662 /*
9663 * In this case TRIM the whole device if l2arc_trim_ahead > 0,
9664 * otherwise create a new header. We zero out the memory holding
9665 * the header to reset dh_start_lbps. If we TRIM the whole
9666 * device the new header will be written by
9667 * vdev_trim_l2arc_thread() at the end of the TRIM to update the
9668 * trim_state in the header too. When reading the header, if
9669 * trim_state is not VDEV_TRIM_COMPLETE and l2arc_trim_ahead > 0
9670 * we opt to TRIM the whole device again.
9671 */ 9672 if (l2arc_trim_ahead > 0) { 9673 dev->l2ad_trim_all = B_TRUE; 9674 } else { 9675 memset(l2dhdr, 0, l2dhdr_asize); 9676 l2arc_dev_hdr_update(dev); 9677 } 9678 } 9679 } 9680 9681 /* 9682 * Add a vdev for use by the L2ARC. By this point the spa has already 9683 * validated the vdev and opened it. 9684 */ 9685 void 9686 l2arc_add_vdev(spa_t *spa, vdev_t *vd) 9687 { 9688 l2arc_dev_t *adddev; 9689 uint64_t l2dhdr_asize; 9690 9691 ASSERT(!l2arc_vdev_present(vd)); 9692 9693 /* 9694 * Create a new l2arc device entry. 9695 */ 9696 adddev = vmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP); 9697 adddev->l2ad_spa = spa; 9698 adddev->l2ad_vdev = vd; 9699 /* leave extra size for an l2arc device header */ 9700 l2dhdr_asize = adddev->l2ad_dev_hdr_asize = 9701 MAX(sizeof (*adddev->l2ad_dev_hdr), 1 << vd->vdev_ashift); 9702 adddev->l2ad_start = VDEV_LABEL_START_SIZE + l2dhdr_asize; 9703 adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd); 9704 ASSERT3U(adddev->l2ad_start, <, adddev->l2ad_end); 9705 adddev->l2ad_hand = adddev->l2ad_start; 9706 adddev->l2ad_evict = adddev->l2ad_start; 9707 adddev->l2ad_first = B_TRUE; 9708 adddev->l2ad_writing = B_FALSE; 9709 adddev->l2ad_trim_all = B_FALSE; 9710 list_link_init(&adddev->l2ad_node); 9711 adddev->l2ad_dev_hdr = kmem_zalloc(l2dhdr_asize, KM_SLEEP); 9712 9713 mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL); 9714 /* 9715 * This is a list of all ARC buffers that are still valid on the 9716 * device. 9717 */ 9718 list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t), 9719 offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node)); 9720 9721 /* 9722 * This is a list of pointers to log blocks that are still present 9723 * on the device. 9724 */ 9725 list_create(&adddev->l2ad_lbptr_list, sizeof (l2arc_lb_ptr_buf_t), 9726 offsetof(l2arc_lb_ptr_buf_t, node)); 9727 9728 vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand); 9729 zfs_refcount_create(&adddev->l2ad_alloc); 9730 zfs_refcount_create(&adddev->l2ad_lb_asize); 9731 zfs_refcount_create(&adddev->l2ad_lb_count); 9732 9733 /* 9734 * Decide if dev is eligible for L2ARC rebuild or whole device 9735 * trimming. This has to happen before the device is added in the 9736 * cache device list and l2arc_dev_mtx is released. Otherwise 9737 * l2arc_feed_thread() might already start writing on the 9738 * device. 9739 */ 9740 l2arc_rebuild_dev(adddev, B_FALSE); 9741 9742 /* 9743 * Add device to global list 9744 */ 9745 mutex_enter(&l2arc_dev_mtx); 9746 list_insert_head(l2arc_dev_list, adddev); 9747 atomic_inc_64(&l2arc_ndev); 9748 mutex_exit(&l2arc_dev_mtx); 9749 } 9750 9751 /* 9752 * Decide if a vdev is eligible for L2ARC rebuild, called from vdev_reopen() 9753 * in case of onlining a cache device. 9754 */ 9755 void 9756 l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen) 9757 { 9758 l2arc_dev_t *dev = NULL; 9759 9760 dev = l2arc_vdev_get(vd); 9761 ASSERT3P(dev, !=, NULL); 9762 9763 /* 9764 * In contrast to l2arc_add_vdev() we do not have to worry about 9765 * l2arc_feed_thread() invalidating previous content when onlining a 9766 * cache device. The device parameters (l2ad*) are not cleared when 9767 * offlining the device and writing new buffers will not invalidate 9768 * all previous content. In worst case only buffers that have not had 9769 * their log block written to the device will be lost. 
9770 * When onlining the cache device (ie offline->online without exporting 9771 * the pool in between) this happens: 9772 * vdev_reopen() -> vdev_open() -> l2arc_rebuild_vdev() 9773 * | | 9774 * vdev_is_dead() = B_FALSE l2ad_rebuild = B_TRUE 9775 * During the time where vdev_is_dead = B_FALSE and until l2ad_rebuild 9776 * is set to B_TRUE we might write additional buffers to the device. 9777 */ 9778 l2arc_rebuild_dev(dev, reopen); 9779 } 9780 9781 /* 9782 * Remove a vdev from the L2ARC. 9783 */ 9784 void 9785 l2arc_remove_vdev(vdev_t *vd) 9786 { 9787 l2arc_dev_t *remdev = NULL; 9788 9789 /* 9790 * Find the device by vdev 9791 */ 9792 remdev = l2arc_vdev_get(vd); 9793 ASSERT3P(remdev, !=, NULL); 9794 9795 /* 9796 * Cancel any ongoing or scheduled rebuild. 9797 */ 9798 mutex_enter(&l2arc_rebuild_thr_lock); 9799 if (remdev->l2ad_rebuild_began == B_TRUE) { 9800 remdev->l2ad_rebuild_cancel = B_TRUE; 9801 while (remdev->l2ad_rebuild == B_TRUE) 9802 cv_wait(&l2arc_rebuild_thr_cv, &l2arc_rebuild_thr_lock); 9803 } 9804 mutex_exit(&l2arc_rebuild_thr_lock); 9805 9806 /* 9807 * Remove device from global list 9808 */ 9809 mutex_enter(&l2arc_dev_mtx); 9810 list_remove(l2arc_dev_list, remdev); 9811 l2arc_dev_last = NULL; /* may have been invalidated */ 9812 atomic_dec_64(&l2arc_ndev); 9813 mutex_exit(&l2arc_dev_mtx); 9814 9815 /* 9816 * Clear all buflists and ARC references. L2ARC device flush. 9817 */ 9818 l2arc_evict(remdev, 0, B_TRUE); 9819 list_destroy(&remdev->l2ad_buflist); 9820 ASSERT(list_is_empty(&remdev->l2ad_lbptr_list)); 9821 list_destroy(&remdev->l2ad_lbptr_list); 9822 mutex_destroy(&remdev->l2ad_mtx); 9823 zfs_refcount_destroy(&remdev->l2ad_alloc); 9824 zfs_refcount_destroy(&remdev->l2ad_lb_asize); 9825 zfs_refcount_destroy(&remdev->l2ad_lb_count); 9826 kmem_free(remdev->l2ad_dev_hdr, remdev->l2ad_dev_hdr_asize); 9827 vmem_free(remdev, sizeof (l2arc_dev_t)); 9828 } 9829 9830 void 9831 l2arc_init(void) 9832 { 9833 l2arc_thread_exit = 0; 9834 l2arc_ndev = 0; 9835 9836 mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL); 9837 cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL); 9838 mutex_init(&l2arc_rebuild_thr_lock, NULL, MUTEX_DEFAULT, NULL); 9839 cv_init(&l2arc_rebuild_thr_cv, NULL, CV_DEFAULT, NULL); 9840 mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL); 9841 mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL); 9842 9843 l2arc_dev_list = &L2ARC_dev_list; 9844 l2arc_free_on_write = &L2ARC_free_on_write; 9845 list_create(l2arc_dev_list, sizeof (l2arc_dev_t), 9846 offsetof(l2arc_dev_t, l2ad_node)); 9847 list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t), 9848 offsetof(l2arc_data_free_t, l2df_list_node)); 9849 } 9850 9851 void 9852 l2arc_fini(void) 9853 { 9854 mutex_destroy(&l2arc_feed_thr_lock); 9855 cv_destroy(&l2arc_feed_thr_cv); 9856 mutex_destroy(&l2arc_rebuild_thr_lock); 9857 cv_destroy(&l2arc_rebuild_thr_cv); 9858 mutex_destroy(&l2arc_dev_mtx); 9859 mutex_destroy(&l2arc_free_on_write_mtx); 9860 9861 list_destroy(l2arc_dev_list); 9862 list_destroy(l2arc_free_on_write); 9863 } 9864 9865 void 9866 l2arc_start(void) 9867 { 9868 if (!(spa_mode_global & SPA_MODE_WRITE)) 9869 return; 9870 9871 (void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0, 9872 TS_RUN, defclsyspri); 9873 } 9874 9875 void 9876 l2arc_stop(void) 9877 { 9878 if (!(spa_mode_global & SPA_MODE_WRITE)) 9879 return; 9880 9881 mutex_enter(&l2arc_feed_thr_lock); 9882 cv_signal(&l2arc_feed_thr_cv); /* kick thread out of startup */ 9883 l2arc_thread_exit = 1; 9884 while (l2arc_thread_exit 
!= 0)
9885 cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock);
9886 mutex_exit(&l2arc_feed_thr_lock);
9887 }
9888
9889 /*
9890 * Punches out rebuild threads for the L2ARC devices in a spa. This should
9891 * be called after pool import from the spa async thread, since starting
9892 * these threads directly from spa_import() will make them part of the
9893 * "zpool import" context and delay process exit (and thus pool import).
9894 */
9895 void
9896 l2arc_spa_rebuild_start(spa_t *spa)
9897 {
9898 ASSERT(MUTEX_HELD(&spa_namespace_lock));
9899
9900 /*
9901 * Locate the spa's l2arc devices and kick off rebuild threads.
9902 */
9903 for (int i = 0; i < spa->spa_l2cache.sav_count; i++) {
9904 l2arc_dev_t *dev =
9905 l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]);
9906 if (dev == NULL) {
9907 /* Don't attempt a rebuild if the vdev is UNAVAIL */
9908 continue;
9909 }
9910 mutex_enter(&l2arc_rebuild_thr_lock);
9911 if (dev->l2ad_rebuild && !dev->l2ad_rebuild_cancel) {
9912 dev->l2ad_rebuild_began = B_TRUE;
9913 (void) thread_create(NULL, 0, l2arc_dev_rebuild_thread,
9914 dev, 0, &p0, TS_RUN, minclsyspri);
9915 }
9916 mutex_exit(&l2arc_rebuild_thr_lock);
9917 }
9918 }
9919
9920 /*
9921 * Main entry point for L2ARC rebuilding.
9922 */
9923 static __attribute__((noreturn)) void
9924 l2arc_dev_rebuild_thread(void *arg)
9925 {
9926 l2arc_dev_t *dev = arg;
9927
9928 VERIFY(!dev->l2ad_rebuild_cancel);
9929 VERIFY(dev->l2ad_rebuild);
9930 (void) l2arc_rebuild(dev);
9931 mutex_enter(&l2arc_rebuild_thr_lock);
9932 dev->l2ad_rebuild_began = B_FALSE;
9933 dev->l2ad_rebuild = B_FALSE;
9934 mutex_exit(&l2arc_rebuild_thr_lock);
9935
9936 thread_exit();
9937 }
9938
9939 /*
9940 * This function implements the actual L2ARC metadata rebuild. It walks
9941 * the log block chain and restores each block's contents to memory
9942 * (reconstructing arc_buf_hdr_t's).
9943 *
9944 * Operation stops under any of the following conditions:
9945 *
9946 * 1) We reach the end of the log block chain.
9947 * 2) We encounter *any* error condition (cksum errors, io errors).
9948 */
9949 static int
9950 l2arc_rebuild(l2arc_dev_t *dev)
9951 {
9952 vdev_t *vd = dev->l2ad_vdev;
9953 spa_t *spa = vd->vdev_spa;
9954 int err = 0;
9955 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
9956 l2arc_log_blk_phys_t *this_lb, *next_lb;
9957 zio_t *this_io = NULL, *next_io = NULL;
9958 l2arc_log_blkptr_t lbps[2];
9959 l2arc_lb_ptr_buf_t *lb_ptr_buf;
9960 boolean_t lock_held;
9961
9962 this_lb = vmem_zalloc(sizeof (*this_lb), KM_SLEEP);
9963 next_lb = vmem_zalloc(sizeof (*next_lb), KM_SLEEP);
9964
9965 /*
9966 * We prevent device removal while issuing reads to the device,
9967 * then during the rebuilding phases we drop this lock again so
9968 * that a spa_unload or device remove can be initiated - this is
9969 * safe, because the spa will signal us to stop before removing
9970 * our device and wait for us to stop.
9971 */
9972 spa_config_enter(spa, SCL_L2ARC, vd, RW_READER);
9973 lock_held = B_TRUE;
9974
9975 /*
9976 * Retrieve the persistent L2ARC device state.
9977 * L2BLK_GET_PSIZE returns aligned size for log blocks.
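	 * The evict hand is restored from the saved dh_evict, and the write
	 * hand resumes just past the most recently committed log block
	 * (dh_start_lbps[0]), as computed below.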
9978 */ 9979 dev->l2ad_evict = MAX(l2dhdr->dh_evict, dev->l2ad_start); 9980 dev->l2ad_hand = MAX(l2dhdr->dh_start_lbps[0].lbp_daddr + 9981 L2BLK_GET_PSIZE((&l2dhdr->dh_start_lbps[0])->lbp_prop), 9982 dev->l2ad_start); 9983 dev->l2ad_first = !!(l2dhdr->dh_flags & L2ARC_DEV_HDR_EVICT_FIRST); 9984 9985 vd->vdev_trim_action_time = l2dhdr->dh_trim_action_time; 9986 vd->vdev_trim_state = l2dhdr->dh_trim_state; 9987 9988 /* 9989 * If the zfs module parameter l2arc_rebuild_enabled is false, 9990 * we do not start the rebuild process. 9991 */ 9992 if (!l2arc_rebuild_enabled) 9993 goto out; 9994 9995 /* Prepare the rebuild process */ 9996 memcpy(lbps, l2dhdr->dh_start_lbps, sizeof (lbps)); 9997 9998 /* Start the rebuild process */ 9999 for (;;) { 10000 if (!l2arc_log_blkptr_valid(dev, &lbps[0])) 10001 break; 10002 10003 if ((err = l2arc_log_blk_read(dev, &lbps[0], &lbps[1], 10004 this_lb, next_lb, this_io, &next_io)) != 0) 10005 goto out; 10006 10007 /* 10008 * Our memory pressure valve. If the system is running low 10009 * on memory, rather than swamping memory with new ARC buf 10010 * hdrs, we opt not to rebuild the L2ARC. At this point, 10011 * however, we have already set up our L2ARC dev to chain in 10012 * new metadata log blocks, so the user may choose to offline/ 10013 * online the L2ARC dev at a later time (or re-import the pool) 10014 * to reconstruct it (when there's less memory pressure). 10015 */ 10016 if (l2arc_hdr_limit_reached()) { 10017 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_lowmem); 10018 cmn_err(CE_NOTE, "System running low on memory, " 10019 "aborting L2ARC rebuild."); 10020 err = SET_ERROR(ENOMEM); 10021 goto out; 10022 } 10023 10024 spa_config_exit(spa, SCL_L2ARC, vd); 10025 lock_held = B_FALSE; 10026 10027 /* 10028 * Now that we know that this log block checks out alright, we 10029 * can start reconstruction from it. 10030 * L2BLK_GET_PSIZE returns aligned size for log blocks. 10031 */ 10032 uint64_t asize = L2BLK_GET_PSIZE((&lbps[0])->lbp_prop); 10033 l2arc_log_blk_restore(dev, this_lb, asize); 10034 10035 /* 10036 * Log block restored; include its pointer in the list of 10037 * pointers to log blocks present in the L2ARC device. 10038 */ 10039 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 10040 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), 10041 KM_SLEEP); 10042 memcpy(lb_ptr_buf->lb_ptr, &lbps[0], 10043 sizeof (l2arc_log_blkptr_t)); 10044 mutex_enter(&dev->l2ad_mtx); 10045 list_insert_tail(&dev->l2ad_lbptr_list, lb_ptr_buf); 10046 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 10047 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 10048 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 10049 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 10050 mutex_exit(&dev->l2ad_mtx); 10051 vdev_space_update(vd, asize, 0, 0); 10052 10053 /* 10054 * Protection against loops of log blocks: 10055 * 10056 * l2ad_hand l2ad_evict 10057 * V V 10058 * l2ad_start |=======================================| l2ad_end 10059 * -----|||----|||---|||----||| 10060 * (3) (2) (1) (0) 10061 * ---|||---|||----|||---||| 10062 * (7) (6) (5) (4) 10063 * 10064 * In this situation the pointer of log block (4) passes 10065 * l2arc_log_blkptr_valid() but the log block should not be 10066 * restored as it is overwritten by the payload of log block 10067 * (0). Only log blocks (0)-(3) should be restored. We check 10068 * whether l2ad_evict lies between the payload starting 10069 * offset of the next log block (lbps[1].lbp_payload_start) 10070 * and the payload starting offset of the present log block 10071 * (lbps[0].lbp_payload_start). If true and this isn't the 10072 * first pass, we are looping from the beginning and we should 10073 * stop. 10074 */
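		/*
		 * A worked example with hypothetical offsets (illustrative
		 * only): suppose the present log block's payload starts at
		 * 120M, the next (older) log block's payload starts at 100M,
		 * and l2ad_evict = 110M. Then l2ad_evict falls between
		 * lbps[1].lbp_payload_start (100M) and
		 * lbps[0].lbp_payload_start (120M), so on a non-first pass
		 * the check below fires and the walk stops before the
		 * overwritten older block is restored.
		 */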
10075 if (l2arc_range_check_overlap(lbps[1].lbp_payload_start, 10076 lbps[0].lbp_payload_start, dev->l2ad_evict) && 10077 !dev->l2ad_first) 10078 goto out; 10079 10080 kpreempt(KPREEMPT_SYNC); 10081 for (;;) { 10082 mutex_enter(&l2arc_rebuild_thr_lock); 10083 if (dev->l2ad_rebuild_cancel) { 10084 dev->l2ad_rebuild = B_FALSE; 10085 cv_signal(&l2arc_rebuild_thr_cv); 10086 mutex_exit(&l2arc_rebuild_thr_lock); 10087 err = SET_ERROR(ECANCELED); 10088 goto out; 10089 } 10090 mutex_exit(&l2arc_rebuild_thr_lock); 10091 if (spa_config_tryenter(spa, SCL_L2ARC, vd, 10092 RW_READER)) { 10093 lock_held = B_TRUE; 10094 break; 10095 } 10096 /* 10097 * The L2ARC config lock is held by somebody as writer, 10098 * possibly because they are trying to remove us. They 10099 * likely want us to shut down, so after a little 10100 * delay, we check l2ad_rebuild_cancel and retry 10101 * the lock. 10102 */ 10103 delay(1); 10104 } 10105 10106 /* 10107 * Continue with the next log block. 10108 */ 10109 lbps[0] = lbps[1]; 10110 lbps[1] = this_lb->lb_prev_lbp; 10111 PTR_SWAP(this_lb, next_lb); 10112 this_io = next_io; 10113 next_io = NULL; 10114 } 10115 10116 if (this_io != NULL) 10117 l2arc_log_blk_fetch_abort(this_io); 10118 out: 10119 if (next_io != NULL) 10120 l2arc_log_blk_fetch_abort(next_io); 10121 vmem_free(this_lb, sizeof (*this_lb)); 10122 vmem_free(next_lb, sizeof (*next_lb)); 10123 10124 if (!l2arc_rebuild_enabled) { 10125 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 10126 "disabled"); 10127 } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) > 0) { 10128 ARCSTAT_BUMP(arcstat_l2_rebuild_success); 10129 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 10130 "successful, restored %llu blocks", 10131 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 10132 } else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) == 0) { 10133 /* 10134 * No error but also nothing restored, meaning the lbps array 10135 * in the device header points to invalid/non-present log 10136 * blocks. Reset the header. 10137 */ 10138 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 10139 "no valid log blocks"); 10140 memset(l2dhdr, 0, dev->l2ad_dev_hdr_asize); 10141 l2arc_dev_hdr_update(dev); 10142 } else if (err == ECANCELED) { 10143 /* 10144 * If the rebuild was canceled, do not log to the spa history 10145 * log, as the pool may be in the process of being removed. 10146 */ 10147 zfs_dbgmsg("L2ARC rebuild aborted, restored %llu blocks", 10148 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 10149 } else if (err != 0) { 10150 spa_history_log_internal(spa, "L2ARC rebuild", NULL, 10151 "aborted, restored %llu blocks", 10152 (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count)); 10153 } 10154 10155 if (lock_held) 10156 spa_config_exit(spa, SCL_L2ARC, vd); 10157 10158 return (err); 10159 } 10160 10161 /* 10162 * Attempts to read the device header on the provided L2ARC device and writes 10163 * it to `dev->l2ad_dev_hdr'. On success, this function returns 0, otherwise 10164 * the appropriate error code is returned.
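 *
 * A minimal usage sketch (hypothetical caller; hedged, since the real
 * caller is not shown in this excerpt). On success, dev->l2ad_dev_hdr
 * holds a validated, native-endian copy of the on-disk header:
 *
 *	if (l2arc_dev_hdr_read(dev) != 0)
 *		... treat the device as new and write a fresh header ...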
10165 */ 10166 static int 10167 l2arc_dev_hdr_read(l2arc_dev_t *dev) 10168 { 10169 int err; 10170 uint64_t guid; 10171 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10172 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 10173 abd_t *abd; 10174 10175 guid = spa_guid(dev->l2ad_vdev->vdev_spa); 10176 10177 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 10178 10179 err = zio_wait(zio_read_phys(NULL, dev->l2ad_vdev, 10180 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, 10181 ZIO_CHECKSUM_LABEL, NULL, NULL, ZIO_PRIORITY_SYNC_READ, 10182 ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | 10183 ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY | 10184 ZIO_FLAG_SPECULATIVE, B_FALSE)); 10185 10186 abd_free(abd); 10187 10188 if (err != 0) { 10189 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_dh_errors); 10190 zfs_dbgmsg("L2ARC IO error (%d) while reading device header, " 10191 "vdev guid: %llu", err, 10192 (u_longlong_t)dev->l2ad_vdev->vdev_guid); 10193 return (err); 10194 } 10195 10196 if (l2dhdr->dh_magic == BSWAP_64(L2ARC_DEV_HDR_MAGIC)) 10197 byteswap_uint64_array(l2dhdr, sizeof (*l2dhdr)); 10198 10199 if (l2dhdr->dh_magic != L2ARC_DEV_HDR_MAGIC || 10200 l2dhdr->dh_spa_guid != guid || 10201 l2dhdr->dh_vdev_guid != dev->l2ad_vdev->vdev_guid || 10202 l2dhdr->dh_version != L2ARC_PERSISTENT_VERSION || 10203 l2dhdr->dh_log_entries != dev->l2ad_log_entries || 10204 l2dhdr->dh_end != dev->l2ad_end || 10205 !l2arc_range_check_overlap(dev->l2ad_start, dev->l2ad_end, 10206 l2dhdr->dh_evict) || 10207 (l2dhdr->dh_trim_state != VDEV_TRIM_COMPLETE && 10208 l2arc_trim_ahead > 0)) { 10209 /* 10210 * We were asked to rebuild a device containing no actual 10211 * dev hdr, or containing a header from some other pool or 10212 * from another version of persistent L2ARC. 10213 */ 10214 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_unsupported); 10215 return (SET_ERROR(ENOTSUP)); 10216 } 10217 10218 return (0); 10219 } 10220 10221 /* 10222 * Reads L2ARC log blocks from storage and validates their contents. 10223 * 10224 * This function implements a simple fetcher to make sure that while 10225 * we're processing one buffer the L2ARC is already fetching the next 10226 * one in the chain. 10227 * 10228 * The arguments this_lbp and next_lbp point to the current and next log block 10229 * address in the block chain. Similarly, this_lb and next_lb hold the 10230 * l2arc_log_blk_phys_t's of the current and next L2ARC blk. 10231 * 10232 * The `this_io' and `next_io' arguments are used for block fetching. 10233 * When issuing the first blk IO during rebuild, you should pass NULL for 10234 * `this_io'. This function will then issue a sync IO to read the block and 10235 * also issue an async IO to fetch the next block in the block chain. The 10236 * fetched IO is returned in `next_io'. On subsequent calls to this 10237 * function, pass the value returned in `next_io' from the previous call 10238 * as `this_io' and a fresh `next_io' pointer to hold the next fetch IO. 10239 * Prior to the call, you should initialize your `next_io' pointer to be 10240 * NULL. If no fetch IO was issued, the pointer is left set at NULL. 10241 * 10242 * On success, this function returns 0, otherwise it returns an appropriate 10243 * error code. On error the fetching IO is aborted and cleared before 10244 * returning from this function. Therefore, if we return `success', the 10245 * caller can assume that we have taken care of cleanup of fetch IOs.
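 *
 * A condensed sketch of this calling pattern (illustrative only; the
 * real loop is in l2arc_rebuild() above):
 *
 *	zio_t *this_io = NULL, *next_io = NULL;
 *	while (l2arc_log_blkptr_valid(dev, &lbps[0])) {
 *		if (l2arc_log_blk_read(dev, &lbps[0], &lbps[1], this_lb,
 *		    next_lb, this_io, &next_io) != 0)
 *			break;	(any fetch IO was already cleaned up)
 *		... restore this_lb and advance lbps ...
 *		this_io = next_io;	(adopt the prefetched IO)
 *		next_io = NULL;
 *	}
 *	if (this_io != NULL)
 *		l2arc_log_blk_fetch_abort(this_io);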
10246 */ 10247 static int 10248 l2arc_log_blk_read(l2arc_dev_t *dev, 10249 const l2arc_log_blkptr_t *this_lbp, const l2arc_log_blkptr_t *next_lbp, 10250 l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb, 10251 zio_t *this_io, zio_t **next_io) 10252 { 10253 int err = 0; 10254 zio_cksum_t cksum; 10255 abd_t *abd = NULL; 10256 uint64_t asize; 10257 10258 ASSERT(this_lbp != NULL && next_lbp != NULL); 10259 ASSERT(this_lb != NULL && next_lb != NULL); 10260 ASSERT(next_io != NULL && *next_io == NULL); 10261 ASSERT(l2arc_log_blkptr_valid(dev, this_lbp)); 10262 10263 /* 10264 * Check to see if we have issued the IO for this log block in a 10265 * previous run. If not, this is the first call, so issue it now. 10266 */ 10267 if (this_io == NULL) { 10268 this_io = l2arc_log_blk_fetch(dev->l2ad_vdev, this_lbp, 10269 this_lb); 10270 } 10271 10272 /* 10273 * Peek to see if we can start issuing the next IO immediately. 10274 */ 10275 if (l2arc_log_blkptr_valid(dev, next_lbp)) { 10276 /* 10277 * Start issuing IO for the next log block early - this 10278 * should help keep the L2ARC device busy while we 10279 * decompress and restore this log block. 10280 */ 10281 *next_io = l2arc_log_blk_fetch(dev->l2ad_vdev, next_lbp, 10282 next_lb); 10283 } 10284 10285 /* Wait for the IO to read this log block to complete */ 10286 if ((err = zio_wait(this_io)) != 0) { 10287 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_io_errors); 10288 zfs_dbgmsg("L2ARC IO error (%d) while reading log block, " 10289 "offset: %llu, vdev guid: %llu", err, 10290 (u_longlong_t)this_lbp->lbp_daddr, 10291 (u_longlong_t)dev->l2ad_vdev->vdev_guid); 10292 goto cleanup; 10293 } 10294 10295 /* 10296 * Make sure the buffer checks out. 10297 * L2BLK_GET_PSIZE returns aligned size for log blocks. 10298 */ 10299 asize = L2BLK_GET_PSIZE((this_lbp)->lbp_prop); 10300 fletcher_4_native(this_lb, asize, NULL, &cksum); 10301 if (!ZIO_CHECKSUM_EQUAL(cksum, this_lbp->lbp_cksum)) { 10302 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_cksum_lb_errors); 10303 zfs_dbgmsg("L2ARC log block cksum failed, offset: %llu, " 10304 "vdev guid: %llu, l2ad_hand: %llu, l2ad_evict: %llu", 10305 (u_longlong_t)this_lbp->lbp_daddr, 10306 (u_longlong_t)dev->l2ad_vdev->vdev_guid, 10307 (u_longlong_t)dev->l2ad_hand, 10308 (u_longlong_t)dev->l2ad_evict); 10309 err = SET_ERROR(ECKSUM); 10310 goto cleanup; 10311 } 10312 10313 /* Now we can take our time decoding this buffer */ 10314 switch (L2BLK_GET_COMPRESS((this_lbp)->lbp_prop)) { 10315 case ZIO_COMPRESS_OFF: 10316 break; 10317 case ZIO_COMPRESS_LZ4: 10318 abd = abd_alloc_for_io(asize, B_TRUE); 10319 abd_copy_from_buf_off(abd, this_lb, 0, asize); 10320 if ((err = zio_decompress_data( 10321 L2BLK_GET_COMPRESS((this_lbp)->lbp_prop), 10322 abd, this_lb, asize, sizeof (*this_lb), NULL)) != 0) { 10323 err = SET_ERROR(EINVAL); 10324 goto cleanup; 10325 } 10326 break; 10327 default: 10328 err = SET_ERROR(EINVAL); 10329 goto cleanup; 10330 } 10331 if (this_lb->lb_magic == BSWAP_64(L2ARC_LOG_BLK_MAGIC)) 10332 byteswap_uint64_array(this_lb, sizeof (*this_lb)); 10333 if (this_lb->lb_magic != L2ARC_LOG_BLK_MAGIC) { 10334 err = SET_ERROR(EINVAL); 10335 goto cleanup; 10336 } 10337 cleanup: 10338 /* Abort an in-flight fetch I/O in case of error */ 10339 if (err != 0 && *next_io != NULL) { 10340 l2arc_log_blk_fetch_abort(*next_io); 10341 *next_io = NULL; 10342 } 10343 if (abd != NULL) 10344 abd_free(abd); 10345 return (err); 10346 } 10347 10348 /* 10349 * Restores the payload of a log block to ARC. 
This creates empty ARC hdr 10350 * entries which only contain an l2arc hdr, essentially restoring the 10351 * buffers to their L2ARC evicted state. This function also updates space 10352 * usage on the L2ARC vdev to make sure it tracks restored buffers. 10353 */ 10354 static void 10355 l2arc_log_blk_restore(l2arc_dev_t *dev, const l2arc_log_blk_phys_t *lb, 10356 uint64_t lb_asize) 10357 { 10358 uint64_t size = 0, asize = 0; 10359 uint64_t log_entries = dev->l2ad_log_entries; 10360 10361 /* 10362 * Usually arc_adapt() is called only for data, not headers, but 10363 * since we may allocate a significant amount of memory here, let ARC 10364 * grow its arc_c. 10365 */ 10366 arc_adapt(log_entries * HDR_L2ONLY_SIZE); 10367 10368 for (int i = log_entries - 1; i >= 0; i--) { 10369 /* 10370 * Restore goes in the reverse temporal direction to preserve 10371 * correct temporal ordering of buffers in the l2ad_buflist. 10372 * l2arc_hdr_restore also does a list_insert_tail instead of 10373 * list_insert_head on the l2ad_buflist: 10374 * 10375 * LIST l2ad_buflist LIST 10376 * HEAD <------ (time) ------ TAIL 10377 * direction +-----+-----+-----+-----+-----+ direction 10378 * of l2arc <== | buf | buf | buf | buf | buf | ===> of rebuild 10379 * fill +-----+-----+-----+-----+-----+ 10380 * ^ ^ 10381 * | | 10382 * | | 10383 * l2arc_feed_thread l2arc_rebuild 10384 * will place new bufs here restores bufs here 10385 * 10386 * During l2arc_rebuild() the device is not used by 10387 * l2arc_feed_thread() as dev->l2ad_rebuild is set to true. 10388 */ 10389 size += L2BLK_GET_LSIZE((&lb->lb_entries[i])->le_prop); 10390 asize += vdev_psize_to_asize(dev->l2ad_vdev, 10391 L2BLK_GET_PSIZE((&lb->lb_entries[i])->le_prop)); 10392 l2arc_hdr_restore(&lb->lb_entries[i], dev); 10393 } 10394 10395 /* 10396 * Record rebuild stats: 10397 * size Logical size of restored buffers in the L2ARC 10398 * asize Aligned size of restored buffers in the L2ARC 10399 */ 10400 ARCSTAT_INCR(arcstat_l2_rebuild_size, size); 10401 ARCSTAT_INCR(arcstat_l2_rebuild_asize, asize); 10402 ARCSTAT_INCR(arcstat_l2_rebuild_bufs, log_entries); 10403 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, lb_asize); 10404 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, asize / lb_asize); 10405 ARCSTAT_BUMP(arcstat_l2_rebuild_log_blks); 10406 } 10407 10408 /* 10409 * Restores a single ARC buf hdr from a log entry. The ARC buffer is put 10410 * into a state indicating that it has been evicted to L2ARC. 10411 */ 10412 static void 10413 l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, l2arc_dev_t *dev) 10414 { 10415 arc_buf_hdr_t *hdr, *exists; 10416 kmutex_t *hash_lock; 10417 arc_buf_contents_t type = L2BLK_GET_TYPE((le)->le_prop); 10418 uint64_t asize; 10419 10420 /* 10421 * Do all the allocation before grabbing any locks; this lets us 10422 * sleep if memory is full, and we don't have to deal with failed 10423 * allocations. 10424 */ 10425 hdr = arc_buf_alloc_l2only(L2BLK_GET_LSIZE((le)->le_prop), type, 10426 dev, le->le_dva, le->le_daddr, 10427 L2BLK_GET_PSIZE((le)->le_prop), le->le_birth, 10428 L2BLK_GET_COMPRESS((le)->le_prop), le->le_complevel, 10429 L2BLK_GET_PROTECTED((le)->le_prop), 10430 L2BLK_GET_PREFETCH((le)->le_prop), 10431 L2BLK_GET_STATE((le)->le_prop)); 10432 asize = vdev_psize_to_asize(dev->l2ad_vdev, 10433 L2BLK_GET_PSIZE((le)->le_prop)); 10434 10435 /* 10436 * vdev_space_update() has to be called before arc_hdr_destroy() to 10437 * avoid underflow, since the latter also calls vdev_space_update().
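 *
 * Sketch of the resulting ordering in this function (illustrative):
 *
 *	vdev_space_update(dev->l2ad_vdev, asize, 0, 0);	add space first
 *	exists = buf_hash_insert(hdr, &hash_lock);
 *	if (exists)
 *		arc_hdr_destroy(hdr);	subtracts the same asize again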
10438 */ 10439 l2arc_hdr_arcstats_increment(hdr); 10440 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10441 10442 mutex_enter(&dev->l2ad_mtx); 10443 list_insert_tail(&dev->l2ad_buflist, hdr); 10444 (void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr); 10445 mutex_exit(&dev->l2ad_mtx); 10446 10447 exists = buf_hash_insert(hdr, &hash_lock); 10448 if (exists) { 10449 /* Buffer was already cached, no need to restore it. */ 10450 arc_hdr_destroy(hdr); 10451 /* 10452 * If the buffer is already cached, check whether it has 10453 * L2ARC metadata. If not, fill it in and update the flag. 10454 * This is important in case of onlining a cache device, since 10455 * we previously evicted all L2ARC metadata from ARC. 10456 */ 10457 if (!HDR_HAS_L2HDR(exists)) { 10458 arc_hdr_set_flags(exists, ARC_FLAG_HAS_L2HDR); 10459 exists->b_l2hdr.b_dev = dev; 10460 exists->b_l2hdr.b_daddr = le->le_daddr; 10461 exists->b_l2hdr.b_arcs_state = 10462 L2BLK_GET_STATE((le)->le_prop); 10463 mutex_enter(&dev->l2ad_mtx); 10464 list_insert_tail(&dev->l2ad_buflist, exists); 10465 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 10466 arc_hdr_size(exists), exists); 10467 mutex_exit(&dev->l2ad_mtx); 10468 l2arc_hdr_arcstats_increment(exists); 10469 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10470 } 10471 ARCSTAT_BUMP(arcstat_l2_rebuild_bufs_precached); 10472 } 10473 10474 mutex_exit(hash_lock); 10475 } 10476 10477 /* 10478 * Starts an asynchronous read IO to read a log block. This is used in log 10479 * block reconstruction to start reading the next block before we are done 10480 * decoding and reconstructing the current block, to keep the l2arc device 10481 * nice and hot with read IO to process. 10482 * The returned zio will contain newly allocated memory buffers for the IO 10483 * data, which should then be freed by the caller once the zio is no longer 10484 * needed (i.e. due to it having completed). If you wish to abort this 10485 * zio, you should do so using l2arc_log_blk_fetch_abort, which takes 10486 * care of disposing of the allocated buffers correctly. 10487 */ 10488 static zio_t * 10489 l2arc_log_blk_fetch(vdev_t *vd, const l2arc_log_blkptr_t *lbp, 10490 l2arc_log_blk_phys_t *lb) 10491 { 10492 uint32_t asize; 10493 zio_t *pio; 10494 l2arc_read_callback_t *cb; 10495 10496 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 10497 asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 10498 ASSERT(asize <= sizeof (l2arc_log_blk_phys_t)); 10499 10500 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), KM_SLEEP); 10501 cb->l2rcb_abd = abd_get_from_buf(lb, asize); 10502 pio = zio_root(vd->vdev_spa, l2arc_blk_fetch_done, cb, 10503 ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | 10504 ZIO_FLAG_DONT_RETRY); 10505 (void) zio_nowait(zio_read_phys(pio, vd, lbp->lbp_daddr, asize, 10506 cb->l2rcb_abd, ZIO_CHECKSUM_OFF, NULL, NULL, 10507 ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_DONT_CACHE | ZIO_FLAG_CANFAIL | 10508 ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY, B_FALSE)); 10509 10510 return (pio); 10511 } 10512 10513 /* 10514 * Aborts a zio returned from l2arc_log_blk_fetch and frees the data 10515 * buffers allocated for it. 10516 */ 10517 static void 10518 l2arc_log_blk_fetch_abort(zio_t *zio) 10519 { 10520 (void) zio_wait(zio); 10521 } 10522 10523 /* 10524 * Creates a zio to update the device header on an l2arc device.
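 *
 * Illustrative call site (taken from l2arc_rebuild() above, where a
 * rebuild that found no valid log blocks clears the stale header):
 *
 *	memset(l2dhdr, 0, dev->l2ad_dev_hdr_asize);
 *	l2arc_dev_hdr_update(dev);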
10525 */ 10526 void 10527 l2arc_dev_hdr_update(l2arc_dev_t *dev) 10528 { 10529 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10530 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 10531 abd_t *abd; 10532 int err; 10533 10534 VERIFY(spa_config_held(dev->l2ad_spa, SCL_STATE_ALL, RW_READER)); 10535 10536 l2dhdr->dh_magic = L2ARC_DEV_HDR_MAGIC; 10537 l2dhdr->dh_version = L2ARC_PERSISTENT_VERSION; 10538 l2dhdr->dh_spa_guid = spa_guid(dev->l2ad_vdev->vdev_spa); 10539 l2dhdr->dh_vdev_guid = dev->l2ad_vdev->vdev_guid; 10540 l2dhdr->dh_log_entries = dev->l2ad_log_entries; 10541 l2dhdr->dh_evict = dev->l2ad_evict; 10542 l2dhdr->dh_start = dev->l2ad_start; 10543 l2dhdr->dh_end = dev->l2ad_end; 10544 l2dhdr->dh_lb_asize = zfs_refcount_count(&dev->l2ad_lb_asize); 10545 l2dhdr->dh_lb_count = zfs_refcount_count(&dev->l2ad_lb_count); 10546 l2dhdr->dh_flags = 0; 10547 l2dhdr->dh_trim_action_time = dev->l2ad_vdev->vdev_trim_action_time; 10548 l2dhdr->dh_trim_state = dev->l2ad_vdev->vdev_trim_state; 10549 if (dev->l2ad_first) 10550 l2dhdr->dh_flags |= L2ARC_DEV_HDR_EVICT_FIRST; 10551 10552 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 10553 10554 err = zio_wait(zio_write_phys(NULL, dev->l2ad_vdev, 10555 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, ZIO_CHECKSUM_LABEL, NULL, 10556 NULL, ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE)); 10557 10558 abd_free(abd); 10559 10560 if (err != 0) { 10561 zfs_dbgmsg("L2ARC IO error (%d) while writing device header, " 10562 "vdev guid: %llu", err, 10563 (u_longlong_t)dev->l2ad_vdev->vdev_guid); 10564 } 10565 } 10566 10567 /* 10568 * Commits a log block to the L2ARC device. This routine is invoked from 10569 * l2arc_write_buffers when the log block fills up. 10570 * This function allocates some memory to temporarily hold the serialized 10571 * buffer to be written. This is then released in l2arc_write_done. 10572 */ 10573 static void 10574 l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb) 10575 { 10576 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 10577 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10578 uint64_t psize, asize; 10579 zio_t *wzio; 10580 l2arc_lb_abd_buf_t *abd_buf; 10581 uint8_t *tmpbuf = NULL; 10582 l2arc_lb_ptr_buf_t *lb_ptr_buf; 10583 10584 VERIFY3S(dev->l2ad_log_ent_idx, ==, dev->l2ad_log_entries); 10585 10586 abd_buf = zio_buf_alloc(sizeof (*abd_buf)); 10587 abd_buf->abd = abd_get_from_buf(lb, sizeof (*lb)); 10588 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 10589 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), KM_SLEEP); 10590 10591 /* link the buffer into the block chain */ 10592 lb->lb_prev_lbp = l2dhdr->dh_start_lbps[1]; 10593 lb->lb_magic = L2ARC_LOG_BLK_MAGIC; 10594 10595 /* 10596 * l2arc_log_blk_commit() may be called multiple times during a single 10597 * l2arc_write_buffers() call. Save the allocated abd buffers in a list 10598 * so we can free them in l2arc_write_done() later on. 10599 */ 10600 list_insert_tail(&cb->l2wcb_abd_list, abd_buf); 10601 10602 /* try to compress the buffer */ 10603 psize = zio_compress_data(ZIO_COMPRESS_LZ4, 10604 abd_buf->abd, (void **) &tmpbuf, sizeof (*lb), 0); 10605 10606 /* a log block is never entirely zero */ 10607 ASSERT(psize != 0); 10608 asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 10609 ASSERT(asize <= sizeof (*lb)); 10610 10611 /* 10612 * Update the start log block pointer in the device header to point 10613 * to the log block we're about to write. 
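 *
 * Conceptually (illustrative), the header keeps the two most recent
 * log block pointers and is rotated on every commit:
 *
 *	dh_start_lbps[1] = dh_start_lbps[0];	previous head
 *	dh_start_lbps[0] = pointer to the block written below;
 *
 * Note that lb_prev_lbp was set above to the pre-rotation
 * dh_start_lbps[1], so a reader holding the two newest pointers can
 * always derive the next-older pointer as it walks the chain (see
 * l2arc_rebuild(), where lbps[1] is refilled from lb_prev_lbp).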
10614 */ 10615 l2dhdr->dh_start_lbps[1] = l2dhdr->dh_start_lbps[0]; 10616 l2dhdr->dh_start_lbps[0].lbp_daddr = dev->l2ad_hand; 10617 l2dhdr->dh_start_lbps[0].lbp_payload_asize = 10618 dev->l2ad_log_blk_payload_asize; 10619 l2dhdr->dh_start_lbps[0].lbp_payload_start = 10620 dev->l2ad_log_blk_payload_start; 10621 L2BLK_SET_LSIZE( 10622 (&l2dhdr->dh_start_lbps[0])->lbp_prop, sizeof (*lb)); 10623 L2BLK_SET_PSIZE( 10624 (&l2dhdr->dh_start_lbps[0])->lbp_prop, asize); 10625 L2BLK_SET_CHECKSUM( 10626 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10627 ZIO_CHECKSUM_FLETCHER_4); 10628 if (asize < sizeof (*lb)) { 10629 /* compression succeeded */ 10630 memset(tmpbuf + psize, 0, asize - psize); 10631 L2BLK_SET_COMPRESS( 10632 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10633 ZIO_COMPRESS_LZ4); 10634 } else { 10635 /* compression failed */ 10636 memcpy(tmpbuf, lb, sizeof (*lb)); 10637 L2BLK_SET_COMPRESS( 10638 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10639 ZIO_COMPRESS_OFF); 10640 } 10641 10642 /* checksum what we're about to write */ 10643 fletcher_4_native(tmpbuf, asize, NULL, 10644 &l2dhdr->dh_start_lbps[0].lbp_cksum); 10645 10646 abd_free(abd_buf->abd); 10647 10648 /* perform the write itself */ 10649 abd_buf->abd = abd_get_from_buf(tmpbuf, sizeof (*lb)); 10650 abd_take_ownership_of_buf(abd_buf->abd, B_TRUE); 10651 wzio = zio_write_phys(pio, dev->l2ad_vdev, dev->l2ad_hand, 10652 asize, abd_buf->abd, ZIO_CHECKSUM_OFF, NULL, NULL, 10653 ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE); 10654 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, zio_t *, wzio); 10655 (void) zio_nowait(wzio); 10656 10657 dev->l2ad_hand += asize; 10658 /* 10659 * Include the committed log block's pointer in the list of pointers 10660 * to log blocks present in the L2ARC device. 10661 */ 10662 memcpy(lb_ptr_buf->lb_ptr, &l2dhdr->dh_start_lbps[0], 10663 sizeof (l2arc_log_blkptr_t)); 10664 mutex_enter(&dev->l2ad_mtx); 10665 list_insert_head(&dev->l2ad_lbptr_list, lb_ptr_buf); 10666 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 10667 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 10668 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 10669 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 10670 mutex_exit(&dev->l2ad_mtx); 10671 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10672 10673 /* bump the kstats */ 10674 ARCSTAT_INCR(arcstat_l2_write_bytes, asize); 10675 ARCSTAT_BUMP(arcstat_l2_log_blk_writes); 10676 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, asize); 10677 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, 10678 dev->l2ad_log_blk_payload_asize / asize); 10679 10680 /* start a new log block */ 10681 dev->l2ad_log_ent_idx = 0; 10682 dev->l2ad_log_blk_payload_asize = 0; 10683 dev->l2ad_log_blk_payload_start = 0; 10684 } 10685 10686 /* 10687 * Validates an L2ARC log block address to make sure that it can be read 10688 * from the provided L2ARC device. 
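 *
 * A worked example with hypothetical values: with l2ad_start = 4M and
 * l2ad_end = 1G, a pointer with lbp_daddr = 8M, PSIZE = 4K and
 * lbp_payload_start = 7M covers the range [7M, 8M + 4K - 1]. It is
 * considered valid only if that whole range fits between l2ad_start
 * and l2ad_end and no part of it falls in the evicted region between
 * l2ad_hand and l2ad_evict, unless this is still the first pass around
 * the device (dev->l2ad_first), when nothing has been overwritten yet.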
10689 */ 10690 boolean_t 10691 l2arc_log_blkptr_valid(l2arc_dev_t *dev, const l2arc_log_blkptr_t *lbp) 10692 { 10693 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 10694 uint64_t asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 10695 uint64_t end = lbp->lbp_daddr + asize - 1; 10696 uint64_t start = lbp->lbp_payload_start; 10697 boolean_t evicted = B_FALSE; 10698 10699 /* 10700 * A log block is valid if all of the following conditions are true: 10701 * - it fits entirely (including its payload) between l2ad_start and 10702 * l2ad_end 10703 * - it has a valid size 10704 * - neither the log block itself nor part of its payload was evicted 10705 * by l2arc_evict(): 10706 * 10707 * l2ad_hand l2ad_evict 10708 * | | lbp_daddr 10709 * | start | | end 10710 * | | | | | 10711 * V V V V V 10712 * l2ad_start ============================================ l2ad_end 10713 * --------------------------|||| 10714 * ^ ^ 10715 * | log block 10716 * payload 10717 */ 10718 10719 evicted = 10720 l2arc_range_check_overlap(start, end, dev->l2ad_hand) || 10721 l2arc_range_check_overlap(start, end, dev->l2ad_evict) || 10722 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, start) || 10723 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, end); 10724 10725 return (start >= dev->l2ad_start && end <= dev->l2ad_end && 10726 asize > 0 && asize <= sizeof (l2arc_log_blk_phys_t) && 10727 (!evicted || dev->l2ad_first)); 10728 } 10729 10730 /* 10731 * Inserts ARC buffer header `hdr' into the current L2ARC log block on 10732 * the device. The buffer being inserted must be present in L2ARC. 10733 * Returns B_TRUE if the L2ARC log block is full and needs to be committed 10734 * to L2ARC, or B_FALSE if it still has room for more ARC buffers. 10735 */ 10736 static boolean_t 10737 l2arc_log_blk_insert(l2arc_dev_t *dev, const arc_buf_hdr_t *hdr) 10738 { 10739 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 10740 l2arc_log_ent_phys_t *le; 10741 10742 if (dev->l2ad_log_entries == 0) 10743 return (B_FALSE); 10744 10745 int index = dev->l2ad_log_ent_idx++; 10746 10747 ASSERT3S(index, <, dev->l2ad_log_entries); 10748 ASSERT(HDR_HAS_L2HDR(hdr)); 10749 10750 le = &lb->lb_entries[index]; 10751 memset(le, 0, sizeof (*le)); 10752 le->le_dva = hdr->b_dva; 10753 le->le_birth = hdr->b_birth; 10754 le->le_daddr = hdr->b_l2hdr.b_daddr; 10755 if (index == 0) 10756 dev->l2ad_log_blk_payload_start = le->le_daddr; 10757 L2BLK_SET_LSIZE((le)->le_prop, HDR_GET_LSIZE(hdr)); 10758 L2BLK_SET_PSIZE((le)->le_prop, HDR_GET_PSIZE(hdr)); 10759 L2BLK_SET_COMPRESS((le)->le_prop, HDR_GET_COMPRESS(hdr)); 10760 le->le_complevel = hdr->b_complevel; 10761 L2BLK_SET_TYPE((le)->le_prop, hdr->b_type); 10762 L2BLK_SET_PROTECTED((le)->le_prop, !!(HDR_PROTECTED(hdr))); 10763 L2BLK_SET_PREFETCH((le)->le_prop, !!(HDR_PREFETCH(hdr))); 10764 L2BLK_SET_STATE((le)->le_prop, hdr->b_l1hdr.b_state->arcs_state); 10765 10766 dev->l2ad_log_blk_payload_asize += vdev_psize_to_asize(dev->l2ad_vdev, 10767 HDR_GET_PSIZE(hdr)); 10768 10769 return (dev->l2ad_log_ent_idx == dev->l2ad_log_entries); 10770 } 10771 10772 /* 10773 * Checks whether a given L2ARC device address sits in a time-sequential 10774 * range. The trick here is that the L2ARC is a rotary buffer, so we can't 10775 * just do a range comparison, we need to handle the situation in which the 10776 * range wraps around the end of the L2ARC device. Arguments: 10777 * bottom -- Lower end of the range to check (written to earlier). 10778 * top -- Upper end of the range to check (written to later). 
check -- The address for which we want to determine whether it sits 10780 * between bottom and top. 10781 * 10782 * The 3-way conditional below represents the following cases: 10783 * 10784 * bottom < top : Sequentially ordered case: 10785 * <check>--------+-------------------+ 10786 * | (overlap here?) | 10787 * L2ARC dev V V 10788 * |---------------<bottom>============<top>--------------| 10789 * 10790 * bottom > top: Looped-around case: 10791 * <check>--------+------------------+ 10792 * | (overlap here?) | 10793 * L2ARC dev V V 10794 * |===============<top>---------------<bottom>===========| 10795 * ^ ^ 10796 * | (or here?) | 10797 * +---------------+---------<check> 10798 * 10799 * top == bottom : Just a single address comparison. 10800 */ 10801 boolean_t 10802 l2arc_range_check_overlap(uint64_t bottom, uint64_t top, uint64_t check) 10803 { 10804 if (bottom < top) 10805 return (bottom <= check && check <= top); 10806 else if (bottom > top) 10807 return (check <= top || bottom <= check); 10808 else 10809 return (check == top); 10810 } 10811 10812 EXPORT_SYMBOL(arc_buf_size); 10813 EXPORT_SYMBOL(arc_write); 10814 EXPORT_SYMBOL(arc_read); 10815 EXPORT_SYMBOL(arc_buf_info); 10816 EXPORT_SYMBOL(arc_getbuf_func); 10817 EXPORT_SYMBOL(arc_add_prune_callback); 10818 EXPORT_SYMBOL(arc_remove_prune_callback); 10819 10820 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min, param_set_arc_min, 10821 spl_param_get_u64, ZMOD_RW, "Minimum ARC size in bytes"); 10822 10823 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, max, param_set_arc_max, 10824 spl_param_get_u64, ZMOD_RW, "Maximum ARC size in bytes"); 10825 10826 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_balance, UINT, ZMOD_RW, 10827 "Balance between metadata and data on ghost hits."); 10828 10829 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, grow_retry, param_set_arc_int, 10830 param_get_uint, ZMOD_RW, "Seconds before growing ARC size"); 10831 10832 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, shrink_shift, param_set_arc_int, 10833 param_get_uint, ZMOD_RW, "log2(fraction of ARC to reclaim)"); 10834 10835 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, pc_percent, UINT, ZMOD_RW, 10836 "Percent of pagecache to reclaim ARC to"); 10837 10838 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, average_blocksize, UINT, ZMOD_RD, 10839 "Target average block size"); 10840 10841 ZFS_MODULE_PARAM(zfs, zfs_, compressed_arc_enabled, INT, ZMOD_RW, 10842 "Enable compressed ARC buffers"); 10843 10844 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prefetch_ms, param_set_arc_int, 10845 param_get_uint, ZMOD_RW, "Min life of prefetch block in ms"); 10846 10847 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prescient_prefetch_ms, 10848 param_set_arc_int, param_get_uint, ZMOD_RW, 10849 "Min life of prescient prefetched block in ms"); 10850 10851 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_max, U64, ZMOD_RW, 10852 "Max write bytes per interval"); 10853 10854 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_boost, U64, ZMOD_RW, 10855 "Extra write bytes during device warmup"); 10856 10857 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom, U64, ZMOD_RW, 10858 "Number of max device writes to precache"); 10859 10860 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom_boost, U64, ZMOD_RW, 10861 "Compressed l2arc_headroom multiplier"); 10862 10863 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, trim_ahead, U64, ZMOD_RW, 10864 "TRIM ahead L2ARC write size multiplier"); 10865 10866 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_secs, U64, ZMOD_RW, 10867 "Seconds between L2ARC writing"); 10868 10869 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_min_ms, U64, ZMOD_RW,
10870 "Min feed interval in milliseconds"); 10871 10872 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, noprefetch, INT, ZMOD_RW, 10873 "Skip caching prefetched buffers"); 10874 10875 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_again, INT, ZMOD_RW, 10876 "Turbo L2ARC warmup"); 10877 10878 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, norw, INT, ZMOD_RW, 10879 "No reads during writes"); 10880 10881 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, meta_percent, UINT, ZMOD_RW, 10882 "Percent of ARC size allowed for L2ARC-only headers"); 10883 10884 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_enabled, INT, ZMOD_RW, 10885 "Rebuild the L2ARC when importing a pool"); 10886 10887 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_blocks_min_l2size, U64, ZMOD_RW, 10888 "Min size in bytes to write rebuild log blocks in L2ARC"); 10889 10890 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, mfuonly, INT, ZMOD_RW, 10891 "Cache only MFU data from ARC into L2ARC"); 10892 10893 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, exclude_special, INT, ZMOD_RW, 10894 "Exclude dbufs on special vdevs from being cached to L2ARC if set."); 10895 10896 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, lotsfree_percent, param_set_arc_int, 10897 param_get_uint, ZMOD_RW, "System free memory I/O throttle in bytes"); 10898 10899 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, sys_free, param_set_arc_u64, 10900 spl_param_get_u64, ZMOD_RW, "System free memory target size in bytes"); 10901 10902 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit, param_set_arc_u64, 10903 spl_param_get_u64, ZMOD_RW, "Minimum bytes of dnodes in ARC"); 10904 10905 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit_percent, 10906 param_set_arc_int, param_get_uint, ZMOD_RW, 10907 "Percent of ARC meta buffers for dnodes"); 10908 10909 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, dnode_reduce_percent, UINT, ZMOD_RW, 10910 "Percentage of excess dnodes to try to unpin"); 10911 10912 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, eviction_pct, UINT, ZMOD_RW, 10913 "When full, ARC allocation waits for eviction of this % of alloc size"); 10914 10915 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, evict_batch_limit, UINT, ZMOD_RW, 10916 "The number of headers to evict per sublist before moving to the next"); 10917 10918 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, prune_task_threads, INT, ZMOD_RW, 10919 "Number of arc_prune threads"); 10920