/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License (the "License").
 * You may not use this file except in compliance with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or https://opensource.org/licenses/CDDL-1.0.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright (c) 2005, 2010, Oracle and/or its affiliates. All rights reserved.
 * Copyright (c) 2018, Joyent, Inc.
 * Copyright (c) 2011, 2020, Delphix. All rights reserved.
 * Copyright (c) 2014, Saso Kiselkov. All rights reserved.
 * Copyright (c) 2017, Nexenta Systems, Inc. All rights reserved.
 * Copyright (c) 2019, loli10K <ezomori.nozomu@gmail.com>. All rights reserved.
 * Copyright (c) 2020, George Amanakis. All rights reserved.
 * Copyright (c) 2019, Klara Inc.
 * Copyright (c) 2019, Allan Jude
 * Copyright (c) 2020, The FreeBSD Foundation [1]
 *
 * [1] Portions of this software were developed by Allan Jude
 *     under sponsorship from the FreeBSD Foundation.
 */

/*
 * DVA-based Adjustable Replacement Cache
 *
 * While much of the theory of operation used here is
 * based on the self-tuning, low overhead replacement cache
 * presented by Megiddo and Modha at FAST 2003, there are some
 * significant differences:
 *
 * 1. The Megiddo and Modha model assumes any page is evictable.
 *    Pages in its cache cannot be "locked" into memory. This makes
 *    the eviction algorithm simple: evict the last page in the list.
 *    This also makes the performance characteristics easy to reason
 *    about. Our cache is not so simple. At any given moment, some
 *    subset of the blocks in the cache are un-evictable because we
 *    have handed out a reference to them. Blocks are only evictable
 *    when there are no external references active. This makes
 *    eviction far more problematic: we choose to evict the evictable
 *    blocks that are the "lowest" in the list.
 *
 *    There are times when it is not possible to evict the requested
 *    space. In these circumstances we are unable to adjust the cache
 *    size. To prevent the cache growing unbounded at these times we
 *    implement a "cache throttle" that slows the flow of new data
 *    into the cache until we can make space available.
 *
 * 2. The Megiddo and Modha model assumes a fixed cache size.
 *    Pages are evicted when the cache is full and there is a cache
 *    miss. Our model has a variable sized cache. It grows with
 *    high use, but also tries to react to memory pressure from the
 *    operating system: decreasing its size when system memory is
 *    tight.
 *
 * 3. The Megiddo and Modha model assumes a fixed page size. All
 *    elements of the cache are therefore exactly the same size. So
 *    when adjusting the cache size following a cache miss, it's simply
 *    a matter of choosing a single page to evict. In our model, we
 *    have variable sized cache blocks (ranging from 512 bytes to
 *    128K bytes). We therefore choose a set of blocks to evict to make
 *    space for a cache miss that approximates as closely as possible
 *    the space used by the new block.
 *
 * See also: "ARC: A Self-Tuning, Low Overhead Replacement Cache"
 * by N. Megiddo & D. Modha, FAST 2003
 */

/*
 * The locking model:
 *
 * A new reference to a cache buffer can be obtained in two
 * ways: 1) via a hash table lookup using the DVA as a key,
 * or 2) via one of the ARC lists. The arc_read() interface
 * uses method 1, while the internal ARC algorithms for
 * adjusting the cache use method 2. We therefore provide two
 * types of locks: 1) the hash table lock array, and 2) the
 * ARC list locks.
 *
 * Buffers do not have their own mutexes, rather they rely on the
 * hash table mutexes for the bulk of their protection (i.e. most
 * fields in the arc_buf_hdr_t are protected by these mutexes).
 *
 * buf_hash_find() returns the appropriate mutex (held) when it
 * locates the requested buffer in the hash table. It returns
 * NULL for the mutex if the buffer was not in the table.
 *
 * buf_hash_remove() expects the appropriate hash mutex to be
 * already held before it is invoked.
 *
 * Each ARC state also has a mutex which is used to protect the
 * buffer list associated with the state. When attempting to
 * obtain a hash table lock while holding an ARC list lock you
 * must use mutex_tryenter() to avoid deadlock. Also note that
 * the active state mutex must be held before the ghost state mutex.
 *
 * It is also possible to register a callback which is run when the
 * metadata limit is reached and no buffers can be safely evicted. In
 * this case the arc user should drop a reference on some arc buffers so
 * they can be reclaimed. For example, when using the ZPL each dentry
 * holds a reference on a znode. These dentries must be pruned before
 * the arc buffer holding the znode can be safely evicted.
 *
 * Note that the majority of the performance stats are manipulated
 * with atomic operations.
 *
 * The L2ARC uses the l2ad_mtx on each vdev for the following:
 *
 *	- L2ARC buflist creation
 *	- L2ARC buflist eviction
 *	- L2ARC write completion, which walks L2ARC buflists
 *	- ARC header destruction, as it removes from L2ARC buflists
 *	- ARC header release, as it removes from L2ARC buflists
 */

/*
 * ARC operation:
 *
 * Every block that is in the ARC is tracked by an arc_buf_hdr_t structure.
 * This structure can point either to a block that is still in the cache or to
 * one that is only accessible in an L2 ARC device, or it can provide
 * information about a block that was recently evicted. If a block is
 * only accessible in the L2ARC, then the arc_buf_hdr_t only has enough
 * information to retrieve it from the L2ARC device. This information is
 * stored in the l2arc_buf_hdr_t sub-structure of the arc_buf_hdr_t. A block
 * that is in this state cannot access the data directly.
 *
 * Blocks that are actively being referenced or have not been evicted
 * are cached in the L1ARC. The L1ARC (l1arc_buf_hdr_t) is a structure within
 * the arc_buf_hdr_t that will point to the data block in memory. A block can
 * only be read by a consumer if it has an l1arc_buf_hdr_t. The L1ARC
 * caches data in two ways -- in a list of ARC buffers (arc_buf_t) and
 * also in the arc_buf_hdr_t's private physical data block pointer (b_pabd).
147 * 148 * The L1ARC's data pointer may or may not be uncompressed. The ARC has the 149 * ability to store the physical data (b_pabd) associated with the DVA of the 150 * arc_buf_hdr_t. Since the b_pabd is a copy of the on-disk physical block, 151 * it will match its on-disk compression characteristics. This behavior can be 152 * disabled by setting 'zfs_compressed_arc_enabled' to B_FALSE. When the 153 * compressed ARC functionality is disabled, the b_pabd will point to an 154 * uncompressed version of the on-disk data. 155 * 156 * Data in the L1ARC is not accessed by consumers of the ARC directly. Each 157 * arc_buf_hdr_t can have multiple ARC buffers (arc_buf_t) which reference it. 158 * Each ARC buffer (arc_buf_t) is being actively accessed by a specific ARC 159 * consumer. The ARC will provide references to this data and will keep it 160 * cached until it is no longer in use. The ARC caches only the L1ARC's physical 161 * data block and will evict any arc_buf_t that is no longer referenced. The 162 * amount of memory consumed by the arc_buf_ts' data buffers can be seen via the 163 * "overhead_size" kstat. 164 * 165 * Depending on the consumer, an arc_buf_t can be requested in uncompressed or 166 * compressed form. The typical case is that consumers will want uncompressed 167 * data, and when that happens a new data buffer is allocated where the data is 168 * decompressed for them to use. Currently the only consumer who wants 169 * compressed arc_buf_t's is "zfs send", when it streams data exactly as it 170 * exists on disk. When this happens, the arc_buf_t's data buffer is shared 171 * with the arc_buf_hdr_t. 172 * 173 * Here is a diagram showing an arc_buf_hdr_t referenced by two arc_buf_t's. The 174 * first one is owned by a compressed send consumer (and therefore references 175 * the same compressed data buffer as the arc_buf_hdr_t) and the second could be 176 * used by any other consumer (and has its own uncompressed copy of the data 177 * buffer). 178 * 179 * arc_buf_hdr_t 180 * +-----------+ 181 * | fields | 182 * | common to | 183 * | L1- and | 184 * | L2ARC | 185 * +-----------+ 186 * | l2arc_buf_hdr_t 187 * | | 188 * +-----------+ 189 * | l1arc_buf_hdr_t 190 * | | arc_buf_t 191 * | b_buf +------------>+-----------+ arc_buf_t 192 * | b_pabd +-+ |b_next +---->+-----------+ 193 * +-----------+ | |-----------| |b_next +-->NULL 194 * | |b_comp = T | +-----------+ 195 * | |b_data +-+ |b_comp = F | 196 * | +-----------+ | |b_data +-+ 197 * +->+------+ | +-----------+ | 198 * compressed | | | | 199 * data | |<--------------+ | uncompressed 200 * +------+ compressed, | data 201 * shared +-->+------+ 202 * data | | 203 * | | 204 * +------+ 205 * 206 * When a consumer reads a block, the ARC must first look to see if the 207 * arc_buf_hdr_t is cached. If the hdr is cached then the ARC allocates a new 208 * arc_buf_t and either copies uncompressed data into a new data buffer from an 209 * existing uncompressed arc_buf_t, decompresses the hdr's b_pabd buffer into a 210 * new data buffer, or shares the hdr's b_pabd buffer, depending on whether the 211 * hdr is compressed and the desired compression characteristics of the 212 * arc_buf_t consumer. If the arc_buf_t ends up sharing data with the 213 * arc_buf_hdr_t and both of them are uncompressed then the arc_buf_t must be 214 * the last buffer in the hdr's b_buf list, however a shared compressed buf can 215 * be anywhere in the hdr's list. 
216 * 217 * The diagram below shows an example of an uncompressed ARC hdr that is 218 * sharing its data with an arc_buf_t (note that the shared uncompressed buf is 219 * the last element in the buf list): 220 * 221 * arc_buf_hdr_t 222 * +-----------+ 223 * | | 224 * | | 225 * | | 226 * +-----------+ 227 * l2arc_buf_hdr_t| | 228 * | | 229 * +-----------+ 230 * l1arc_buf_hdr_t| | 231 * | | arc_buf_t (shared) 232 * | b_buf +------------>+---------+ arc_buf_t 233 * | | |b_next +---->+---------+ 234 * | b_pabd +-+ |---------| |b_next +-->NULL 235 * +-----------+ | | | +---------+ 236 * | |b_data +-+ | | 237 * | +---------+ | |b_data +-+ 238 * +->+------+ | +---------+ | 239 * | | | | 240 * uncompressed | | | | 241 * data +------+ | | 242 * ^ +->+------+ | 243 * | uncompressed | | | 244 * | data | | | 245 * | +------+ | 246 * +---------------------------------+ 247 * 248 * Writing to the ARC requires that the ARC first discard the hdr's b_pabd 249 * since the physical block is about to be rewritten. The new data contents 250 * will be contained in the arc_buf_t. As the I/O pipeline performs the write, 251 * it may compress the data before writing it to disk. The ARC will be called 252 * with the transformed data and will memcpy the transformed on-disk block into 253 * a newly allocated b_pabd. Writes are always done into buffers which have 254 * either been loaned (and hence are new and don't have other readers) or 255 * buffers which have been released (and hence have their own hdr, if there 256 * were originally other readers of the buf's original hdr). This ensures that 257 * the ARC only needs to update a single buf and its hdr after a write occurs. 258 * 259 * When the L2ARC is in use, it will also take advantage of the b_pabd. The 260 * L2ARC will always write the contents of b_pabd to the L2ARC. This means 261 * that when compressed ARC is enabled that the L2ARC blocks are identical 262 * to the on-disk block in the main data pool. This provides a significant 263 * advantage since the ARC can leverage the bp's checksum when reading from the 264 * L2ARC to determine if the contents are valid. However, if the compressed 265 * ARC is disabled, then the L2ARC's block must be transformed to look 266 * like the physical block in the main data pool before comparing the 267 * checksum and determining its validity. 268 * 269 * The L1ARC has a slightly different system for storing encrypted data. 270 * Raw (encrypted + possibly compressed) data has a few subtle differences from 271 * data that is just compressed. The biggest difference is that it is not 272 * possible to decrypt encrypted data (or vice-versa) if the keys aren't loaded. 273 * The other difference is that encryption cannot be treated as a suggestion. 274 * If a caller would prefer compressed data, but they actually wind up with 275 * uncompressed data the worst thing that could happen is there might be a 276 * performance hit. If the caller requests encrypted data, however, we must be 277 * sure they actually get it or else secret information could be leaked. Raw 278 * data is stored in hdr->b_crypt_hdr.b_rabd. An encrypted header, therefore, 279 * may have both an encrypted version and a decrypted version of its data at 280 * once. When a caller needs a raw arc_buf_t, it is allocated and the data is 281 * copied out of this header. To avoid complications with b_pabd, raw buffers 282 * cannot be shared. 
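 *
 * To tie the above together, here is a minimal, illustrative sketch of a
 * consumer pulling an uncompressed buffer through the ARC. It is not code
 * from this file: the arc_getbuf_func() callback and the exact arc_read()
 * argument list are assumed from the public ARC interface, and "spa", "bp"
 * and "zb" are assumed to be supplied by the caller.
 *
 *	arc_flags_t aflags = ARC_FLAG_WAIT;
 *	arc_buf_t *abuf = NULL;
 *
 *	int err = arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
 *	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, zb);
 *	if (err == 0) {
 *		... read abuf->b_data, which is uncompressed and, if the
 *		    hdr holds compressed data, was decompressed on demand ...
 *		arc_buf_destroy(abuf, &abuf);
 *	}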
283 */ 284 285 #include <sys/spa.h> 286 #include <sys/zio.h> 287 #include <sys/spa_impl.h> 288 #include <sys/zio_compress.h> 289 #include <sys/zio_checksum.h> 290 #include <sys/zfs_context.h> 291 #include <sys/arc.h> 292 #include <sys/zfs_refcount.h> 293 #include <sys/vdev.h> 294 #include <sys/vdev_impl.h> 295 #include <sys/dsl_pool.h> 296 #include <sys/multilist.h> 297 #include <sys/abd.h> 298 #include <sys/zil.h> 299 #include <sys/fm/fs/zfs.h> 300 #include <sys/callb.h> 301 #include <sys/kstat.h> 302 #include <sys/zthr.h> 303 #include <zfs_fletcher.h> 304 #include <sys/arc_impl.h> 305 #include <sys/trace_zfs.h> 306 #include <sys/aggsum.h> 307 #include <sys/wmsum.h> 308 #include <cityhash.h> 309 #include <sys/vdev_trim.h> 310 #include <sys/zfs_racct.h> 311 #include <sys/zstd/zstd.h> 312 313 #ifndef _KERNEL 314 /* set with ZFS_DEBUG=watch, to enable watchpoints on frozen buffers */ 315 boolean_t arc_watch = B_FALSE; 316 #endif 317 318 /* 319 * This thread's job is to keep enough free memory in the system, by 320 * calling arc_kmem_reap_soon() plus arc_reduce_target_size(), which improves 321 * arc_available_memory(). 322 */ 323 static zthr_t *arc_reap_zthr; 324 325 /* 326 * This thread's job is to keep arc_size under arc_c, by calling 327 * arc_evict(), which improves arc_is_overflowing(). 328 */ 329 static zthr_t *arc_evict_zthr; 330 static arc_buf_hdr_t **arc_state_evict_markers; 331 static int arc_state_evict_marker_count; 332 333 static kmutex_t arc_evict_lock; 334 static boolean_t arc_evict_needed = B_FALSE; 335 static clock_t arc_last_uncached_flush; 336 337 /* 338 * Count of bytes evicted since boot. 339 */ 340 static uint64_t arc_evict_count; 341 342 /* 343 * List of arc_evict_waiter_t's, representing threads waiting for the 344 * arc_evict_count to reach specific values. 345 */ 346 static list_t arc_evict_waiters; 347 348 /* 349 * When arc_is_overflowing(), arc_get_data_impl() waits for this percent of 350 * the requested amount of data to be evicted. For example, by default for 351 * every 2KB that's evicted, 1KB of it may be "reused" by a new allocation. 352 * Since this is above 100%, it ensures that progress is made towards getting 353 * arc_size under arc_c. Since this is finite, it ensures that allocations 354 * can still happen, even during the potentially long time that arc_size is 355 * more than arc_c. 356 */ 357 static uint_t zfs_arc_eviction_pct = 200; 358 359 /* 360 * The number of headers to evict in arc_evict_state_impl() before 361 * dropping the sublist lock and evicting from another sublist. A lower 362 * value means we're more likely to evict the "correct" header (i.e. the 363 * oldest header in the arc state), but comes with higher overhead 364 * (i.e. more invocations of arc_evict_state_impl()). 365 */ 366 static uint_t zfs_arc_evict_batch_limit = 10; 367 368 /* number of seconds before growing cache again */ 369 uint_t arc_grow_retry = 5; 370 371 /* 372 * Minimum time between calls to arc_kmem_reap_soon(). 373 */ 374 static const int arc_kmem_cache_reap_retry_ms = 1000; 375 376 /* shift of arc_c for calculating overflow limit in arc_get_data_impl */ 377 static int zfs_arc_overflow_shift = 8; 378 379 /* log2(fraction of arc to reclaim) */ 380 uint_t arc_shrink_shift = 7; 381 382 /* percent of pagecache to reclaim arc to */ 383 #ifdef _KERNEL 384 uint_t zfs_arc_pc_percent = 0; 385 #endif 386 387 /* 388 * log2(fraction of ARC which must be free to allow growing). 389 * I.e. 
If there is less than arc_c >> arc_no_grow_shift free memory, 390 * when reading a new block into the ARC, we will evict an equal-sized block 391 * from the ARC. 392 * 393 * This must be less than arc_shrink_shift, so that when we shrink the ARC, 394 * we will still not allow it to grow. 395 */ 396 uint_t arc_no_grow_shift = 5; 397 398 399 /* 400 * minimum lifespan of a prefetch block in clock ticks 401 * (initialized in arc_init()) 402 */ 403 static uint_t arc_min_prefetch_ms; 404 static uint_t arc_min_prescient_prefetch_ms; 405 406 /* 407 * If this percent of memory is free, don't throttle. 408 */ 409 uint_t arc_lotsfree_percent = 10; 410 411 /* 412 * The arc has filled available memory and has now warmed up. 413 */ 414 boolean_t arc_warm; 415 416 /* 417 * These tunables are for performance analysis. 418 */ 419 uint64_t zfs_arc_max = 0; 420 uint64_t zfs_arc_min = 0; 421 static uint64_t zfs_arc_dnode_limit = 0; 422 static uint_t zfs_arc_dnode_reduce_percent = 10; 423 static uint_t zfs_arc_grow_retry = 0; 424 static uint_t zfs_arc_shrink_shift = 0; 425 uint_t zfs_arc_average_blocksize = 8 * 1024; /* 8KB */ 426 427 /* 428 * ARC dirty data constraints for arc_tempreserve_space() throttle: 429 * * total dirty data limit 430 * * anon block dirty limit 431 * * each pool's anon allowance 432 */ 433 static const unsigned long zfs_arc_dirty_limit_percent = 50; 434 static const unsigned long zfs_arc_anon_limit_percent = 25; 435 static const unsigned long zfs_arc_pool_dirty_percent = 20; 436 437 /* 438 * Enable or disable compressed arc buffers. 439 */ 440 int zfs_compressed_arc_enabled = B_TRUE; 441 442 /* 443 * Balance between metadata and data on ghost hits. Values above 100 444 * increase metadata caching by proportionally reducing effect of ghost 445 * data hits on target data/metadata rate. 446 */ 447 static uint_t zfs_arc_meta_balance = 500; 448 449 /* 450 * Percentage that can be consumed by dnodes of ARC meta buffers. 
451 */ 452 static uint_t zfs_arc_dnode_limit_percent = 10; 453 454 /* 455 * These tunables are Linux-specific 456 */ 457 static uint64_t zfs_arc_sys_free = 0; 458 static uint_t zfs_arc_min_prefetch_ms = 0; 459 static uint_t zfs_arc_min_prescient_prefetch_ms = 0; 460 static uint_t zfs_arc_lotsfree_percent = 10; 461 462 /* 463 * Number of arc_prune threads 464 */ 465 static int zfs_arc_prune_task_threads = 1; 466 467 /* The 7 states: */ 468 arc_state_t ARC_anon; 469 arc_state_t ARC_mru; 470 arc_state_t ARC_mru_ghost; 471 arc_state_t ARC_mfu; 472 arc_state_t ARC_mfu_ghost; 473 arc_state_t ARC_l2c_only; 474 arc_state_t ARC_uncached; 475 476 arc_stats_t arc_stats = { 477 { "hits", KSTAT_DATA_UINT64 }, 478 { "iohits", KSTAT_DATA_UINT64 }, 479 { "misses", KSTAT_DATA_UINT64 }, 480 { "demand_data_hits", KSTAT_DATA_UINT64 }, 481 { "demand_data_iohits", KSTAT_DATA_UINT64 }, 482 { "demand_data_misses", KSTAT_DATA_UINT64 }, 483 { "demand_metadata_hits", KSTAT_DATA_UINT64 }, 484 { "demand_metadata_iohits", KSTAT_DATA_UINT64 }, 485 { "demand_metadata_misses", KSTAT_DATA_UINT64 }, 486 { "prefetch_data_hits", KSTAT_DATA_UINT64 }, 487 { "prefetch_data_iohits", KSTAT_DATA_UINT64 }, 488 { "prefetch_data_misses", KSTAT_DATA_UINT64 }, 489 { "prefetch_metadata_hits", KSTAT_DATA_UINT64 }, 490 { "prefetch_metadata_iohits", KSTAT_DATA_UINT64 }, 491 { "prefetch_metadata_misses", KSTAT_DATA_UINT64 }, 492 { "mru_hits", KSTAT_DATA_UINT64 }, 493 { "mru_ghost_hits", KSTAT_DATA_UINT64 }, 494 { "mfu_hits", KSTAT_DATA_UINT64 }, 495 { "mfu_ghost_hits", KSTAT_DATA_UINT64 }, 496 { "uncached_hits", KSTAT_DATA_UINT64 }, 497 { "deleted", KSTAT_DATA_UINT64 }, 498 { "mutex_miss", KSTAT_DATA_UINT64 }, 499 { "access_skip", KSTAT_DATA_UINT64 }, 500 { "evict_skip", KSTAT_DATA_UINT64 }, 501 { "evict_not_enough", KSTAT_DATA_UINT64 }, 502 { "evict_l2_cached", KSTAT_DATA_UINT64 }, 503 { "evict_l2_eligible", KSTAT_DATA_UINT64 }, 504 { "evict_l2_eligible_mfu", KSTAT_DATA_UINT64 }, 505 { "evict_l2_eligible_mru", KSTAT_DATA_UINT64 }, 506 { "evict_l2_ineligible", KSTAT_DATA_UINT64 }, 507 { "evict_l2_skip", KSTAT_DATA_UINT64 }, 508 { "hash_elements", KSTAT_DATA_UINT64 }, 509 { "hash_elements_max", KSTAT_DATA_UINT64 }, 510 { "hash_collisions", KSTAT_DATA_UINT64 }, 511 { "hash_chains", KSTAT_DATA_UINT64 }, 512 { "hash_chain_max", KSTAT_DATA_UINT64 }, 513 { "meta", KSTAT_DATA_UINT64 }, 514 { "pd", KSTAT_DATA_UINT64 }, 515 { "pm", KSTAT_DATA_UINT64 }, 516 { "c", KSTAT_DATA_UINT64 }, 517 { "c_min", KSTAT_DATA_UINT64 }, 518 { "c_max", KSTAT_DATA_UINT64 }, 519 { "size", KSTAT_DATA_UINT64 }, 520 { "compressed_size", KSTAT_DATA_UINT64 }, 521 { "uncompressed_size", KSTAT_DATA_UINT64 }, 522 { "overhead_size", KSTAT_DATA_UINT64 }, 523 { "hdr_size", KSTAT_DATA_UINT64 }, 524 { "data_size", KSTAT_DATA_UINT64 }, 525 { "metadata_size", KSTAT_DATA_UINT64 }, 526 { "dbuf_size", KSTAT_DATA_UINT64 }, 527 { "dnode_size", KSTAT_DATA_UINT64 }, 528 { "bonus_size", KSTAT_DATA_UINT64 }, 529 #if defined(COMPAT_FREEBSD11) 530 { "other_size", KSTAT_DATA_UINT64 }, 531 #endif 532 { "anon_size", KSTAT_DATA_UINT64 }, 533 { "anon_data", KSTAT_DATA_UINT64 }, 534 { "anon_metadata", KSTAT_DATA_UINT64 }, 535 { "anon_evictable_data", KSTAT_DATA_UINT64 }, 536 { "anon_evictable_metadata", KSTAT_DATA_UINT64 }, 537 { "mru_size", KSTAT_DATA_UINT64 }, 538 { "mru_data", KSTAT_DATA_UINT64 }, 539 { "mru_metadata", KSTAT_DATA_UINT64 }, 540 { "mru_evictable_data", KSTAT_DATA_UINT64 }, 541 { "mru_evictable_metadata", KSTAT_DATA_UINT64 }, 542 { "mru_ghost_size", KSTAT_DATA_UINT64 }, 543 { 
"mru_ghost_data", KSTAT_DATA_UINT64 }, 544 { "mru_ghost_metadata", KSTAT_DATA_UINT64 }, 545 { "mru_ghost_evictable_data", KSTAT_DATA_UINT64 }, 546 { "mru_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 547 { "mfu_size", KSTAT_DATA_UINT64 }, 548 { "mfu_data", KSTAT_DATA_UINT64 }, 549 { "mfu_metadata", KSTAT_DATA_UINT64 }, 550 { "mfu_evictable_data", KSTAT_DATA_UINT64 }, 551 { "mfu_evictable_metadata", KSTAT_DATA_UINT64 }, 552 { "mfu_ghost_size", KSTAT_DATA_UINT64 }, 553 { "mfu_ghost_data", KSTAT_DATA_UINT64 }, 554 { "mfu_ghost_metadata", KSTAT_DATA_UINT64 }, 555 { "mfu_ghost_evictable_data", KSTAT_DATA_UINT64 }, 556 { "mfu_ghost_evictable_metadata", KSTAT_DATA_UINT64 }, 557 { "uncached_size", KSTAT_DATA_UINT64 }, 558 { "uncached_data", KSTAT_DATA_UINT64 }, 559 { "uncached_metadata", KSTAT_DATA_UINT64 }, 560 { "uncached_evictable_data", KSTAT_DATA_UINT64 }, 561 { "uncached_evictable_metadata", KSTAT_DATA_UINT64 }, 562 { "l2_hits", KSTAT_DATA_UINT64 }, 563 { "l2_misses", KSTAT_DATA_UINT64 }, 564 { "l2_prefetch_asize", KSTAT_DATA_UINT64 }, 565 { "l2_mru_asize", KSTAT_DATA_UINT64 }, 566 { "l2_mfu_asize", KSTAT_DATA_UINT64 }, 567 { "l2_bufc_data_asize", KSTAT_DATA_UINT64 }, 568 { "l2_bufc_metadata_asize", KSTAT_DATA_UINT64 }, 569 { "l2_feeds", KSTAT_DATA_UINT64 }, 570 { "l2_rw_clash", KSTAT_DATA_UINT64 }, 571 { "l2_read_bytes", KSTAT_DATA_UINT64 }, 572 { "l2_write_bytes", KSTAT_DATA_UINT64 }, 573 { "l2_writes_sent", KSTAT_DATA_UINT64 }, 574 { "l2_writes_done", KSTAT_DATA_UINT64 }, 575 { "l2_writes_error", KSTAT_DATA_UINT64 }, 576 { "l2_writes_lock_retry", KSTAT_DATA_UINT64 }, 577 { "l2_evict_lock_retry", KSTAT_DATA_UINT64 }, 578 { "l2_evict_reading", KSTAT_DATA_UINT64 }, 579 { "l2_evict_l1cached", KSTAT_DATA_UINT64 }, 580 { "l2_free_on_write", KSTAT_DATA_UINT64 }, 581 { "l2_abort_lowmem", KSTAT_DATA_UINT64 }, 582 { "l2_cksum_bad", KSTAT_DATA_UINT64 }, 583 { "l2_io_error", KSTAT_DATA_UINT64 }, 584 { "l2_size", KSTAT_DATA_UINT64 }, 585 { "l2_asize", KSTAT_DATA_UINT64 }, 586 { "l2_hdr_size", KSTAT_DATA_UINT64 }, 587 { "l2_log_blk_writes", KSTAT_DATA_UINT64 }, 588 { "l2_log_blk_avg_asize", KSTAT_DATA_UINT64 }, 589 { "l2_log_blk_asize", KSTAT_DATA_UINT64 }, 590 { "l2_log_blk_count", KSTAT_DATA_UINT64 }, 591 { "l2_data_to_meta_ratio", KSTAT_DATA_UINT64 }, 592 { "l2_rebuild_success", KSTAT_DATA_UINT64 }, 593 { "l2_rebuild_unsupported", KSTAT_DATA_UINT64 }, 594 { "l2_rebuild_io_errors", KSTAT_DATA_UINT64 }, 595 { "l2_rebuild_dh_errors", KSTAT_DATA_UINT64 }, 596 { "l2_rebuild_cksum_lb_errors", KSTAT_DATA_UINT64 }, 597 { "l2_rebuild_lowmem", KSTAT_DATA_UINT64 }, 598 { "l2_rebuild_size", KSTAT_DATA_UINT64 }, 599 { "l2_rebuild_asize", KSTAT_DATA_UINT64 }, 600 { "l2_rebuild_bufs", KSTAT_DATA_UINT64 }, 601 { "l2_rebuild_bufs_precached", KSTAT_DATA_UINT64 }, 602 { "l2_rebuild_log_blks", KSTAT_DATA_UINT64 }, 603 { "memory_throttle_count", KSTAT_DATA_UINT64 }, 604 { "memory_direct_count", KSTAT_DATA_UINT64 }, 605 { "memory_indirect_count", KSTAT_DATA_UINT64 }, 606 { "memory_all_bytes", KSTAT_DATA_UINT64 }, 607 { "memory_free_bytes", KSTAT_DATA_UINT64 }, 608 { "memory_available_bytes", KSTAT_DATA_INT64 }, 609 { "arc_no_grow", KSTAT_DATA_UINT64 }, 610 { "arc_tempreserve", KSTAT_DATA_UINT64 }, 611 { "arc_loaned_bytes", KSTAT_DATA_UINT64 }, 612 { "arc_prune", KSTAT_DATA_UINT64 }, 613 { "arc_meta_used", KSTAT_DATA_UINT64 }, 614 { "arc_dnode_limit", KSTAT_DATA_UINT64 }, 615 { "async_upgrade_sync", KSTAT_DATA_UINT64 }, 616 { "predictive_prefetch", KSTAT_DATA_UINT64 }, 617 { "demand_hit_predictive_prefetch", 
KSTAT_DATA_UINT64 }, 618 { "demand_iohit_predictive_prefetch", KSTAT_DATA_UINT64 }, 619 { "prescient_prefetch", KSTAT_DATA_UINT64 }, 620 { "demand_hit_prescient_prefetch", KSTAT_DATA_UINT64 }, 621 { "demand_iohit_prescient_prefetch", KSTAT_DATA_UINT64 }, 622 { "arc_need_free", KSTAT_DATA_UINT64 }, 623 { "arc_sys_free", KSTAT_DATA_UINT64 }, 624 { "arc_raw_size", KSTAT_DATA_UINT64 }, 625 { "cached_only_in_progress", KSTAT_DATA_UINT64 }, 626 { "abd_chunk_waste_size", KSTAT_DATA_UINT64 }, 627 }; 628 629 arc_sums_t arc_sums; 630 631 #define ARCSTAT_MAX(stat, val) { \ 632 uint64_t m; \ 633 while ((val) > (m = arc_stats.stat.value.ui64) && \ 634 (m != atomic_cas_64(&arc_stats.stat.value.ui64, m, (val)))) \ 635 continue; \ 636 } 637 638 /* 639 * We define a macro to allow ARC hits/misses to be easily broken down by 640 * two separate conditions, giving a total of four different subtypes for 641 * each of hits and misses (so eight statistics total). 642 */ 643 #define ARCSTAT_CONDSTAT(cond1, stat1, notstat1, cond2, stat2, notstat2, stat) \ 644 if (cond1) { \ 645 if (cond2) { \ 646 ARCSTAT_BUMP(arcstat_##stat1##_##stat2##_##stat); \ 647 } else { \ 648 ARCSTAT_BUMP(arcstat_##stat1##_##notstat2##_##stat); \ 649 } \ 650 } else { \ 651 if (cond2) { \ 652 ARCSTAT_BUMP(arcstat_##notstat1##_##stat2##_##stat); \ 653 } else { \ 654 ARCSTAT_BUMP(arcstat_##notstat1##_##notstat2##_##stat);\ 655 } \ 656 } 657 658 /* 659 * This macro allows us to use kstats as floating averages. Each time we 660 * update this kstat, we first factor it and the update value by 661 * ARCSTAT_AVG_FACTOR to shrink the new value's contribution to the overall 662 * average. This macro assumes that integer loads and stores are atomic, but 663 * is not safe for multiple writers updating the kstat in parallel (only the 664 * last writer's update will remain). 665 */ 666 #define ARCSTAT_F_AVG_FACTOR 3 667 #define ARCSTAT_F_AVG(stat, value) \ 668 do { \ 669 uint64_t x = ARCSTAT(stat); \ 670 x = x - x / ARCSTAT_F_AVG_FACTOR + \ 671 (value) / ARCSTAT_F_AVG_FACTOR; \ 672 ARCSTAT(stat) = x; \ 673 } while (0) 674 675 static kstat_t *arc_ksp; 676 677 /* 678 * There are several ARC variables that are critical to export as kstats -- 679 * but we don't want to have to grovel around in the kstat whenever we wish to 680 * manipulate them. For these variables, we therefore define them to be in 681 * terms of the statistic variable. This assures that we are not introducing 682 * the possibility of inconsistency by having shadow copies of the variables, 683 * while still allowing the code to be readable. 
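 *
 * As an illustration of the pattern (a sketch; ARCSTAT() is the arc_stats
 * accessor macro used throughout this file, assumed to expand to the kstat's
 * value field):
 *
 *	#define arc_tempreserve	ARCSTAT(arcstat_tempreserve)
 *
 * behaves as if it were written
 *
 *	arc_stats.arcstat_tempreserve.value.ui64
 *
 * so every read and update of arc_tempreserve operates directly on the
 * exported kstat, with no shadow copy to keep in sync.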
684 */ 685 #define arc_tempreserve ARCSTAT(arcstat_tempreserve) 686 #define arc_loaned_bytes ARCSTAT(arcstat_loaned_bytes) 687 #define arc_dnode_limit ARCSTAT(arcstat_dnode_limit) /* max size for dnodes */ 688 #define arc_need_free ARCSTAT(arcstat_need_free) /* waiting to be evicted */ 689 690 hrtime_t arc_growtime; 691 list_t arc_prune_list; 692 kmutex_t arc_prune_mtx; 693 taskq_t *arc_prune_taskq; 694 695 #define GHOST_STATE(state) \ 696 ((state) == arc_mru_ghost || (state) == arc_mfu_ghost || \ 697 (state) == arc_l2c_only) 698 699 #define HDR_IN_HASH_TABLE(hdr) ((hdr)->b_flags & ARC_FLAG_IN_HASH_TABLE) 700 #define HDR_IO_IN_PROGRESS(hdr) ((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) 701 #define HDR_IO_ERROR(hdr) ((hdr)->b_flags & ARC_FLAG_IO_ERROR) 702 #define HDR_PREFETCH(hdr) ((hdr)->b_flags & ARC_FLAG_PREFETCH) 703 #define HDR_PRESCIENT_PREFETCH(hdr) \ 704 ((hdr)->b_flags & ARC_FLAG_PRESCIENT_PREFETCH) 705 #define HDR_COMPRESSION_ENABLED(hdr) \ 706 ((hdr)->b_flags & ARC_FLAG_COMPRESSED_ARC) 707 708 #define HDR_L2CACHE(hdr) ((hdr)->b_flags & ARC_FLAG_L2CACHE) 709 #define HDR_UNCACHED(hdr) ((hdr)->b_flags & ARC_FLAG_UNCACHED) 710 #define HDR_L2_READING(hdr) \ 711 (((hdr)->b_flags & ARC_FLAG_IO_IN_PROGRESS) && \ 712 ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR)) 713 #define HDR_L2_WRITING(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITING) 714 #define HDR_L2_EVICTED(hdr) ((hdr)->b_flags & ARC_FLAG_L2_EVICTED) 715 #define HDR_L2_WRITE_HEAD(hdr) ((hdr)->b_flags & ARC_FLAG_L2_WRITE_HEAD) 716 #define HDR_PROTECTED(hdr) ((hdr)->b_flags & ARC_FLAG_PROTECTED) 717 #define HDR_NOAUTH(hdr) ((hdr)->b_flags & ARC_FLAG_NOAUTH) 718 #define HDR_SHARED_DATA(hdr) ((hdr)->b_flags & ARC_FLAG_SHARED_DATA) 719 720 #define HDR_ISTYPE_METADATA(hdr) \ 721 ((hdr)->b_flags & ARC_FLAG_BUFC_METADATA) 722 #define HDR_ISTYPE_DATA(hdr) (!HDR_ISTYPE_METADATA(hdr)) 723 724 #define HDR_HAS_L1HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L1HDR) 725 #define HDR_HAS_L2HDR(hdr) ((hdr)->b_flags & ARC_FLAG_HAS_L2HDR) 726 #define HDR_HAS_RABD(hdr) \ 727 (HDR_HAS_L1HDR(hdr) && HDR_PROTECTED(hdr) && \ 728 (hdr)->b_crypt_hdr.b_rabd != NULL) 729 #define HDR_ENCRYPTED(hdr) \ 730 (HDR_PROTECTED(hdr) && DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot)) 731 #define HDR_AUTHENTICATED(hdr) \ 732 (HDR_PROTECTED(hdr) && !DMU_OT_IS_ENCRYPTED((hdr)->b_crypt_hdr.b_ot)) 733 734 /* For storing compression mode in b_flags */ 735 #define HDR_COMPRESS_OFFSET (highbit64(ARC_FLAG_COMPRESS_0) - 1) 736 737 #define HDR_GET_COMPRESS(hdr) ((enum zio_compress)BF32_GET((hdr)->b_flags, \ 738 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS)) 739 #define HDR_SET_COMPRESS(hdr, cmp) BF32_SET((hdr)->b_flags, \ 740 HDR_COMPRESS_OFFSET, SPA_COMPRESSBITS, (cmp)); 741 742 #define ARC_BUF_LAST(buf) ((buf)->b_next == NULL) 743 #define ARC_BUF_SHARED(buf) ((buf)->b_flags & ARC_BUF_FLAG_SHARED) 744 #define ARC_BUF_COMPRESSED(buf) ((buf)->b_flags & ARC_BUF_FLAG_COMPRESSED) 745 #define ARC_BUF_ENCRYPTED(buf) ((buf)->b_flags & ARC_BUF_FLAG_ENCRYPTED) 746 747 /* 748 * Other sizes 749 */ 750 751 #define HDR_FULL_CRYPT_SIZE ((int64_t)sizeof (arc_buf_hdr_t)) 752 #define HDR_FULL_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_crypt_hdr)) 753 #define HDR_L2ONLY_SIZE ((int64_t)offsetof(arc_buf_hdr_t, b_l1hdr)) 754 755 /* 756 * Hash table routines 757 */ 758 759 #define BUF_LOCKS 2048 760 typedef struct buf_hash_table { 761 uint64_t ht_mask; 762 arc_buf_hdr_t **ht_table; 763 kmutex_t ht_locks[BUF_LOCKS] ____cacheline_aligned; 764 } buf_hash_table_t; 765 766 static buf_hash_table_t buf_hash_table; 767 768 #define 
BUF_HASH_INDEX(spa, dva, birth) \
	(buf_hash(spa, dva, birth) & buf_hash_table.ht_mask)
#define	BUF_HASH_LOCK(idx) (&buf_hash_table.ht_locks[idx & (BUF_LOCKS-1)])
#define	HDR_LOCK(hdr) \
	(BUF_HASH_LOCK(BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth)))

uint64_t zfs_crc64_table[256];

/*
 * Level 2 ARC
 */

#define	L2ARC_WRITE_SIZE	(8 * 1024 * 1024)	/* initial write max */
#define	L2ARC_HEADROOM		2			/* num of writes */

/*
 * If we discover during ARC scan any buffers to be compressed, we boost
 * our headroom for the next scanning cycle by this percentage multiple.
 */
#define	L2ARC_HEADROOM_BOOST	200
#define	L2ARC_FEED_SECS		1		/* caching interval secs */
#define	L2ARC_FEED_MIN_MS	200		/* min caching interval ms */

/*
 * We can feed L2ARC from two states of ARC buffers, mru and mfu,
 * and each of those states has two types: data and metadata.
 */
#define	L2ARC_FEED_TYPES	4

/* L2ARC Performance Tunables */
uint64_t l2arc_write_max = L2ARC_WRITE_SIZE;	/* def max write size */
uint64_t l2arc_write_boost = L2ARC_WRITE_SIZE;	/* extra warmup write */
uint64_t l2arc_headroom = L2ARC_HEADROOM;	/* # of dev writes */
uint64_t l2arc_headroom_boost = L2ARC_HEADROOM_BOOST;
uint64_t l2arc_feed_secs = L2ARC_FEED_SECS;	/* interval seconds */
uint64_t l2arc_feed_min_ms = L2ARC_FEED_MIN_MS;	/* min interval msecs */
int l2arc_noprefetch = B_TRUE;			/* don't cache prefetch bufs */
int l2arc_feed_again = B_TRUE;			/* turbo warmup */
int l2arc_norw = B_FALSE;			/* no reads during writes */
static uint_t l2arc_meta_percent = 33;		/* limit on headers size */

/*
 * L2ARC Internals
 */
static list_t L2ARC_dev_list;			/* device list */
static list_t *l2arc_dev_list;			/* device list pointer */
static kmutex_t l2arc_dev_mtx;			/* device list mutex */
static l2arc_dev_t *l2arc_dev_last;		/* last device used */
static list_t L2ARC_free_on_write;		/* free after write buf list */
static list_t *l2arc_free_on_write;		/* free after write list ptr */
static kmutex_t l2arc_free_on_write_mtx;	/* mutex for list */
static uint64_t l2arc_ndev;			/* number of devices */

typedef struct l2arc_read_callback {
	arc_buf_hdr_t		*l2rcb_hdr;	/* read header */
	blkptr_t		l2rcb_bp;	/* original blkptr */
	zbookmark_phys_t	l2rcb_zb;	/* original bookmark */
	int			l2rcb_flags;	/* original flags */
	abd_t			*l2rcb_abd;	/* temporary buffer */
} l2arc_read_callback_t;

typedef struct l2arc_data_free {
	/* protected by l2arc_free_on_write_mtx */
	abd_t		*l2df_abd;
	size_t		l2df_size;
	arc_buf_contents_t l2df_type;
	list_node_t	l2df_list_node;
} l2arc_data_free_t;

typedef enum arc_fill_flags {
	ARC_FILL_LOCKED		= 1 << 0, /* hdr lock is held */
	ARC_FILL_COMPRESSED	= 1 << 1, /* fill with compressed data */
	ARC_FILL_ENCRYPTED	= 1 << 2, /* fill with encrypted data */
	ARC_FILL_NOAUTH		= 1 << 3, /* don't attempt to authenticate */
	ARC_FILL_IN_PLACE	= 1 << 4  /* fill in place (special case) */
} arc_fill_flags_t;

typedef enum arc_ovf_level {
	ARC_OVF_NONE,			/* ARC within target size. */
	ARC_OVF_SOME,			/* ARC is slightly overflowed. */
	ARC_OVF_SEVERE			/* ARC is severely overflowed. */
} arc_ovf_level_t;

static kmutex_t l2arc_feed_thr_lock;
static kcondvar_t l2arc_feed_thr_cv;
static uint8_t l2arc_thread_exit;

static kmutex_t l2arc_rebuild_thr_lock;
static kcondvar_t l2arc_rebuild_thr_cv;

enum arc_hdr_alloc_flags {
	ARC_HDR_ALLOC_RDATA = 0x1,
	ARC_HDR_USE_RESERVE = 0x4,
	ARC_HDR_ALLOC_LINEAR = 0x8,
};


static abd_t *arc_get_data_abd(arc_buf_hdr_t *, uint64_t, const void *, int);
static void *arc_get_data_buf(arc_buf_hdr_t *, uint64_t, const void *);
static void arc_get_data_impl(arc_buf_hdr_t *, uint64_t, const void *, int);
static void arc_free_data_abd(arc_buf_hdr_t *, abd_t *, uint64_t, const void *);
static void arc_free_data_buf(arc_buf_hdr_t *, void *, uint64_t, const void *);
static void arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size,
    const void *tag);
static void arc_hdr_free_abd(arc_buf_hdr_t *, boolean_t);
static void arc_hdr_alloc_abd(arc_buf_hdr_t *, int);
static void arc_hdr_destroy(arc_buf_hdr_t *);
static void arc_access(arc_buf_hdr_t *, arc_flags_t, boolean_t);
static void arc_buf_watch(arc_buf_t *);
static void arc_change_state(arc_state_t *, arc_buf_hdr_t *);

static arc_buf_contents_t arc_buf_type(arc_buf_hdr_t *);
static uint32_t arc_bufc_to_flags(arc_buf_contents_t);
static inline void arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);
static inline void arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags);

static boolean_t l2arc_write_eligible(uint64_t, arc_buf_hdr_t *);
static void l2arc_read_done(zio_t *);
static void l2arc_do_free_on_write(void);
static void l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr,
    boolean_t state_only);

#define	l2arc_hdr_arcstats_increment(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_TRUE, B_FALSE)
#define	l2arc_hdr_arcstats_decrement(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_FALSE, B_FALSE)
#define	l2arc_hdr_arcstats_increment_state(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_TRUE, B_TRUE)
#define	l2arc_hdr_arcstats_decrement_state(hdr) \
	l2arc_hdr_arcstats_update((hdr), B_FALSE, B_TRUE)

/*
 * l2arc_exclude_special : A zfs module parameter that controls whether buffers
 *		present on special vdevs are eligible for caching in L2ARC. If
 *		set to 1, exclude dbufs on special vdevs from being cached to
 *		L2ARC.
 */
int l2arc_exclude_special = 0;

/*
 * l2arc_mfuonly : A ZFS module parameter that controls whether only MFU
 *		metadata and data are cached from ARC into L2ARC.
 */
static int l2arc_mfuonly = 0;

/*
 * L2ARC TRIM
 * l2arc_trim_ahead : A ZFS module parameter that controls how much ahead of
 *		the current write size (l2arc_write_max) we should TRIM if we
 *		have filled the device. It is defined as a percentage of the
 *		write size. If set to 100 we trim twice the space required to
 *		accommodate upcoming writes. A minimum of 64MB will be trimmed.
 *		It also enables TRIM of the whole L2ARC device upon creation or
 *		addition to an existing pool or if the header of the device is
 *		invalid upon importing a pool or onlining a cache device. The
 *		default is 0, which disables TRIM on L2ARC altogether as it can
 *		put significant stress on the underlying storage devices. This
 *		will vary depending on how well the specific device handles
 *		these commands.
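 *
 *		As a worked example of the sizing described above: with
 *		l2arc_write_max at its 8MB default, setting l2arc_trim_ahead
 *		to 100 asks for the write size plus 100% of it, i.e. roughly
 *		16MB of TRIM ahead of the write hand, so in that case the
 *		64MB minimum quoted above is what actually gets trimmed.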
927 */ 928 static uint64_t l2arc_trim_ahead = 0; 929 930 /* 931 * Performance tuning of L2ARC persistence: 932 * 933 * l2arc_rebuild_enabled : A ZFS module parameter that controls whether adding 934 * an L2ARC device (either at pool import or later) will attempt 935 * to rebuild L2ARC buffer contents. 936 * l2arc_rebuild_blocks_min_l2size : A ZFS module parameter that controls 937 * whether log blocks are written to the L2ARC device. If the L2ARC 938 * device is less than 1GB, the amount of data l2arc_evict() 939 * evicts is significant compared to the amount of restored L2ARC 940 * data. In this case do not write log blocks in L2ARC in order 941 * not to waste space. 942 */ 943 static int l2arc_rebuild_enabled = B_TRUE; 944 static uint64_t l2arc_rebuild_blocks_min_l2size = 1024 * 1024 * 1024; 945 946 /* L2ARC persistence rebuild control routines. */ 947 void l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen); 948 static __attribute__((noreturn)) void l2arc_dev_rebuild_thread(void *arg); 949 static int l2arc_rebuild(l2arc_dev_t *dev); 950 951 /* L2ARC persistence read I/O routines. */ 952 static int l2arc_dev_hdr_read(l2arc_dev_t *dev); 953 static int l2arc_log_blk_read(l2arc_dev_t *dev, 954 const l2arc_log_blkptr_t *this_lp, const l2arc_log_blkptr_t *next_lp, 955 l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb, 956 zio_t *this_io, zio_t **next_io); 957 static zio_t *l2arc_log_blk_fetch(vdev_t *vd, 958 const l2arc_log_blkptr_t *lp, l2arc_log_blk_phys_t *lb); 959 static void l2arc_log_blk_fetch_abort(zio_t *zio); 960 961 /* L2ARC persistence block restoration routines. */ 962 static void l2arc_log_blk_restore(l2arc_dev_t *dev, 963 const l2arc_log_blk_phys_t *lb, uint64_t lb_asize); 964 static void l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, 965 l2arc_dev_t *dev); 966 967 /* L2ARC persistence write I/O routines. */ 968 static uint64_t l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, 969 l2arc_write_callback_t *cb); 970 971 /* L2ARC persistence auxiliary routines. */ 972 boolean_t l2arc_log_blkptr_valid(l2arc_dev_t *dev, 973 const l2arc_log_blkptr_t *lbp); 974 static boolean_t l2arc_log_blk_insert(l2arc_dev_t *dev, 975 const arc_buf_hdr_t *ab); 976 boolean_t l2arc_range_check_overlap(uint64_t bottom, 977 uint64_t top, uint64_t check); 978 static void l2arc_blk_fetch_done(zio_t *zio); 979 static inline uint64_t 980 l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev); 981 982 /* 983 * We use Cityhash for this. It's fast, and has good hash properties without 984 * requiring any large static buffers. 
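 *
 * For illustration, this is how a header's identity maps onto the table and
 * its lock stripe (a sketch restating the BUF_HASH_INDEX and BUF_HASH_LOCK
 * macros defined earlier, not a separate interface):
 *
 *	idx  = buf_hash(spa, dva, birth) & buf_hash_table.ht_mask;
 *	lock = &buf_hash_table.ht_locks[idx & (BUF_LOCKS - 1)];
 *
 * Many buckets therefore share each of the BUF_LOCKS (2048) lock stripes,
 * and buf_hash_find() below returns with the matching stripe lock held.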
985 */ 986 static uint64_t 987 buf_hash(uint64_t spa, const dva_t *dva, uint64_t birth) 988 { 989 return (cityhash4(spa, dva->dva_word[0], dva->dva_word[1], birth)); 990 } 991 992 #define HDR_EMPTY(hdr) \ 993 ((hdr)->b_dva.dva_word[0] == 0 && \ 994 (hdr)->b_dva.dva_word[1] == 0) 995 996 #define HDR_EMPTY_OR_LOCKED(hdr) \ 997 (HDR_EMPTY(hdr) || MUTEX_HELD(HDR_LOCK(hdr))) 998 999 #define HDR_EQUAL(spa, dva, birth, hdr) \ 1000 ((hdr)->b_dva.dva_word[0] == (dva)->dva_word[0]) && \ 1001 ((hdr)->b_dva.dva_word[1] == (dva)->dva_word[1]) && \ 1002 ((hdr)->b_birth == birth) && ((hdr)->b_spa == spa) 1003 1004 static void 1005 buf_discard_identity(arc_buf_hdr_t *hdr) 1006 { 1007 hdr->b_dva.dva_word[0] = 0; 1008 hdr->b_dva.dva_word[1] = 0; 1009 hdr->b_birth = 0; 1010 } 1011 1012 static arc_buf_hdr_t * 1013 buf_hash_find(uint64_t spa, const blkptr_t *bp, kmutex_t **lockp) 1014 { 1015 const dva_t *dva = BP_IDENTITY(bp); 1016 uint64_t birth = BP_PHYSICAL_BIRTH(bp); 1017 uint64_t idx = BUF_HASH_INDEX(spa, dva, birth); 1018 kmutex_t *hash_lock = BUF_HASH_LOCK(idx); 1019 arc_buf_hdr_t *hdr; 1020 1021 mutex_enter(hash_lock); 1022 for (hdr = buf_hash_table.ht_table[idx]; hdr != NULL; 1023 hdr = hdr->b_hash_next) { 1024 if (HDR_EQUAL(spa, dva, birth, hdr)) { 1025 *lockp = hash_lock; 1026 return (hdr); 1027 } 1028 } 1029 mutex_exit(hash_lock); 1030 *lockp = NULL; 1031 return (NULL); 1032 } 1033 1034 /* 1035 * Insert an entry into the hash table. If there is already an element 1036 * equal to elem in the hash table, then the already existing element 1037 * will be returned and the new element will not be inserted. 1038 * Otherwise returns NULL. 1039 * If lockp == NULL, the caller is assumed to already hold the hash lock. 1040 */ 1041 static arc_buf_hdr_t * 1042 buf_hash_insert(arc_buf_hdr_t *hdr, kmutex_t **lockp) 1043 { 1044 uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth); 1045 kmutex_t *hash_lock = BUF_HASH_LOCK(idx); 1046 arc_buf_hdr_t *fhdr; 1047 uint32_t i; 1048 1049 ASSERT(!DVA_IS_EMPTY(&hdr->b_dva)); 1050 ASSERT(hdr->b_birth != 0); 1051 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 1052 1053 if (lockp != NULL) { 1054 *lockp = hash_lock; 1055 mutex_enter(hash_lock); 1056 } else { 1057 ASSERT(MUTEX_HELD(hash_lock)); 1058 } 1059 1060 for (fhdr = buf_hash_table.ht_table[idx], i = 0; fhdr != NULL; 1061 fhdr = fhdr->b_hash_next, i++) { 1062 if (HDR_EQUAL(hdr->b_spa, &hdr->b_dva, hdr->b_birth, fhdr)) 1063 return (fhdr); 1064 } 1065 1066 hdr->b_hash_next = buf_hash_table.ht_table[idx]; 1067 buf_hash_table.ht_table[idx] = hdr; 1068 arc_hdr_set_flags(hdr, ARC_FLAG_IN_HASH_TABLE); 1069 1070 /* collect some hash table performance data */ 1071 if (i > 0) { 1072 ARCSTAT_BUMP(arcstat_hash_collisions); 1073 if (i == 1) 1074 ARCSTAT_BUMP(arcstat_hash_chains); 1075 1076 ARCSTAT_MAX(arcstat_hash_chain_max, i); 1077 } 1078 uint64_t he = atomic_inc_64_nv( 1079 &arc_stats.arcstat_hash_elements.value.ui64); 1080 ARCSTAT_MAX(arcstat_hash_elements_max, he); 1081 1082 return (NULL); 1083 } 1084 1085 static void 1086 buf_hash_remove(arc_buf_hdr_t *hdr) 1087 { 1088 arc_buf_hdr_t *fhdr, **hdrp; 1089 uint64_t idx = BUF_HASH_INDEX(hdr->b_spa, &hdr->b_dva, hdr->b_birth); 1090 1091 ASSERT(MUTEX_HELD(BUF_HASH_LOCK(idx))); 1092 ASSERT(HDR_IN_HASH_TABLE(hdr)); 1093 1094 hdrp = &buf_hash_table.ht_table[idx]; 1095 while ((fhdr = *hdrp) != hdr) { 1096 ASSERT3P(fhdr, !=, NULL); 1097 hdrp = &fhdr->b_hash_next; 1098 } 1099 *hdrp = hdr->b_hash_next; 1100 hdr->b_hash_next = NULL; 1101 arc_hdr_clear_flags(hdr, ARC_FLAG_IN_HASH_TABLE); 1102 
1103 /* collect some hash table performance data */ 1104 atomic_dec_64(&arc_stats.arcstat_hash_elements.value.ui64); 1105 1106 if (buf_hash_table.ht_table[idx] && 1107 buf_hash_table.ht_table[idx]->b_hash_next == NULL) 1108 ARCSTAT_BUMPDOWN(arcstat_hash_chains); 1109 } 1110 1111 /* 1112 * Global data structures and functions for the buf kmem cache. 1113 */ 1114 1115 static kmem_cache_t *hdr_full_cache; 1116 static kmem_cache_t *hdr_full_crypt_cache; 1117 static kmem_cache_t *hdr_l2only_cache; 1118 static kmem_cache_t *buf_cache; 1119 1120 static void 1121 buf_fini(void) 1122 { 1123 #if defined(_KERNEL) 1124 /* 1125 * Large allocations which do not require contiguous pages 1126 * should be using vmem_free() in the linux kernel\ 1127 */ 1128 vmem_free(buf_hash_table.ht_table, 1129 (buf_hash_table.ht_mask + 1) * sizeof (void *)); 1130 #else 1131 kmem_free(buf_hash_table.ht_table, 1132 (buf_hash_table.ht_mask + 1) * sizeof (void *)); 1133 #endif 1134 for (int i = 0; i < BUF_LOCKS; i++) 1135 mutex_destroy(BUF_HASH_LOCK(i)); 1136 kmem_cache_destroy(hdr_full_cache); 1137 kmem_cache_destroy(hdr_full_crypt_cache); 1138 kmem_cache_destroy(hdr_l2only_cache); 1139 kmem_cache_destroy(buf_cache); 1140 } 1141 1142 /* 1143 * Constructor callback - called when the cache is empty 1144 * and a new buf is requested. 1145 */ 1146 static int 1147 hdr_full_cons(void *vbuf, void *unused, int kmflag) 1148 { 1149 (void) unused, (void) kmflag; 1150 arc_buf_hdr_t *hdr = vbuf; 1151 1152 memset(hdr, 0, HDR_FULL_SIZE); 1153 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 1154 cv_init(&hdr->b_l1hdr.b_cv, NULL, CV_DEFAULT, NULL); 1155 zfs_refcount_create(&hdr->b_l1hdr.b_refcnt); 1156 #ifdef ZFS_DEBUG 1157 mutex_init(&hdr->b_l1hdr.b_freeze_lock, NULL, MUTEX_DEFAULT, NULL); 1158 #endif 1159 multilist_link_init(&hdr->b_l1hdr.b_arc_node); 1160 list_link_init(&hdr->b_l2hdr.b_l2node); 1161 arc_space_consume(HDR_FULL_SIZE, ARC_SPACE_HDRS); 1162 1163 return (0); 1164 } 1165 1166 static int 1167 hdr_full_crypt_cons(void *vbuf, void *unused, int kmflag) 1168 { 1169 (void) unused; 1170 arc_buf_hdr_t *hdr = vbuf; 1171 1172 hdr_full_cons(vbuf, unused, kmflag); 1173 memset(&hdr->b_crypt_hdr, 0, sizeof (hdr->b_crypt_hdr)); 1174 arc_space_consume(sizeof (hdr->b_crypt_hdr), ARC_SPACE_HDRS); 1175 1176 return (0); 1177 } 1178 1179 static int 1180 hdr_l2only_cons(void *vbuf, void *unused, int kmflag) 1181 { 1182 (void) unused, (void) kmflag; 1183 arc_buf_hdr_t *hdr = vbuf; 1184 1185 memset(hdr, 0, HDR_L2ONLY_SIZE); 1186 arc_space_consume(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS); 1187 1188 return (0); 1189 } 1190 1191 static int 1192 buf_cons(void *vbuf, void *unused, int kmflag) 1193 { 1194 (void) unused, (void) kmflag; 1195 arc_buf_t *buf = vbuf; 1196 1197 memset(buf, 0, sizeof (arc_buf_t)); 1198 arc_space_consume(sizeof (arc_buf_t), ARC_SPACE_HDRS); 1199 1200 return (0); 1201 } 1202 1203 /* 1204 * Destructor callback - called when a cached buf is 1205 * no longer required. 
1206 */ 1207 static void 1208 hdr_full_dest(void *vbuf, void *unused) 1209 { 1210 (void) unused; 1211 arc_buf_hdr_t *hdr = vbuf; 1212 1213 ASSERT(HDR_EMPTY(hdr)); 1214 cv_destroy(&hdr->b_l1hdr.b_cv); 1215 zfs_refcount_destroy(&hdr->b_l1hdr.b_refcnt); 1216 #ifdef ZFS_DEBUG 1217 mutex_destroy(&hdr->b_l1hdr.b_freeze_lock); 1218 #endif 1219 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 1220 arc_space_return(HDR_FULL_SIZE, ARC_SPACE_HDRS); 1221 } 1222 1223 static void 1224 hdr_full_crypt_dest(void *vbuf, void *unused) 1225 { 1226 (void) vbuf, (void) unused; 1227 1228 hdr_full_dest(vbuf, unused); 1229 arc_space_return(sizeof (((arc_buf_hdr_t *)NULL)->b_crypt_hdr), 1230 ARC_SPACE_HDRS); 1231 } 1232 1233 static void 1234 hdr_l2only_dest(void *vbuf, void *unused) 1235 { 1236 (void) unused; 1237 arc_buf_hdr_t *hdr = vbuf; 1238 1239 ASSERT(HDR_EMPTY(hdr)); 1240 arc_space_return(HDR_L2ONLY_SIZE, ARC_SPACE_L2HDRS); 1241 } 1242 1243 static void 1244 buf_dest(void *vbuf, void *unused) 1245 { 1246 (void) unused; 1247 (void) vbuf; 1248 1249 arc_space_return(sizeof (arc_buf_t), ARC_SPACE_HDRS); 1250 } 1251 1252 static void 1253 buf_init(void) 1254 { 1255 uint64_t *ct = NULL; 1256 uint64_t hsize = 1ULL << 12; 1257 int i, j; 1258 1259 /* 1260 * The hash table is big enough to fill all of physical memory 1261 * with an average block size of zfs_arc_average_blocksize (default 8K). 1262 * By default, the table will take up 1263 * totalmem * sizeof(void*) / 8K (1MB per GB with 8-byte pointers). 1264 */ 1265 while (hsize * zfs_arc_average_blocksize < arc_all_memory()) 1266 hsize <<= 1; 1267 retry: 1268 buf_hash_table.ht_mask = hsize - 1; 1269 #if defined(_KERNEL) 1270 /* 1271 * Large allocations which do not require contiguous pages 1272 * should be using vmem_alloc() in the linux kernel 1273 */ 1274 buf_hash_table.ht_table = 1275 vmem_zalloc(hsize * sizeof (void*), KM_SLEEP); 1276 #else 1277 buf_hash_table.ht_table = 1278 kmem_zalloc(hsize * sizeof (void*), KM_NOSLEEP); 1279 #endif 1280 if (buf_hash_table.ht_table == NULL) { 1281 ASSERT(hsize > (1ULL << 8)); 1282 hsize >>= 1; 1283 goto retry; 1284 } 1285 1286 hdr_full_cache = kmem_cache_create("arc_buf_hdr_t_full", HDR_FULL_SIZE, 1287 0, hdr_full_cons, hdr_full_dest, NULL, NULL, NULL, 0); 1288 hdr_full_crypt_cache = kmem_cache_create("arc_buf_hdr_t_full_crypt", 1289 HDR_FULL_CRYPT_SIZE, 0, hdr_full_crypt_cons, hdr_full_crypt_dest, 1290 NULL, NULL, NULL, 0); 1291 hdr_l2only_cache = kmem_cache_create("arc_buf_hdr_t_l2only", 1292 HDR_L2ONLY_SIZE, 0, hdr_l2only_cons, hdr_l2only_dest, NULL, 1293 NULL, NULL, 0); 1294 buf_cache = kmem_cache_create("arc_buf_t", sizeof (arc_buf_t), 1295 0, buf_cons, buf_dest, NULL, NULL, NULL, 0); 1296 1297 for (i = 0; i < 256; i++) 1298 for (ct = zfs_crc64_table + i, *ct = i, j = 8; j > 0; j--) 1299 *ct = (*ct >> 1) ^ (-(*ct & 1) & ZFS_CRC64_POLY); 1300 1301 for (i = 0; i < BUF_LOCKS; i++) 1302 mutex_init(BUF_HASH_LOCK(i), NULL, MUTEX_DEFAULT, NULL); 1303 } 1304 1305 #define ARC_MINTIME (hz>>4) /* 62 ms */ 1306 1307 /* 1308 * This is the size that the buf occupies in memory. If the buf is compressed, 1309 * it will correspond to the compressed size. You should use this method of 1310 * getting the buf size unless you explicitly need the logical size. 1311 */ 1312 uint64_t 1313 arc_buf_size(arc_buf_t *buf) 1314 { 1315 return (ARC_BUF_COMPRESSED(buf) ? 
1316 HDR_GET_PSIZE(buf->b_hdr) : HDR_GET_LSIZE(buf->b_hdr)); 1317 } 1318 1319 uint64_t 1320 arc_buf_lsize(arc_buf_t *buf) 1321 { 1322 return (HDR_GET_LSIZE(buf->b_hdr)); 1323 } 1324 1325 /* 1326 * This function will return B_TRUE if the buffer is encrypted in memory. 1327 * This buffer can be decrypted by calling arc_untransform(). 1328 */ 1329 boolean_t 1330 arc_is_encrypted(arc_buf_t *buf) 1331 { 1332 return (ARC_BUF_ENCRYPTED(buf) != 0); 1333 } 1334 1335 /* 1336 * Returns B_TRUE if the buffer represents data that has not had its MAC 1337 * verified yet. 1338 */ 1339 boolean_t 1340 arc_is_unauthenticated(arc_buf_t *buf) 1341 { 1342 return (HDR_NOAUTH(buf->b_hdr) != 0); 1343 } 1344 1345 void 1346 arc_get_raw_params(arc_buf_t *buf, boolean_t *byteorder, uint8_t *salt, 1347 uint8_t *iv, uint8_t *mac) 1348 { 1349 arc_buf_hdr_t *hdr = buf->b_hdr; 1350 1351 ASSERT(HDR_PROTECTED(hdr)); 1352 1353 memcpy(salt, hdr->b_crypt_hdr.b_salt, ZIO_DATA_SALT_LEN); 1354 memcpy(iv, hdr->b_crypt_hdr.b_iv, ZIO_DATA_IV_LEN); 1355 memcpy(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN); 1356 *byteorder = (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? 1357 ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; 1358 } 1359 1360 /* 1361 * Indicates how this buffer is compressed in memory. If it is not compressed 1362 * the value will be ZIO_COMPRESS_OFF. It can be made normally readable with 1363 * arc_untransform() as long as it is also unencrypted. 1364 */ 1365 enum zio_compress 1366 arc_get_compression(arc_buf_t *buf) 1367 { 1368 return (ARC_BUF_COMPRESSED(buf) ? 1369 HDR_GET_COMPRESS(buf->b_hdr) : ZIO_COMPRESS_OFF); 1370 } 1371 1372 /* 1373 * Return the compression algorithm used to store this data in the ARC. If ARC 1374 * compression is enabled or this is an encrypted block, this will be the same 1375 * as what's used to store it on-disk. Otherwise, this will be ZIO_COMPRESS_OFF. 1376 */ 1377 static inline enum zio_compress 1378 arc_hdr_get_compress(arc_buf_hdr_t *hdr) 1379 { 1380 return (HDR_COMPRESSION_ENABLED(hdr) ? 1381 HDR_GET_COMPRESS(hdr) : ZIO_COMPRESS_OFF); 1382 } 1383 1384 uint8_t 1385 arc_get_complevel(arc_buf_t *buf) 1386 { 1387 return (buf->b_hdr->b_complevel); 1388 } 1389 1390 static inline boolean_t 1391 arc_buf_is_shared(arc_buf_t *buf) 1392 { 1393 boolean_t shared = (buf->b_data != NULL && 1394 buf->b_hdr->b_l1hdr.b_pabd != NULL && 1395 abd_is_linear(buf->b_hdr->b_l1hdr.b_pabd) && 1396 buf->b_data == abd_to_buf(buf->b_hdr->b_l1hdr.b_pabd)); 1397 IMPLY(shared, HDR_SHARED_DATA(buf->b_hdr)); 1398 IMPLY(shared, ARC_BUF_SHARED(buf)); 1399 IMPLY(shared, ARC_BUF_COMPRESSED(buf) || ARC_BUF_LAST(buf)); 1400 1401 /* 1402 * It would be nice to assert arc_can_share() too, but the "hdr isn't 1403 * already being shared" requirement prevents us from doing that. 1404 */ 1405 1406 return (shared); 1407 } 1408 1409 /* 1410 * Free the checksum associated with this header. If there is no checksum, this 1411 * is a no-op. 1412 */ 1413 static inline void 1414 arc_cksum_free(arc_buf_hdr_t *hdr) 1415 { 1416 #ifdef ZFS_DEBUG 1417 ASSERT(HDR_HAS_L1HDR(hdr)); 1418 1419 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1420 if (hdr->b_l1hdr.b_freeze_cksum != NULL) { 1421 kmem_free(hdr->b_l1hdr.b_freeze_cksum, sizeof (zio_cksum_t)); 1422 hdr->b_l1hdr.b_freeze_cksum = NULL; 1423 } 1424 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1425 #endif 1426 } 1427 1428 /* 1429 * Return true iff at least one of the bufs on hdr is not compressed. 1430 * Encrypted buffers count as compressed. 
1431 */ 1432 static boolean_t 1433 arc_hdr_has_uncompressed_buf(arc_buf_hdr_t *hdr) 1434 { 1435 ASSERT(hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY_OR_LOCKED(hdr)); 1436 1437 for (arc_buf_t *b = hdr->b_l1hdr.b_buf; b != NULL; b = b->b_next) { 1438 if (!ARC_BUF_COMPRESSED(b)) { 1439 return (B_TRUE); 1440 } 1441 } 1442 return (B_FALSE); 1443 } 1444 1445 1446 /* 1447 * If we've turned on the ZFS_DEBUG_MODIFY flag, verify that the buf's data 1448 * matches the checksum that is stored in the hdr. If there is no checksum, 1449 * or if the buf is compressed, this is a no-op. 1450 */ 1451 static void 1452 arc_cksum_verify(arc_buf_t *buf) 1453 { 1454 #ifdef ZFS_DEBUG 1455 arc_buf_hdr_t *hdr = buf->b_hdr; 1456 zio_cksum_t zc; 1457 1458 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1459 return; 1460 1461 if (ARC_BUF_COMPRESSED(buf)) 1462 return; 1463 1464 ASSERT(HDR_HAS_L1HDR(hdr)); 1465 1466 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1467 1468 if (hdr->b_l1hdr.b_freeze_cksum == NULL || HDR_IO_ERROR(hdr)) { 1469 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1470 return; 1471 } 1472 1473 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, &zc); 1474 if (!ZIO_CHECKSUM_EQUAL(*hdr->b_l1hdr.b_freeze_cksum, zc)) 1475 panic("buffer modified while frozen!"); 1476 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1477 #endif 1478 } 1479 1480 /* 1481 * This function makes the assumption that data stored in the L2ARC 1482 * will be transformed exactly as it is in the main pool. Because of 1483 * this we can verify the checksum against the reading process's bp. 1484 */ 1485 static boolean_t 1486 arc_cksum_is_equal(arc_buf_hdr_t *hdr, zio_t *zio) 1487 { 1488 ASSERT(!BP_IS_EMBEDDED(zio->io_bp)); 1489 VERIFY3U(BP_GET_PSIZE(zio->io_bp), ==, HDR_GET_PSIZE(hdr)); 1490 1491 /* 1492 * Block pointers always store the checksum for the logical data. 1493 * If the block pointer has the gang bit set, then the checksum 1494 * it represents is for the reconstituted data and not for an 1495 * individual gang member. The zio pipeline, however, must be able to 1496 * determine the checksum of each of the gang constituents so it 1497 * treats the checksum comparison differently than what we need 1498 * for l2arc blocks. This prevents us from using the 1499 * zio_checksum_error() interface directly. Instead we must call the 1500 * zio_checksum_error_impl() so that we can ensure the checksum is 1501 * generated using the correct checksum algorithm and accounts for the 1502 * logical I/O size and not just a gang fragment. 1503 */ 1504 return (zio_checksum_error_impl(zio->io_spa, zio->io_bp, 1505 BP_GET_CHECKSUM(zio->io_bp), zio->io_abd, zio->io_size, 1506 zio->io_offset, NULL) == 0); 1507 } 1508 1509 /* 1510 * Given a buf full of data, if ZFS_DEBUG_MODIFY is enabled this computes a 1511 * checksum and attaches it to the buf's hdr so that we can ensure that the buf 1512 * isn't modified later on. If buf is compressed or there is already a checksum 1513 * on the hdr, this is a no-op (we only checksum uncompressed bufs). 
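 *
 * For illustration, the intended debug-time lifecycle around this helper is
 * roughly the following sketch (the checksum work only happens when
 * ZFS_DEBUG_MODIFY is set in zfs_flags; arc_buf_thaw() and arc_buf_freeze()
 * are defined later in this file):
 *
 *	arc_buf_thaw(buf);	... old checksum verified and discarded ...
 *	... modify buf->b_data ...
 *	arc_buf_freeze(buf);	... new checksum computed via this function ...
 *
 * A later arc_cksum_verify() will then panic if the frozen contents change.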
1514 */ 1515 static void 1516 arc_cksum_compute(arc_buf_t *buf) 1517 { 1518 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1519 return; 1520 1521 #ifdef ZFS_DEBUG 1522 arc_buf_hdr_t *hdr = buf->b_hdr; 1523 ASSERT(HDR_HAS_L1HDR(hdr)); 1524 mutex_enter(&hdr->b_l1hdr.b_freeze_lock); 1525 if (hdr->b_l1hdr.b_freeze_cksum != NULL || ARC_BUF_COMPRESSED(buf)) { 1526 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1527 return; 1528 } 1529 1530 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 1531 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1532 hdr->b_l1hdr.b_freeze_cksum = kmem_alloc(sizeof (zio_cksum_t), 1533 KM_SLEEP); 1534 fletcher_2_native(buf->b_data, arc_buf_size(buf), NULL, 1535 hdr->b_l1hdr.b_freeze_cksum); 1536 mutex_exit(&hdr->b_l1hdr.b_freeze_lock); 1537 #endif 1538 arc_buf_watch(buf); 1539 } 1540 1541 #ifndef _KERNEL 1542 void 1543 arc_buf_sigsegv(int sig, siginfo_t *si, void *unused) 1544 { 1545 (void) sig, (void) unused; 1546 panic("Got SIGSEGV at address: 0x%lx\n", (long)si->si_addr); 1547 } 1548 #endif 1549 1550 static void 1551 arc_buf_unwatch(arc_buf_t *buf) 1552 { 1553 #ifndef _KERNEL 1554 if (arc_watch) { 1555 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), 1556 PROT_READ | PROT_WRITE)); 1557 } 1558 #else 1559 (void) buf; 1560 #endif 1561 } 1562 1563 static void 1564 arc_buf_watch(arc_buf_t *buf) 1565 { 1566 #ifndef _KERNEL 1567 if (arc_watch) 1568 ASSERT0(mprotect(buf->b_data, arc_buf_size(buf), 1569 PROT_READ)); 1570 #else 1571 (void) buf; 1572 #endif 1573 } 1574 1575 static arc_buf_contents_t 1576 arc_buf_type(arc_buf_hdr_t *hdr) 1577 { 1578 arc_buf_contents_t type; 1579 if (HDR_ISTYPE_METADATA(hdr)) { 1580 type = ARC_BUFC_METADATA; 1581 } else { 1582 type = ARC_BUFC_DATA; 1583 } 1584 VERIFY3U(hdr->b_type, ==, type); 1585 return (type); 1586 } 1587 1588 boolean_t 1589 arc_is_metadata(arc_buf_t *buf) 1590 { 1591 return (HDR_ISTYPE_METADATA(buf->b_hdr) != 0); 1592 } 1593 1594 static uint32_t 1595 arc_bufc_to_flags(arc_buf_contents_t type) 1596 { 1597 switch (type) { 1598 case ARC_BUFC_DATA: 1599 /* metadata field is 0 if buffer contains normal data */ 1600 return (0); 1601 case ARC_BUFC_METADATA: 1602 return (ARC_FLAG_BUFC_METADATA); 1603 default: 1604 break; 1605 } 1606 panic("undefined ARC buffer type!"); 1607 return ((uint32_t)-1); 1608 } 1609 1610 void 1611 arc_buf_thaw(arc_buf_t *buf) 1612 { 1613 arc_buf_hdr_t *hdr = buf->b_hdr; 1614 1615 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 1616 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 1617 1618 arc_cksum_verify(buf); 1619 1620 /* 1621 * Compressed buffers do not manipulate the b_freeze_cksum. 1622 */ 1623 if (ARC_BUF_COMPRESSED(buf)) 1624 return; 1625 1626 ASSERT(HDR_HAS_L1HDR(hdr)); 1627 arc_cksum_free(hdr); 1628 arc_buf_unwatch(buf); 1629 } 1630 1631 void 1632 arc_buf_freeze(arc_buf_t *buf) 1633 { 1634 if (!(zfs_flags & ZFS_DEBUG_MODIFY)) 1635 return; 1636 1637 if (ARC_BUF_COMPRESSED(buf)) 1638 return; 1639 1640 ASSERT(HDR_HAS_L1HDR(buf->b_hdr)); 1641 arc_cksum_compute(buf); 1642 } 1643 1644 /* 1645 * The arc_buf_hdr_t's b_flags should never be modified directly. Instead, 1646 * the following functions should be used to ensure that the flags are 1647 * updated in a thread-safe way. When manipulating the flags either 1648 * the hash_lock must be held or the hdr must be undiscoverable. This 1649 * ensures that we're not racing with any other threads when updating 1650 * the flags. 
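 *
 * A minimal sketch of the expected calling pattern (illustrative only;
 * error handling omitted):
 *
 *	kmutex_t *hash_lock = HDR_LOCK(hdr);
 *	mutex_enter(hash_lock);
 *	arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR);
 *	mutex_exit(hash_lock);
 *
 * Freshly allocated headers that are not yet discoverable in the hash
 * table (HDR_EMPTY_OR_LOCKED() holds for them) may skip the lock.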
1651 */ 1652 static inline void 1653 arc_hdr_set_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) 1654 { 1655 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1656 hdr->b_flags |= flags; 1657 } 1658 1659 static inline void 1660 arc_hdr_clear_flags(arc_buf_hdr_t *hdr, arc_flags_t flags) 1661 { 1662 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1663 hdr->b_flags &= ~flags; 1664 } 1665 1666 /* 1667 * Setting the compression bits in the arc_buf_hdr_t's b_flags is 1668 * done in a special way since we have to clear and set bits 1669 * at the same time. Consumers that wish to set the compression bits 1670 * must use this function to ensure that the flags are updated in 1671 * thread-safe manner. 1672 */ 1673 static void 1674 arc_hdr_set_compress(arc_buf_hdr_t *hdr, enum zio_compress cmp) 1675 { 1676 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1677 1678 /* 1679 * Holes and embedded blocks will always have a psize = 0 so 1680 * we ignore the compression of the blkptr and set the 1681 * want to uncompress them. Mark them as uncompressed. 1682 */ 1683 if (!zfs_compressed_arc_enabled || HDR_GET_PSIZE(hdr) == 0) { 1684 arc_hdr_clear_flags(hdr, ARC_FLAG_COMPRESSED_ARC); 1685 ASSERT(!HDR_COMPRESSION_ENABLED(hdr)); 1686 } else { 1687 arc_hdr_set_flags(hdr, ARC_FLAG_COMPRESSED_ARC); 1688 ASSERT(HDR_COMPRESSION_ENABLED(hdr)); 1689 } 1690 1691 HDR_SET_COMPRESS(hdr, cmp); 1692 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, cmp); 1693 } 1694 1695 /* 1696 * Looks for another buf on the same hdr which has the data decompressed, copies 1697 * from it, and returns true. If no such buf exists, returns false. 1698 */ 1699 static boolean_t 1700 arc_buf_try_copy_decompressed_data(arc_buf_t *buf) 1701 { 1702 arc_buf_hdr_t *hdr = buf->b_hdr; 1703 boolean_t copied = B_FALSE; 1704 1705 ASSERT(HDR_HAS_L1HDR(hdr)); 1706 ASSERT3P(buf->b_data, !=, NULL); 1707 ASSERT(!ARC_BUF_COMPRESSED(buf)); 1708 1709 for (arc_buf_t *from = hdr->b_l1hdr.b_buf; from != NULL; 1710 from = from->b_next) { 1711 /* can't use our own data buffer */ 1712 if (from == buf) { 1713 continue; 1714 } 1715 1716 if (!ARC_BUF_COMPRESSED(from)) { 1717 memcpy(buf->b_data, from->b_data, arc_buf_size(buf)); 1718 copied = B_TRUE; 1719 break; 1720 } 1721 } 1722 1723 #ifdef ZFS_DEBUG 1724 /* 1725 * There were no decompressed bufs, so there should not be a 1726 * checksum on the hdr either. 1727 */ 1728 if (zfs_flags & ZFS_DEBUG_MODIFY) 1729 EQUIV(!copied, hdr->b_l1hdr.b_freeze_cksum == NULL); 1730 #endif 1731 1732 return (copied); 1733 } 1734 1735 /* 1736 * Allocates an ARC buf header that's in an evicted & L2-cached state. 1737 * This is used during l2arc reconstruction to make empty ARC buffers 1738 * which circumvent the regular disk->arc->l2arc path and instead come 1739 * into being in the reverse order, i.e. l2arc->arc. 
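 *
 * A sketch of how a rebuild caller might use this (the argument values
 * are made up for illustration; the real caller typically recovers them
 * from a persisted L2ARC log entry):
 *
 *	hdr = arc_buf_alloc_l2only(lsize, ARC_BUFC_DATA, dev, dva, daddr,
 *	    psize, birth, ZIO_COMPRESS_LZ4, 0, B_FALSE, B_FALSE,
 *	    ARC_STATE_MFU);
 *
 * The resulting header carries only L2 state (no L1 hdr), matching a
 * block that is no longer in memory but is still cached on the L2ARC
 * device.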
1740 */ 1741 static arc_buf_hdr_t * 1742 arc_buf_alloc_l2only(size_t size, arc_buf_contents_t type, l2arc_dev_t *dev, 1743 dva_t dva, uint64_t daddr, int32_t psize, uint64_t birth, 1744 enum zio_compress compress, uint8_t complevel, boolean_t protected, 1745 boolean_t prefetch, arc_state_type_t arcs_state) 1746 { 1747 arc_buf_hdr_t *hdr; 1748 1749 ASSERT(size != 0); 1750 hdr = kmem_cache_alloc(hdr_l2only_cache, KM_SLEEP); 1751 hdr->b_birth = birth; 1752 hdr->b_type = type; 1753 hdr->b_flags = 0; 1754 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L2HDR); 1755 HDR_SET_LSIZE(hdr, size); 1756 HDR_SET_PSIZE(hdr, psize); 1757 arc_hdr_set_compress(hdr, compress); 1758 hdr->b_complevel = complevel; 1759 if (protected) 1760 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 1761 if (prefetch) 1762 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 1763 hdr->b_spa = spa_load_guid(dev->l2ad_vdev->vdev_spa); 1764 1765 hdr->b_dva = dva; 1766 1767 hdr->b_l2hdr.b_dev = dev; 1768 hdr->b_l2hdr.b_daddr = daddr; 1769 hdr->b_l2hdr.b_arcs_state = arcs_state; 1770 1771 return (hdr); 1772 } 1773 1774 /* 1775 * Return the size of the block, b_pabd, that is stored in the arc_buf_hdr_t. 1776 */ 1777 static uint64_t 1778 arc_hdr_size(arc_buf_hdr_t *hdr) 1779 { 1780 uint64_t size; 1781 1782 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 1783 HDR_GET_PSIZE(hdr) > 0) { 1784 size = HDR_GET_PSIZE(hdr); 1785 } else { 1786 ASSERT3U(HDR_GET_LSIZE(hdr), !=, 0); 1787 size = HDR_GET_LSIZE(hdr); 1788 } 1789 return (size); 1790 } 1791 1792 static int 1793 arc_hdr_authenticate(arc_buf_hdr_t *hdr, spa_t *spa, uint64_t dsobj) 1794 { 1795 int ret; 1796 uint64_t csize; 1797 uint64_t lsize = HDR_GET_LSIZE(hdr); 1798 uint64_t psize = HDR_GET_PSIZE(hdr); 1799 void *tmpbuf = NULL; 1800 abd_t *abd = hdr->b_l1hdr.b_pabd; 1801 1802 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1803 ASSERT(HDR_AUTHENTICATED(hdr)); 1804 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1805 1806 /* 1807 * The MAC is calculated on the compressed data that is stored on disk. 1808 * However, if compressed arc is disabled we will only have the 1809 * decompressed data available to us now. Compress it into a temporary 1810 * abd so we can verify the MAC. The performance overhead of this will 1811 * be relatively low, since most objects in an encrypted objset will 1812 * be encrypted (instead of authenticated) anyway. 1813 */ 1814 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1815 !HDR_COMPRESSION_ENABLED(hdr)) { 1816 1817 csize = zio_compress_data(HDR_GET_COMPRESS(hdr), 1818 hdr->b_l1hdr.b_pabd, &tmpbuf, lsize, hdr->b_complevel); 1819 ASSERT3P(tmpbuf, !=, NULL); 1820 ASSERT3U(csize, <=, psize); 1821 abd = abd_get_from_buf(tmpbuf, lsize); 1822 abd_take_ownership_of_buf(abd, B_TRUE); 1823 abd_zero_off(abd, csize, psize - csize); 1824 } 1825 1826 /* 1827 * Authentication is best effort. We authenticate whenever the key is 1828 * available. If we succeed we clear ARC_FLAG_NOAUTH. 
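 * If the key is not loaded the crypto call below is expected to fail
 * with ENOENT; that case is tolerated and simply leaves ARC_FLAG_NOAUTH
 * set so that a later arc_buf_fill() can retry authentication.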
1829 */ 1830 if (hdr->b_crypt_hdr.b_ot == DMU_OT_OBJSET) { 1831 ASSERT3U(HDR_GET_COMPRESS(hdr), ==, ZIO_COMPRESS_OFF); 1832 ASSERT3U(lsize, ==, psize); 1833 ret = spa_do_crypt_objset_mac_abd(B_FALSE, spa, dsobj, abd, 1834 psize, hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1835 } else { 1836 ret = spa_do_crypt_mac_abd(B_FALSE, spa, dsobj, abd, psize, 1837 hdr->b_crypt_hdr.b_mac); 1838 } 1839 1840 if (ret == 0) 1841 arc_hdr_clear_flags(hdr, ARC_FLAG_NOAUTH); 1842 else if (ret != ENOENT) 1843 goto error; 1844 1845 if (tmpbuf != NULL) 1846 abd_free(abd); 1847 1848 return (0); 1849 1850 error: 1851 if (tmpbuf != NULL) 1852 abd_free(abd); 1853 1854 return (ret); 1855 } 1856 1857 /* 1858 * This function will take a header that only has raw encrypted data in 1859 * b_crypt_hdr.b_rabd and decrypt it into a new buffer which is stored in 1860 * b_l1hdr.b_pabd. If designated in the header flags, this function will 1861 * also decompress the data. 1862 */ 1863 static int 1864 arc_hdr_decrypt(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb) 1865 { 1866 int ret; 1867 abd_t *cabd = NULL; 1868 void *tmp = NULL; 1869 boolean_t no_crypt = B_FALSE; 1870 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 1871 1872 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1873 ASSERT(HDR_ENCRYPTED(hdr)); 1874 1875 arc_hdr_alloc_abd(hdr, 0); 1876 1877 ret = spa_do_crypt_abd(B_FALSE, spa, zb, hdr->b_crypt_hdr.b_ot, 1878 B_FALSE, bswap, hdr->b_crypt_hdr.b_salt, hdr->b_crypt_hdr.b_iv, 1879 hdr->b_crypt_hdr.b_mac, HDR_GET_PSIZE(hdr), hdr->b_l1hdr.b_pabd, 1880 hdr->b_crypt_hdr.b_rabd, &no_crypt); 1881 if (ret != 0) 1882 goto error; 1883 1884 if (no_crypt) { 1885 abd_copy(hdr->b_l1hdr.b_pabd, hdr->b_crypt_hdr.b_rabd, 1886 HDR_GET_PSIZE(hdr)); 1887 } 1888 1889 /* 1890 * If this header has disabled arc compression but the b_pabd is 1891 * compressed after decrypting it, we need to decompress the newly 1892 * decrypted data. 1893 */ 1894 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 1895 !HDR_COMPRESSION_ENABLED(hdr)) { 1896 /* 1897 * We want to make sure that we are correctly honoring the 1898 * zfs_abd_scatter_enabled setting, so we allocate an abd here 1899 * and then loan a buffer from it, rather than allocating a 1900 * linear buffer and wrapping it in an abd later. 1901 */ 1902 cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 0); 1903 tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 1904 1905 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 1906 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 1907 HDR_GET_LSIZE(hdr), &hdr->b_complevel); 1908 if (ret != 0) { 1909 abd_return_buf(cabd, tmp, arc_hdr_size(hdr)); 1910 goto error; 1911 } 1912 1913 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 1914 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 1915 arc_hdr_size(hdr), hdr); 1916 hdr->b_l1hdr.b_pabd = cabd; 1917 } 1918 1919 return (0); 1920 1921 error: 1922 arc_hdr_free_abd(hdr, B_FALSE); 1923 if (cabd != NULL) 1924 arc_free_data_buf(hdr, cabd, arc_hdr_size(hdr), hdr); 1925 1926 return (ret); 1927 } 1928 1929 /* 1930 * This function is called during arc_buf_fill() to prepare the header's 1931 * abd plaintext pointer for use. This involves authenticated protected 1932 * data and decrypting encrypted data into the plaintext abd. 
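 *
 * Roughly, the two cases handled here are (a sketch of the logic below,
 * not additional behavior):
 *
 *	HDR_NOAUTH(hdr) and the caller wants authenticated data
 *	    -> arc_hdr_authenticate() to verify the MAC
 *	HDR_HAS_RABD(hdr) and no plaintext b_pabd exists yet
 *	    -> arc_hdr_decrypt() to populate b_l1hdr.b_pabd
 *
 * Either way, on success the header's b_l1hdr.b_pabd is usable.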
1933 */ 1934 static int 1935 arc_fill_hdr_crypt(arc_buf_hdr_t *hdr, kmutex_t *hash_lock, spa_t *spa, 1936 const zbookmark_phys_t *zb, boolean_t noauth) 1937 { 1938 int ret; 1939 1940 ASSERT(HDR_PROTECTED(hdr)); 1941 1942 if (hash_lock != NULL) 1943 mutex_enter(hash_lock); 1944 1945 if (HDR_NOAUTH(hdr) && !noauth) { 1946 /* 1947 * The caller requested authenticated data but our data has 1948 * not been authenticated yet. Verify the MAC now if we can. 1949 */ 1950 ret = arc_hdr_authenticate(hdr, spa, zb->zb_objset); 1951 if (ret != 0) 1952 goto error; 1953 } else if (HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd == NULL) { 1954 /* 1955 * If we only have the encrypted version of the data, but the 1956 * unencrypted version was requested we take this opportunity 1957 * to store the decrypted version in the header for future use. 1958 */ 1959 ret = arc_hdr_decrypt(hdr, spa, zb); 1960 if (ret != 0) 1961 goto error; 1962 } 1963 1964 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1965 1966 if (hash_lock != NULL) 1967 mutex_exit(hash_lock); 1968 1969 return (0); 1970 1971 error: 1972 if (hash_lock != NULL) 1973 mutex_exit(hash_lock); 1974 1975 return (ret); 1976 } 1977 1978 /* 1979 * This function is used by the dbuf code to decrypt bonus buffers in place. 1980 * The dbuf code itself doesn't have any locking for decrypting a shared dnode 1981 * block, so we use the hash lock here to protect against concurrent calls to 1982 * arc_buf_fill(). 1983 */ 1984 static void 1985 arc_buf_untransform_in_place(arc_buf_t *buf) 1986 { 1987 arc_buf_hdr_t *hdr = buf->b_hdr; 1988 1989 ASSERT(HDR_ENCRYPTED(hdr)); 1990 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 1991 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 1992 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 1993 1994 zio_crypt_copy_dnode_bonus(hdr->b_l1hdr.b_pabd, buf->b_data, 1995 arc_buf_size(buf)); 1996 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 1997 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 1998 hdr->b_crypt_hdr.b_ebufcnt -= 1; 1999 } 2000 2001 /* 2002 * Given a buf that has a data buffer attached to it, this function will 2003 * efficiently fill the buf with data of the specified compression setting from 2004 * the hdr and update the hdr's b_freeze_cksum if necessary. If the buf and hdr 2005 * are already sharing a data buf, no copy is performed. 2006 * 2007 * If the buf is marked as compressed but uncompressed data was requested, this 2008 * will allocate a new data buffer for the buf, remove that flag, and fill the 2009 * buf with uncompressed data. You can't request a compressed buf on a hdr with 2010 * uncompressed data, and (since we haven't added support for it yet) if you 2011 * want compressed data your buf must already be marked as compressed and have 2012 * the correct-sized data buffer. 2013 */ 2014 static int 2015 arc_buf_fill(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 2016 arc_fill_flags_t flags) 2017 { 2018 int error = 0; 2019 arc_buf_hdr_t *hdr = buf->b_hdr; 2020 boolean_t hdr_compressed = 2021 (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 2022 boolean_t compressed = (flags & ARC_FILL_COMPRESSED) != 0; 2023 boolean_t encrypted = (flags & ARC_FILL_ENCRYPTED) != 0; 2024 dmu_object_byteswap_t bswap = hdr->b_l1hdr.b_byteswap; 2025 kmutex_t *hash_lock = (flags & ARC_FILL_LOCKED) ? 
NULL : HDR_LOCK(hdr); 2026 2027 ASSERT3P(buf->b_data, !=, NULL); 2028 IMPLY(compressed, hdr_compressed || ARC_BUF_ENCRYPTED(buf)); 2029 IMPLY(compressed, ARC_BUF_COMPRESSED(buf)); 2030 IMPLY(encrypted, HDR_ENCRYPTED(hdr)); 2031 IMPLY(encrypted, ARC_BUF_ENCRYPTED(buf)); 2032 IMPLY(encrypted, ARC_BUF_COMPRESSED(buf)); 2033 IMPLY(encrypted, !ARC_BUF_SHARED(buf)); 2034 2035 /* 2036 * If the caller wanted encrypted data we just need to copy it from 2037 * b_rabd and potentially byteswap it. We won't be able to do any 2038 * further transforms on it. 2039 */ 2040 if (encrypted) { 2041 ASSERT(HDR_HAS_RABD(hdr)); 2042 abd_copy_to_buf(buf->b_data, hdr->b_crypt_hdr.b_rabd, 2043 HDR_GET_PSIZE(hdr)); 2044 goto byteswap; 2045 } 2046 2047 /* 2048 * Adjust encrypted and authenticated headers to accommodate 2049 * the request if needed. Dnode blocks (ARC_FILL_IN_PLACE) are 2050 * allowed to fail decryption due to keys not being loaded 2051 * without being marked as an IO error. 2052 */ 2053 if (HDR_PROTECTED(hdr)) { 2054 error = arc_fill_hdr_crypt(hdr, hash_lock, spa, 2055 zb, !!(flags & ARC_FILL_NOAUTH)); 2056 if (error == EACCES && (flags & ARC_FILL_IN_PLACE) != 0) { 2057 return (error); 2058 } else if (error != 0) { 2059 if (hash_lock != NULL) 2060 mutex_enter(hash_lock); 2061 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2062 if (hash_lock != NULL) 2063 mutex_exit(hash_lock); 2064 return (error); 2065 } 2066 } 2067 2068 /* 2069 * There is a special case here for dnode blocks which are 2070 * decrypting their bonus buffers. These blocks may request to 2071 * be decrypted in-place. This is necessary because there may 2072 * be many dnodes pointing into this buffer and there is 2073 * currently no method to synchronize replacing the backing 2074 * b_data buffer and updating all of the pointers. Here we use 2075 * the hash lock to ensure there are no races. If the need 2076 * arises for other types to be decrypted in-place, they must 2077 * add handling here as well. 2078 */ 2079 if ((flags & ARC_FILL_IN_PLACE) != 0) { 2080 ASSERT(!hdr_compressed); 2081 ASSERT(!compressed); 2082 ASSERT(!encrypted); 2083 2084 if (HDR_ENCRYPTED(hdr) && ARC_BUF_ENCRYPTED(buf)) { 2085 ASSERT3U(hdr->b_crypt_hdr.b_ot, ==, DMU_OT_DNODE); 2086 2087 if (hash_lock != NULL) 2088 mutex_enter(hash_lock); 2089 arc_buf_untransform_in_place(buf); 2090 if (hash_lock != NULL) 2091 mutex_exit(hash_lock); 2092 2093 /* Compute the hdr's checksum if necessary */ 2094 arc_cksum_compute(buf); 2095 } 2096 2097 return (0); 2098 } 2099 2100 if (hdr_compressed == compressed) { 2101 if (!arc_buf_is_shared(buf)) { 2102 abd_copy_to_buf(buf->b_data, hdr->b_l1hdr.b_pabd, 2103 arc_buf_size(buf)); 2104 } 2105 } else { 2106 ASSERT(hdr_compressed); 2107 ASSERT(!compressed); 2108 2109 /* 2110 * If the buf is sharing its data with the hdr, unlink it and 2111 * allocate a new data buffer for the buf. 
2112 */ 2113 if (arc_buf_is_shared(buf)) { 2114 ASSERT(ARC_BUF_COMPRESSED(buf)); 2115 2116 /* We need to give the buf its own b_data */ 2117 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 2118 buf->b_data = 2119 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 2120 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 2121 2122 /* Previously overhead was 0; just add new overhead */ 2123 ARCSTAT_INCR(arcstat_overhead_size, HDR_GET_LSIZE(hdr)); 2124 } else if (ARC_BUF_COMPRESSED(buf)) { 2125 /* We need to reallocate the buf's b_data */ 2126 arc_free_data_buf(hdr, buf->b_data, HDR_GET_PSIZE(hdr), 2127 buf); 2128 buf->b_data = 2129 arc_get_data_buf(hdr, HDR_GET_LSIZE(hdr), buf); 2130 2131 /* We increased the size of b_data; update overhead */ 2132 ARCSTAT_INCR(arcstat_overhead_size, 2133 HDR_GET_LSIZE(hdr) - HDR_GET_PSIZE(hdr)); 2134 } 2135 2136 /* 2137 * Regardless of the buf's previous compression settings, it 2138 * should not be compressed at the end of this function. 2139 */ 2140 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 2141 2142 /* 2143 * Try copying the data from another buf which already has a 2144 * decompressed version. If that's not possible, it's time to 2145 * bite the bullet and decompress the data from the hdr. 2146 */ 2147 if (arc_buf_try_copy_decompressed_data(buf)) { 2148 /* Skip byteswapping and checksumming (already done) */ 2149 return (0); 2150 } else { 2151 error = zio_decompress_data(HDR_GET_COMPRESS(hdr), 2152 hdr->b_l1hdr.b_pabd, buf->b_data, 2153 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr), 2154 &hdr->b_complevel); 2155 2156 /* 2157 * Absent hardware errors or software bugs, this should 2158 * be impossible, but log it anyway so we can debug it. 2159 */ 2160 if (error != 0) { 2161 zfs_dbgmsg( 2162 "hdr %px, compress %d, psize %d, lsize %d", 2163 hdr, arc_hdr_get_compress(hdr), 2164 HDR_GET_PSIZE(hdr), HDR_GET_LSIZE(hdr)); 2165 if (hash_lock != NULL) 2166 mutex_enter(hash_lock); 2167 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 2168 if (hash_lock != NULL) 2169 mutex_exit(hash_lock); 2170 return (SET_ERROR(EIO)); 2171 } 2172 } 2173 } 2174 2175 byteswap: 2176 /* Byteswap the buf's data if necessary */ 2177 if (bswap != DMU_BSWAP_NUMFUNCS) { 2178 ASSERT(!HDR_SHARED_DATA(hdr)); 2179 ASSERT3U(bswap, <, DMU_BSWAP_NUMFUNCS); 2180 dmu_ot_byteswap[bswap].ob_func(buf->b_data, HDR_GET_LSIZE(hdr)); 2181 } 2182 2183 /* Compute the hdr's checksum if necessary */ 2184 arc_cksum_compute(buf); 2185 2186 return (0); 2187 } 2188 2189 /* 2190 * If this function is being called to decrypt an encrypted buffer or verify an 2191 * authenticated one, the key must be loaded and a mapping must be made 2192 * available in the keystore via spa_keystore_create_mapping() or one of its 2193 * callers. 2194 */ 2195 int 2196 arc_untransform(arc_buf_t *buf, spa_t *spa, const zbookmark_phys_t *zb, 2197 boolean_t in_place) 2198 { 2199 int ret; 2200 arc_fill_flags_t flags = 0; 2201 2202 if (in_place) 2203 flags |= ARC_FILL_IN_PLACE; 2204 2205 ret = arc_buf_fill(buf, spa, zb, flags); 2206 if (ret == ECKSUM) { 2207 /* 2208 * Convert authentication and decryption errors to EIO 2209 * (and generate an ereport) before leaving the ARC. 2210 */ 2211 ret = SET_ERROR(EIO); 2212 spa_log_error(spa, zb, &buf->b_hdr->b_birth); 2213 (void) zfs_ereport_post(FM_EREPORT_ZFS_AUTHENTICATION, 2214 spa, NULL, zb, NULL, 0); 2215 } 2216 2217 return (ret); 2218 } 2219 2220 /* 2221 * Increment the amount of evictable space in the arc_state_t's refcount. 
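 * For a ghost state only HDR_GET_LSIZE(hdr) is counted; otherwise the
 * hdr's own b_pabd (arc_hdr_size()), any raw b_rabd (HDR_GET_PSIZE())
 * and every non-shared arc_buf_t contribute to arcs_esize.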
2222 * We account for the space used by the hdr and the arc buf individually 2223 * so that we can add and remove them from the refcount individually. 2224 */ 2225 static void 2226 arc_evictable_space_increment(arc_buf_hdr_t *hdr, arc_state_t *state) 2227 { 2228 arc_buf_contents_t type = arc_buf_type(hdr); 2229 2230 ASSERT(HDR_HAS_L1HDR(hdr)); 2231 2232 if (GHOST_STATE(state)) { 2233 ASSERT0(hdr->b_l1hdr.b_bufcnt); 2234 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2235 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2236 ASSERT(!HDR_HAS_RABD(hdr)); 2237 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2238 HDR_GET_LSIZE(hdr), hdr); 2239 return; 2240 } 2241 2242 if (hdr->b_l1hdr.b_pabd != NULL) { 2243 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2244 arc_hdr_size(hdr), hdr); 2245 } 2246 if (HDR_HAS_RABD(hdr)) { 2247 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2248 HDR_GET_PSIZE(hdr), hdr); 2249 } 2250 2251 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2252 buf = buf->b_next) { 2253 if (arc_buf_is_shared(buf)) 2254 continue; 2255 (void) zfs_refcount_add_many(&state->arcs_esize[type], 2256 arc_buf_size(buf), buf); 2257 } 2258 } 2259 2260 /* 2261 * Decrement the amount of evictable space in the arc_state_t's refcount. 2262 * We account for the space used by the hdr and the arc buf individually 2263 * so that we can add and remove them from the refcount individually. 2264 */ 2265 static void 2266 arc_evictable_space_decrement(arc_buf_hdr_t *hdr, arc_state_t *state) 2267 { 2268 arc_buf_contents_t type = arc_buf_type(hdr); 2269 2270 ASSERT(HDR_HAS_L1HDR(hdr)); 2271 2272 if (GHOST_STATE(state)) { 2273 ASSERT0(hdr->b_l1hdr.b_bufcnt); 2274 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2275 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2276 ASSERT(!HDR_HAS_RABD(hdr)); 2277 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2278 HDR_GET_LSIZE(hdr), hdr); 2279 return; 2280 } 2281 2282 if (hdr->b_l1hdr.b_pabd != NULL) { 2283 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2284 arc_hdr_size(hdr), hdr); 2285 } 2286 if (HDR_HAS_RABD(hdr)) { 2287 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2288 HDR_GET_PSIZE(hdr), hdr); 2289 } 2290 2291 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2292 buf = buf->b_next) { 2293 if (arc_buf_is_shared(buf)) 2294 continue; 2295 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2296 arc_buf_size(buf), buf); 2297 } 2298 } 2299 2300 /* 2301 * Add a reference to this hdr indicating that someone is actively 2302 * referencing that memory. When the refcount transitions from 0 to 1, 2303 * we remove it from the respective arc_state_t list to indicate that 2304 * it is not evictable. 2305 */ 2306 static void 2307 add_reference(arc_buf_hdr_t *hdr, const void *tag) 2308 { 2309 arc_state_t *state = hdr->b_l1hdr.b_state; 2310 2311 ASSERT(HDR_HAS_L1HDR(hdr)); 2312 if (!HDR_EMPTY(hdr) && !MUTEX_HELD(HDR_LOCK(hdr))) { 2313 ASSERT(state == arc_anon); 2314 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2315 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 2316 } 2317 2318 if ((zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag) == 1) && 2319 state != arc_anon && state != arc_l2c_only) { 2320 /* We don't use the L2-only state list. */ 2321 multilist_remove(&state->arcs_list[arc_buf_type(hdr)], hdr); 2322 arc_evictable_space_decrement(hdr, state); 2323 } 2324 } 2325 2326 /* 2327 * Remove a reference from this hdr. 
When the reference transitions from 2328 * 1 to 0 and we're not anonymous, then we add this hdr to the arc_state_t's 2329 * list making it eligible for eviction. 2330 */ 2331 static int 2332 remove_reference(arc_buf_hdr_t *hdr, const void *tag) 2333 { 2334 int cnt; 2335 arc_state_t *state = hdr->b_l1hdr.b_state; 2336 2337 ASSERT(HDR_HAS_L1HDR(hdr)); 2338 ASSERT(state == arc_anon || MUTEX_HELD(HDR_LOCK(hdr))); 2339 ASSERT(!GHOST_STATE(state)); /* arc_l2c_only counts as a ghost. */ 2340 2341 if ((cnt = zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag)) != 0) 2342 return (cnt); 2343 2344 if (state == arc_anon) { 2345 arc_hdr_destroy(hdr); 2346 return (0); 2347 } 2348 if (state == arc_uncached && !HDR_PREFETCH(hdr)) { 2349 arc_change_state(arc_anon, hdr); 2350 arc_hdr_destroy(hdr); 2351 return (0); 2352 } 2353 multilist_insert(&state->arcs_list[arc_buf_type(hdr)], hdr); 2354 arc_evictable_space_increment(hdr, state); 2355 return (0); 2356 } 2357 2358 /* 2359 * Returns detailed information about a specific arc buffer. When the 2360 * state_index argument is set the function will calculate the arc header 2361 * list position for its arc state. Since this requires a linear traversal 2362 * callers are strongly encourage not to do this. However, it can be helpful 2363 * for targeted analysis so the functionality is provided. 2364 */ 2365 void 2366 arc_buf_info(arc_buf_t *ab, arc_buf_info_t *abi, int state_index) 2367 { 2368 (void) state_index; 2369 arc_buf_hdr_t *hdr = ab->b_hdr; 2370 l1arc_buf_hdr_t *l1hdr = NULL; 2371 l2arc_buf_hdr_t *l2hdr = NULL; 2372 arc_state_t *state = NULL; 2373 2374 memset(abi, 0, sizeof (arc_buf_info_t)); 2375 2376 if (hdr == NULL) 2377 return; 2378 2379 abi->abi_flags = hdr->b_flags; 2380 2381 if (HDR_HAS_L1HDR(hdr)) { 2382 l1hdr = &hdr->b_l1hdr; 2383 state = l1hdr->b_state; 2384 } 2385 if (HDR_HAS_L2HDR(hdr)) 2386 l2hdr = &hdr->b_l2hdr; 2387 2388 if (l1hdr) { 2389 abi->abi_bufcnt = l1hdr->b_bufcnt; 2390 abi->abi_access = l1hdr->b_arc_access; 2391 abi->abi_mru_hits = l1hdr->b_mru_hits; 2392 abi->abi_mru_ghost_hits = l1hdr->b_mru_ghost_hits; 2393 abi->abi_mfu_hits = l1hdr->b_mfu_hits; 2394 abi->abi_mfu_ghost_hits = l1hdr->b_mfu_ghost_hits; 2395 abi->abi_holds = zfs_refcount_count(&l1hdr->b_refcnt); 2396 } 2397 2398 if (l2hdr) { 2399 abi->abi_l2arc_dattr = l2hdr->b_daddr; 2400 abi->abi_l2arc_hits = l2hdr->b_hits; 2401 } 2402 2403 abi->abi_state_type = state ? state->arcs_state : ARC_STATE_ANON; 2404 abi->abi_state_contents = arc_buf_type(hdr); 2405 abi->abi_size = arc_hdr_size(hdr); 2406 } 2407 2408 /* 2409 * Move the supplied buffer to the indicated state. The hash lock 2410 * for the buffer must be held by the caller. 2411 */ 2412 static void 2413 arc_change_state(arc_state_t *new_state, arc_buf_hdr_t *hdr) 2414 { 2415 arc_state_t *old_state; 2416 int64_t refcnt; 2417 uint32_t bufcnt; 2418 boolean_t update_old, update_new; 2419 arc_buf_contents_t type = arc_buf_type(hdr); 2420 2421 /* 2422 * We almost always have an L1 hdr here, since we call arc_hdr_realloc() 2423 * in arc_read() when bringing a buffer out of the L2ARC. However, the 2424 * L1 hdr doesn't always exist when we change state to arc_anon before 2425 * destroying a header, in which case reallocating to add the L1 hdr is 2426 * pointless. 
2427 */ 2428 if (HDR_HAS_L1HDR(hdr)) { 2429 old_state = hdr->b_l1hdr.b_state; 2430 refcnt = zfs_refcount_count(&hdr->b_l1hdr.b_refcnt); 2431 bufcnt = hdr->b_l1hdr.b_bufcnt; 2432 update_old = (bufcnt > 0 || hdr->b_l1hdr.b_pabd != NULL || 2433 HDR_HAS_RABD(hdr)); 2434 2435 IMPLY(GHOST_STATE(old_state), bufcnt == 0); 2436 IMPLY(GHOST_STATE(new_state), bufcnt == 0); 2437 IMPLY(GHOST_STATE(old_state), hdr->b_l1hdr.b_buf == NULL); 2438 IMPLY(GHOST_STATE(new_state), hdr->b_l1hdr.b_buf == NULL); 2439 IMPLY(old_state == arc_anon, bufcnt <= 1); 2440 } else { 2441 old_state = arc_l2c_only; 2442 refcnt = 0; 2443 bufcnt = 0; 2444 update_old = B_FALSE; 2445 } 2446 update_new = update_old; 2447 if (GHOST_STATE(old_state)) 2448 update_old = B_TRUE; 2449 if (GHOST_STATE(new_state)) 2450 update_new = B_TRUE; 2451 2452 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 2453 ASSERT3P(new_state, !=, old_state); 2454 2455 /* 2456 * If this buffer is evictable, transfer it from the 2457 * old state list to the new state list. 2458 */ 2459 if (refcnt == 0) { 2460 if (old_state != arc_anon && old_state != arc_l2c_only) { 2461 ASSERT(HDR_HAS_L1HDR(hdr)); 2462 /* remove_reference() saves on insert. */ 2463 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 2464 multilist_remove(&old_state->arcs_list[type], 2465 hdr); 2466 arc_evictable_space_decrement(hdr, old_state); 2467 } 2468 } 2469 if (new_state != arc_anon && new_state != arc_l2c_only) { 2470 /* 2471 * An L1 header always exists here, since if we're 2472 * moving to some L1-cached state (i.e. not l2c_only or 2473 * anonymous), we realloc the header to add an L1hdr 2474 * beforehand. 2475 */ 2476 ASSERT(HDR_HAS_L1HDR(hdr)); 2477 multilist_insert(&new_state->arcs_list[type], hdr); 2478 arc_evictable_space_increment(hdr, new_state); 2479 } 2480 } 2481 2482 ASSERT(!HDR_EMPTY(hdr)); 2483 if (new_state == arc_anon && HDR_IN_HASH_TABLE(hdr)) 2484 buf_hash_remove(hdr); 2485 2486 /* adjust state sizes (ignore arc_l2c_only) */ 2487 2488 if (update_new && new_state != arc_l2c_only) { 2489 ASSERT(HDR_HAS_L1HDR(hdr)); 2490 if (GHOST_STATE(new_state)) { 2491 ASSERT0(bufcnt); 2492 2493 /* 2494 * When moving a header to a ghost state, we first 2495 * remove all arc buffers. Thus, we'll have a 2496 * bufcnt of zero, and no arc buffer to use for 2497 * the reference. As a result, we use the arc 2498 * header pointer for the reference. 2499 */ 2500 (void) zfs_refcount_add_many( 2501 &new_state->arcs_size[type], 2502 HDR_GET_LSIZE(hdr), hdr); 2503 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2504 ASSERT(!HDR_HAS_RABD(hdr)); 2505 } else { 2506 uint32_t buffers = 0; 2507 2508 /* 2509 * Each individual buffer holds a unique reference, 2510 * thus we must remove each of these references one 2511 * at a time. 2512 */ 2513 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2514 buf = buf->b_next) { 2515 ASSERT3U(bufcnt, !=, 0); 2516 buffers++; 2517 2518 /* 2519 * When the arc_buf_t is sharing the data 2520 * block with the hdr, the owner of the 2521 * reference belongs to the hdr. Only 2522 * add to the refcount if the arc_buf_t is 2523 * not shared. 
2524 */ 2525 if (arc_buf_is_shared(buf)) 2526 continue; 2527 2528 (void) zfs_refcount_add_many( 2529 &new_state->arcs_size[type], 2530 arc_buf_size(buf), buf); 2531 } 2532 ASSERT3U(bufcnt, ==, buffers); 2533 2534 if (hdr->b_l1hdr.b_pabd != NULL) { 2535 (void) zfs_refcount_add_many( 2536 &new_state->arcs_size[type], 2537 arc_hdr_size(hdr), hdr); 2538 } 2539 2540 if (HDR_HAS_RABD(hdr)) { 2541 (void) zfs_refcount_add_many( 2542 &new_state->arcs_size[type], 2543 HDR_GET_PSIZE(hdr), hdr); 2544 } 2545 } 2546 } 2547 2548 if (update_old && old_state != arc_l2c_only) { 2549 ASSERT(HDR_HAS_L1HDR(hdr)); 2550 if (GHOST_STATE(old_state)) { 2551 ASSERT0(bufcnt); 2552 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2553 ASSERT(!HDR_HAS_RABD(hdr)); 2554 2555 /* 2556 * When moving a header off of a ghost state, 2557 * the header will not contain any arc buffers. 2558 * We use the arc header pointer for the reference 2559 * which is exactly what we did when we put the 2560 * header on the ghost state. 2561 */ 2562 2563 (void) zfs_refcount_remove_many( 2564 &old_state->arcs_size[type], 2565 HDR_GET_LSIZE(hdr), hdr); 2566 } else { 2567 uint32_t buffers = 0; 2568 2569 /* 2570 * Each individual buffer holds a unique reference, 2571 * thus we must remove each of these references one 2572 * at a time. 2573 */ 2574 for (arc_buf_t *buf = hdr->b_l1hdr.b_buf; buf != NULL; 2575 buf = buf->b_next) { 2576 ASSERT3U(bufcnt, !=, 0); 2577 buffers++; 2578 2579 /* 2580 * When the arc_buf_t is sharing the data 2581 * block with the hdr, the owner of the 2582 * reference belongs to the hdr. Only 2583 * add to the refcount if the arc_buf_t is 2584 * not shared. 2585 */ 2586 if (arc_buf_is_shared(buf)) 2587 continue; 2588 2589 (void) zfs_refcount_remove_many( 2590 &old_state->arcs_size[type], 2591 arc_buf_size(buf), buf); 2592 } 2593 ASSERT3U(bufcnt, ==, buffers); 2594 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 2595 HDR_HAS_RABD(hdr)); 2596 2597 if (hdr->b_l1hdr.b_pabd != NULL) { 2598 (void) zfs_refcount_remove_many( 2599 &old_state->arcs_size[type], 2600 arc_hdr_size(hdr), hdr); 2601 } 2602 2603 if (HDR_HAS_RABD(hdr)) { 2604 (void) zfs_refcount_remove_many( 2605 &old_state->arcs_size[type], 2606 HDR_GET_PSIZE(hdr), hdr); 2607 } 2608 } 2609 } 2610 2611 if (HDR_HAS_L1HDR(hdr)) { 2612 hdr->b_l1hdr.b_state = new_state; 2613 2614 if (HDR_HAS_L2HDR(hdr) && new_state != arc_l2c_only) { 2615 l2arc_hdr_arcstats_decrement_state(hdr); 2616 hdr->b_l2hdr.b_arcs_state = new_state->arcs_state; 2617 l2arc_hdr_arcstats_increment_state(hdr); 2618 } 2619 } 2620 } 2621 2622 void 2623 arc_space_consume(uint64_t space, arc_space_type_t type) 2624 { 2625 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2626 2627 switch (type) { 2628 default: 2629 break; 2630 case ARC_SPACE_DATA: 2631 ARCSTAT_INCR(arcstat_data_size, space); 2632 break; 2633 case ARC_SPACE_META: 2634 ARCSTAT_INCR(arcstat_metadata_size, space); 2635 break; 2636 case ARC_SPACE_BONUS: 2637 ARCSTAT_INCR(arcstat_bonus_size, space); 2638 break; 2639 case ARC_SPACE_DNODE: 2640 ARCSTAT_INCR(arcstat_dnode_size, space); 2641 break; 2642 case ARC_SPACE_DBUF: 2643 ARCSTAT_INCR(arcstat_dbuf_size, space); 2644 break; 2645 case ARC_SPACE_HDRS: 2646 ARCSTAT_INCR(arcstat_hdr_size, space); 2647 break; 2648 case ARC_SPACE_L2HDRS: 2649 aggsum_add(&arc_sums.arcstat_l2_hdr_size, space); 2650 break; 2651 case ARC_SPACE_ABD_CHUNK_WASTE: 2652 /* 2653 * Note: this includes space wasted by all scatter ABD's, not 2654 * just those allocated by the ARC. 
But the vast majority of 2655 * scatter ABD's come from the ARC, because other users are 2656 * very short-lived. 2657 */ 2658 ARCSTAT_INCR(arcstat_abd_chunk_waste_size, space); 2659 break; 2660 } 2661 2662 if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) 2663 ARCSTAT_INCR(arcstat_meta_used, space); 2664 2665 aggsum_add(&arc_sums.arcstat_size, space); 2666 } 2667 2668 void 2669 arc_space_return(uint64_t space, arc_space_type_t type) 2670 { 2671 ASSERT(type >= 0 && type < ARC_SPACE_NUMTYPES); 2672 2673 switch (type) { 2674 default: 2675 break; 2676 case ARC_SPACE_DATA: 2677 ARCSTAT_INCR(arcstat_data_size, -space); 2678 break; 2679 case ARC_SPACE_META: 2680 ARCSTAT_INCR(arcstat_metadata_size, -space); 2681 break; 2682 case ARC_SPACE_BONUS: 2683 ARCSTAT_INCR(arcstat_bonus_size, -space); 2684 break; 2685 case ARC_SPACE_DNODE: 2686 ARCSTAT_INCR(arcstat_dnode_size, -space); 2687 break; 2688 case ARC_SPACE_DBUF: 2689 ARCSTAT_INCR(arcstat_dbuf_size, -space); 2690 break; 2691 case ARC_SPACE_HDRS: 2692 ARCSTAT_INCR(arcstat_hdr_size, -space); 2693 break; 2694 case ARC_SPACE_L2HDRS: 2695 aggsum_add(&arc_sums.arcstat_l2_hdr_size, -space); 2696 break; 2697 case ARC_SPACE_ABD_CHUNK_WASTE: 2698 ARCSTAT_INCR(arcstat_abd_chunk_waste_size, -space); 2699 break; 2700 } 2701 2702 if (type != ARC_SPACE_DATA && type != ARC_SPACE_ABD_CHUNK_WASTE) 2703 ARCSTAT_INCR(arcstat_meta_used, -space); 2704 2705 ASSERT(aggsum_compare(&arc_sums.arcstat_size, space) >= 0); 2706 aggsum_add(&arc_sums.arcstat_size, -space); 2707 } 2708 2709 /* 2710 * Given a hdr and a buf, returns whether that buf can share its b_data buffer 2711 * with the hdr's b_pabd. 2712 */ 2713 static boolean_t 2714 arc_can_share(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2715 { 2716 /* 2717 * The criteria for sharing a hdr's data are: 2718 * 1. the buffer is not encrypted 2719 * 2. the hdr's compression matches the buf's compression 2720 * 3. the hdr doesn't need to be byteswapped 2721 * 4. the hdr isn't already being shared 2722 * 5. the buf is either compressed or it is the last buf in the hdr list 2723 * 2724 * Criterion #5 maintains the invariant that shared uncompressed 2725 * bufs must be the final buf in the hdr's b_buf list. Reading this, you 2726 * might ask, "if a compressed buf is allocated first, won't that be the 2727 * last thing in the list?", but in that case it's impossible to create 2728 * a shared uncompressed buf anyway (because the hdr must be compressed 2729 * to have the compressed buf). You might also think that #3 is 2730 * sufficient to make this guarantee, however it's possible 2731 * (specifically in the rare L2ARC write race mentioned in 2732 * arc_buf_alloc_impl()) there will be an existing uncompressed buf that 2733 * is shareable, but wasn't at the time of its allocation. Rather than 2734 * allow a new shared uncompressed buf to be created and then shuffle 2735 * the list around to make it the last element, this simply disallows 2736 * sharing if the new buf isn't the first to be added. 2737 */ 2738 ASSERT3P(buf->b_hdr, ==, hdr); 2739 boolean_t hdr_compressed = 2740 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF; 2741 boolean_t buf_compressed = ARC_BUF_COMPRESSED(buf) != 0; 2742 return (!ARC_BUF_ENCRYPTED(buf) && 2743 buf_compressed == hdr_compressed && 2744 hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS && 2745 !HDR_SHARED_DATA(hdr) && 2746 (ARC_BUF_LAST(buf) || ARC_BUF_COMPRESSED(buf))); 2747 } 2748 2749 /* 2750 * Allocate a buf for this hdr. 
If you care about the data that's in the hdr, 2751 * or if you want a compressed buffer, pass those flags in. Returns 0 if the 2752 * copy was made successfully, or an error code otherwise. 2753 */ 2754 static int 2755 arc_buf_alloc_impl(arc_buf_hdr_t *hdr, spa_t *spa, const zbookmark_phys_t *zb, 2756 const void *tag, boolean_t encrypted, boolean_t compressed, 2757 boolean_t noauth, boolean_t fill, arc_buf_t **ret) 2758 { 2759 arc_buf_t *buf; 2760 arc_fill_flags_t flags = ARC_FILL_LOCKED; 2761 2762 ASSERT(HDR_HAS_L1HDR(hdr)); 2763 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 2764 VERIFY(hdr->b_type == ARC_BUFC_DATA || 2765 hdr->b_type == ARC_BUFC_METADATA); 2766 ASSERT3P(ret, !=, NULL); 2767 ASSERT3P(*ret, ==, NULL); 2768 IMPLY(encrypted, compressed); 2769 2770 buf = *ret = kmem_cache_alloc(buf_cache, KM_PUSHPAGE); 2771 buf->b_hdr = hdr; 2772 buf->b_data = NULL; 2773 buf->b_next = hdr->b_l1hdr.b_buf; 2774 buf->b_flags = 0; 2775 2776 add_reference(hdr, tag); 2777 2778 /* 2779 * We're about to change the hdr's b_flags. We must either 2780 * hold the hash_lock or be undiscoverable. 2781 */ 2782 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2783 2784 /* 2785 * Only honor requests for compressed bufs if the hdr is actually 2786 * compressed. This must be overridden if the buffer is encrypted since 2787 * encrypted buffers cannot be decompressed. 2788 */ 2789 if (encrypted) { 2790 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2791 buf->b_flags |= ARC_BUF_FLAG_ENCRYPTED; 2792 flags |= ARC_FILL_COMPRESSED | ARC_FILL_ENCRYPTED; 2793 } else if (compressed && 2794 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 2795 buf->b_flags |= ARC_BUF_FLAG_COMPRESSED; 2796 flags |= ARC_FILL_COMPRESSED; 2797 } 2798 2799 if (noauth) { 2800 ASSERT0(encrypted); 2801 flags |= ARC_FILL_NOAUTH; 2802 } 2803 2804 /* 2805 * If the hdr's data can be shared then we share the data buffer and 2806 * set the appropriate bit in the hdr's b_flags to indicate the hdr is 2807 * sharing it's b_pabd with the arc_buf_t. Otherwise, we allocate a new 2808 * buffer to store the buf's data. 2809 * 2810 * There are two additional restrictions here because we're sharing 2811 * hdr -> buf instead of the usual buf -> hdr. First, the hdr can't be 2812 * actively involved in an L2ARC write, because if this buf is used by 2813 * an arc_write() then the hdr's data buffer will be released when the 2814 * write completes, even though the L2ARC write might still be using it. 2815 * Second, the hdr's ABD must be linear so that the buf's user doesn't 2816 * need to be ABD-aware. It must be allocated via 2817 * zio_[data_]buf_alloc(), not as a page, because we need to be able 2818 * to abd_release_ownership_of_buf(), which isn't allowed on "linear 2819 * page" buffers because the ABD code needs to handle freeing them 2820 * specially. 
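 * Putting these together, the sharing decision immediately below
 * requires arc_can_share(), no in-flight L2ARC write, and an existing
 * linear (but not "linear page") b_pabd on the hdr.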
2821 */ 2822 boolean_t can_share = arc_can_share(hdr, buf) && 2823 !HDR_L2_WRITING(hdr) && 2824 hdr->b_l1hdr.b_pabd != NULL && 2825 abd_is_linear(hdr->b_l1hdr.b_pabd) && 2826 !abd_is_linear_page(hdr->b_l1hdr.b_pabd); 2827 2828 /* Set up b_data and sharing */ 2829 if (can_share) { 2830 buf->b_data = abd_to_buf(hdr->b_l1hdr.b_pabd); 2831 buf->b_flags |= ARC_BUF_FLAG_SHARED; 2832 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 2833 } else { 2834 buf->b_data = 2835 arc_get_data_buf(hdr, arc_buf_size(buf), buf); 2836 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 2837 } 2838 VERIFY3P(buf->b_data, !=, NULL); 2839 2840 hdr->b_l1hdr.b_buf = buf; 2841 hdr->b_l1hdr.b_bufcnt += 1; 2842 if (encrypted) 2843 hdr->b_crypt_hdr.b_ebufcnt += 1; 2844 2845 /* 2846 * If the user wants the data from the hdr, we need to either copy or 2847 * decompress the data. 2848 */ 2849 if (fill) { 2850 ASSERT3P(zb, !=, NULL); 2851 return (arc_buf_fill(buf, spa, zb, flags)); 2852 } 2853 2854 return (0); 2855 } 2856 2857 static const char *arc_onloan_tag = "onloan"; 2858 2859 static inline void 2860 arc_loaned_bytes_update(int64_t delta) 2861 { 2862 atomic_add_64(&arc_loaned_bytes, delta); 2863 2864 /* assert that it did not wrap around */ 2865 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 2866 } 2867 2868 /* 2869 * Loan out an anonymous arc buffer. Loaned buffers are not counted as in 2870 * flight data by arc_tempreserve_space() until they are "returned". Loaned 2871 * buffers must be returned to the arc before they can be used by the DMU or 2872 * freed. 2873 */ 2874 arc_buf_t * 2875 arc_loan_buf(spa_t *spa, boolean_t is_metadata, int size) 2876 { 2877 arc_buf_t *buf = arc_alloc_buf(spa, arc_onloan_tag, 2878 is_metadata ? ARC_BUFC_METADATA : ARC_BUFC_DATA, size); 2879 2880 arc_loaned_bytes_update(arc_buf_size(buf)); 2881 2882 return (buf); 2883 } 2884 2885 arc_buf_t * 2886 arc_loan_compressed_buf(spa_t *spa, uint64_t psize, uint64_t lsize, 2887 enum zio_compress compression_type, uint8_t complevel) 2888 { 2889 arc_buf_t *buf = arc_alloc_compressed_buf(spa, arc_onloan_tag, 2890 psize, lsize, compression_type, complevel); 2891 2892 arc_loaned_bytes_update(arc_buf_size(buf)); 2893 2894 return (buf); 2895 } 2896 2897 arc_buf_t * 2898 arc_loan_raw_buf(spa_t *spa, uint64_t dsobj, boolean_t byteorder, 2899 const uint8_t *salt, const uint8_t *iv, const uint8_t *mac, 2900 dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 2901 enum zio_compress compression_type, uint8_t complevel) 2902 { 2903 arc_buf_t *buf = arc_alloc_raw_buf(spa, arc_onloan_tag, dsobj, 2904 byteorder, salt, iv, mac, ot, psize, lsize, compression_type, 2905 complevel); 2906 2907 atomic_add_64(&arc_loaned_bytes, psize); 2908 return (buf); 2909 } 2910 2911 2912 /* 2913 * Return a loaned arc buffer to the arc. 
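 *
 * A hedged sketch of the loan protocol (illustrative only; "tag" is
 * whatever reference tag the new owner uses, typically a dbuf):
 *
 *	buf = arc_loan_buf(spa, B_FALSE, size);   counted in arc_loaned_bytes
 *	    ... the caller fills buf->b_data ...
 *	arc_return_buf(buf, tag);                 ownership moves to tag
 *
 * arc_loan_inuse_buf() below performs the inverse hand-off, turning a
 * buffer currently owned by a dbuf back into a loaned one.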
2914 */ 2915 void 2916 arc_return_buf(arc_buf_t *buf, const void *tag) 2917 { 2918 arc_buf_hdr_t *hdr = buf->b_hdr; 2919 2920 ASSERT3P(buf->b_data, !=, NULL); 2921 ASSERT(HDR_HAS_L1HDR(hdr)); 2922 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, tag); 2923 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2924 2925 arc_loaned_bytes_update(-arc_buf_size(buf)); 2926 } 2927 2928 /* Detach an arc_buf from a dbuf (tag) */ 2929 void 2930 arc_loan_inuse_buf(arc_buf_t *buf, const void *tag) 2931 { 2932 arc_buf_hdr_t *hdr = buf->b_hdr; 2933 2934 ASSERT3P(buf->b_data, !=, NULL); 2935 ASSERT(HDR_HAS_L1HDR(hdr)); 2936 (void) zfs_refcount_add(&hdr->b_l1hdr.b_refcnt, arc_onloan_tag); 2937 (void) zfs_refcount_remove(&hdr->b_l1hdr.b_refcnt, tag); 2938 2939 arc_loaned_bytes_update(arc_buf_size(buf)); 2940 } 2941 2942 static void 2943 l2arc_free_abd_on_write(abd_t *abd, size_t size, arc_buf_contents_t type) 2944 { 2945 l2arc_data_free_t *df = kmem_alloc(sizeof (*df), KM_SLEEP); 2946 2947 df->l2df_abd = abd; 2948 df->l2df_size = size; 2949 df->l2df_type = type; 2950 mutex_enter(&l2arc_free_on_write_mtx); 2951 list_insert_head(l2arc_free_on_write, df); 2952 mutex_exit(&l2arc_free_on_write_mtx); 2953 } 2954 2955 static void 2956 arc_hdr_free_on_write(arc_buf_hdr_t *hdr, boolean_t free_rdata) 2957 { 2958 arc_state_t *state = hdr->b_l1hdr.b_state; 2959 arc_buf_contents_t type = arc_buf_type(hdr); 2960 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 2961 2962 /* protected by hash lock, if in the hash table */ 2963 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 2964 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 2965 ASSERT(state != arc_anon && state != arc_l2c_only); 2966 2967 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 2968 size, hdr); 2969 } 2970 (void) zfs_refcount_remove_many(&state->arcs_size[type], size, hdr); 2971 if (type == ARC_BUFC_METADATA) { 2972 arc_space_return(size, ARC_SPACE_META); 2973 } else { 2974 ASSERT(type == ARC_BUFC_DATA); 2975 arc_space_return(size, ARC_SPACE_DATA); 2976 } 2977 2978 if (free_rdata) { 2979 l2arc_free_abd_on_write(hdr->b_crypt_hdr.b_rabd, size, type); 2980 } else { 2981 l2arc_free_abd_on_write(hdr->b_l1hdr.b_pabd, size, type); 2982 } 2983 } 2984 2985 /* 2986 * Share the arc_buf_t's data with the hdr. Whenever we are sharing the 2987 * data buffer, we transfer the refcount ownership to the hdr and update 2988 * the appropriate kstats. 2989 */ 2990 static void 2991 arc_share_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 2992 { 2993 ASSERT(arc_can_share(hdr, buf)); 2994 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 2995 ASSERT(!ARC_BUF_ENCRYPTED(buf)); 2996 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 2997 2998 /* 2999 * Start sharing the data buffer. We transfer the 3000 * refcount ownership to the hdr since it always owns 3001 * the refcount whenever an arc_buf_t is shared. 3002 */ 3003 zfs_refcount_transfer_ownership_many( 3004 &hdr->b_l1hdr.b_state->arcs_size[arc_buf_type(hdr)], 3005 arc_hdr_size(hdr), buf, hdr); 3006 hdr->b_l1hdr.b_pabd = abd_get_from_buf(buf->b_data, arc_buf_size(buf)); 3007 abd_take_ownership_of_buf(hdr->b_l1hdr.b_pabd, 3008 HDR_ISTYPE_METADATA(hdr)); 3009 arc_hdr_set_flags(hdr, ARC_FLAG_SHARED_DATA); 3010 buf->b_flags |= ARC_BUF_FLAG_SHARED; 3011 3012 /* 3013 * Since we've transferred ownership to the hdr we need 3014 * to increment its compressed and uncompressed kstats and 3015 * decrement the overhead size. 
3016 */ 3017 ARCSTAT_INCR(arcstat_compressed_size, arc_hdr_size(hdr)); 3018 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 3019 ARCSTAT_INCR(arcstat_overhead_size, -arc_buf_size(buf)); 3020 } 3021 3022 static void 3023 arc_unshare_buf(arc_buf_hdr_t *hdr, arc_buf_t *buf) 3024 { 3025 ASSERT(arc_buf_is_shared(buf)); 3026 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3027 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3028 3029 /* 3030 * We are no longer sharing this buffer so we need 3031 * to transfer its ownership to the rightful owner. 3032 */ 3033 zfs_refcount_transfer_ownership_many( 3034 &hdr->b_l1hdr.b_state->arcs_size[arc_buf_type(hdr)], 3035 arc_hdr_size(hdr), hdr, buf); 3036 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 3037 abd_release_ownership_of_buf(hdr->b_l1hdr.b_pabd); 3038 abd_free(hdr->b_l1hdr.b_pabd); 3039 hdr->b_l1hdr.b_pabd = NULL; 3040 buf->b_flags &= ~ARC_BUF_FLAG_SHARED; 3041 3042 /* 3043 * Since the buffer is no longer shared between 3044 * the arc buf and the hdr, count it as overhead. 3045 */ 3046 ARCSTAT_INCR(arcstat_compressed_size, -arc_hdr_size(hdr)); 3047 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3048 ARCSTAT_INCR(arcstat_overhead_size, arc_buf_size(buf)); 3049 } 3050 3051 /* 3052 * Remove an arc_buf_t from the hdr's buf list and return the last 3053 * arc_buf_t on the list. If no buffers remain on the list then return 3054 * NULL. 3055 */ 3056 static arc_buf_t * 3057 arc_buf_remove(arc_buf_hdr_t *hdr, arc_buf_t *buf) 3058 { 3059 ASSERT(HDR_HAS_L1HDR(hdr)); 3060 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3061 3062 arc_buf_t **bufp = &hdr->b_l1hdr.b_buf; 3063 arc_buf_t *lastbuf = NULL; 3064 3065 /* 3066 * Remove the buf from the hdr list and locate the last 3067 * remaining buffer on the list. 3068 */ 3069 while (*bufp != NULL) { 3070 if (*bufp == buf) 3071 *bufp = buf->b_next; 3072 3073 /* 3074 * If we've removed a buffer in the middle of 3075 * the list then update the lastbuf and update 3076 * bufp. 3077 */ 3078 if (*bufp != NULL) { 3079 lastbuf = *bufp; 3080 bufp = &(*bufp)->b_next; 3081 } 3082 } 3083 buf->b_next = NULL; 3084 ASSERT3P(lastbuf, !=, buf); 3085 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, lastbuf != NULL); 3086 IMPLY(hdr->b_l1hdr.b_bufcnt > 0, hdr->b_l1hdr.b_buf != NULL); 3087 IMPLY(lastbuf != NULL, ARC_BUF_LAST(lastbuf)); 3088 3089 return (lastbuf); 3090 } 3091 3092 /* 3093 * Free up buf->b_data and pull the arc_buf_t off of the arc_buf_hdr_t's 3094 * list and free it. 3095 */ 3096 static void 3097 arc_buf_destroy_impl(arc_buf_t *buf) 3098 { 3099 arc_buf_hdr_t *hdr = buf->b_hdr; 3100 3101 /* 3102 * Free up the data associated with the buf but only if we're not 3103 * sharing this with the hdr. If we are sharing it with the hdr, the 3104 * hdr is responsible for doing the free. 3105 */ 3106 if (buf->b_data != NULL) { 3107 /* 3108 * We're about to change the hdr's b_flags. We must either 3109 * hold the hash_lock or be undiscoverable. 
3110 */ 3111 ASSERT(HDR_EMPTY_OR_LOCKED(hdr)); 3112 3113 arc_cksum_verify(buf); 3114 arc_buf_unwatch(buf); 3115 3116 if (arc_buf_is_shared(buf)) { 3117 arc_hdr_clear_flags(hdr, ARC_FLAG_SHARED_DATA); 3118 } else { 3119 uint64_t size = arc_buf_size(buf); 3120 arc_free_data_buf(hdr, buf->b_data, size, buf); 3121 ARCSTAT_INCR(arcstat_overhead_size, -size); 3122 } 3123 buf->b_data = NULL; 3124 3125 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 3126 hdr->b_l1hdr.b_bufcnt -= 1; 3127 3128 if (ARC_BUF_ENCRYPTED(buf)) { 3129 hdr->b_crypt_hdr.b_ebufcnt -= 1; 3130 3131 /* 3132 * If we have no more encrypted buffers and we've 3133 * already gotten a copy of the decrypted data we can 3134 * free b_rabd to save some space. 3135 */ 3136 if (hdr->b_crypt_hdr.b_ebufcnt == 0 && 3137 HDR_HAS_RABD(hdr) && hdr->b_l1hdr.b_pabd != NULL && 3138 !HDR_IO_IN_PROGRESS(hdr)) { 3139 arc_hdr_free_abd(hdr, B_TRUE); 3140 } 3141 } 3142 } 3143 3144 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 3145 3146 if (ARC_BUF_SHARED(buf) && !ARC_BUF_COMPRESSED(buf)) { 3147 /* 3148 * If the current arc_buf_t is sharing its data buffer with the 3149 * hdr, then reassign the hdr's b_pabd to share it with the new 3150 * buffer at the end of the list. The shared buffer is always 3151 * the last one on the hdr's buffer list. 3152 * 3153 * There is an equivalent case for compressed bufs, but since 3154 * they aren't guaranteed to be the last buf in the list and 3155 * that is an exceedingly rare case, we just allow that space be 3156 * wasted temporarily. We must also be careful not to share 3157 * encrypted buffers, since they cannot be shared. 3158 */ 3159 if (lastbuf != NULL && !ARC_BUF_ENCRYPTED(lastbuf)) { 3160 /* Only one buf can be shared at once */ 3161 VERIFY(!arc_buf_is_shared(lastbuf)); 3162 /* hdr is uncompressed so can't have compressed buf */ 3163 VERIFY(!ARC_BUF_COMPRESSED(lastbuf)); 3164 3165 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3166 arc_hdr_free_abd(hdr, B_FALSE); 3167 3168 /* 3169 * We must setup a new shared block between the 3170 * last buffer and the hdr. The data would have 3171 * been allocated by the arc buf so we need to transfer 3172 * ownership to the hdr since it's now being shared. 3173 */ 3174 arc_share_buf(hdr, lastbuf); 3175 } 3176 } else if (HDR_SHARED_DATA(hdr)) { 3177 /* 3178 * Uncompressed shared buffers are always at the end 3179 * of the list. Compressed buffers don't have the 3180 * same requirements. This makes it hard to 3181 * simply assert that the lastbuf is shared so 3182 * we rely on the hdr's compression flags to determine 3183 * if we have a compressed, shared buffer. 3184 */ 3185 ASSERT3P(lastbuf, !=, NULL); 3186 ASSERT(arc_buf_is_shared(lastbuf) || 3187 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 3188 } 3189 3190 /* 3191 * Free the checksum if we're removing the last uncompressed buf from 3192 * this hdr. 
3193 */ 3194 if (!arc_hdr_has_uncompressed_buf(hdr)) { 3195 arc_cksum_free(hdr); 3196 } 3197 3198 /* clean up the buf */ 3199 buf->b_hdr = NULL; 3200 kmem_cache_free(buf_cache, buf); 3201 } 3202 3203 static void 3204 arc_hdr_alloc_abd(arc_buf_hdr_t *hdr, int alloc_flags) 3205 { 3206 uint64_t size; 3207 boolean_t alloc_rdata = ((alloc_flags & ARC_HDR_ALLOC_RDATA) != 0); 3208 3209 ASSERT3U(HDR_GET_LSIZE(hdr), >, 0); 3210 ASSERT(HDR_HAS_L1HDR(hdr)); 3211 ASSERT(!HDR_SHARED_DATA(hdr) || alloc_rdata); 3212 IMPLY(alloc_rdata, HDR_PROTECTED(hdr)); 3213 3214 if (alloc_rdata) { 3215 size = HDR_GET_PSIZE(hdr); 3216 ASSERT3P(hdr->b_crypt_hdr.b_rabd, ==, NULL); 3217 hdr->b_crypt_hdr.b_rabd = arc_get_data_abd(hdr, size, hdr, 3218 alloc_flags); 3219 ASSERT3P(hdr->b_crypt_hdr.b_rabd, !=, NULL); 3220 ARCSTAT_INCR(arcstat_raw_size, size); 3221 } else { 3222 size = arc_hdr_size(hdr); 3223 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3224 hdr->b_l1hdr.b_pabd = arc_get_data_abd(hdr, size, hdr, 3225 alloc_flags); 3226 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 3227 } 3228 3229 ARCSTAT_INCR(arcstat_compressed_size, size); 3230 ARCSTAT_INCR(arcstat_uncompressed_size, HDR_GET_LSIZE(hdr)); 3231 } 3232 3233 static void 3234 arc_hdr_free_abd(arc_buf_hdr_t *hdr, boolean_t free_rdata) 3235 { 3236 uint64_t size = (free_rdata) ? HDR_GET_PSIZE(hdr) : arc_hdr_size(hdr); 3237 3238 ASSERT(HDR_HAS_L1HDR(hdr)); 3239 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 3240 IMPLY(free_rdata, HDR_HAS_RABD(hdr)); 3241 3242 /* 3243 * If the hdr is currently being written to the l2arc then 3244 * we defer freeing the data by adding it to the l2arc_free_on_write 3245 * list. The l2arc will free the data once it's finished 3246 * writing it to the l2arc device. 3247 */ 3248 if (HDR_L2_WRITING(hdr)) { 3249 arc_hdr_free_on_write(hdr, free_rdata); 3250 ARCSTAT_BUMP(arcstat_l2_free_on_write); 3251 } else if (free_rdata) { 3252 arc_free_data_abd(hdr, hdr->b_crypt_hdr.b_rabd, size, hdr); 3253 } else { 3254 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, size, hdr); 3255 } 3256 3257 if (free_rdata) { 3258 hdr->b_crypt_hdr.b_rabd = NULL; 3259 ARCSTAT_INCR(arcstat_raw_size, -size); 3260 } else { 3261 hdr->b_l1hdr.b_pabd = NULL; 3262 } 3263 3264 if (hdr->b_l1hdr.b_pabd == NULL && !HDR_HAS_RABD(hdr)) 3265 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 3266 3267 ARCSTAT_INCR(arcstat_compressed_size, -size); 3268 ARCSTAT_INCR(arcstat_uncompressed_size, -HDR_GET_LSIZE(hdr)); 3269 } 3270 3271 /* 3272 * Allocate empty anonymous ARC header. The header will get its identity 3273 * assigned and buffers attached later as part of read or write operations. 3274 * 3275 * In case of read arc_read() assigns header its identify (b_dva + b_birth), 3276 * inserts it into ARC hash to become globally visible and allocates physical 3277 * (b_pabd) or raw (b_rabd) ABD buffer to read into from disk. On disk read 3278 * completion arc_read_done() allocates ARC buffer(s) as needed, potentially 3279 * sharing one of them with the physical ABD buffer. 3280 * 3281 * In case of write arc_alloc_buf() allocates ARC buffer to be filled with 3282 * data. Then after compression and/or encryption arc_write_ready() allocates 3283 * and fills (or potentially shares) physical (b_pabd) or raw (b_rabd) ABD 3284 * buffer. On disk write completion arc_write_done() assigns the header its 3285 * new identity (b_dva + b_birth) and inserts into ARC hash. 3286 * 3287 * In case of partial overwrite the old data is read first as described. 
Then 3288 * arc_release() either allocates new anonymous ARC header and moves the ARC 3289 * buffer to it, or reuses the old ARC header by discarding its identity and 3290 * removing it from ARC hash. After buffer modification normal write process 3291 * follows as described. 3292 */ 3293 static arc_buf_hdr_t * 3294 arc_hdr_alloc(uint64_t spa, int32_t psize, int32_t lsize, 3295 boolean_t protected, enum zio_compress compression_type, uint8_t complevel, 3296 arc_buf_contents_t type) 3297 { 3298 arc_buf_hdr_t *hdr; 3299 3300 VERIFY(type == ARC_BUFC_DATA || type == ARC_BUFC_METADATA); 3301 if (protected) { 3302 hdr = kmem_cache_alloc(hdr_full_crypt_cache, KM_PUSHPAGE); 3303 } else { 3304 hdr = kmem_cache_alloc(hdr_full_cache, KM_PUSHPAGE); 3305 } 3306 3307 ASSERT(HDR_EMPTY(hdr)); 3308 #ifdef ZFS_DEBUG 3309 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3310 #endif 3311 HDR_SET_PSIZE(hdr, psize); 3312 HDR_SET_LSIZE(hdr, lsize); 3313 hdr->b_spa = spa; 3314 hdr->b_type = type; 3315 hdr->b_flags = 0; 3316 arc_hdr_set_flags(hdr, arc_bufc_to_flags(type) | ARC_FLAG_HAS_L1HDR); 3317 arc_hdr_set_compress(hdr, compression_type); 3318 hdr->b_complevel = complevel; 3319 if (protected) 3320 arc_hdr_set_flags(hdr, ARC_FLAG_PROTECTED); 3321 3322 hdr->b_l1hdr.b_state = arc_anon; 3323 hdr->b_l1hdr.b_arc_access = 0; 3324 hdr->b_l1hdr.b_mru_hits = 0; 3325 hdr->b_l1hdr.b_mru_ghost_hits = 0; 3326 hdr->b_l1hdr.b_mfu_hits = 0; 3327 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 3328 hdr->b_l1hdr.b_bufcnt = 0; 3329 hdr->b_l1hdr.b_buf = NULL; 3330 3331 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3332 3333 return (hdr); 3334 } 3335 3336 /* 3337 * Transition between the two allocation states for the arc_buf_hdr struct. 3338 * The arc_buf_hdr struct can be allocated with (hdr_full_cache) or without 3339 * (hdr_l2only_cache) the fields necessary for the L1 cache - the smaller 3340 * version is used when a cache buffer is only in the L2ARC in order to reduce 3341 * memory usage. 3342 */ 3343 static arc_buf_hdr_t * 3344 arc_hdr_realloc(arc_buf_hdr_t *hdr, kmem_cache_t *old, kmem_cache_t *new) 3345 { 3346 ASSERT(HDR_HAS_L2HDR(hdr)); 3347 3348 arc_buf_hdr_t *nhdr; 3349 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3350 3351 ASSERT((old == hdr_full_cache && new == hdr_l2only_cache) || 3352 (old == hdr_l2only_cache && new == hdr_full_cache)); 3353 3354 /* 3355 * if the caller wanted a new full header and the header is to be 3356 * encrypted we will actually allocate the header from the full crypt 3357 * cache instead. The same applies to freeing from the old cache. 3358 */ 3359 if (HDR_PROTECTED(hdr) && new == hdr_full_cache) 3360 new = hdr_full_crypt_cache; 3361 if (HDR_PROTECTED(hdr) && old == hdr_full_cache) 3362 old = hdr_full_crypt_cache; 3363 3364 nhdr = kmem_cache_alloc(new, KM_PUSHPAGE); 3365 3366 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 3367 buf_hash_remove(hdr); 3368 3369 memcpy(nhdr, hdr, HDR_L2ONLY_SIZE); 3370 3371 if (new == hdr_full_cache || new == hdr_full_crypt_cache) { 3372 arc_hdr_set_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3373 /* 3374 * arc_access and arc_change_state need to be aware that a 3375 * header has just come out of L2ARC, so we set its state to 3376 * l2c_only even though it's about to change. 
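 *
 * (Stepping back: added commentary on the allocation lifecycle described
 * before arc_hdr_alloc() above. This is a schematic read-path sketch in
 * terms of functions from this file; the block-pointer details and all
 * error handling are elided:)
 *
 *	hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize,
 *	    protected, compress, complevel, type);
 *	hdr->b_dva = ...;			// identity taken from the bp
 *	hdr->b_birth = ...;			// by arc_read()
 *	(void) buf_hash_insert(hdr, NULL);	// now globally visible
 *	arc_hdr_alloc_abd(hdr, alloc_flags);	// b_pabd or b_rabd to read into
 *	// ... disk I/O completes ...
 *	// arc_read_done() then allocates arc_buf_t consumers, possibly
 *	// sharing one of them with the hdr's b_pabd.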
3377 */ 3378 nhdr->b_l1hdr.b_state = arc_l2c_only; 3379 3380 /* Verify previous threads set to NULL before freeing */ 3381 ASSERT3P(nhdr->b_l1hdr.b_pabd, ==, NULL); 3382 ASSERT(!HDR_HAS_RABD(hdr)); 3383 } else { 3384 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3385 ASSERT0(hdr->b_l1hdr.b_bufcnt); 3386 #ifdef ZFS_DEBUG 3387 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3388 #endif 3389 3390 /* 3391 * If we've reached here, We must have been called from 3392 * arc_evict_hdr(), as such we should have already been 3393 * removed from any ghost list we were previously on 3394 * (which protects us from racing with arc_evict_state), 3395 * thus no locking is needed during this check. 3396 */ 3397 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3398 3399 /* 3400 * A buffer must not be moved into the arc_l2c_only 3401 * state if it's not finished being written out to the 3402 * l2arc device. Otherwise, the b_l1hdr.b_pabd field 3403 * might try to be accessed, even though it was removed. 3404 */ 3405 VERIFY(!HDR_L2_WRITING(hdr)); 3406 VERIFY3P(hdr->b_l1hdr.b_pabd, ==, NULL); 3407 ASSERT(!HDR_HAS_RABD(hdr)); 3408 3409 arc_hdr_clear_flags(nhdr, ARC_FLAG_HAS_L1HDR); 3410 } 3411 /* 3412 * The header has been reallocated so we need to re-insert it into any 3413 * lists it was on. 3414 */ 3415 (void) buf_hash_insert(nhdr, NULL); 3416 3417 ASSERT(list_link_active(&hdr->b_l2hdr.b_l2node)); 3418 3419 mutex_enter(&dev->l2ad_mtx); 3420 3421 /* 3422 * We must place the realloc'ed header back into the list at 3423 * the same spot. Otherwise, if it's placed earlier in the list, 3424 * l2arc_write_buffers() could find it during the function's 3425 * write phase, and try to write it out to the l2arc. 3426 */ 3427 list_insert_after(&dev->l2ad_buflist, hdr, nhdr); 3428 list_remove(&dev->l2ad_buflist, hdr); 3429 3430 mutex_exit(&dev->l2ad_mtx); 3431 3432 /* 3433 * Since we're using the pointer address as the tag when 3434 * incrementing and decrementing the l2ad_alloc refcount, we 3435 * must remove the old pointer (that we're about to destroy) and 3436 * add the new pointer to the refcount. Otherwise we'd remove 3437 * the wrong pointer address when calling arc_hdr_destroy() later. 3438 */ 3439 3440 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, 3441 arc_hdr_size(hdr), hdr); 3442 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 3443 arc_hdr_size(nhdr), nhdr); 3444 3445 buf_discard_identity(hdr); 3446 kmem_cache_free(old, hdr); 3447 3448 return (nhdr); 3449 } 3450 3451 /* 3452 * This function allows an L1 header to be reallocated as a crypt 3453 * header and vice versa. If we are going to a crypt header, the 3454 * new fields will be zeroed out. 3455 */ 3456 static arc_buf_hdr_t * 3457 arc_hdr_realloc_crypt(arc_buf_hdr_t *hdr, boolean_t need_crypt) 3458 { 3459 arc_buf_hdr_t *nhdr; 3460 arc_buf_t *buf; 3461 kmem_cache_t *ncache, *ocache; 3462 3463 /* 3464 * This function requires that hdr is in the arc_anon state. 3465 * Therefore it won't have any L2ARC data for us to worry 3466 * about copying. 
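 *
 * (Aside on arc_hdr_realloc() completed above; added commentary.) The
 * only direction exercised by this file's eviction path demotes an
 * L1+L2 header to the small L2-only form; the reverse direction is the
 * one presumably taken when a later read revives an L2-only header:
 *
 *	// demote: full (or full-crypt) header -> l2only header
 *	hdr = arc_hdr_realloc(hdr, hdr_full_cache, hdr_l2only_cache);
 *
 *	// promote: l2only header -> full header (the crypt cache is
 *	// substituted automatically when HDR_PROTECTED(hdr))
 *	hdr = arc_hdr_realloc(hdr, hdr_l2only_cache, hdr_full_cache);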
3467 */ 3468 ASSERT(HDR_HAS_L1HDR(hdr)); 3469 ASSERT(!HDR_HAS_L2HDR(hdr)); 3470 ASSERT3U(!!HDR_PROTECTED(hdr), !=, need_crypt); 3471 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3472 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3473 ASSERT(!list_link_active(&hdr->b_l2hdr.b_l2node)); 3474 ASSERT3P(hdr->b_hash_next, ==, NULL); 3475 3476 if (need_crypt) { 3477 ncache = hdr_full_crypt_cache; 3478 ocache = hdr_full_cache; 3479 } else { 3480 ncache = hdr_full_cache; 3481 ocache = hdr_full_crypt_cache; 3482 } 3483 3484 nhdr = kmem_cache_alloc(ncache, KM_PUSHPAGE); 3485 3486 /* 3487 * Copy all members that aren't locks or condvars to the new header. 3488 * No lists are pointing to us (as we asserted above), so we don't 3489 * need to worry about the list nodes. 3490 */ 3491 nhdr->b_dva = hdr->b_dva; 3492 nhdr->b_birth = hdr->b_birth; 3493 nhdr->b_type = hdr->b_type; 3494 nhdr->b_flags = hdr->b_flags; 3495 nhdr->b_psize = hdr->b_psize; 3496 nhdr->b_lsize = hdr->b_lsize; 3497 nhdr->b_spa = hdr->b_spa; 3498 #ifdef ZFS_DEBUG 3499 nhdr->b_l1hdr.b_freeze_cksum = hdr->b_l1hdr.b_freeze_cksum; 3500 #endif 3501 nhdr->b_l1hdr.b_bufcnt = hdr->b_l1hdr.b_bufcnt; 3502 nhdr->b_l1hdr.b_byteswap = hdr->b_l1hdr.b_byteswap; 3503 nhdr->b_l1hdr.b_state = hdr->b_l1hdr.b_state; 3504 nhdr->b_l1hdr.b_arc_access = hdr->b_l1hdr.b_arc_access; 3505 nhdr->b_l1hdr.b_mru_hits = hdr->b_l1hdr.b_mru_hits; 3506 nhdr->b_l1hdr.b_mru_ghost_hits = hdr->b_l1hdr.b_mru_ghost_hits; 3507 nhdr->b_l1hdr.b_mfu_hits = hdr->b_l1hdr.b_mfu_hits; 3508 nhdr->b_l1hdr.b_mfu_ghost_hits = hdr->b_l1hdr.b_mfu_ghost_hits; 3509 nhdr->b_l1hdr.b_acb = hdr->b_l1hdr.b_acb; 3510 nhdr->b_l1hdr.b_pabd = hdr->b_l1hdr.b_pabd; 3511 3512 /* 3513 * This zfs_refcount_add() exists only to ensure that the individual 3514 * arc buffers always point to a header that is referenced, avoiding 3515 * a small race condition that could trigger ASSERTs. 
3516 */ 3517 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, FTAG); 3518 nhdr->b_l1hdr.b_buf = hdr->b_l1hdr.b_buf; 3519 for (buf = nhdr->b_l1hdr.b_buf; buf != NULL; buf = buf->b_next) 3520 buf->b_hdr = nhdr; 3521 3522 zfs_refcount_transfer(&nhdr->b_l1hdr.b_refcnt, &hdr->b_l1hdr.b_refcnt); 3523 (void) zfs_refcount_remove(&nhdr->b_l1hdr.b_refcnt, FTAG); 3524 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3525 3526 if (need_crypt) { 3527 arc_hdr_set_flags(nhdr, ARC_FLAG_PROTECTED); 3528 } else { 3529 arc_hdr_clear_flags(nhdr, ARC_FLAG_PROTECTED); 3530 } 3531 3532 /* unset all members of the original hdr */ 3533 memset(&hdr->b_dva, 0, sizeof (dva_t)); 3534 hdr->b_birth = 0; 3535 hdr->b_type = 0; 3536 hdr->b_flags = 0; 3537 hdr->b_psize = 0; 3538 hdr->b_lsize = 0; 3539 hdr->b_spa = 0; 3540 #ifdef ZFS_DEBUG 3541 hdr->b_l1hdr.b_freeze_cksum = NULL; 3542 #endif 3543 hdr->b_l1hdr.b_buf = NULL; 3544 hdr->b_l1hdr.b_bufcnt = 0; 3545 hdr->b_l1hdr.b_byteswap = 0; 3546 hdr->b_l1hdr.b_state = NULL; 3547 hdr->b_l1hdr.b_arc_access = 0; 3548 hdr->b_l1hdr.b_mru_hits = 0; 3549 hdr->b_l1hdr.b_mru_ghost_hits = 0; 3550 hdr->b_l1hdr.b_mfu_hits = 0; 3551 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 3552 hdr->b_l1hdr.b_acb = NULL; 3553 hdr->b_l1hdr.b_pabd = NULL; 3554 3555 if (ocache == hdr_full_crypt_cache) { 3556 ASSERT(!HDR_HAS_RABD(hdr)); 3557 hdr->b_crypt_hdr.b_ot = DMU_OT_NONE; 3558 hdr->b_crypt_hdr.b_ebufcnt = 0; 3559 hdr->b_crypt_hdr.b_dsobj = 0; 3560 memset(hdr->b_crypt_hdr.b_salt, 0, ZIO_DATA_SALT_LEN); 3561 memset(hdr->b_crypt_hdr.b_iv, 0, ZIO_DATA_IV_LEN); 3562 memset(hdr->b_crypt_hdr.b_mac, 0, ZIO_DATA_MAC_LEN); 3563 } 3564 3565 buf_discard_identity(hdr); 3566 kmem_cache_free(ocache, hdr); 3567 3568 return (nhdr); 3569 } 3570 3571 /* 3572 * This function is used by the send / receive code to convert a newly 3573 * allocated arc_buf_t to one that is suitable for a raw encrypted write. It 3574 * is also used to allow the root objset block to be updated without altering 3575 * its embedded MACs. Both block types will always be uncompressed so we do not 3576 * have to worry about compression type or psize. 3577 */ 3578 void 3579 arc_convert_to_raw(arc_buf_t *buf, uint64_t dsobj, boolean_t byteorder, 3580 dmu_object_type_t ot, const uint8_t *salt, const uint8_t *iv, 3581 const uint8_t *mac) 3582 { 3583 arc_buf_hdr_t *hdr = buf->b_hdr; 3584 3585 ASSERT(ot == DMU_OT_DNODE || ot == DMU_OT_OBJSET); 3586 ASSERT(HDR_HAS_L1HDR(hdr)); 3587 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3588 3589 buf->b_flags |= (ARC_BUF_FLAG_COMPRESSED | ARC_BUF_FLAG_ENCRYPTED); 3590 if (!HDR_PROTECTED(hdr)) 3591 hdr = arc_hdr_realloc_crypt(hdr, B_TRUE); 3592 hdr->b_crypt_hdr.b_dsobj = dsobj; 3593 hdr->b_crypt_hdr.b_ot = ot; 3594 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3595 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3596 if (!arc_hdr_has_uncompressed_buf(hdr)) 3597 arc_cksum_free(hdr); 3598 3599 if (salt != NULL) 3600 memcpy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); 3601 if (iv != NULL) 3602 memcpy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); 3603 if (mac != NULL) 3604 memcpy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); 3605 } 3606 3607 /* 3608 * Allocate a new arc_buf_hdr_t and arc_buf_t and return the buf to the caller. 3609 * The buf is returned thawed since we expect the consumer to modify it. 
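 *
 * As a usage illustration (added commentary; the real consumers live in
 * the DMU/dbuf layer, so the surrounding details are schematic):
 *
 *	arc_buf_t *abuf = arc_alloc_buf(spa, FTAG, ARC_BUFC_DATA, size);
 *	// Fill abuf->b_data with the new contents, then hand the buf to
 *	// the write path: arc_write_ready() will compress and/or encrypt
 *	// it into b_pabd/b_rabd and arc_write_done() will give the header
 *	// its on-disk identity. If the write is abandoned instead:
 *	arc_buf_destroy(abuf, FTAG);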
3610 */ 3611 arc_buf_t * 3612 arc_alloc_buf(spa_t *spa, const void *tag, arc_buf_contents_t type, 3613 int32_t size) 3614 { 3615 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), size, size, 3616 B_FALSE, ZIO_COMPRESS_OFF, 0, type); 3617 3618 arc_buf_t *buf = NULL; 3619 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, B_FALSE, 3620 B_FALSE, B_FALSE, &buf)); 3621 arc_buf_thaw(buf); 3622 3623 return (buf); 3624 } 3625 3626 /* 3627 * Allocate a compressed buf in the same manner as arc_alloc_buf. Don't use this 3628 * for bufs containing metadata. 3629 */ 3630 arc_buf_t * 3631 arc_alloc_compressed_buf(spa_t *spa, const void *tag, uint64_t psize, 3632 uint64_t lsize, enum zio_compress compression_type, uint8_t complevel) 3633 { 3634 ASSERT3U(lsize, >, 0); 3635 ASSERT3U(lsize, >=, psize); 3636 ASSERT3U(compression_type, >, ZIO_COMPRESS_OFF); 3637 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3638 3639 arc_buf_hdr_t *hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 3640 B_FALSE, compression_type, complevel, ARC_BUFC_DATA); 3641 3642 arc_buf_t *buf = NULL; 3643 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_FALSE, 3644 B_TRUE, B_FALSE, B_FALSE, &buf)); 3645 arc_buf_thaw(buf); 3646 3647 /* 3648 * To ensure that the hdr has the correct data in it if we call 3649 * arc_untransform() on this buf before it's been written to disk, 3650 * it's easiest if we just set up sharing between the buf and the hdr. 3651 */ 3652 arc_share_buf(hdr, buf); 3653 3654 return (buf); 3655 } 3656 3657 arc_buf_t * 3658 arc_alloc_raw_buf(spa_t *spa, const void *tag, uint64_t dsobj, 3659 boolean_t byteorder, const uint8_t *salt, const uint8_t *iv, 3660 const uint8_t *mac, dmu_object_type_t ot, uint64_t psize, uint64_t lsize, 3661 enum zio_compress compression_type, uint8_t complevel) 3662 { 3663 arc_buf_hdr_t *hdr; 3664 arc_buf_t *buf; 3665 arc_buf_contents_t type = DMU_OT_IS_METADATA(ot) ? 3666 ARC_BUFC_METADATA : ARC_BUFC_DATA; 3667 3668 ASSERT3U(lsize, >, 0); 3669 ASSERT3U(lsize, >=, psize); 3670 ASSERT3U(compression_type, >=, ZIO_COMPRESS_OFF); 3671 ASSERT3U(compression_type, <, ZIO_COMPRESS_FUNCTIONS); 3672 3673 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, B_TRUE, 3674 compression_type, complevel, type); 3675 3676 hdr->b_crypt_hdr.b_dsobj = dsobj; 3677 hdr->b_crypt_hdr.b_ot = ot; 3678 hdr->b_l1hdr.b_byteswap = (byteorder == ZFS_HOST_BYTEORDER) ? 3679 DMU_BSWAP_NUMFUNCS : DMU_OT_BYTESWAP(ot); 3680 memcpy(hdr->b_crypt_hdr.b_salt, salt, ZIO_DATA_SALT_LEN); 3681 memcpy(hdr->b_crypt_hdr.b_iv, iv, ZIO_DATA_IV_LEN); 3682 memcpy(hdr->b_crypt_hdr.b_mac, mac, ZIO_DATA_MAC_LEN); 3683 3684 /* 3685 * This buffer will be considered encrypted even if the ot is not an 3686 * encrypted type. It will become authenticated instead in 3687 * arc_write_ready(). 
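 *
 * (Added summary of the three allocation entry points in this area;
 * the compression constant below is only an example value:)
 *
 *	// plain uncompressed buffer, data or metadata:
 *	buf = arc_alloc_buf(spa, tag, type, lsize);
 *
 *	// payload that is already compressed (data only; the buf is
 *	// immediately shared with the hdr so arc_untransform() can
 *	// find the compressed copy there):
 *	buf = arc_alloc_compressed_buf(spa, tag, psize, lsize,
 *	    ZIO_COMPRESS_LZ4, complevel);
 *
 *	// raw, possibly encrypted payload (e.g. zfs receive):
 *	buf = arc_alloc_raw_buf(spa, tag, dsobj, byteorder, salt, iv,
 *	    mac, ot, psize, lsize, compression_type, complevel);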
3688 */ 3689 buf = NULL; 3690 VERIFY0(arc_buf_alloc_impl(hdr, spa, NULL, tag, B_TRUE, B_TRUE, 3691 B_FALSE, B_FALSE, &buf)); 3692 arc_buf_thaw(buf); 3693 3694 return (buf); 3695 } 3696 3697 static void 3698 l2arc_hdr_arcstats_update(arc_buf_hdr_t *hdr, boolean_t incr, 3699 boolean_t state_only) 3700 { 3701 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3702 l2arc_dev_t *dev = l2hdr->b_dev; 3703 uint64_t lsize = HDR_GET_LSIZE(hdr); 3704 uint64_t psize = HDR_GET_PSIZE(hdr); 3705 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3706 arc_buf_contents_t type = hdr->b_type; 3707 int64_t lsize_s; 3708 int64_t psize_s; 3709 int64_t asize_s; 3710 3711 if (incr) { 3712 lsize_s = lsize; 3713 psize_s = psize; 3714 asize_s = asize; 3715 } else { 3716 lsize_s = -lsize; 3717 psize_s = -psize; 3718 asize_s = -asize; 3719 } 3720 3721 /* If the buffer is a prefetch, count it as such. */ 3722 if (HDR_PREFETCH(hdr)) { 3723 ARCSTAT_INCR(arcstat_l2_prefetch_asize, asize_s); 3724 } else { 3725 /* 3726 * We use the value stored in the L2 header upon initial 3727 * caching in L2ARC. This value will be updated in case 3728 * an MRU/MRU_ghost buffer transitions to MFU but the L2ARC 3729 * metadata (log entry) cannot currently be updated. Having 3730 * the ARC state in the L2 header solves the problem of a 3731 * possibly absent L1 header (apparent in buffers restored 3732 * from persistent L2ARC). 3733 */ 3734 switch (hdr->b_l2hdr.b_arcs_state) { 3735 case ARC_STATE_MRU_GHOST: 3736 case ARC_STATE_MRU: 3737 ARCSTAT_INCR(arcstat_l2_mru_asize, asize_s); 3738 break; 3739 case ARC_STATE_MFU_GHOST: 3740 case ARC_STATE_MFU: 3741 ARCSTAT_INCR(arcstat_l2_mfu_asize, asize_s); 3742 break; 3743 default: 3744 break; 3745 } 3746 } 3747 3748 if (state_only) 3749 return; 3750 3751 ARCSTAT_INCR(arcstat_l2_psize, psize_s); 3752 ARCSTAT_INCR(arcstat_l2_lsize, lsize_s); 3753 3754 switch (type) { 3755 case ARC_BUFC_DATA: 3756 ARCSTAT_INCR(arcstat_l2_bufc_data_asize, asize_s); 3757 break; 3758 case ARC_BUFC_METADATA: 3759 ARCSTAT_INCR(arcstat_l2_bufc_metadata_asize, asize_s); 3760 break; 3761 default: 3762 break; 3763 } 3764 } 3765 3766 3767 static void 3768 arc_hdr_l2hdr_destroy(arc_buf_hdr_t *hdr) 3769 { 3770 l2arc_buf_hdr_t *l2hdr = &hdr->b_l2hdr; 3771 l2arc_dev_t *dev = l2hdr->b_dev; 3772 uint64_t psize = HDR_GET_PSIZE(hdr); 3773 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 3774 3775 ASSERT(MUTEX_HELD(&dev->l2ad_mtx)); 3776 ASSERT(HDR_HAS_L2HDR(hdr)); 3777 3778 list_remove(&dev->l2ad_buflist, hdr); 3779 3780 l2arc_hdr_arcstats_decrement(hdr); 3781 vdev_space_update(dev->l2ad_vdev, -asize, 0, 0); 3782 3783 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, arc_hdr_size(hdr), 3784 hdr); 3785 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 3786 } 3787 3788 static void 3789 arc_hdr_destroy(arc_buf_hdr_t *hdr) 3790 { 3791 if (HDR_HAS_L1HDR(hdr)) { 3792 ASSERT(hdr->b_l1hdr.b_buf == NULL || 3793 hdr->b_l1hdr.b_bufcnt > 0); 3794 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 3795 ASSERT3P(hdr->b_l1hdr.b_state, ==, arc_anon); 3796 } 3797 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3798 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 3799 3800 if (HDR_HAS_L2HDR(hdr)) { 3801 l2arc_dev_t *dev = hdr->b_l2hdr.b_dev; 3802 boolean_t buflist_held = MUTEX_HELD(&dev->l2ad_mtx); 3803 3804 if (!buflist_held) 3805 mutex_enter(&dev->l2ad_mtx); 3806 3807 /* 3808 * Even though we checked this conditional above, we 3809 * need to check this again now that we have the 3810 * l2ad_mtx. 
This is because we could be racing with 3811 * another thread calling l2arc_evict() which might have 3812 * destroyed this header's L2 portion as we were waiting 3813 * to acquire the l2ad_mtx. If that happens, we don't 3814 * want to re-destroy the header's L2 portion. 3815 */ 3816 if (HDR_HAS_L2HDR(hdr)) { 3817 3818 if (!HDR_EMPTY(hdr)) 3819 buf_discard_identity(hdr); 3820 3821 arc_hdr_l2hdr_destroy(hdr); 3822 } 3823 3824 if (!buflist_held) 3825 mutex_exit(&dev->l2ad_mtx); 3826 } 3827 3828 /* 3829 * The header's identify can only be safely discarded once it is no 3830 * longer discoverable. This requires removing it from the hash table 3831 * and the l2arc header list. After this point the hash lock can not 3832 * be used to protect the header. 3833 */ 3834 if (!HDR_EMPTY(hdr)) 3835 buf_discard_identity(hdr); 3836 3837 if (HDR_HAS_L1HDR(hdr)) { 3838 arc_cksum_free(hdr); 3839 3840 while (hdr->b_l1hdr.b_buf != NULL) 3841 arc_buf_destroy_impl(hdr->b_l1hdr.b_buf); 3842 3843 if (hdr->b_l1hdr.b_pabd != NULL) 3844 arc_hdr_free_abd(hdr, B_FALSE); 3845 3846 if (HDR_HAS_RABD(hdr)) 3847 arc_hdr_free_abd(hdr, B_TRUE); 3848 } 3849 3850 ASSERT3P(hdr->b_hash_next, ==, NULL); 3851 if (HDR_HAS_L1HDR(hdr)) { 3852 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 3853 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 3854 #ifdef ZFS_DEBUG 3855 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 3856 #endif 3857 3858 if (!HDR_PROTECTED(hdr)) { 3859 kmem_cache_free(hdr_full_cache, hdr); 3860 } else { 3861 kmem_cache_free(hdr_full_crypt_cache, hdr); 3862 } 3863 } else { 3864 kmem_cache_free(hdr_l2only_cache, hdr); 3865 } 3866 } 3867 3868 void 3869 arc_buf_destroy(arc_buf_t *buf, const void *tag) 3870 { 3871 arc_buf_hdr_t *hdr = buf->b_hdr; 3872 3873 if (hdr->b_l1hdr.b_state == arc_anon) { 3874 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 3875 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3876 VERIFY0(remove_reference(hdr, tag)); 3877 return; 3878 } 3879 3880 kmutex_t *hash_lock = HDR_LOCK(hdr); 3881 mutex_enter(hash_lock); 3882 3883 ASSERT3P(hdr, ==, buf->b_hdr); 3884 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 3885 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 3886 ASSERT3P(hdr->b_l1hdr.b_state, !=, arc_anon); 3887 ASSERT3P(buf->b_data, !=, NULL); 3888 3889 arc_buf_destroy_impl(buf); 3890 (void) remove_reference(hdr, tag); 3891 mutex_exit(hash_lock); 3892 } 3893 3894 /* 3895 * Evict the arc_buf_hdr that is provided as a parameter. The resultant 3896 * state of the header is dependent on its state prior to entering this 3897 * function. The following transitions are possible: 3898 * 3899 * - arc_mru -> arc_mru_ghost 3900 * - arc_mfu -> arc_mfu_ghost 3901 * - arc_mru_ghost -> arc_l2c_only 3902 * - arc_mru_ghost -> deleted 3903 * - arc_mfu_ghost -> arc_l2c_only 3904 * - arc_mfu_ghost -> deleted 3905 * - arc_uncached -> deleted 3906 * 3907 * Return total size of evicted data buffers for eviction progress tracking. 3908 * When evicting from ghost states return logical buffer size to make eviction 3909 * progress at the same (or at least comparable) rate as from non-ghost states. 3910 * 3911 * Return *real_evicted for actual ARC size reduction to wake up threads 3912 * waiting for it. For non-ghost states it includes size of evicted data 3913 * buffers (the headers are not freed there). For ghost states it includes 3914 * only the evicted headers size. 
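 *
 * (Added note.) A caller consumes the two results the way the eviction
 * scan below does:
 *
 *	uint64_t revicted;
 *	uint64_t evicted = arc_evict_hdr(hdr, &revicted);
 *	bytes_evicted += evicted;	// progress toward the byte target
 *	real_evicted += revicted;	// actual ARC footprint reduction,
 *					// used to satisfy
 *					// arc_wait_for_eviction() waiters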
3915 */ 3916 static int64_t 3917 arc_evict_hdr(arc_buf_hdr_t *hdr, uint64_t *real_evicted) 3918 { 3919 arc_state_t *evicted_state, *state; 3920 int64_t bytes_evicted = 0; 3921 uint_t min_lifetime = HDR_PRESCIENT_PREFETCH(hdr) ? 3922 arc_min_prescient_prefetch_ms : arc_min_prefetch_ms; 3923 3924 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 3925 ASSERT(HDR_HAS_L1HDR(hdr)); 3926 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 3927 ASSERT0(hdr->b_l1hdr.b_bufcnt); 3928 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 3929 ASSERT0(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt)); 3930 3931 *real_evicted = 0; 3932 state = hdr->b_l1hdr.b_state; 3933 if (GHOST_STATE(state)) { 3934 3935 /* 3936 * l2arc_write_buffers() relies on a header's L1 portion 3937 * (i.e. its b_pabd field) during it's write phase. 3938 * Thus, we cannot push a header onto the arc_l2c_only 3939 * state (removing its L1 piece) until the header is 3940 * done being written to the l2arc. 3941 */ 3942 if (HDR_HAS_L2HDR(hdr) && HDR_L2_WRITING(hdr)) { 3943 ARCSTAT_BUMP(arcstat_evict_l2_skip); 3944 return (bytes_evicted); 3945 } 3946 3947 ARCSTAT_BUMP(arcstat_deleted); 3948 bytes_evicted += HDR_GET_LSIZE(hdr); 3949 3950 DTRACE_PROBE1(arc__delete, arc_buf_hdr_t *, hdr); 3951 3952 if (HDR_HAS_L2HDR(hdr)) { 3953 ASSERT(hdr->b_l1hdr.b_pabd == NULL); 3954 ASSERT(!HDR_HAS_RABD(hdr)); 3955 /* 3956 * This buffer is cached on the 2nd Level ARC; 3957 * don't destroy the header. 3958 */ 3959 arc_change_state(arc_l2c_only, hdr); 3960 /* 3961 * dropping from L1+L2 cached to L2-only, 3962 * realloc to remove the L1 header. 3963 */ 3964 (void) arc_hdr_realloc(hdr, hdr_full_cache, 3965 hdr_l2only_cache); 3966 *real_evicted += HDR_FULL_SIZE - HDR_L2ONLY_SIZE; 3967 } else { 3968 arc_change_state(arc_anon, hdr); 3969 arc_hdr_destroy(hdr); 3970 *real_evicted += HDR_FULL_SIZE; 3971 } 3972 return (bytes_evicted); 3973 } 3974 3975 ASSERT(state == arc_mru || state == arc_mfu || state == arc_uncached); 3976 evicted_state = (state == arc_uncached) ? arc_anon : 3977 ((state == arc_mru) ? arc_mru_ghost : arc_mfu_ghost); 3978 3979 /* prefetch buffers have a minimum lifespan */ 3980 if ((hdr->b_flags & (ARC_FLAG_PREFETCH | ARC_FLAG_INDIRECT)) && 3981 ddi_get_lbolt() - hdr->b_l1hdr.b_arc_access < 3982 MSEC_TO_TICK(min_lifetime)) { 3983 ARCSTAT_BUMP(arcstat_evict_skip); 3984 return (bytes_evicted); 3985 } 3986 3987 if (HDR_HAS_L2HDR(hdr)) { 3988 ARCSTAT_INCR(arcstat_evict_l2_cached, HDR_GET_LSIZE(hdr)); 3989 } else { 3990 if (l2arc_write_eligible(hdr->b_spa, hdr)) { 3991 ARCSTAT_INCR(arcstat_evict_l2_eligible, 3992 HDR_GET_LSIZE(hdr)); 3993 3994 switch (state->arcs_state) { 3995 case ARC_STATE_MRU: 3996 ARCSTAT_INCR( 3997 arcstat_evict_l2_eligible_mru, 3998 HDR_GET_LSIZE(hdr)); 3999 break; 4000 case ARC_STATE_MFU: 4001 ARCSTAT_INCR( 4002 arcstat_evict_l2_eligible_mfu, 4003 HDR_GET_LSIZE(hdr)); 4004 break; 4005 default: 4006 break; 4007 } 4008 } else { 4009 ARCSTAT_INCR(arcstat_evict_l2_ineligible, 4010 HDR_GET_LSIZE(hdr)); 4011 } 4012 } 4013 4014 bytes_evicted += arc_hdr_size(hdr); 4015 *real_evicted += arc_hdr_size(hdr); 4016 4017 /* 4018 * If this hdr is being evicted and has a compressed buffer then we 4019 * discard it here before we change states. This ensures that the 4020 * accounting is updated correctly in arc_free_data_impl(). 
4021 */ 4022 if (hdr->b_l1hdr.b_pabd != NULL) 4023 arc_hdr_free_abd(hdr, B_FALSE); 4024 4025 if (HDR_HAS_RABD(hdr)) 4026 arc_hdr_free_abd(hdr, B_TRUE); 4027 4028 arc_change_state(evicted_state, hdr); 4029 DTRACE_PROBE1(arc__evict, arc_buf_hdr_t *, hdr); 4030 if (evicted_state == arc_anon) { 4031 arc_hdr_destroy(hdr); 4032 *real_evicted += HDR_FULL_SIZE; 4033 } else { 4034 ASSERT(HDR_IN_HASH_TABLE(hdr)); 4035 } 4036 4037 return (bytes_evicted); 4038 } 4039 4040 static void 4041 arc_set_need_free(void) 4042 { 4043 ASSERT(MUTEX_HELD(&arc_evict_lock)); 4044 int64_t remaining = arc_free_memory() - arc_sys_free / 2; 4045 arc_evict_waiter_t *aw = list_tail(&arc_evict_waiters); 4046 if (aw == NULL) { 4047 arc_need_free = MAX(-remaining, 0); 4048 } else { 4049 arc_need_free = 4050 MAX(-remaining, (int64_t)(aw->aew_count - arc_evict_count)); 4051 } 4052 } 4053 4054 static uint64_t 4055 arc_evict_state_impl(multilist_t *ml, int idx, arc_buf_hdr_t *marker, 4056 uint64_t spa, uint64_t bytes) 4057 { 4058 multilist_sublist_t *mls; 4059 uint64_t bytes_evicted = 0, real_evicted = 0; 4060 arc_buf_hdr_t *hdr; 4061 kmutex_t *hash_lock; 4062 uint_t evict_count = zfs_arc_evict_batch_limit; 4063 4064 ASSERT3P(marker, !=, NULL); 4065 4066 mls = multilist_sublist_lock(ml, idx); 4067 4068 for (hdr = multilist_sublist_prev(mls, marker); likely(hdr != NULL); 4069 hdr = multilist_sublist_prev(mls, marker)) { 4070 if ((evict_count == 0) || (bytes_evicted >= bytes)) 4071 break; 4072 4073 /* 4074 * To keep our iteration location, move the marker 4075 * forward. Since we're not holding hdr's hash lock, we 4076 * must be very careful and not remove 'hdr' from the 4077 * sublist. Otherwise, other consumers might mistake the 4078 * 'hdr' as not being on a sublist when they call the 4079 * multilist_link_active() function (they all rely on 4080 * the hash lock protecting concurrent insertions and 4081 * removals). multilist_sublist_move_forward() was 4082 * specifically implemented to ensure this is the case 4083 * (only 'marker' will be removed and re-inserted). 4084 */ 4085 multilist_sublist_move_forward(mls, marker); 4086 4087 /* 4088 * The only case where the b_spa field should ever be 4089 * zero, is the marker headers inserted by 4090 * arc_evict_state(). It's possible for multiple threads 4091 * to be calling arc_evict_state() concurrently (e.g. 4092 * dsl_pool_close() and zio_inject_fault()), so we must 4093 * skip any markers we see from these other threads. 4094 */ 4095 if (hdr->b_spa == 0) 4096 continue; 4097 4098 /* we're only interested in evicting buffers of a certain spa */ 4099 if (spa != 0 && hdr->b_spa != spa) { 4100 ARCSTAT_BUMP(arcstat_evict_skip); 4101 continue; 4102 } 4103 4104 hash_lock = HDR_LOCK(hdr); 4105 4106 /* 4107 * We aren't calling this function from any code path 4108 * that would already be holding a hash lock, so we're 4109 * asserting on this assumption to be defensive in case 4110 * this ever changes. Without this check, it would be 4111 * possible to incorrectly increment arcstat_mutex_miss 4112 * below (e.g. if the code changed such that we called 4113 * this function with a hash lock held). 
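 *
 * (Added condensed summary of the scan protocol used by this function;
 * the byte/batch limits and counter updates are omitted:)
 *
 *	for (hdr = multilist_sublist_prev(mls, marker); hdr != NULL;
 *	    hdr = multilist_sublist_prev(mls, marker)) {
 *		multilist_sublist_move_forward(mls, marker);
 *		if (hdr->b_spa == 0)
 *			continue;	// another thread's marker
 *		if (spa != 0 && hdr->b_spa != spa)
 *			continue;	// belongs to a different pool
 *		if (!mutex_tryenter(HDR_LOCK(hdr))) {
 *			ARCSTAT_BUMP(arcstat_mutex_miss);
 *			continue;	// never block on a hash lock here
 *		}
 *		// evict under the hash lock, drop it, update counters
 *	}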
4114 */ 4115 ASSERT(!MUTEX_HELD(hash_lock)); 4116 4117 if (mutex_tryenter(hash_lock)) { 4118 uint64_t revicted; 4119 uint64_t evicted = arc_evict_hdr(hdr, &revicted); 4120 mutex_exit(hash_lock); 4121 4122 bytes_evicted += evicted; 4123 real_evicted += revicted; 4124 4125 /* 4126 * If evicted is zero, arc_evict_hdr() must have 4127 * decided to skip this header, don't increment 4128 * evict_count in this case. 4129 */ 4130 if (evicted != 0) 4131 evict_count--; 4132 4133 } else { 4134 ARCSTAT_BUMP(arcstat_mutex_miss); 4135 } 4136 } 4137 4138 multilist_sublist_unlock(mls); 4139 4140 /* 4141 * Increment the count of evicted bytes, and wake up any threads that 4142 * are waiting for the count to reach this value. Since the list is 4143 * ordered by ascending aew_count, we pop off the beginning of the 4144 * list until we reach the end, or a waiter that's past the current 4145 * "count". Doing this outside the loop reduces the number of times 4146 * we need to acquire the global arc_evict_lock. 4147 * 4148 * Only wake when there's sufficient free memory in the system 4149 * (specifically, arc_sys_free/2, which by default is a bit more than 4150 * 1/64th of RAM). See the comments in arc_wait_for_eviction(). 4151 */ 4152 mutex_enter(&arc_evict_lock); 4153 arc_evict_count += real_evicted; 4154 4155 if (arc_free_memory() > arc_sys_free / 2) { 4156 arc_evict_waiter_t *aw; 4157 while ((aw = list_head(&arc_evict_waiters)) != NULL && 4158 aw->aew_count <= arc_evict_count) { 4159 list_remove(&arc_evict_waiters, aw); 4160 cv_broadcast(&aw->aew_cv); 4161 } 4162 } 4163 arc_set_need_free(); 4164 mutex_exit(&arc_evict_lock); 4165 4166 /* 4167 * If the ARC size is reduced from arc_c_max to arc_c_min (especially 4168 * if the average cached block is small), eviction can be on-CPU for 4169 * many seconds. To ensure that other threads that may be bound to 4170 * this CPU are able to make progress, make a voluntary preemption 4171 * call here. 4172 */ 4173 kpreempt(KPREEMPT_SYNC); 4174 4175 return (bytes_evicted); 4176 } 4177 4178 /* 4179 * Allocate an array of buffer headers used as placeholders during arc state 4180 * eviction. 4181 */ 4182 static arc_buf_hdr_t ** 4183 arc_state_alloc_markers(int count) 4184 { 4185 arc_buf_hdr_t **markers; 4186 4187 markers = kmem_zalloc(sizeof (*markers) * count, KM_SLEEP); 4188 for (int i = 0; i < count; i++) { 4189 markers[i] = kmem_cache_alloc(hdr_full_cache, KM_SLEEP); 4190 4191 /* 4192 * A b_spa of 0 is used to indicate that this header is 4193 * a marker. This fact is used in arc_evict_state_impl(). 4194 */ 4195 markers[i]->b_spa = 0; 4196 4197 } 4198 return (markers); 4199 } 4200 4201 static void 4202 arc_state_free_markers(arc_buf_hdr_t **markers, int count) 4203 { 4204 for (int i = 0; i < count; i++) 4205 kmem_cache_free(hdr_full_cache, markers[i]); 4206 kmem_free(markers, sizeof (*markers) * count); 4207 } 4208 4209 /* 4210 * Evict buffers from the given arc state, until we've removed the 4211 * specified number of bytes. Move the removed buffers to the 4212 * appropriate evict state. 4213 * 4214 * This function makes a "best effort". It skips over any buffers 4215 * it can't get a hash_lock on, and so, may not catch all candidates. 4216 * It may also return without evicting as much space as requested. 4217 * 4218 * If bytes is specified using the special value ARC_EVICT_ALL, this 4219 * will evict all available (i.e. unlocked and evictable) buffers from 4220 * the given arc state; which is used by arc_flush(). 
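 *
 * (Added note.) arc_flush_state() further below is the caller that
 * passes ARC_EVICT_ALL, looping until nothing evictable remains:
 *
 *	while (zfs_refcount_count(&state->arcs_esize[type]) != 0) {
 *		evicted += arc_evict_state(state, type, spa, ARC_EVICT_ALL);
 *		if (!retry)
 *			break;
 *	}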
4221 */ 4222 static uint64_t 4223 arc_evict_state(arc_state_t *state, arc_buf_contents_t type, uint64_t spa, 4224 uint64_t bytes) 4225 { 4226 uint64_t total_evicted = 0; 4227 multilist_t *ml = &state->arcs_list[type]; 4228 int num_sublists; 4229 arc_buf_hdr_t **markers; 4230 4231 num_sublists = multilist_get_num_sublists(ml); 4232 4233 /* 4234 * If we've tried to evict from each sublist, made some 4235 * progress, but still have not hit the target number of bytes 4236 * to evict, we want to keep trying. The markers allow us to 4237 * pick up where we left off for each individual sublist, rather 4238 * than starting from the tail each time. 4239 */ 4240 if (zthr_iscurthread(arc_evict_zthr)) { 4241 markers = arc_state_evict_markers; 4242 ASSERT3S(num_sublists, <=, arc_state_evict_marker_count); 4243 } else { 4244 markers = arc_state_alloc_markers(num_sublists); 4245 } 4246 for (int i = 0; i < num_sublists; i++) { 4247 multilist_sublist_t *mls; 4248 4249 mls = multilist_sublist_lock(ml, i); 4250 multilist_sublist_insert_tail(mls, markers[i]); 4251 multilist_sublist_unlock(mls); 4252 } 4253 4254 /* 4255 * While we haven't hit our target number of bytes to evict, or 4256 * we're evicting all available buffers. 4257 */ 4258 while (total_evicted < bytes) { 4259 int sublist_idx = multilist_get_random_index(ml); 4260 uint64_t scan_evicted = 0; 4261 4262 /* 4263 * Start eviction using a randomly selected sublist, 4264 * this is to try and evenly balance eviction across all 4265 * sublists. Always starting at the same sublist 4266 * (e.g. index 0) would cause evictions to favor certain 4267 * sublists over others. 4268 */ 4269 for (int i = 0; i < num_sublists; i++) { 4270 uint64_t bytes_remaining; 4271 uint64_t bytes_evicted; 4272 4273 if (total_evicted < bytes) 4274 bytes_remaining = bytes - total_evicted; 4275 else 4276 break; 4277 4278 bytes_evicted = arc_evict_state_impl(ml, sublist_idx, 4279 markers[sublist_idx], spa, bytes_remaining); 4280 4281 scan_evicted += bytes_evicted; 4282 total_evicted += bytes_evicted; 4283 4284 /* we've reached the end, wrap to the beginning */ 4285 if (++sublist_idx >= num_sublists) 4286 sublist_idx = 0; 4287 } 4288 4289 /* 4290 * If we didn't evict anything during this scan, we have 4291 * no reason to believe we'll evict more during another 4292 * scan, so break the loop. 4293 */ 4294 if (scan_evicted == 0) { 4295 /* This isn't possible, let's make that obvious */ 4296 ASSERT3S(bytes, !=, 0); 4297 4298 /* 4299 * When bytes is ARC_EVICT_ALL, the only way to 4300 * break the loop is when scan_evicted is zero. 4301 * In that case, we actually have evicted enough, 4302 * so we don't want to increment the kstat. 4303 */ 4304 if (bytes != ARC_EVICT_ALL) { 4305 ASSERT3S(total_evicted, <, bytes); 4306 ARCSTAT_BUMP(arcstat_evict_not_enough); 4307 } 4308 4309 break; 4310 } 4311 } 4312 4313 for (int i = 0; i < num_sublists; i++) { 4314 multilist_sublist_t *mls = multilist_sublist_lock(ml, i); 4315 multilist_sublist_remove(mls, markers[i]); 4316 multilist_sublist_unlock(mls); 4317 } 4318 if (markers != arc_state_evict_markers) 4319 arc_state_free_markers(markers, num_sublists); 4320 4321 return (total_evicted); 4322 } 4323 4324 /* 4325 * Flush all "evictable" data of the given type from the arc state 4326 * specified. This will not evict any "active" buffers (i.e. referenced). 4327 * 4328 * When 'retry' is set to B_FALSE, the function will make a single pass 4329 * over the state and evict any buffers that it can. 
Since it doesn't 4330 * continually retry the eviction, it might end up leaving some buffers 4331 * in the ARC due to lock misses. 4332 * 4333 * When 'retry' is set to B_TRUE, the function will continually retry the 4334 * eviction until *all* evictable buffers have been removed from the 4335 * state. As a result, if concurrent insertions into the state are 4336 * allowed (e.g. if the ARC isn't shutting down), this function might 4337 * wind up in an infinite loop, continually trying to evict buffers. 4338 */ 4339 static uint64_t 4340 arc_flush_state(arc_state_t *state, uint64_t spa, arc_buf_contents_t type, 4341 boolean_t retry) 4342 { 4343 uint64_t evicted = 0; 4344 4345 while (zfs_refcount_count(&state->arcs_esize[type]) != 0) { 4346 evicted += arc_evict_state(state, type, spa, ARC_EVICT_ALL); 4347 4348 if (!retry) 4349 break; 4350 } 4351 4352 return (evicted); 4353 } 4354 4355 /* 4356 * Evict the specified number of bytes from the state specified. This 4357 * function prevents us from trying to evict more from a state's list 4358 * than is "evictable", and to skip evicting altogether when passed a 4359 * negative value for "bytes". In contrast, arc_evict_state() will 4360 * evict everything it can, when passed a negative value for "bytes". 4361 */ 4362 static uint64_t 4363 arc_evict_impl(arc_state_t *state, arc_buf_contents_t type, int64_t bytes) 4364 { 4365 uint64_t delta; 4366 4367 if (bytes > 0 && zfs_refcount_count(&state->arcs_esize[type]) > 0) { 4368 delta = MIN(zfs_refcount_count(&state->arcs_esize[type]), 4369 bytes); 4370 return (arc_evict_state(state, type, 0, delta)); 4371 } 4372 4373 return (0); 4374 } 4375 4376 /* 4377 * Adjust specified fraction, taking into account initial ghost state(s) size, 4378 * ghost hit bytes towards increasing the fraction, ghost hit bytes towards 4379 * decreasing it, plus a balance factor, controlling the decrease rate, used 4380 * to balance metadata vs data. 4381 */ 4382 static uint64_t 4383 arc_evict_adj(uint64_t frac, uint64_t total, uint64_t up, uint64_t down, 4384 uint_t balance) 4385 { 4386 if (total < 8 || up + down == 0) 4387 return (frac); 4388 4389 /* 4390 * We should not have more ghost hits than ghost size, but they 4391 * may get close. Restrict maximum adjustment in that case. 4392 */ 4393 if (up + down >= total / 4) { 4394 uint64_t scale = (up + down) / (total / 8); 4395 up /= scale; 4396 down /= scale; 4397 } 4398 4399 /* Get maximal dynamic range by choosing optimal shifts. */ 4400 int s = highbit64(total); 4401 s = MIN(64 - s, 32); 4402 4403 uint64_t ofrac = (1ULL << 32) - frac; 4404 4405 if (frac >= 4 * ofrac) 4406 up /= frac / (2 * ofrac + 1); 4407 up = (up << s) / (total >> (32 - s)); 4408 if (ofrac >= 4 * frac) 4409 down /= ofrac / (2 * frac + 1); 4410 down = (down << s) / (total >> (32 - s)); 4411 down = down * 100 / balance; 4412 4413 return (frac + up - down); 4414 } 4415 4416 /* 4417 * Evict buffers from the cache, such that arcstat_size is capped by arc_c. 4418 */ 4419 static uint64_t 4420 arc_evict(void) 4421 { 4422 uint64_t asize, bytes, total_evicted = 0; 4423 int64_t e, mrud, mrum, mfud, mfum, w; 4424 static uint64_t ogrd, ogrm, ogfd, ogfm; 4425 static uint64_t gsrd, gsrm, gsfd, gsfm; 4426 uint64_t ngrd, ngrm, ngfd, ngfm; 4427 4428 /* Get current size of ARC states we can evict from. 
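 *
 * (Added note.) The per-state evictions below may compute a negative or
 * oversized target; arc_evict_impl() above absorbs both cases, so the
 * callers do not have to clamp:
 *
 *	int64_t e = (int64_t)(asize - arc_c);	// may well be negative
 *	bytes = arc_evict_impl(arc_mfu, ARC_BUFC_DATA, e);
 *	// e <= 0          -> returns 0, nothing evicted
 *	// e >  evictable  -> evicts at most arcs_esize[type]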
*/ 4429 mrud = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_DATA]) + 4430 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_DATA]); 4431 mrum = zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_METADATA]) + 4432 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 4433 mfud = zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 4434 mfum = zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 4435 uint64_t d = mrud + mfud; 4436 uint64_t m = mrum + mfum; 4437 uint64_t t = d + m; 4438 4439 /* Get ARC ghost hits since last eviction. */ 4440 ngrd = wmsum_value(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA]); 4441 uint64_t grd = ngrd - ogrd; 4442 ogrd = ngrd; 4443 ngrm = wmsum_value(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA]); 4444 uint64_t grm = ngrm - ogrm; 4445 ogrm = ngrm; 4446 ngfd = wmsum_value(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA]); 4447 uint64_t gfd = ngfd - ogfd; 4448 ogfd = ngfd; 4449 ngfm = wmsum_value(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA]); 4450 uint64_t gfm = ngfm - ogfm; 4451 ogfm = ngfm; 4452 4453 /* Adjust ARC states balance based on ghost hits. */ 4454 arc_meta = arc_evict_adj(arc_meta, gsrd + gsrm + gsfd + gsfm, 4455 grm + gfm, grd + gfd, zfs_arc_meta_balance); 4456 arc_pd = arc_evict_adj(arc_pd, gsrd + gsfd, grd, gfd, 100); 4457 arc_pm = arc_evict_adj(arc_pm, gsrm + gsfm, grm, gfm, 100); 4458 4459 asize = aggsum_value(&arc_sums.arcstat_size); 4460 int64_t wt = t - (asize - arc_c); 4461 4462 /* 4463 * Try to reduce pinned dnodes if more than 3/4 of wanted metadata 4464 * target is not evictable or if they go over arc_dnode_limit. 4465 */ 4466 int64_t prune = 0; 4467 int64_t dn = wmsum_value(&arc_sums.arcstat_dnode_size); 4468 w = wt * (int64_t)(arc_meta >> 16) >> 16; 4469 if (zfs_refcount_count(&arc_mru->arcs_size[ARC_BUFC_METADATA]) + 4470 zfs_refcount_count(&arc_mfu->arcs_size[ARC_BUFC_METADATA]) - 4471 zfs_refcount_count(&arc_mru->arcs_esize[ARC_BUFC_METADATA]) - 4472 zfs_refcount_count(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]) > 4473 w * 3 / 4) { 4474 prune = dn / sizeof (dnode_t) * 4475 zfs_arc_dnode_reduce_percent / 100; 4476 } else if (dn > arc_dnode_limit) { 4477 prune = (dn - arc_dnode_limit) / sizeof (dnode_t) * 4478 zfs_arc_dnode_reduce_percent / 100; 4479 } 4480 if (prune > 0) 4481 arc_prune_async(prune); 4482 4483 /* Evict MRU metadata. */ 4484 w = wt * (int64_t)(arc_meta * arc_pm >> 48) >> 16; 4485 e = MIN((int64_t)(asize - arc_c), (int64_t)(mrum - w)); 4486 bytes = arc_evict_impl(arc_mru, ARC_BUFC_METADATA, e); 4487 total_evicted += bytes; 4488 mrum -= bytes; 4489 asize -= bytes; 4490 4491 /* Evict MFU metadata. */ 4492 w = wt * (int64_t)(arc_meta >> 16) >> 16; 4493 e = MIN((int64_t)(asize - arc_c), (int64_t)(m - w)); 4494 bytes = arc_evict_impl(arc_mfu, ARC_BUFC_METADATA, e); 4495 total_evicted += bytes; 4496 mfum -= bytes; 4497 asize -= bytes; 4498 4499 /* Evict MRU data. */ 4500 wt -= m - total_evicted; 4501 w = wt * (int64_t)(arc_pd >> 16) >> 16; 4502 e = MIN((int64_t)(asize - arc_c), (int64_t)(mrud - w)); 4503 bytes = arc_evict_impl(arc_mru, ARC_BUFC_DATA, e); 4504 total_evicted += bytes; 4505 mrud -= bytes; 4506 asize -= bytes; 4507 4508 /* Evict MFU data. */ 4509 e = asize - arc_c; 4510 bytes = arc_evict_impl(arc_mfu, ARC_BUFC_DATA, e); 4511 mfud -= bytes; 4512 total_evicted += bytes; 4513 4514 /* 4515 * Evict ghost lists 4516 * 4517 * Size of each state's ghost list represents how much that state 4518 * may grow by shrinking the other states. 
Would it need to shrink 4519 * other states to zero (that is unlikely), its ghost size would be 4520 * equal to sum of other three state sizes. But excessive ghost 4521 * size may result in false ghost hits (too far back), that may 4522 * never result in real cache hits if several states are competing. 4523 * So choose some arbitraty point of 1/2 of other state sizes. 4524 */ 4525 gsrd = (mrum + mfud + mfum) / 2; 4526 e = zfs_refcount_count(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]) - 4527 gsrd; 4528 (void) arc_evict_impl(arc_mru_ghost, ARC_BUFC_DATA, e); 4529 4530 gsrm = (mrud + mfud + mfum) / 2; 4531 e = zfs_refcount_count(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]) - 4532 gsrm; 4533 (void) arc_evict_impl(arc_mru_ghost, ARC_BUFC_METADATA, e); 4534 4535 gsfd = (mrud + mrum + mfum) / 2; 4536 e = zfs_refcount_count(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]) - 4537 gsfd; 4538 (void) arc_evict_impl(arc_mfu_ghost, ARC_BUFC_DATA, e); 4539 4540 gsfm = (mrud + mrum + mfud) / 2; 4541 e = zfs_refcount_count(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]) - 4542 gsfm; 4543 (void) arc_evict_impl(arc_mfu_ghost, ARC_BUFC_METADATA, e); 4544 4545 return (total_evicted); 4546 } 4547 4548 void 4549 arc_flush(spa_t *spa, boolean_t retry) 4550 { 4551 uint64_t guid = 0; 4552 4553 /* 4554 * If retry is B_TRUE, a spa must not be specified since we have 4555 * no good way to determine if all of a spa's buffers have been 4556 * evicted from an arc state. 4557 */ 4558 ASSERT(!retry || spa == NULL); 4559 4560 if (spa != NULL) 4561 guid = spa_load_guid(spa); 4562 4563 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_DATA, retry); 4564 (void) arc_flush_state(arc_mru, guid, ARC_BUFC_METADATA, retry); 4565 4566 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_DATA, retry); 4567 (void) arc_flush_state(arc_mfu, guid, ARC_BUFC_METADATA, retry); 4568 4569 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_DATA, retry); 4570 (void) arc_flush_state(arc_mru_ghost, guid, ARC_BUFC_METADATA, retry); 4571 4572 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_DATA, retry); 4573 (void) arc_flush_state(arc_mfu_ghost, guid, ARC_BUFC_METADATA, retry); 4574 4575 (void) arc_flush_state(arc_uncached, guid, ARC_BUFC_DATA, retry); 4576 (void) arc_flush_state(arc_uncached, guid, ARC_BUFC_METADATA, retry); 4577 } 4578 4579 void 4580 arc_reduce_target_size(int64_t to_free) 4581 { 4582 uint64_t c = arc_c; 4583 4584 if (c <= arc_c_min) 4585 return; 4586 4587 /* 4588 * All callers want the ARC to actually evict (at least) this much 4589 * memory. Therefore we reduce from the lower of the current size and 4590 * the target size. This way, even if arc_c is much higher than 4591 * arc_size (as can be the case after many calls to arc_freed(), we will 4592 * immediately have arc_c < arc_size and therefore the arc_evict_zthr 4593 * will evict. 4594 */ 4595 uint64_t asize = aggsum_value(&arc_sums.arcstat_size); 4596 if (asize < c) 4597 to_free += c - asize; 4598 arc_c = MAX((int64_t)c - to_free, (int64_t)arc_c_min); 4599 4600 /* See comment in arc_evict_cb_check() on why lock+flag */ 4601 mutex_enter(&arc_evict_lock); 4602 arc_evict_needed = B_TRUE; 4603 mutex_exit(&arc_evict_lock); 4604 zthr_wakeup(arc_evict_zthr); 4605 } 4606 4607 /* 4608 * Determine if the system is under memory pressure and is asking 4609 * to reclaim memory. A return value of B_TRUE indicates that the system 4610 * is under memory pressure and that the arc should adjust accordingly. 
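 *
 * (Added worked example for arc_reduce_target_size() above, using round
 * numbers and assuming arc_c_min is well below the result:) with
 * arc_c = 10 GiB, arc_size = 6 GiB and a request of to_free = 1 GiB:
 *
 *	to_free += arc_c - arc_size;			// 1 + 4 = 5 GiB
 *	arc_c = MAX(arc_c - to_free, arc_c_min);	// 10 - 5 = 5 GiB
 *
 * so the new target sits 1 GiB below the current size and the evict
 * thread has exactly the requested amount of real work to do.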
4611 */ 4612 boolean_t 4613 arc_reclaim_needed(void) 4614 { 4615 return (arc_available_memory() < 0); 4616 } 4617 4618 void 4619 arc_kmem_reap_soon(void) 4620 { 4621 size_t i; 4622 kmem_cache_t *prev_cache = NULL; 4623 kmem_cache_t *prev_data_cache = NULL; 4624 4625 #ifdef _KERNEL 4626 #if defined(_ILP32) 4627 /* 4628 * Reclaim unused memory from all kmem caches. 4629 */ 4630 kmem_reap(); 4631 #endif 4632 #endif 4633 4634 for (i = 0; i < SPA_MAXBLOCKSIZE >> SPA_MINBLOCKSHIFT; i++) { 4635 #if defined(_ILP32) 4636 /* reach upper limit of cache size on 32-bit */ 4637 if (zio_buf_cache[i] == NULL) 4638 break; 4639 #endif 4640 if (zio_buf_cache[i] != prev_cache) { 4641 prev_cache = zio_buf_cache[i]; 4642 kmem_cache_reap_now(zio_buf_cache[i]); 4643 } 4644 if (zio_data_buf_cache[i] != prev_data_cache) { 4645 prev_data_cache = zio_data_buf_cache[i]; 4646 kmem_cache_reap_now(zio_data_buf_cache[i]); 4647 } 4648 } 4649 kmem_cache_reap_now(buf_cache); 4650 kmem_cache_reap_now(hdr_full_cache); 4651 kmem_cache_reap_now(hdr_l2only_cache); 4652 kmem_cache_reap_now(zfs_btree_leaf_cache); 4653 abd_cache_reap_now(); 4654 } 4655 4656 static boolean_t 4657 arc_evict_cb_check(void *arg, zthr_t *zthr) 4658 { 4659 (void) arg, (void) zthr; 4660 4661 #ifdef ZFS_DEBUG 4662 /* 4663 * This is necessary in order to keep the kstat information 4664 * up to date for tools that display kstat data such as the 4665 * mdb ::arc dcmd and the Linux crash utility. These tools 4666 * typically do not call kstat's update function, but simply 4667 * dump out stats from the most recent update. Without 4668 * this call, these commands may show stale stats for the 4669 * anon, mru, mru_ghost, mfu, and mfu_ghost lists. Even 4670 * with this call, the data might be out of date if the 4671 * evict thread hasn't been woken recently; but that should 4672 * suffice. The arc_state_t structures can be queried 4673 * directly if more accurate information is needed. 4674 */ 4675 if (arc_ksp != NULL) 4676 arc_ksp->ks_update(arc_ksp, KSTAT_READ); 4677 #endif 4678 4679 /* 4680 * We have to rely on arc_wait_for_eviction() to tell us when to 4681 * evict, rather than checking if we are overflowing here, so that we 4682 * are sure to not leave arc_wait_for_eviction() waiting on aew_cv. 4683 * If we have become "not overflowing" since arc_wait_for_eviction() 4684 * checked, we need to wake it up. We could broadcast the CV here, 4685 * but arc_wait_for_eviction() may have not yet gone to sleep. We 4686 * would need to use a mutex to ensure that this function doesn't 4687 * broadcast until arc_wait_for_eviction() has gone to sleep (e.g. 4688 * the arc_evict_lock). However, the lock ordering of such a lock 4689 * would necessarily be incorrect with respect to the zthr_lock, 4690 * which is held before this function is called, and is held by 4691 * arc_wait_for_eviction() when it calls zthr_wakeup(). 4692 */ 4693 if (arc_evict_needed) 4694 return (B_TRUE); 4695 4696 /* 4697 * If we have buffers in uncached state, evict them periodically. 4698 */ 4699 return ((zfs_refcount_count(&arc_uncached->arcs_esize[ARC_BUFC_DATA]) + 4700 zfs_refcount_count(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]) && 4701 ddi_get_lbolt() - arc_last_uncached_flush > 4702 MSEC_TO_TICK(arc_min_prefetch_ms / 2))); 4703 } 4704 4705 /* 4706 * Keep arc_size under arc_c by running arc_evict which evicts data 4707 * from the ARC. 
4708 */ 4709 static void 4710 arc_evict_cb(void *arg, zthr_t *zthr) 4711 { 4712 (void) arg, (void) zthr; 4713 4714 uint64_t evicted = 0; 4715 fstrans_cookie_t cookie = spl_fstrans_mark(); 4716 4717 /* Always try to evict from uncached state. */ 4718 arc_last_uncached_flush = ddi_get_lbolt(); 4719 evicted += arc_flush_state(arc_uncached, 0, ARC_BUFC_DATA, B_FALSE); 4720 evicted += arc_flush_state(arc_uncached, 0, ARC_BUFC_METADATA, B_FALSE); 4721 4722 /* Evict from other states only if told to. */ 4723 if (arc_evict_needed) 4724 evicted += arc_evict(); 4725 4726 /* 4727 * If evicted is zero, we couldn't evict anything 4728 * via arc_evict(). This could be due to hash lock 4729 * collisions, but more likely due to the majority of 4730 * arc buffers being unevictable. Therefore, even if 4731 * arc_size is above arc_c, another pass is unlikely to 4732 * be helpful and could potentially cause us to enter an 4733 * infinite loop. Additionally, zthr_iscancelled() is 4734 * checked here so that if the arc is shutting down, the 4735 * broadcast will wake any remaining arc evict waiters. 4736 */ 4737 mutex_enter(&arc_evict_lock); 4738 arc_evict_needed = !zthr_iscancelled(arc_evict_zthr) && 4739 evicted > 0 && aggsum_compare(&arc_sums.arcstat_size, arc_c) > 0; 4740 if (!arc_evict_needed) { 4741 /* 4742 * We're either no longer overflowing, or we 4743 * can't evict anything more, so we should wake 4744 * arc_get_data_impl() sooner. 4745 */ 4746 arc_evict_waiter_t *aw; 4747 while ((aw = list_remove_head(&arc_evict_waiters)) != NULL) { 4748 cv_broadcast(&aw->aew_cv); 4749 } 4750 arc_set_need_free(); 4751 } 4752 mutex_exit(&arc_evict_lock); 4753 spl_fstrans_unmark(cookie); 4754 } 4755 4756 static boolean_t 4757 arc_reap_cb_check(void *arg, zthr_t *zthr) 4758 { 4759 (void) arg, (void) zthr; 4760 4761 int64_t free_memory = arc_available_memory(); 4762 static int reap_cb_check_counter = 0; 4763 4764 /* 4765 * If a kmem reap is already active, don't schedule more. We must 4766 * check for this because kmem_cache_reap_soon() won't actually 4767 * block on the cache being reaped (this is to prevent callers from 4768 * becoming implicitly blocked by a system-wide kmem reap -- which, 4769 * on a system with many, many full magazines, can take minutes). 4770 */ 4771 if (!kmem_cache_reap_active() && free_memory < 0) { 4772 4773 arc_no_grow = B_TRUE; 4774 arc_warm = B_TRUE; 4775 /* 4776 * Wait at least zfs_grow_retry (default 5) seconds 4777 * before considering growing. 4778 */ 4779 arc_growtime = gethrtime() + SEC2NSEC(arc_grow_retry); 4780 return (B_TRUE); 4781 } else if (free_memory < arc_c >> arc_no_grow_shift) { 4782 arc_no_grow = B_TRUE; 4783 } else if (gethrtime() >= arc_growtime) { 4784 arc_no_grow = B_FALSE; 4785 } 4786 4787 /* 4788 * Called unconditionally every 60 seconds to reclaim unused 4789 * zstd compression and decompression context. This is done 4790 * here to avoid the need for an independent thread. 4791 */ 4792 if (!((reap_cb_check_counter++) % 60)) 4793 zfs_zstd_cache_reap_now(); 4794 4795 return (B_FALSE); 4796 } 4797 4798 /* 4799 * Keep enough free memory in the system by reaping the ARC's kmem 4800 * caches. To cause more slabs to be reapable, we may reduce the 4801 * target size of the cache (arc_c), causing the arc_evict_cb() 4802 * to free more buffers. 
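 *
 * (Added summary of the hysteresis decided by arc_reap_cb_check()
 * above, assuming no kmem reap is already active:)
 *
 *	free_memory < 0
 *	    -> reap now, set arc_no_grow and arc_warm, and block growth
 *	       until arc_growtime = now + arc_grow_retry seconds
 *	free_memory < (arc_c >> arc_no_grow_shift)
 *	    -> no reap, but keep arc_no_grow set
 *	gethrtime() >= arc_growtime
 *	    -> allow the cache to grow again (arc_no_grow = B_FALSE)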
4803 */ 4804 static void 4805 arc_reap_cb(void *arg, zthr_t *zthr) 4806 { 4807 (void) arg, (void) zthr; 4808 4809 int64_t free_memory; 4810 fstrans_cookie_t cookie = spl_fstrans_mark(); 4811 4812 /* 4813 * Kick off asynchronous kmem_reap()'s of all our caches. 4814 */ 4815 arc_kmem_reap_soon(); 4816 4817 /* 4818 * Wait at least arc_kmem_cache_reap_retry_ms between 4819 * arc_kmem_reap_soon() calls. Without this check it is possible to 4820 * end up in a situation where we spend lots of time reaping 4821 * caches, while we're near arc_c_min. Waiting here also gives the 4822 * subsequent free memory check a chance of finding that the 4823 * asynchronous reap has already freed enough memory, and we don't 4824 * need to call arc_reduce_target_size(). 4825 */ 4826 delay((hz * arc_kmem_cache_reap_retry_ms + 999) / 1000); 4827 4828 /* 4829 * Reduce the target size as needed to maintain the amount of free 4830 * memory in the system at a fraction of the arc_size (1/128th by 4831 * default). If oversubscribed (free_memory < 0) then reduce the 4832 * target arc_size by the deficit amount plus the fractional 4833 * amount. If free memory is positive but less than the fractional 4834 * amount, reduce by what is needed to hit the fractional amount. 4835 */ 4836 free_memory = arc_available_memory(); 4837 4838 int64_t can_free = arc_c - arc_c_min; 4839 if (can_free > 0) { 4840 int64_t to_free = (can_free >> arc_shrink_shift) - free_memory; 4841 if (to_free > 0) 4842 arc_reduce_target_size(to_free); 4843 } 4844 spl_fstrans_unmark(cookie); 4845 } 4846 4847 #ifdef _KERNEL 4848 /* 4849 * Determine the amount of memory eligible for eviction contained in the 4850 * ARC. All clean data reported by the ghost lists can always be safely 4851 * evicted. Due to arc_c_min, the same does not hold for all clean data 4852 * contained by the regular mru and mfu lists. 4853 * 4854 * In the case of the regular mru and mfu lists, we need to report as 4855 * much clean data as possible, such that evicting that same reported 4856 * data will not bring arc_size below arc_c_min. Thus, in certain 4857 * circumstances, the total amount of clean data in the mru and mfu 4858 * lists might not actually be evictable. 4859 * 4860 * The following two distinct cases are accounted for: 4861 * 4862 * 1. The sum of the amount of dirty data contained by both the mru and 4863 * mfu lists, plus the ARC's other accounting (e.g. the anon list), 4864 * is greater than or equal to arc_c_min. 4865 * (i.e. amount of dirty data >= arc_c_min) 4866 * 4867 * This is the easy case; all clean data contained by the mru and mfu 4868 * lists is evictable. Evicting all clean data can only drop arc_size 4869 * to the amount of dirty data, which is greater than arc_c_min. 4870 * 4871 * 2. The sum of the amount of dirty data contained by both the mru and 4872 * mfu lists, plus the ARC's other accounting (e.g. the anon list), 4873 * is less than arc_c_min. 4874 * (i.e. arc_c_min > amount of dirty data) 4875 * 4876 * 2.1. arc_size is greater than or equal arc_c_min. 4877 * (i.e. arc_size >= arc_c_min > amount of dirty data) 4878 * 4879 * In this case, not all clean data from the regular mru and mfu 4880 * lists is actually evictable; we must leave enough clean data 4881 * to keep arc_size above arc_c_min. Thus, the maximum amount of 4882 * evictable data from the two lists combined, is exactly the 4883 * difference between arc_size and arc_c_min. 4884 * 4885 * 2.2. arc_size is less than arc_c_min 4886 * (i.e. 
arc_c_min > arc_size > amount of dirty data) 4887 * 4888 * In this case, none of the data contained in the mru and mfu 4889 * lists is evictable, even if it's clean. Since arc_size is 4890 * already below arc_c_min, evicting any more would only 4891 * increase this negative difference. 4892 */ 4893 4894 #endif /* _KERNEL */ 4895 4896 /* 4897 * Adapt arc info given the number of bytes we are trying to add and 4898 * the state that we are coming from. This function is only called 4899 * when we are adding new content to the cache. 4900 */ 4901 static void 4902 arc_adapt(uint64_t bytes) 4903 { 4904 /* 4905 * Wake reap thread if we do not have any available memory 4906 */ 4907 if (arc_reclaim_needed()) { 4908 zthr_wakeup(arc_reap_zthr); 4909 return; 4910 } 4911 4912 if (arc_no_grow) 4913 return; 4914 4915 if (arc_c >= arc_c_max) 4916 return; 4917 4918 /* 4919 * If we're within (2 * maxblocksize) bytes of the target 4920 * cache size, increment the target cache size 4921 */ 4922 if (aggsum_upper_bound(&arc_sums.arcstat_size) + 4923 2 * SPA_MAXBLOCKSIZE >= arc_c) { 4924 uint64_t dc = MAX(bytes, SPA_OLD_MAXBLOCKSIZE); 4925 if (atomic_add_64_nv(&arc_c, dc) > arc_c_max) 4926 arc_c = arc_c_max; 4927 } 4928 } 4929 4930 /* 4931 * Check if arc_size has grown past our upper threshold, determined by 4932 * zfs_arc_overflow_shift. 4933 */ 4934 static arc_ovf_level_t 4935 arc_is_overflowing(boolean_t use_reserve) 4936 { 4937 /* Always allow at least one block of overflow */ 4938 int64_t overflow = MAX(SPA_MAXBLOCKSIZE, 4939 arc_c >> zfs_arc_overflow_shift); 4940 4941 /* 4942 * We just compare the lower bound here for performance reasons. Our 4943 * primary goals are to make sure that the arc never grows without 4944 * bound, and that it can reach its maximum size. This check 4945 * accomplishes both goals. The maximum amount we could run over by is 4946 * 2 * aggsum_borrow_multiplier * NUM_CPUS * the average size of a block 4947 * in the ARC. In practice, that's in the tens of MB, which is low 4948 * enough to be safe. 4949 */ 4950 int64_t over = aggsum_lower_bound(&arc_sums.arcstat_size) - 4951 arc_c - overflow / 2; 4952 if (!use_reserve) 4953 overflow /= 2; 4954 return (over < 0 ? ARC_OVF_NONE : 4955 over < overflow ? ARC_OVF_SOME : ARC_OVF_SEVERE); 4956 } 4957 4958 static abd_t * 4959 arc_get_data_abd(arc_buf_hdr_t *hdr, uint64_t size, const void *tag, 4960 int alloc_flags) 4961 { 4962 arc_buf_contents_t type = arc_buf_type(hdr); 4963 4964 arc_get_data_impl(hdr, size, tag, alloc_flags); 4965 if (alloc_flags & ARC_HDR_ALLOC_LINEAR) 4966 return (abd_alloc_linear(size, type == ARC_BUFC_METADATA)); 4967 else 4968 return (abd_alloc(size, type == ARC_BUFC_METADATA)); 4969 } 4970 4971 static void * 4972 arc_get_data_buf(arc_buf_hdr_t *hdr, uint64_t size, const void *tag) 4973 { 4974 arc_buf_contents_t type = arc_buf_type(hdr); 4975 4976 arc_get_data_impl(hdr, size, tag, 0); 4977 if (type == ARC_BUFC_METADATA) { 4978 return (zio_buf_alloc(size)); 4979 } else { 4980 ASSERT(type == ARC_BUFC_DATA); 4981 return (zio_data_buf_alloc(size)); 4982 } 4983 } 4984 4985 /* 4986 * Wait for the specified amount of data (in bytes) to be evicted from the 4987 * ARC, and for there to be sufficient free memory in the system. Waiting for 4988 * eviction ensures that the memory used by the ARC decreases. Waiting for 4989 * free memory ensures that the system won't run out of free pages, regardless 4990 * of ARC behavior and settings. See arc_lowmem_init(). 
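 *
 * (Added summary of the thresholds computed by arc_is_overflowing()
 * above, where "size" is the lower bound of arcstat_size and
 * overflow = MAX(SPA_MAXBLOCKSIZE, arc_c >> zfs_arc_overflow_shift):)
 *
 *	over = size - arc_c - overflow / 2;
 *	over < 0                                   -> ARC_OVF_NONE
 *	over < overflow (overflow / 2 if the caller has no reserve)
 *	                                           -> ARC_OVF_SOME
 *	otherwise                                  -> ARC_OVF_SEVERE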
4991 */ 4992 void 4993 arc_wait_for_eviction(uint64_t amount, boolean_t use_reserve) 4994 { 4995 switch (arc_is_overflowing(use_reserve)) { 4996 case ARC_OVF_NONE: 4997 return; 4998 case ARC_OVF_SOME: 4999 /* 5000 * This is a bit racy without taking arc_evict_lock, but the 5001 * worst that can happen is we either call zthr_wakeup() extra 5002 * time due to race with other thread here, or the set flag 5003 * get cleared by arc_evict_cb(), which is unlikely due to 5004 * big hysteresis, but also not important since at this level 5005 * of overflow the eviction is purely advisory. Same time 5006 * taking the global lock here every time without waiting for 5007 * the actual eviction creates a significant lock contention. 5008 */ 5009 if (!arc_evict_needed) { 5010 arc_evict_needed = B_TRUE; 5011 zthr_wakeup(arc_evict_zthr); 5012 } 5013 return; 5014 case ARC_OVF_SEVERE: 5015 default: 5016 { 5017 arc_evict_waiter_t aw; 5018 list_link_init(&aw.aew_node); 5019 cv_init(&aw.aew_cv, NULL, CV_DEFAULT, NULL); 5020 5021 uint64_t last_count = 0; 5022 mutex_enter(&arc_evict_lock); 5023 if (!list_is_empty(&arc_evict_waiters)) { 5024 arc_evict_waiter_t *last = 5025 list_tail(&arc_evict_waiters); 5026 last_count = last->aew_count; 5027 } else if (!arc_evict_needed) { 5028 arc_evict_needed = B_TRUE; 5029 zthr_wakeup(arc_evict_zthr); 5030 } 5031 /* 5032 * Note, the last waiter's count may be less than 5033 * arc_evict_count if we are low on memory in which 5034 * case arc_evict_state_impl() may have deferred 5035 * wakeups (but still incremented arc_evict_count). 5036 */ 5037 aw.aew_count = MAX(last_count, arc_evict_count) + amount; 5038 5039 list_insert_tail(&arc_evict_waiters, &aw); 5040 5041 arc_set_need_free(); 5042 5043 DTRACE_PROBE3(arc__wait__for__eviction, 5044 uint64_t, amount, 5045 uint64_t, arc_evict_count, 5046 uint64_t, aw.aew_count); 5047 5048 /* 5049 * We will be woken up either when arc_evict_count reaches 5050 * aew_count, or when the ARC is no longer overflowing and 5051 * eviction completes. 5052 * In case of "false" wakeup, we will still be on the list. 5053 */ 5054 do { 5055 cv_wait(&aw.aew_cv, &arc_evict_lock); 5056 } while (list_link_active(&aw.aew_node)); 5057 mutex_exit(&arc_evict_lock); 5058 5059 cv_destroy(&aw.aew_cv); 5060 } 5061 } 5062 } 5063 5064 /* 5065 * Allocate a block and return it to the caller. If we are hitting the 5066 * hard limit for the cache size, we must sleep, waiting for the eviction 5067 * thread to catch up. If we're past the target size but below the hard 5068 * limit, we'll only signal the reclaim thread and continue on. 5069 */ 5070 static void 5071 arc_get_data_impl(arc_buf_hdr_t *hdr, uint64_t size, const void *tag, 5072 int alloc_flags) 5073 { 5074 arc_adapt(size); 5075 5076 /* 5077 * If arc_size is currently overflowing, we must be adding data 5078 * faster than we are evicting. To ensure we don't compound the 5079 * problem by adding more data and forcing arc_size to grow even 5080 * further past it's target size, we wait for the eviction thread to 5081 * make some progress. We also wait for there to be sufficient free 5082 * memory in the system, as measured by arc_free_memory(). 5083 * 5084 * Specifically, we wait for zfs_arc_eviction_pct percent of the 5085 * requested size to be evicted. This should be more than 100%, to 5086 * ensure that that progress is also made towards getting arc_size 5087 * under arc_c. See the comment above zfs_arc_eviction_pct. 
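 *
 * Worked example (illustrative): with zfs_arc_eviction_pct = 200 (its
 * usual default), a 128 KiB allocation waits until roughly 256 KiB has
 * been evicted, so eviction outpaces allocation and arc_size trends
 * back toward arc_c instead of merely holding steady.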
5088 */ 5089 arc_wait_for_eviction(size * zfs_arc_eviction_pct / 100, 5090 alloc_flags & ARC_HDR_USE_RESERVE); 5091 5092 arc_buf_contents_t type = arc_buf_type(hdr); 5093 if (type == ARC_BUFC_METADATA) { 5094 arc_space_consume(size, ARC_SPACE_META); 5095 } else { 5096 arc_space_consume(size, ARC_SPACE_DATA); 5097 } 5098 5099 /* 5100 * Update the state size. Note that ghost states have a 5101 * "ghost size" and so don't need to be updated. 5102 */ 5103 arc_state_t *state = hdr->b_l1hdr.b_state; 5104 if (!GHOST_STATE(state)) { 5105 5106 (void) zfs_refcount_add_many(&state->arcs_size[type], size, 5107 tag); 5108 5109 /* 5110 * If this is reached via arc_read, the link is 5111 * protected by the hash lock. If reached via 5112 * arc_buf_alloc, the header should not be accessed by 5113 * any other thread. And, if reached via arc_read_done, 5114 * the hash lock will protect it if it's found in the 5115 * hash table; otherwise no other thread should be 5116 * trying to [add|remove]_reference it. 5117 */ 5118 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 5119 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 5120 (void) zfs_refcount_add_many(&state->arcs_esize[type], 5121 size, tag); 5122 } 5123 } 5124 } 5125 5126 static void 5127 arc_free_data_abd(arc_buf_hdr_t *hdr, abd_t *abd, uint64_t size, 5128 const void *tag) 5129 { 5130 arc_free_data_impl(hdr, size, tag); 5131 abd_free(abd); 5132 } 5133 5134 static void 5135 arc_free_data_buf(arc_buf_hdr_t *hdr, void *buf, uint64_t size, const void *tag) 5136 { 5137 arc_buf_contents_t type = arc_buf_type(hdr); 5138 5139 arc_free_data_impl(hdr, size, tag); 5140 if (type == ARC_BUFC_METADATA) { 5141 zio_buf_free(buf, size); 5142 } else { 5143 ASSERT(type == ARC_BUFC_DATA); 5144 zio_data_buf_free(buf, size); 5145 } 5146 } 5147 5148 /* 5149 * Free the arc data buffer. 5150 */ 5151 static void 5152 arc_free_data_impl(arc_buf_hdr_t *hdr, uint64_t size, const void *tag) 5153 { 5154 arc_state_t *state = hdr->b_l1hdr.b_state; 5155 arc_buf_contents_t type = arc_buf_type(hdr); 5156 5157 /* protected by hash lock, if in the hash table */ 5158 if (multilist_link_active(&hdr->b_l1hdr.b_arc_node)) { 5159 ASSERT(zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)); 5160 ASSERT(state != arc_anon && state != arc_l2c_only); 5161 5162 (void) zfs_refcount_remove_many(&state->arcs_esize[type], 5163 size, tag); 5164 } 5165 (void) zfs_refcount_remove_many(&state->arcs_size[type], size, tag); 5166 5167 VERIFY3U(hdr->b_type, ==, type); 5168 if (type == ARC_BUFC_METADATA) { 5169 arc_space_return(size, ARC_SPACE_META); 5170 } else { 5171 ASSERT(type == ARC_BUFC_DATA); 5172 arc_space_return(size, ARC_SPACE_DATA); 5173 } 5174 } 5175 5176 /* 5177 * This routine is called whenever a buffer is accessed. 5178 */ 5179 static void 5180 arc_access(arc_buf_hdr_t *hdr, arc_flags_t arc_flags, boolean_t hit) 5181 { 5182 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 5183 ASSERT(HDR_HAS_L1HDR(hdr)); 5184 5185 /* 5186 * Update buffer prefetch status. 
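 * In short: when a prefetched buffer is hit by a demand read (or a
 * demand buffer is re-read as a prefetch) the prefetch flags are
 * flipped, with the L2ARC per-state counters adjusted around the
 * change; a demand hit on prefetched data is also what credits the
 * prefetch in the demand_hit/demand_iohit kstats.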
5187 */ 5188 boolean_t was_prefetch = HDR_PREFETCH(hdr); 5189 boolean_t now_prefetch = arc_flags & ARC_FLAG_PREFETCH; 5190 if (was_prefetch != now_prefetch) { 5191 if (was_prefetch) { 5192 ARCSTAT_CONDSTAT(hit, demand_hit, demand_iohit, 5193 HDR_PRESCIENT_PREFETCH(hdr), prescient, predictive, 5194 prefetch); 5195 } 5196 if (HDR_HAS_L2HDR(hdr)) 5197 l2arc_hdr_arcstats_decrement_state(hdr); 5198 if (was_prefetch) { 5199 arc_hdr_clear_flags(hdr, 5200 ARC_FLAG_PREFETCH | ARC_FLAG_PRESCIENT_PREFETCH); 5201 } else { 5202 arc_hdr_set_flags(hdr, ARC_FLAG_PREFETCH); 5203 } 5204 if (HDR_HAS_L2HDR(hdr)) 5205 l2arc_hdr_arcstats_increment_state(hdr); 5206 } 5207 if (now_prefetch) { 5208 if (arc_flags & ARC_FLAG_PRESCIENT_PREFETCH) { 5209 arc_hdr_set_flags(hdr, ARC_FLAG_PRESCIENT_PREFETCH); 5210 ARCSTAT_BUMP(arcstat_prescient_prefetch); 5211 } else { 5212 ARCSTAT_BUMP(arcstat_predictive_prefetch); 5213 } 5214 } 5215 if (arc_flags & ARC_FLAG_L2CACHE) 5216 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 5217 5218 clock_t now = ddi_get_lbolt(); 5219 if (hdr->b_l1hdr.b_state == arc_anon) { 5220 arc_state_t *new_state; 5221 /* 5222 * This buffer is not in the cache, and does not appear in 5223 * our "ghost" lists. Add it to the MRU or uncached state. 5224 */ 5225 ASSERT0(hdr->b_l1hdr.b_arc_access); 5226 hdr->b_l1hdr.b_arc_access = now; 5227 if (HDR_UNCACHED(hdr)) { 5228 new_state = arc_uncached; 5229 DTRACE_PROBE1(new_state__uncached, arc_buf_hdr_t *, 5230 hdr); 5231 } else { 5232 new_state = arc_mru; 5233 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5234 } 5235 arc_change_state(new_state, hdr); 5236 } else if (hdr->b_l1hdr.b_state == arc_mru) { 5237 /* 5238 * This buffer has been accessed once recently and either 5239 * its read is still in progress or it is in the cache. 5240 */ 5241 if (HDR_IO_IN_PROGRESS(hdr)) { 5242 hdr->b_l1hdr.b_arc_access = now; 5243 return; 5244 } 5245 hdr->b_l1hdr.b_mru_hits++; 5246 ARCSTAT_BUMP(arcstat_mru_hits); 5247 5248 /* 5249 * If the previous access was a prefetch, then it already 5250 * handled possible promotion, so nothing more to do for now. 5251 */ 5252 if (was_prefetch) { 5253 hdr->b_l1hdr.b_arc_access = now; 5254 return; 5255 } 5256 5257 /* 5258 * If more than ARC_MINTIME have passed from the previous 5259 * hit, promote the buffer to the MFU state. 5260 */ 5261 if (ddi_time_after(now, hdr->b_l1hdr.b_arc_access + 5262 ARC_MINTIME)) { 5263 hdr->b_l1hdr.b_arc_access = now; 5264 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5265 arc_change_state(arc_mfu, hdr); 5266 } 5267 } else if (hdr->b_l1hdr.b_state == arc_mru_ghost) { 5268 arc_state_t *new_state; 5269 /* 5270 * This buffer has been accessed once recently, but was 5271 * evicted from the cache. Would we have bigger MRU, it 5272 * would be an MRU hit, so handle it the same way, except 5273 * we don't need to check the previous access time. 5274 */ 5275 hdr->b_l1hdr.b_mru_ghost_hits++; 5276 ARCSTAT_BUMP(arcstat_mru_ghost_hits); 5277 hdr->b_l1hdr.b_arc_access = now; 5278 wmsum_add(&arc_mru_ghost->arcs_hits[arc_buf_type(hdr)], 5279 arc_hdr_size(hdr)); 5280 if (was_prefetch) { 5281 new_state = arc_mru; 5282 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5283 } else { 5284 new_state = arc_mfu; 5285 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5286 } 5287 arc_change_state(new_state, hdr); 5288 } else if (hdr->b_l1hdr.b_state == arc_mfu) { 5289 /* 5290 * This buffer has been accessed more than once and either 5291 * still in the cache or being restored from one of ghosts. 
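 * (If the read is still in flight the access was typically already
 * counted as an iohit by the caller, so only hits on completed
 * buffers bump the MFU hit counters below.)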
5292 */ 5293 if (!HDR_IO_IN_PROGRESS(hdr)) { 5294 hdr->b_l1hdr.b_mfu_hits++; 5295 ARCSTAT_BUMP(arcstat_mfu_hits); 5296 } 5297 hdr->b_l1hdr.b_arc_access = now; 5298 } else if (hdr->b_l1hdr.b_state == arc_mfu_ghost) { 5299 /* 5300 * This buffer has been accessed more than once recently, but 5301 * has been evicted from the cache. Would we have bigger MFU 5302 * it would stay in cache, so move it back to MFU state. 5303 */ 5304 hdr->b_l1hdr.b_mfu_ghost_hits++; 5305 ARCSTAT_BUMP(arcstat_mfu_ghost_hits); 5306 hdr->b_l1hdr.b_arc_access = now; 5307 wmsum_add(&arc_mfu_ghost->arcs_hits[arc_buf_type(hdr)], 5308 arc_hdr_size(hdr)); 5309 DTRACE_PROBE1(new_state__mfu, arc_buf_hdr_t *, hdr); 5310 arc_change_state(arc_mfu, hdr); 5311 } else if (hdr->b_l1hdr.b_state == arc_uncached) { 5312 /* 5313 * This buffer is uncacheable, but we got a hit. Probably 5314 * a demand read after prefetch. Nothing more to do here. 5315 */ 5316 if (!HDR_IO_IN_PROGRESS(hdr)) 5317 ARCSTAT_BUMP(arcstat_uncached_hits); 5318 hdr->b_l1hdr.b_arc_access = now; 5319 } else if (hdr->b_l1hdr.b_state == arc_l2c_only) { 5320 /* 5321 * This buffer is on the 2nd Level ARC and was not accessed 5322 * for a long time, so treat it as new and put into MRU. 5323 */ 5324 hdr->b_l1hdr.b_arc_access = now; 5325 DTRACE_PROBE1(new_state__mru, arc_buf_hdr_t *, hdr); 5326 arc_change_state(arc_mru, hdr); 5327 } else { 5328 cmn_err(CE_PANIC, "invalid arc state 0x%p", 5329 hdr->b_l1hdr.b_state); 5330 } 5331 } 5332 5333 /* 5334 * This routine is called by dbuf_hold() to update the arc_access() state 5335 * which otherwise would be skipped for entries in the dbuf cache. 5336 */ 5337 void 5338 arc_buf_access(arc_buf_t *buf) 5339 { 5340 arc_buf_hdr_t *hdr = buf->b_hdr; 5341 5342 /* 5343 * Avoid taking the hash_lock when possible as an optimization. 5344 * The header must be checked again under the hash_lock in order 5345 * to handle the case where it is concurrently being released. 
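 * This is the usual double-checked pattern: the unlocked test below
 * only filters out buffers that clearly need no state update, and the
 * same test is repeated under the hash_lock before anything is
 * modified.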
5346 */ 5347 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) 5348 return; 5349 5350 kmutex_t *hash_lock = HDR_LOCK(hdr); 5351 mutex_enter(hash_lock); 5352 5353 if (hdr->b_l1hdr.b_state == arc_anon || HDR_EMPTY(hdr)) { 5354 mutex_exit(hash_lock); 5355 ARCSTAT_BUMP(arcstat_access_skip); 5356 return; 5357 } 5358 5359 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5360 hdr->b_l1hdr.b_state == arc_mfu || 5361 hdr->b_l1hdr.b_state == arc_uncached); 5362 5363 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5364 arc_access(hdr, 0, B_TRUE); 5365 mutex_exit(hash_lock); 5366 5367 ARCSTAT_BUMP(arcstat_hits); 5368 ARCSTAT_CONDSTAT(B_TRUE /* demand */, demand, prefetch, 5369 !HDR_ISTYPE_METADATA(hdr), data, metadata, hits); 5370 } 5371 5372 /* a generic arc_read_done_func_t which you can use */ 5373 void 5374 arc_bcopy_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5375 arc_buf_t *buf, void *arg) 5376 { 5377 (void) zio, (void) zb, (void) bp; 5378 5379 if (buf == NULL) 5380 return; 5381 5382 memcpy(arg, buf->b_data, arc_buf_size(buf)); 5383 arc_buf_destroy(buf, arg); 5384 } 5385 5386 /* a generic arc_read_done_func_t */ 5387 void 5388 arc_getbuf_func(zio_t *zio, const zbookmark_phys_t *zb, const blkptr_t *bp, 5389 arc_buf_t *buf, void *arg) 5390 { 5391 (void) zb, (void) bp; 5392 arc_buf_t **bufp = arg; 5393 5394 if (buf == NULL) { 5395 ASSERT(zio == NULL || zio->io_error != 0); 5396 *bufp = NULL; 5397 } else { 5398 ASSERT(zio == NULL || zio->io_error == 0); 5399 *bufp = buf; 5400 ASSERT(buf->b_data != NULL); 5401 } 5402 } 5403 5404 static void 5405 arc_hdr_verify(arc_buf_hdr_t *hdr, blkptr_t *bp) 5406 { 5407 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 5408 ASSERT3U(HDR_GET_PSIZE(hdr), ==, 0); 5409 ASSERT3U(arc_hdr_get_compress(hdr), ==, ZIO_COMPRESS_OFF); 5410 } else { 5411 if (HDR_COMPRESSION_ENABLED(hdr)) { 5412 ASSERT3U(arc_hdr_get_compress(hdr), ==, 5413 BP_GET_COMPRESS(bp)); 5414 } 5415 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 5416 ASSERT3U(HDR_GET_PSIZE(hdr), ==, BP_GET_PSIZE(bp)); 5417 ASSERT3U(!!HDR_PROTECTED(hdr), ==, BP_IS_PROTECTED(bp)); 5418 } 5419 } 5420 5421 static void 5422 arc_read_done(zio_t *zio) 5423 { 5424 blkptr_t *bp = zio->io_bp; 5425 arc_buf_hdr_t *hdr = zio->io_private; 5426 kmutex_t *hash_lock = NULL; 5427 arc_callback_t *callback_list; 5428 arc_callback_t *acb; 5429 5430 /* 5431 * The hdr was inserted into hash-table and removed from lists 5432 * prior to starting I/O. We should find this header, since 5433 * it's in the hash table, and it should be legit since it's 5434 * not possible to evict it during the I/O. The only possible 5435 * reason for it not to be found is if we were freed during the 5436 * read. 
5437 */ 5438 if (HDR_IN_HASH_TABLE(hdr)) { 5439 arc_buf_hdr_t *found; 5440 5441 ASSERT3U(hdr->b_birth, ==, BP_PHYSICAL_BIRTH(zio->io_bp)); 5442 ASSERT3U(hdr->b_dva.dva_word[0], ==, 5443 BP_IDENTITY(zio->io_bp)->dva_word[0]); 5444 ASSERT3U(hdr->b_dva.dva_word[1], ==, 5445 BP_IDENTITY(zio->io_bp)->dva_word[1]); 5446 5447 found = buf_hash_find(hdr->b_spa, zio->io_bp, &hash_lock); 5448 5449 ASSERT((found == hdr && 5450 DVA_EQUAL(&hdr->b_dva, BP_IDENTITY(zio->io_bp))) || 5451 (found == hdr && HDR_L2_READING(hdr))); 5452 ASSERT3P(hash_lock, !=, NULL); 5453 } 5454 5455 if (BP_IS_PROTECTED(bp)) { 5456 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 5457 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 5458 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 5459 hdr->b_crypt_hdr.b_iv); 5460 5461 if (zio->io_error == 0) { 5462 if (BP_GET_TYPE(bp) == DMU_OT_INTENT_LOG) { 5463 void *tmpbuf; 5464 5465 tmpbuf = abd_borrow_buf_copy(zio->io_abd, 5466 sizeof (zil_chain_t)); 5467 zio_crypt_decode_mac_zil(tmpbuf, 5468 hdr->b_crypt_hdr.b_mac); 5469 abd_return_buf(zio->io_abd, tmpbuf, 5470 sizeof (zil_chain_t)); 5471 } else { 5472 zio_crypt_decode_mac_bp(bp, 5473 hdr->b_crypt_hdr.b_mac); 5474 } 5475 } 5476 } 5477 5478 if (zio->io_error == 0) { 5479 /* byteswap if necessary */ 5480 if (BP_SHOULD_BYTESWAP(zio->io_bp)) { 5481 if (BP_GET_LEVEL(zio->io_bp) > 0) { 5482 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 5483 } else { 5484 hdr->b_l1hdr.b_byteswap = 5485 DMU_OT_BYTESWAP(BP_GET_TYPE(zio->io_bp)); 5486 } 5487 } else { 5488 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 5489 } 5490 if (!HDR_L2_READING(hdr)) { 5491 hdr->b_complevel = zio->io_prop.zp_complevel; 5492 } 5493 } 5494 5495 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_EVICTED); 5496 if (l2arc_noprefetch && HDR_PREFETCH(hdr)) 5497 arc_hdr_clear_flags(hdr, ARC_FLAG_L2CACHE); 5498 5499 callback_list = hdr->b_l1hdr.b_acb; 5500 ASSERT3P(callback_list, !=, NULL); 5501 hdr->b_l1hdr.b_acb = NULL; 5502 5503 /* 5504 * If a read request has a callback (i.e. acb_done is not NULL), then we 5505 * make a buf containing the data according to the parameters which were 5506 * passed in. The implementation of arc_buf_alloc_impl() ensures that we 5507 * aren't needlessly decompressing the data multiple times. 5508 */ 5509 int callback_cnt = 0; 5510 for (acb = callback_list; acb != NULL; acb = acb->acb_next) { 5511 5512 /* We need the last one to call below in original order. */ 5513 callback_list = acb; 5514 5515 if (!acb->acb_done || acb->acb_nobuf) 5516 continue; 5517 5518 callback_cnt++; 5519 5520 if (zio->io_error != 0) 5521 continue; 5522 5523 int error = arc_buf_alloc_impl(hdr, zio->io_spa, 5524 &acb->acb_zb, acb->acb_private, acb->acb_encrypted, 5525 acb->acb_compressed, acb->acb_noauth, B_TRUE, 5526 &acb->acb_buf); 5527 5528 /* 5529 * Assert non-speculative zios didn't fail because an 5530 * encryption key wasn't loaded 5531 */ 5532 ASSERT((zio->io_flags & ZIO_FLAG_SPECULATIVE) || 5533 error != EACCES); 5534 5535 /* 5536 * If we failed to decrypt, report an error now (as the zio 5537 * layer would have done if it had done the transforms). 5538 */ 5539 if (error == ECKSUM) { 5540 ASSERT(BP_IS_PROTECTED(bp)); 5541 error = SET_ERROR(EIO); 5542 if ((zio->io_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5543 spa_log_error(zio->io_spa, &acb->acb_zb, 5544 &zio->io_bp->blk_birth); 5545 (void) zfs_ereport_post( 5546 FM_EREPORT_ZFS_AUTHENTICATION, 5547 zio->io_spa, NULL, &acb->acb_zb, zio, 0); 5548 } 5549 } 5550 5551 if (error != 0) { 5552 /* 5553 * Decompression or decryption failed. 
Set 5554 * io_error so that when we call acb_done 5555 * (below), we will indicate that the read 5556 * failed. Note that in the unusual case 5557 * where one callback is compressed and another 5558 * uncompressed, we will mark all of them 5559 * as failed, even though the uncompressed 5560 * one can't actually fail. In this case, 5561 * the hdr will not be anonymous, because 5562 * if there are multiple callbacks, it's 5563 * because multiple threads found the same 5564 * arc buf in the hash table. 5565 */ 5566 zio->io_error = error; 5567 } 5568 } 5569 5570 /* 5571 * If there are multiple callbacks, we must have the hash lock, 5572 * because the only way for multiple threads to find this hdr is 5573 * in the hash table. This ensures that if there are multiple 5574 * callbacks, the hdr is not anonymous. If it were anonymous, 5575 * we couldn't use arc_buf_destroy() in the error case below. 5576 */ 5577 ASSERT(callback_cnt < 2 || hash_lock != NULL); 5578 5579 if (zio->io_error == 0) { 5580 arc_hdr_verify(hdr, zio->io_bp); 5581 } else { 5582 arc_hdr_set_flags(hdr, ARC_FLAG_IO_ERROR); 5583 if (hdr->b_l1hdr.b_state != arc_anon) 5584 arc_change_state(arc_anon, hdr); 5585 if (HDR_IN_HASH_TABLE(hdr)) 5586 buf_hash_remove(hdr); 5587 } 5588 5589 /* 5590 * Broadcast before we drop the hash_lock to avoid the possibility 5591 * that the hdr (and hence the cv) might be freed before we get to 5592 * the cv_broadcast(). 5593 */ 5594 cv_broadcast(&hdr->b_l1hdr.b_cv); 5595 5596 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5597 (void) remove_reference(hdr, hdr); 5598 5599 if (hash_lock != NULL) 5600 mutex_exit(hash_lock); 5601 5602 /* execute each callback and free its structure */ 5603 while ((acb = callback_list) != NULL) { 5604 if (acb->acb_done != NULL) { 5605 if (zio->io_error != 0 && acb->acb_buf != NULL) { 5606 /* 5607 * If arc_buf_alloc_impl() fails during 5608 * decompression, the buf will still be 5609 * allocated, and needs to be freed here. 5610 */ 5611 arc_buf_destroy(acb->acb_buf, 5612 acb->acb_private); 5613 acb->acb_buf = NULL; 5614 } 5615 acb->acb_done(zio, &zio->io_bookmark, zio->io_bp, 5616 acb->acb_buf, acb->acb_private); 5617 } 5618 5619 if (acb->acb_zio_dummy != NULL) { 5620 acb->acb_zio_dummy->io_error = zio->io_error; 5621 zio_nowait(acb->acb_zio_dummy); 5622 } 5623 5624 callback_list = acb->acb_prev; 5625 if (acb->acb_wait) { 5626 mutex_enter(&acb->acb_wait_lock); 5627 acb->acb_wait_error = zio->io_error; 5628 acb->acb_wait = B_FALSE; 5629 cv_signal(&acb->acb_wait_cv); 5630 mutex_exit(&acb->acb_wait_lock); 5631 /* acb will be freed by the waiting thread. */ 5632 } else { 5633 kmem_free(acb, sizeof (arc_callback_t)); 5634 } 5635 } 5636 } 5637 5638 /* 5639 * "Read" the block at the specified DVA (in bp) via the 5640 * cache. If the block is found in the cache, invoke the provided 5641 * callback immediately and return. Note that the `zio' parameter 5642 * in the callback will be NULL in this case, since no IO was 5643 * required. If the block is not in the cache pass the read request 5644 * on to the spa with a substitute callback function, so that the 5645 * requested block will be added to the cache. 5646 * 5647 * If a read request arrives for a block that has a read in-progress, 5648 * either wait for the in-progress read to complete (and return the 5649 * results); or, if this is a read with a "done" func, add a record 5650 * to the read to invoke the "done" func when the read completes, 5651 * and return; or just return. 
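 *
 * Illustrative synchronous use (a sketch; the buffer pointer and
 * bookmark are whatever the caller has at hand):
 *
 *	arc_flags_t aflags = ARC_FLAG_WAIT;
 *	arc_buf_t *abuf = NULL;
 *	int err = arc_read(NULL, spa, bp, arc_getbuf_func, &abuf,
 *	    ZIO_PRIORITY_SYNC_READ, ZIO_FLAG_CANFAIL, &aflags, zb);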
5652 * 5653 * arc_read_done() will invoke all the requested "done" functions 5654 * for readers of this block. 5655 */ 5656 int 5657 arc_read(zio_t *pio, spa_t *spa, const blkptr_t *bp, 5658 arc_read_done_func_t *done, void *private, zio_priority_t priority, 5659 int zio_flags, arc_flags_t *arc_flags, const zbookmark_phys_t *zb) 5660 { 5661 arc_buf_hdr_t *hdr = NULL; 5662 kmutex_t *hash_lock = NULL; 5663 zio_t *rzio; 5664 uint64_t guid = spa_load_guid(spa); 5665 boolean_t compressed_read = (zio_flags & ZIO_FLAG_RAW_COMPRESS) != 0; 5666 boolean_t encrypted_read = BP_IS_ENCRYPTED(bp) && 5667 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5668 boolean_t noauth_read = BP_IS_AUTHENTICATED(bp) && 5669 (zio_flags & ZIO_FLAG_RAW_ENCRYPT) != 0; 5670 boolean_t embedded_bp = !!BP_IS_EMBEDDED(bp); 5671 boolean_t no_buf = *arc_flags & ARC_FLAG_NO_BUF; 5672 arc_buf_t *buf = NULL; 5673 int rc = 0; 5674 5675 ASSERT(!embedded_bp || 5676 BPE_GET_ETYPE(bp) == BP_EMBEDDED_TYPE_DATA); 5677 ASSERT(!BP_IS_HOLE(bp)); 5678 ASSERT(!BP_IS_REDACTED(bp)); 5679 5680 /* 5681 * Normally SPL_FSTRANS will already be set since kernel threads which 5682 * expect to call the DMU interfaces will set it when created. System 5683 * calls are similarly handled by setting/cleaning the bit in the 5684 * registered callback (module/os/.../zfs/zpl_*). 5685 * 5686 * External consumers such as Lustre which call the exported DMU 5687 * interfaces may not have set SPL_FSTRANS. To avoid a deadlock 5688 * on the hash_lock always set and clear the bit. 5689 */ 5690 fstrans_cookie_t cookie = spl_fstrans_mark(); 5691 top: 5692 /* 5693 * Verify the block pointer contents are reasonable. This should 5694 * always be the case since the blkptr is protected by a checksum. 5695 * However, if there is damage it's desirable to detect this early 5696 * and treat it as a checksum error. This allows an alternate blkptr 5697 * to be tried when one is available (e.g. ditto blocks). 5698 */ 5699 if (!zfs_blkptr_verify(spa, bp, (zio_flags & ZIO_FLAG_CONFIG_WRITER) ? 5700 BLK_CONFIG_HELD : BLK_CONFIG_NEEDED, BLK_VERIFY_LOG)) { 5701 rc = SET_ERROR(ECKSUM); 5702 goto done; 5703 } 5704 5705 if (!embedded_bp) { 5706 /* 5707 * Embedded BP's have no DVA and require no I/O to "read". 5708 * Create an anonymous arc buf to back it. 5709 */ 5710 hdr = buf_hash_find(guid, bp, &hash_lock); 5711 } 5712 5713 /* 5714 * Determine if we have an L1 cache hit or a cache miss. For simplicity 5715 * we maintain encrypted data separately from compressed / uncompressed 5716 * data. If the user is requesting raw encrypted data and we don't have 5717 * that in the header we will read from disk to guarantee that we can 5718 * get it even if the encryption keys aren't loaded. 5719 */ 5720 if (hdr != NULL && HDR_HAS_L1HDR(hdr) && (HDR_HAS_RABD(hdr) || 5721 (hdr->b_l1hdr.b_pabd != NULL && !encrypted_read))) { 5722 boolean_t is_data = !HDR_ISTYPE_METADATA(hdr); 5723 5724 if (HDR_IO_IN_PROGRESS(hdr)) { 5725 if (*arc_flags & ARC_FLAG_CACHED_ONLY) { 5726 mutex_exit(hash_lock); 5727 ARCSTAT_BUMP(arcstat_cached_only_in_progress); 5728 rc = SET_ERROR(ENOENT); 5729 goto done; 5730 } 5731 5732 zio_t *head_zio = hdr->b_l1hdr.b_acb->acb_zio_head; 5733 ASSERT3P(head_zio, !=, NULL); 5734 if ((hdr->b_flags & ARC_FLAG_PRIO_ASYNC_READ) && 5735 priority == ZIO_PRIORITY_SYNC_READ) { 5736 /* 5737 * This is a sync read that needs to wait for 5738 * an in-flight async read. Request that the 5739 * zio have its priority upgraded. 
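 * A common case (illustrative): a predictive prefetch issued this
 * block at ZIO_PRIORITY_ASYNC_READ and a demand reader now needs the
 * same block; without the upgrade the demand read would typically sit
 * behind other low-priority I/O for the remainder of the prefetch.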
5740 */ 5741 zio_change_priority(head_zio, priority); 5742 DTRACE_PROBE1(arc__async__upgrade__sync, 5743 arc_buf_hdr_t *, hdr); 5744 ARCSTAT_BUMP(arcstat_async_upgrade_sync); 5745 } 5746 5747 DTRACE_PROBE1(arc__iohit, arc_buf_hdr_t *, hdr); 5748 arc_access(hdr, *arc_flags, B_FALSE); 5749 5750 /* 5751 * If there are multiple threads reading the same block 5752 * and that block is not yet in the ARC, then only one 5753 * thread will do the physical I/O and all other 5754 * threads will wait until that I/O completes. 5755 * Synchronous reads use the acb_wait_cv whereas nowait 5756 * reads register a callback. Both are signalled/called 5757 * in arc_read_done. 5758 * 5759 * Errors of the physical I/O may need to be propagated. 5760 * Synchronous read errors are returned here from 5761 * arc_read_done via acb_wait_error. Nowait reads 5762 * attach the acb_zio_dummy zio to pio and 5763 * arc_read_done propagates the physical I/O's io_error 5764 * to acb_zio_dummy, and thereby to pio. 5765 */ 5766 arc_callback_t *acb = NULL; 5767 if (done || pio || *arc_flags & ARC_FLAG_WAIT) { 5768 acb = kmem_zalloc(sizeof (arc_callback_t), 5769 KM_SLEEP); 5770 acb->acb_done = done; 5771 acb->acb_private = private; 5772 acb->acb_compressed = compressed_read; 5773 acb->acb_encrypted = encrypted_read; 5774 acb->acb_noauth = noauth_read; 5775 acb->acb_nobuf = no_buf; 5776 if (*arc_flags & ARC_FLAG_WAIT) { 5777 acb->acb_wait = B_TRUE; 5778 mutex_init(&acb->acb_wait_lock, NULL, 5779 MUTEX_DEFAULT, NULL); 5780 cv_init(&acb->acb_wait_cv, NULL, 5781 CV_DEFAULT, NULL); 5782 } 5783 acb->acb_zb = *zb; 5784 if (pio != NULL) { 5785 acb->acb_zio_dummy = zio_null(pio, 5786 spa, NULL, NULL, NULL, zio_flags); 5787 } 5788 acb->acb_zio_head = head_zio; 5789 acb->acb_next = hdr->b_l1hdr.b_acb; 5790 if (hdr->b_l1hdr.b_acb) 5791 hdr->b_l1hdr.b_acb->acb_prev = acb; 5792 hdr->b_l1hdr.b_acb = acb; 5793 } 5794 mutex_exit(hash_lock); 5795 5796 ARCSTAT_BUMP(arcstat_iohits); 5797 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 5798 demand, prefetch, is_data, data, metadata, iohits); 5799 5800 if (*arc_flags & ARC_FLAG_WAIT) { 5801 mutex_enter(&acb->acb_wait_lock); 5802 while (acb->acb_wait) { 5803 cv_wait(&acb->acb_wait_cv, 5804 &acb->acb_wait_lock); 5805 } 5806 rc = acb->acb_wait_error; 5807 mutex_exit(&acb->acb_wait_lock); 5808 mutex_destroy(&acb->acb_wait_lock); 5809 cv_destroy(&acb->acb_wait_cv); 5810 kmem_free(acb, sizeof (arc_callback_t)); 5811 } 5812 goto out; 5813 } 5814 5815 ASSERT(hdr->b_l1hdr.b_state == arc_mru || 5816 hdr->b_l1hdr.b_state == arc_mfu || 5817 hdr->b_l1hdr.b_state == arc_uncached); 5818 5819 DTRACE_PROBE1(arc__hit, arc_buf_hdr_t *, hdr); 5820 arc_access(hdr, *arc_flags, B_TRUE); 5821 5822 if (done && !no_buf) { 5823 ASSERT(!embedded_bp || !BP_IS_HOLE(bp)); 5824 5825 /* Get a buf with the desired data in it. */ 5826 rc = arc_buf_alloc_impl(hdr, spa, zb, private, 5827 encrypted_read, compressed_read, noauth_read, 5828 B_TRUE, &buf); 5829 if (rc == ECKSUM) { 5830 /* 5831 * Convert authentication and decryption errors 5832 * to EIO (and generate an ereport if needed) 5833 * before leaving the ARC. 
5834 */ 5835 rc = SET_ERROR(EIO); 5836 if ((zio_flags & ZIO_FLAG_SPECULATIVE) == 0) { 5837 spa_log_error(spa, zb, &hdr->b_birth); 5838 (void) zfs_ereport_post( 5839 FM_EREPORT_ZFS_AUTHENTICATION, 5840 spa, NULL, zb, NULL, 0); 5841 } 5842 } 5843 if (rc != 0) { 5844 arc_buf_destroy_impl(buf); 5845 buf = NULL; 5846 (void) remove_reference(hdr, private); 5847 } 5848 5849 /* assert any errors weren't due to unloaded keys */ 5850 ASSERT((zio_flags & ZIO_FLAG_SPECULATIVE) || 5851 rc != EACCES); 5852 } 5853 mutex_exit(hash_lock); 5854 ARCSTAT_BUMP(arcstat_hits); 5855 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 5856 demand, prefetch, is_data, data, metadata, hits); 5857 *arc_flags |= ARC_FLAG_CACHED; 5858 goto done; 5859 } else { 5860 uint64_t lsize = BP_GET_LSIZE(bp); 5861 uint64_t psize = BP_GET_PSIZE(bp); 5862 arc_callback_t *acb; 5863 vdev_t *vd = NULL; 5864 uint64_t addr = 0; 5865 boolean_t devw = B_FALSE; 5866 uint64_t size; 5867 abd_t *hdr_abd; 5868 int alloc_flags = encrypted_read ? ARC_HDR_ALLOC_RDATA : 0; 5869 arc_buf_contents_t type = BP_GET_BUFC_TYPE(bp); 5870 5871 if (*arc_flags & ARC_FLAG_CACHED_ONLY) { 5872 if (hash_lock != NULL) 5873 mutex_exit(hash_lock); 5874 rc = SET_ERROR(ENOENT); 5875 goto done; 5876 } 5877 5878 if (hdr == NULL) { 5879 /* 5880 * This block is not in the cache or it has 5881 * embedded data. 5882 */ 5883 arc_buf_hdr_t *exists = NULL; 5884 hdr = arc_hdr_alloc(spa_load_guid(spa), psize, lsize, 5885 BP_IS_PROTECTED(bp), BP_GET_COMPRESS(bp), 0, type); 5886 5887 if (!embedded_bp) { 5888 hdr->b_dva = *BP_IDENTITY(bp); 5889 hdr->b_birth = BP_PHYSICAL_BIRTH(bp); 5890 exists = buf_hash_insert(hdr, &hash_lock); 5891 } 5892 if (exists != NULL) { 5893 /* somebody beat us to the hash insert */ 5894 mutex_exit(hash_lock); 5895 buf_discard_identity(hdr); 5896 arc_hdr_destroy(hdr); 5897 goto top; /* restart the IO request */ 5898 } 5899 } else { 5900 /* 5901 * This block is in the ghost cache or encrypted data 5902 * was requested and we didn't have it. If it was 5903 * L2-only (and thus didn't have an L1 hdr), 5904 * we realloc the header to add an L1 hdr. 5905 */ 5906 if (!HDR_HAS_L1HDR(hdr)) { 5907 hdr = arc_hdr_realloc(hdr, hdr_l2only_cache, 5908 hdr_full_cache); 5909 } 5910 5911 if (GHOST_STATE(hdr->b_l1hdr.b_state)) { 5912 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 5913 ASSERT(!HDR_HAS_RABD(hdr)); 5914 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 5915 ASSERT0(zfs_refcount_count( 5916 &hdr->b_l1hdr.b_refcnt)); 5917 ASSERT3P(hdr->b_l1hdr.b_buf, ==, NULL); 5918 #ifdef ZFS_DEBUG 5919 ASSERT3P(hdr->b_l1hdr.b_freeze_cksum, ==, NULL); 5920 #endif 5921 } else if (HDR_IO_IN_PROGRESS(hdr)) { 5922 /* 5923 * If this header already had an IO in progress 5924 * and we are performing another IO to fetch 5925 * encrypted data we must wait until the first 5926 * IO completes so as not to confuse 5927 * arc_read_done(). This should be very rare 5928 * and so the performance impact shouldn't 5929 * matter. 5930 */ 5931 cv_wait(&hdr->b_l1hdr.b_cv, hash_lock); 5932 mutex_exit(hash_lock); 5933 goto top; 5934 } 5935 } 5936 if (*arc_flags & ARC_FLAG_UNCACHED) { 5937 arc_hdr_set_flags(hdr, ARC_FLAG_UNCACHED); 5938 if (!encrypted_read) 5939 alloc_flags |= ARC_HDR_ALLOC_LINEAR; 5940 } 5941 5942 /* 5943 * Take additional reference for IO_IN_PROGRESS. It stops 5944 * arc_access() from putting this header without any buffers 5945 * and so other references but obviously nonevictable onto 5946 * the evictable list of MRU or MFU state. 
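 * This reference is paired with the remove_reference() performed in
 * arc_read_done() once ARC_FLAG_IO_IN_PROGRESS is cleared.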
5947 */ 5948 add_reference(hdr, hdr); 5949 if (!embedded_bp) 5950 arc_access(hdr, *arc_flags, B_FALSE); 5951 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 5952 arc_hdr_alloc_abd(hdr, alloc_flags); 5953 if (encrypted_read) { 5954 ASSERT(HDR_HAS_RABD(hdr)); 5955 size = HDR_GET_PSIZE(hdr); 5956 hdr_abd = hdr->b_crypt_hdr.b_rabd; 5957 zio_flags |= ZIO_FLAG_RAW; 5958 } else { 5959 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 5960 size = arc_hdr_size(hdr); 5961 hdr_abd = hdr->b_l1hdr.b_pabd; 5962 5963 if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF) { 5964 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 5965 } 5966 5967 /* 5968 * For authenticated bp's, we do not ask the ZIO layer 5969 * to authenticate them since this will cause the entire 5970 * IO to fail if the key isn't loaded. Instead, we 5971 * defer authentication until arc_buf_fill(), which will 5972 * verify the data when the key is available. 5973 */ 5974 if (BP_IS_AUTHENTICATED(bp)) 5975 zio_flags |= ZIO_FLAG_RAW_ENCRYPT; 5976 } 5977 5978 if (BP_IS_AUTHENTICATED(bp)) 5979 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 5980 if (BP_GET_LEVEL(bp) > 0) 5981 arc_hdr_set_flags(hdr, ARC_FLAG_INDIRECT); 5982 ASSERT(!GHOST_STATE(hdr->b_l1hdr.b_state)); 5983 5984 acb = kmem_zalloc(sizeof (arc_callback_t), KM_SLEEP); 5985 acb->acb_done = done; 5986 acb->acb_private = private; 5987 acb->acb_compressed = compressed_read; 5988 acb->acb_encrypted = encrypted_read; 5989 acb->acb_noauth = noauth_read; 5990 acb->acb_zb = *zb; 5991 5992 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 5993 hdr->b_l1hdr.b_acb = acb; 5994 5995 if (HDR_HAS_L2HDR(hdr) && 5996 (vd = hdr->b_l2hdr.b_dev->l2ad_vdev) != NULL) { 5997 devw = hdr->b_l2hdr.b_dev->l2ad_writing; 5998 addr = hdr->b_l2hdr.b_daddr; 5999 /* 6000 * Lock out L2ARC device removal. 6001 */ 6002 if (vdev_is_dead(vd) || 6003 !spa_config_tryenter(spa, SCL_L2ARC, vd, RW_READER)) 6004 vd = NULL; 6005 } 6006 6007 /* 6008 * We count both async reads and scrub IOs as asynchronous so 6009 * that both can be upgraded in the event of a cache hit while 6010 * the read IO is still in-flight. 6011 */ 6012 if (priority == ZIO_PRIORITY_ASYNC_READ || 6013 priority == ZIO_PRIORITY_SCRUB) 6014 arc_hdr_set_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 6015 else 6016 arc_hdr_clear_flags(hdr, ARC_FLAG_PRIO_ASYNC_READ); 6017 6018 /* 6019 * At this point, we have a level 1 cache miss or a blkptr 6020 * with embedded data. Try again in L2ARC if possible. 6021 */ 6022 ASSERT3U(HDR_GET_LSIZE(hdr), ==, lsize); 6023 6024 /* 6025 * Skip ARC stat bump for block pointers with embedded 6026 * data. The data are read from the blkptr itself via 6027 * decode_embedded_bp_compressed(). 6028 */ 6029 if (!embedded_bp) { 6030 DTRACE_PROBE4(arc__miss, arc_buf_hdr_t *, hdr, 6031 blkptr_t *, bp, uint64_t, lsize, 6032 zbookmark_phys_t *, zb); 6033 ARCSTAT_BUMP(arcstat_misses); 6034 ARCSTAT_CONDSTAT(!(*arc_flags & ARC_FLAG_PREFETCH), 6035 demand, prefetch, !HDR_ISTYPE_METADATA(hdr), data, 6036 metadata, misses); 6037 zfs_racct_read(size, 1); 6038 } 6039 6040 /* Check if the spa even has l2 configured */ 6041 const boolean_t spa_has_l2 = l2arc_ndev != 0 && 6042 spa->spa_l2cache.sav_count > 0; 6043 6044 if (vd != NULL && spa_has_l2 && !(l2arc_norw && devw)) { 6045 /* 6046 * Read from the L2ARC if the following are true: 6047 * 1. The L2ARC vdev was previously cached. 6048 * 2. This buffer still has L2ARC metadata. 6049 * 3. This buffer isn't currently writing to the L2ARC. 6050 * 4. The L2ARC entry wasn't evicted, which may 6051 * also have invalidated the vdev. 6052 * 5. 
This isn't prefetch or l2arc_noprefetch is 0. 6053 */ 6054 if (HDR_HAS_L2HDR(hdr) && 6055 !HDR_L2_WRITING(hdr) && !HDR_L2_EVICTED(hdr) && 6056 !(l2arc_noprefetch && 6057 (*arc_flags & ARC_FLAG_PREFETCH))) { 6058 l2arc_read_callback_t *cb; 6059 abd_t *abd; 6060 uint64_t asize; 6061 6062 DTRACE_PROBE1(l2arc__hit, arc_buf_hdr_t *, hdr); 6063 ARCSTAT_BUMP(arcstat_l2_hits); 6064 hdr->b_l2hdr.b_hits++; 6065 6066 cb = kmem_zalloc(sizeof (l2arc_read_callback_t), 6067 KM_SLEEP); 6068 cb->l2rcb_hdr = hdr; 6069 cb->l2rcb_bp = *bp; 6070 cb->l2rcb_zb = *zb; 6071 cb->l2rcb_flags = zio_flags; 6072 6073 /* 6074 * When Compressed ARC is disabled, but the 6075 * L2ARC block is compressed, arc_hdr_size() 6076 * will have returned LSIZE rather than PSIZE. 6077 */ 6078 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 6079 !HDR_COMPRESSION_ENABLED(hdr) && 6080 HDR_GET_PSIZE(hdr) != 0) { 6081 size = HDR_GET_PSIZE(hdr); 6082 } 6083 6084 asize = vdev_psize_to_asize(vd, size); 6085 if (asize != size) { 6086 abd = abd_alloc_for_io(asize, 6087 HDR_ISTYPE_METADATA(hdr)); 6088 cb->l2rcb_abd = abd; 6089 } else { 6090 abd = hdr_abd; 6091 } 6092 6093 ASSERT(addr >= VDEV_LABEL_START_SIZE && 6094 addr + asize <= vd->vdev_psize - 6095 VDEV_LABEL_END_SIZE); 6096 6097 /* 6098 * l2arc read. The SCL_L2ARC lock will be 6099 * released by l2arc_read_done(). 6100 * Issue a null zio if the underlying buffer 6101 * was squashed to zero size by compression. 6102 */ 6103 ASSERT3U(arc_hdr_get_compress(hdr), !=, 6104 ZIO_COMPRESS_EMPTY); 6105 rzio = zio_read_phys(pio, vd, addr, 6106 asize, abd, 6107 ZIO_CHECKSUM_OFF, 6108 l2arc_read_done, cb, priority, 6109 zio_flags | ZIO_FLAG_CANFAIL | 6110 ZIO_FLAG_DONT_PROPAGATE | 6111 ZIO_FLAG_DONT_RETRY, B_FALSE); 6112 acb->acb_zio_head = rzio; 6113 6114 if (hash_lock != NULL) 6115 mutex_exit(hash_lock); 6116 6117 DTRACE_PROBE2(l2arc__read, vdev_t *, vd, 6118 zio_t *, rzio); 6119 ARCSTAT_INCR(arcstat_l2_read_bytes, 6120 HDR_GET_PSIZE(hdr)); 6121 6122 if (*arc_flags & ARC_FLAG_NOWAIT) { 6123 zio_nowait(rzio); 6124 goto out; 6125 } 6126 6127 ASSERT(*arc_flags & ARC_FLAG_WAIT); 6128 if (zio_wait(rzio) == 0) 6129 goto out; 6130 6131 /* l2arc read error; goto zio_read() */ 6132 if (hash_lock != NULL) 6133 mutex_enter(hash_lock); 6134 } else { 6135 DTRACE_PROBE1(l2arc__miss, 6136 arc_buf_hdr_t *, hdr); 6137 ARCSTAT_BUMP(arcstat_l2_misses); 6138 if (HDR_L2_WRITING(hdr)) 6139 ARCSTAT_BUMP(arcstat_l2_rw_clash); 6140 spa_config_exit(spa, SCL_L2ARC, vd); 6141 } 6142 } else { 6143 if (vd != NULL) 6144 spa_config_exit(spa, SCL_L2ARC, vd); 6145 6146 /* 6147 * Only a spa with l2 should contribute to l2 6148 * miss stats. (Including the case of having a 6149 * faulted cache device - that's also a miss.) 6150 */ 6151 if (spa_has_l2) { 6152 /* 6153 * Skip ARC stat bump for block pointers with 6154 * embedded data. The data are read from the 6155 * blkptr itself via 6156 * decode_embedded_bp_compressed(). 
6157 */ 6158 if (!embedded_bp) { 6159 DTRACE_PROBE1(l2arc__miss, 6160 arc_buf_hdr_t *, hdr); 6161 ARCSTAT_BUMP(arcstat_l2_misses); 6162 } 6163 } 6164 } 6165 6166 rzio = zio_read(pio, spa, bp, hdr_abd, size, 6167 arc_read_done, hdr, priority, zio_flags, zb); 6168 acb->acb_zio_head = rzio; 6169 6170 if (hash_lock != NULL) 6171 mutex_exit(hash_lock); 6172 6173 if (*arc_flags & ARC_FLAG_WAIT) { 6174 rc = zio_wait(rzio); 6175 goto out; 6176 } 6177 6178 ASSERT(*arc_flags & ARC_FLAG_NOWAIT); 6179 zio_nowait(rzio); 6180 } 6181 6182 out: 6183 /* embedded bps don't actually go to disk */ 6184 if (!embedded_bp) 6185 spa_read_history_add(spa, zb, *arc_flags); 6186 spl_fstrans_unmark(cookie); 6187 return (rc); 6188 6189 done: 6190 if (done) 6191 done(NULL, zb, bp, buf, private); 6192 if (pio && rc != 0) { 6193 zio_t *zio = zio_null(pio, spa, NULL, NULL, NULL, zio_flags); 6194 zio->io_error = rc; 6195 zio_nowait(zio); 6196 } 6197 goto out; 6198 } 6199 6200 arc_prune_t * 6201 arc_add_prune_callback(arc_prune_func_t *func, void *private) 6202 { 6203 arc_prune_t *p; 6204 6205 p = kmem_alloc(sizeof (*p), KM_SLEEP); 6206 p->p_pfunc = func; 6207 p->p_private = private; 6208 list_link_init(&p->p_node); 6209 zfs_refcount_create(&p->p_refcnt); 6210 6211 mutex_enter(&arc_prune_mtx); 6212 zfs_refcount_add(&p->p_refcnt, &arc_prune_list); 6213 list_insert_head(&arc_prune_list, p); 6214 mutex_exit(&arc_prune_mtx); 6215 6216 return (p); 6217 } 6218 6219 void 6220 arc_remove_prune_callback(arc_prune_t *p) 6221 { 6222 boolean_t wait = B_FALSE; 6223 mutex_enter(&arc_prune_mtx); 6224 list_remove(&arc_prune_list, p); 6225 if (zfs_refcount_remove(&p->p_refcnt, &arc_prune_list) > 0) 6226 wait = B_TRUE; 6227 mutex_exit(&arc_prune_mtx); 6228 6229 /* wait for arc_prune_task to finish */ 6230 if (wait) 6231 taskq_wait_outstanding(arc_prune_taskq, 0); 6232 ASSERT0(zfs_refcount_count(&p->p_refcnt)); 6233 zfs_refcount_destroy(&p->p_refcnt); 6234 kmem_free(p, sizeof (*p)); 6235 } 6236 6237 /* 6238 * Notify the arc that a block was freed, and thus will never be used again. 6239 */ 6240 void 6241 arc_freed(spa_t *spa, const blkptr_t *bp) 6242 { 6243 arc_buf_hdr_t *hdr; 6244 kmutex_t *hash_lock; 6245 uint64_t guid = spa_load_guid(spa); 6246 6247 ASSERT(!BP_IS_EMBEDDED(bp)); 6248 6249 hdr = buf_hash_find(guid, bp, &hash_lock); 6250 if (hdr == NULL) 6251 return; 6252 6253 /* 6254 * We might be trying to free a block that is still doing I/O 6255 * (i.e. prefetch) or has some other reference (i.e. a dedup-ed, 6256 * dmu_sync-ed block). A block may also have a reference if it is 6257 * part of a dedup-ed, dmu_synced write. The dmu_sync() function would 6258 * have written the new block to its final resting place on disk but 6259 * without the dedup flag set. This would have left the hdr in the MRU 6260 * state and discoverable. When the txg finally syncs it detects that 6261 * the block was overridden in open context and issues an override I/O. 6262 * Since this is a dedup block, the override I/O will determine if the 6263 * block is already in the DDT. If so, then it will replace the io_bp 6264 * with the bp from the DDT and allow the I/O to finish. When the I/O 6265 * reaches the done callback, dbuf_write_override_done, it will 6266 * check to see if the io_bp and io_bp_override are identical. 6267 * If they are not, then it indicates that the bp was replaced with 6268 * the bp in the DDT and the override bp is freed. This allows 6269 * us to arrive here with a reference on a block that is being 6270 * freed. 
So if we have an I/O in progress, or a reference to 6271 * this hdr, then we don't destroy the hdr. 6272 */ 6273 if (!HDR_HAS_L1HDR(hdr) || 6274 zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6275 arc_change_state(arc_anon, hdr); 6276 arc_hdr_destroy(hdr); 6277 mutex_exit(hash_lock); 6278 } else { 6279 mutex_exit(hash_lock); 6280 } 6281 6282 } 6283 6284 /* 6285 * Release this buffer from the cache, making it an anonymous buffer. This 6286 * must be done after a read and prior to modifying the buffer contents. 6287 * If the buffer has more than one reference, we must make 6288 * a new hdr for the buffer. 6289 */ 6290 void 6291 arc_release(arc_buf_t *buf, const void *tag) 6292 { 6293 arc_buf_hdr_t *hdr = buf->b_hdr; 6294 6295 /* 6296 * It would be nice to assert that if its DMU metadata (level > 6297 * 0 || it's the dnode file), then it must be syncing context. 6298 * But we don't know that information at this level. 6299 */ 6300 6301 ASSERT(HDR_HAS_L1HDR(hdr)); 6302 6303 /* 6304 * We don't grab the hash lock prior to this check, because if 6305 * the buffer's header is in the arc_anon state, it won't be 6306 * linked into the hash table. 6307 */ 6308 if (hdr->b_l1hdr.b_state == arc_anon) { 6309 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6310 ASSERT(!HDR_IN_HASH_TABLE(hdr)); 6311 ASSERT(!HDR_HAS_L2HDR(hdr)); 6312 6313 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 6314 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), ==, 1); 6315 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 6316 6317 hdr->b_l1hdr.b_arc_access = 0; 6318 6319 /* 6320 * If the buf is being overridden then it may already 6321 * have a hdr that is not empty. 6322 */ 6323 buf_discard_identity(hdr); 6324 arc_buf_thaw(buf); 6325 6326 return; 6327 } 6328 6329 kmutex_t *hash_lock = HDR_LOCK(hdr); 6330 mutex_enter(hash_lock); 6331 6332 /* 6333 * This assignment is only valid as long as the hash_lock is 6334 * held, we must be careful not to reference state or the 6335 * b_state field after dropping the lock. 6336 */ 6337 arc_state_t *state = hdr->b_l1hdr.b_state; 6338 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 6339 ASSERT3P(state, !=, arc_anon); 6340 6341 /* this buffer is not on any list */ 6342 ASSERT3S(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt), >, 0); 6343 6344 if (HDR_HAS_L2HDR(hdr)) { 6345 mutex_enter(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6346 6347 /* 6348 * We have to recheck this conditional again now that 6349 * we're holding the l2ad_mtx to prevent a race with 6350 * another thread which might be concurrently calling 6351 * l2arc_evict(). In that case, l2arc_evict() might have 6352 * destroyed the header's L2 portion as we were waiting 6353 * to acquire the l2ad_mtx. 6354 */ 6355 if (HDR_HAS_L2HDR(hdr)) 6356 arc_hdr_l2hdr_destroy(hdr); 6357 6358 mutex_exit(&hdr->b_l2hdr.b_dev->l2ad_mtx); 6359 } 6360 6361 /* 6362 * Do we have more than one buf? 
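 * If so, this buf is peeled off onto a freshly allocated anonymous
 * hdr below and the remaining bufs keep the existing hdr; otherwise
 * the existing hdr itself is simply moved to the arc_anon state.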
6363 */ 6364 if (hdr->b_l1hdr.b_bufcnt > 1) { 6365 arc_buf_hdr_t *nhdr; 6366 uint64_t spa = hdr->b_spa; 6367 uint64_t psize = HDR_GET_PSIZE(hdr); 6368 uint64_t lsize = HDR_GET_LSIZE(hdr); 6369 boolean_t protected = HDR_PROTECTED(hdr); 6370 enum zio_compress compress = arc_hdr_get_compress(hdr); 6371 arc_buf_contents_t type = arc_buf_type(hdr); 6372 VERIFY3U(hdr->b_type, ==, type); 6373 6374 ASSERT(hdr->b_l1hdr.b_buf != buf || buf->b_next != NULL); 6375 VERIFY3S(remove_reference(hdr, tag), >, 0); 6376 6377 if (arc_buf_is_shared(buf) && !ARC_BUF_COMPRESSED(buf)) { 6378 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6379 ASSERT(ARC_BUF_LAST(buf)); 6380 } 6381 6382 /* 6383 * Pull the data off of this hdr and attach it to 6384 * a new anonymous hdr. Also find the last buffer 6385 * in the hdr's buffer list. 6386 */ 6387 arc_buf_t *lastbuf = arc_buf_remove(hdr, buf); 6388 ASSERT3P(lastbuf, !=, NULL); 6389 6390 /* 6391 * If the current arc_buf_t and the hdr are sharing their data 6392 * buffer, then we must stop sharing that block. 6393 */ 6394 if (arc_buf_is_shared(buf)) { 6395 ASSERT3P(hdr->b_l1hdr.b_buf, !=, buf); 6396 VERIFY(!arc_buf_is_shared(lastbuf)); 6397 6398 /* 6399 * First, sever the block sharing relationship between 6400 * buf and the arc_buf_hdr_t. 6401 */ 6402 arc_unshare_buf(hdr, buf); 6403 6404 /* 6405 * Now we need to recreate the hdr's b_pabd. Since we 6406 * have lastbuf handy, we try to share with it, but if 6407 * we can't then we allocate a new b_pabd and copy the 6408 * data from buf into it. 6409 */ 6410 if (arc_can_share(hdr, lastbuf)) { 6411 arc_share_buf(hdr, lastbuf); 6412 } else { 6413 arc_hdr_alloc_abd(hdr, 0); 6414 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, 6415 buf->b_data, psize); 6416 } 6417 VERIFY3P(lastbuf->b_data, !=, NULL); 6418 } else if (HDR_SHARED_DATA(hdr)) { 6419 /* 6420 * Uncompressed shared buffers are always at the end 6421 * of the list. Compressed buffers don't have the 6422 * same requirements. This makes it hard to 6423 * simply assert that the lastbuf is shared so 6424 * we rely on the hdr's compression flags to determine 6425 * if we have a compressed, shared buffer. 
6426 */ 6427 ASSERT(arc_buf_is_shared(lastbuf) || 6428 arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF); 6429 ASSERT(!ARC_BUF_SHARED(buf)); 6430 } 6431 6432 ASSERT(hdr->b_l1hdr.b_pabd != NULL || HDR_HAS_RABD(hdr)); 6433 ASSERT3P(state, !=, arc_l2c_only); 6434 6435 (void) zfs_refcount_remove_many(&state->arcs_size[type], 6436 arc_buf_size(buf), buf); 6437 6438 if (zfs_refcount_is_zero(&hdr->b_l1hdr.b_refcnt)) { 6439 ASSERT3P(state, !=, arc_l2c_only); 6440 (void) zfs_refcount_remove_many( 6441 &state->arcs_esize[type], 6442 arc_buf_size(buf), buf); 6443 } 6444 6445 hdr->b_l1hdr.b_bufcnt -= 1; 6446 if (ARC_BUF_ENCRYPTED(buf)) 6447 hdr->b_crypt_hdr.b_ebufcnt -= 1; 6448 6449 arc_cksum_verify(buf); 6450 arc_buf_unwatch(buf); 6451 6452 /* if this is the last uncompressed buf free the checksum */ 6453 if (!arc_hdr_has_uncompressed_buf(hdr)) 6454 arc_cksum_free(hdr); 6455 6456 mutex_exit(hash_lock); 6457 6458 nhdr = arc_hdr_alloc(spa, psize, lsize, protected, 6459 compress, hdr->b_complevel, type); 6460 ASSERT3P(nhdr->b_l1hdr.b_buf, ==, NULL); 6461 ASSERT0(nhdr->b_l1hdr.b_bufcnt); 6462 ASSERT0(zfs_refcount_count(&nhdr->b_l1hdr.b_refcnt)); 6463 VERIFY3U(nhdr->b_type, ==, type); 6464 ASSERT(!HDR_SHARED_DATA(nhdr)); 6465 6466 nhdr->b_l1hdr.b_buf = buf; 6467 nhdr->b_l1hdr.b_bufcnt = 1; 6468 if (ARC_BUF_ENCRYPTED(buf)) 6469 nhdr->b_crypt_hdr.b_ebufcnt = 1; 6470 (void) zfs_refcount_add(&nhdr->b_l1hdr.b_refcnt, tag); 6471 buf->b_hdr = nhdr; 6472 6473 (void) zfs_refcount_add_many(&arc_anon->arcs_size[type], 6474 arc_buf_size(buf), buf); 6475 } else { 6476 ASSERT(zfs_refcount_count(&hdr->b_l1hdr.b_refcnt) == 1); 6477 /* protected by hash lock, or hdr is on arc_anon */ 6478 ASSERT(!multilist_link_active(&hdr->b_l1hdr.b_arc_node)); 6479 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6480 hdr->b_l1hdr.b_mru_hits = 0; 6481 hdr->b_l1hdr.b_mru_ghost_hits = 0; 6482 hdr->b_l1hdr.b_mfu_hits = 0; 6483 hdr->b_l1hdr.b_mfu_ghost_hits = 0; 6484 arc_change_state(arc_anon, hdr); 6485 hdr->b_l1hdr.b_arc_access = 0; 6486 6487 mutex_exit(hash_lock); 6488 buf_discard_identity(hdr); 6489 arc_buf_thaw(buf); 6490 } 6491 } 6492 6493 int 6494 arc_released(arc_buf_t *buf) 6495 { 6496 return (buf->b_data != NULL && 6497 buf->b_hdr->b_l1hdr.b_state == arc_anon); 6498 } 6499 6500 #ifdef ZFS_DEBUG 6501 int 6502 arc_referenced(arc_buf_t *buf) 6503 { 6504 return (zfs_refcount_count(&buf->b_hdr->b_l1hdr.b_refcnt)); 6505 } 6506 #endif 6507 6508 static void 6509 arc_write_ready(zio_t *zio) 6510 { 6511 arc_write_callback_t *callback = zio->io_private; 6512 arc_buf_t *buf = callback->awcb_buf; 6513 arc_buf_hdr_t *hdr = buf->b_hdr; 6514 blkptr_t *bp = zio->io_bp; 6515 uint64_t psize = BP_IS_HOLE(bp) ? 0 : BP_GET_PSIZE(bp); 6516 fstrans_cookie_t cookie = spl_fstrans_mark(); 6517 6518 ASSERT(HDR_HAS_L1HDR(hdr)); 6519 ASSERT(!zfs_refcount_is_zero(&buf->b_hdr->b_l1hdr.b_refcnt)); 6520 ASSERT(hdr->b_l1hdr.b_bufcnt > 0); 6521 6522 /* 6523 * If we're reexecuting this zio because the pool suspended, then 6524 * cleanup any state that was previously set the first time the 6525 * callback was invoked. 
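 * After this cleanup the hdr must look as it did on first entry: no
 * b_pabd, no raw data and no shared buffer (all asserted below),
 * since the ready callback and the data-fill logic run again.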
6526 */ 6527 if (zio->io_flags & ZIO_FLAG_REEXECUTED) { 6528 arc_cksum_free(hdr); 6529 arc_buf_unwatch(buf); 6530 if (hdr->b_l1hdr.b_pabd != NULL) { 6531 if (arc_buf_is_shared(buf)) { 6532 arc_unshare_buf(hdr, buf); 6533 } else { 6534 arc_hdr_free_abd(hdr, B_FALSE); 6535 } 6536 } 6537 6538 if (HDR_HAS_RABD(hdr)) 6539 arc_hdr_free_abd(hdr, B_TRUE); 6540 } 6541 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6542 ASSERT(!HDR_HAS_RABD(hdr)); 6543 ASSERT(!HDR_SHARED_DATA(hdr)); 6544 ASSERT(!arc_buf_is_shared(buf)); 6545 6546 callback->awcb_ready(zio, buf, callback->awcb_private); 6547 6548 if (HDR_IO_IN_PROGRESS(hdr)) { 6549 ASSERT(zio->io_flags & ZIO_FLAG_REEXECUTED); 6550 } else { 6551 arc_hdr_set_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6552 add_reference(hdr, hdr); /* For IO_IN_PROGRESS. */ 6553 } 6554 6555 if (BP_IS_PROTECTED(bp) != !!HDR_PROTECTED(hdr)) 6556 hdr = arc_hdr_realloc_crypt(hdr, BP_IS_PROTECTED(bp)); 6557 6558 if (BP_IS_PROTECTED(bp)) { 6559 /* ZIL blocks are written through zio_rewrite */ 6560 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 6561 ASSERT(HDR_PROTECTED(hdr)); 6562 6563 if (BP_SHOULD_BYTESWAP(bp)) { 6564 if (BP_GET_LEVEL(bp) > 0) { 6565 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_UINT64; 6566 } else { 6567 hdr->b_l1hdr.b_byteswap = 6568 DMU_OT_BYTESWAP(BP_GET_TYPE(bp)); 6569 } 6570 } else { 6571 hdr->b_l1hdr.b_byteswap = DMU_BSWAP_NUMFUNCS; 6572 } 6573 6574 hdr->b_crypt_hdr.b_ot = BP_GET_TYPE(bp); 6575 hdr->b_crypt_hdr.b_dsobj = zio->io_bookmark.zb_objset; 6576 zio_crypt_decode_params_bp(bp, hdr->b_crypt_hdr.b_salt, 6577 hdr->b_crypt_hdr.b_iv); 6578 zio_crypt_decode_mac_bp(bp, hdr->b_crypt_hdr.b_mac); 6579 } 6580 6581 /* 6582 * If this block was written for raw encryption but the zio layer 6583 * ended up only authenticating it, adjust the buffer flags now. 6584 */ 6585 if (BP_IS_AUTHENTICATED(bp) && ARC_BUF_ENCRYPTED(buf)) { 6586 arc_hdr_set_flags(hdr, ARC_FLAG_NOAUTH); 6587 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6588 if (BP_GET_COMPRESS(bp) == ZIO_COMPRESS_OFF) 6589 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6590 } else if (BP_IS_HOLE(bp) && ARC_BUF_ENCRYPTED(buf)) { 6591 buf->b_flags &= ~ARC_BUF_FLAG_ENCRYPTED; 6592 buf->b_flags &= ~ARC_BUF_FLAG_COMPRESSED; 6593 } 6594 6595 /* this must be done after the buffer flags are adjusted */ 6596 arc_cksum_compute(buf); 6597 6598 enum zio_compress compress; 6599 if (BP_IS_HOLE(bp) || BP_IS_EMBEDDED(bp)) { 6600 compress = ZIO_COMPRESS_OFF; 6601 } else { 6602 ASSERT3U(HDR_GET_LSIZE(hdr), ==, BP_GET_LSIZE(bp)); 6603 compress = BP_GET_COMPRESS(bp); 6604 } 6605 HDR_SET_PSIZE(hdr, psize); 6606 arc_hdr_set_compress(hdr, compress); 6607 hdr->b_complevel = zio->io_prop.zp_complevel; 6608 6609 if (zio->io_error != 0 || psize == 0) 6610 goto out; 6611 6612 /* 6613 * Fill the hdr with data. If the buffer is encrypted we have no choice 6614 * but to copy the data into b_radb. If the hdr is compressed, the data 6615 * we want is available from the zio, otherwise we can take it from 6616 * the buf. 6617 * 6618 * We might be able to share the buf's data with the hdr here. However, 6619 * doing so would cause the ARC to be full of linear ABDs if we write a 6620 * lot of shareable data. As a compromise, we check whether scattered 6621 * ABDs are allowed, and assume that if they are then the user wants 6622 * the ARC to be primarily filled with them regardless of the data being 6623 * written. Therefore, if they're allowed then we allocate one and copy 6624 * the data into it; otherwise, we share the data directly if we can. 
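 * The resulting decision ladder, roughly: encrypted bufs always copy
 * the zio's data into b_rabd; otherwise, if sharing is not possible or
 * scattered ABDs are preferred for this size, a new b_pabd (or b_rabd
 * for encrypted bp's) is allocated and filled by copy; only when
 * sharing is possible and the hdr is uncached or a linear allocation
 * would be used anyway do we share the buf's data with the hdr.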
6625 */ 6626 if (ARC_BUF_ENCRYPTED(buf)) { 6627 ASSERT3U(psize, >, 0); 6628 ASSERT(ARC_BUF_COMPRESSED(buf)); 6629 arc_hdr_alloc_abd(hdr, ARC_HDR_ALLOC_RDATA | 6630 ARC_HDR_USE_RESERVE); 6631 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6632 } else if (!(HDR_UNCACHED(hdr) || 6633 abd_size_alloc_linear(arc_buf_size(buf))) || 6634 !arc_can_share(hdr, buf)) { 6635 /* 6636 * Ideally, we would always copy the io_abd into b_pabd, but the 6637 * user may have disabled compressed ARC, thus we must check the 6638 * hdr's compression setting rather than the io_bp's. 6639 */ 6640 if (BP_IS_ENCRYPTED(bp)) { 6641 ASSERT3U(psize, >, 0); 6642 arc_hdr_alloc_abd(hdr, ARC_HDR_ALLOC_RDATA | 6643 ARC_HDR_USE_RESERVE); 6644 abd_copy(hdr->b_crypt_hdr.b_rabd, zio->io_abd, psize); 6645 } else if (arc_hdr_get_compress(hdr) != ZIO_COMPRESS_OFF && 6646 !ARC_BUF_COMPRESSED(buf)) { 6647 ASSERT3U(psize, >, 0); 6648 arc_hdr_alloc_abd(hdr, ARC_HDR_USE_RESERVE); 6649 abd_copy(hdr->b_l1hdr.b_pabd, zio->io_abd, psize); 6650 } else { 6651 ASSERT3U(zio->io_orig_size, ==, arc_hdr_size(hdr)); 6652 arc_hdr_alloc_abd(hdr, ARC_HDR_USE_RESERVE); 6653 abd_copy_from_buf(hdr->b_l1hdr.b_pabd, buf->b_data, 6654 arc_buf_size(buf)); 6655 } 6656 } else { 6657 ASSERT3P(buf->b_data, ==, abd_to_buf(zio->io_orig_abd)); 6658 ASSERT3U(zio->io_orig_size, ==, arc_buf_size(buf)); 6659 ASSERT3U(hdr->b_l1hdr.b_bufcnt, ==, 1); 6660 6661 arc_share_buf(hdr, buf); 6662 } 6663 6664 out: 6665 arc_hdr_verify(hdr, bp); 6666 spl_fstrans_unmark(cookie); 6667 } 6668 6669 static void 6670 arc_write_children_ready(zio_t *zio) 6671 { 6672 arc_write_callback_t *callback = zio->io_private; 6673 arc_buf_t *buf = callback->awcb_buf; 6674 6675 callback->awcb_children_ready(zio, buf, callback->awcb_private); 6676 } 6677 6678 static void 6679 arc_write_done(zio_t *zio) 6680 { 6681 arc_write_callback_t *callback = zio->io_private; 6682 arc_buf_t *buf = callback->awcb_buf; 6683 arc_buf_hdr_t *hdr = buf->b_hdr; 6684 6685 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6686 6687 if (zio->io_error == 0) { 6688 arc_hdr_verify(hdr, zio->io_bp); 6689 6690 if (BP_IS_HOLE(zio->io_bp) || BP_IS_EMBEDDED(zio->io_bp)) { 6691 buf_discard_identity(hdr); 6692 } else { 6693 hdr->b_dva = *BP_IDENTITY(zio->io_bp); 6694 hdr->b_birth = BP_PHYSICAL_BIRTH(zio->io_bp); 6695 } 6696 } else { 6697 ASSERT(HDR_EMPTY(hdr)); 6698 } 6699 6700 /* 6701 * If the block to be written was all-zero or compressed enough to be 6702 * embedded in the BP, no write was performed so there will be no 6703 * dva/birth/checksum. The buffer must therefore remain anonymous 6704 * (and uncached). 6705 */ 6706 if (!HDR_EMPTY(hdr)) { 6707 arc_buf_hdr_t *exists; 6708 kmutex_t *hash_lock; 6709 6710 ASSERT3U(zio->io_error, ==, 0); 6711 6712 arc_cksum_verify(buf); 6713 6714 exists = buf_hash_insert(hdr, &hash_lock); 6715 if (exists != NULL) { 6716 /* 6717 * This can only happen if we overwrite for 6718 * sync-to-convergence, because we remove 6719 * buffers from the hash table when we arc_free(). 
6720 */ 6721 if (zio->io_flags & ZIO_FLAG_IO_REWRITE) { 6722 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 6723 panic("bad overwrite, hdr=%p exists=%p", 6724 (void *)hdr, (void *)exists); 6725 ASSERT(zfs_refcount_is_zero( 6726 &exists->b_l1hdr.b_refcnt)); 6727 arc_change_state(arc_anon, exists); 6728 arc_hdr_destroy(exists); 6729 mutex_exit(hash_lock); 6730 exists = buf_hash_insert(hdr, &hash_lock); 6731 ASSERT3P(exists, ==, NULL); 6732 } else if (zio->io_flags & ZIO_FLAG_NOPWRITE) { 6733 /* nopwrite */ 6734 ASSERT(zio->io_prop.zp_nopwrite); 6735 if (!BP_EQUAL(&zio->io_bp_orig, zio->io_bp)) 6736 panic("bad nopwrite, hdr=%p exists=%p", 6737 (void *)hdr, (void *)exists); 6738 } else { 6739 /* Dedup */ 6740 ASSERT(hdr->b_l1hdr.b_bufcnt == 1); 6741 ASSERT(hdr->b_l1hdr.b_state == arc_anon); 6742 ASSERT(BP_GET_DEDUP(zio->io_bp)); 6743 ASSERT(BP_GET_LEVEL(zio->io_bp) == 0); 6744 } 6745 } 6746 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6747 VERIFY3S(remove_reference(hdr, hdr), >, 0); 6748 /* if it's not anon, we are doing a scrub */ 6749 if (exists == NULL && hdr->b_l1hdr.b_state == arc_anon) 6750 arc_access(hdr, 0, B_FALSE); 6751 mutex_exit(hash_lock); 6752 } else { 6753 arc_hdr_clear_flags(hdr, ARC_FLAG_IO_IN_PROGRESS); 6754 VERIFY3S(remove_reference(hdr, hdr), >, 0); 6755 } 6756 6757 callback->awcb_done(zio, buf, callback->awcb_private); 6758 6759 abd_free(zio->io_abd); 6760 kmem_free(callback, sizeof (arc_write_callback_t)); 6761 } 6762 6763 zio_t * 6764 arc_write(zio_t *pio, spa_t *spa, uint64_t txg, 6765 blkptr_t *bp, arc_buf_t *buf, boolean_t uncached, boolean_t l2arc, 6766 const zio_prop_t *zp, arc_write_done_func_t *ready, 6767 arc_write_done_func_t *children_ready, arc_write_done_func_t *done, 6768 void *private, zio_priority_t priority, int zio_flags, 6769 const zbookmark_phys_t *zb) 6770 { 6771 arc_buf_hdr_t *hdr = buf->b_hdr; 6772 arc_write_callback_t *callback; 6773 zio_t *zio; 6774 zio_prop_t localprop = *zp; 6775 6776 ASSERT3P(ready, !=, NULL); 6777 ASSERT3P(done, !=, NULL); 6778 ASSERT(!HDR_IO_ERROR(hdr)); 6779 ASSERT(!HDR_IO_IN_PROGRESS(hdr)); 6780 ASSERT3P(hdr->b_l1hdr.b_acb, ==, NULL); 6781 ASSERT3U(hdr->b_l1hdr.b_bufcnt, >, 0); 6782 if (uncached) 6783 arc_hdr_set_flags(hdr, ARC_FLAG_UNCACHED); 6784 else if (l2arc) 6785 arc_hdr_set_flags(hdr, ARC_FLAG_L2CACHE); 6786 6787 if (ARC_BUF_ENCRYPTED(buf)) { 6788 ASSERT(ARC_BUF_COMPRESSED(buf)); 6789 localprop.zp_encrypt = B_TRUE; 6790 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 6791 localprop.zp_complevel = hdr->b_complevel; 6792 localprop.zp_byteorder = 6793 (hdr->b_l1hdr.b_byteswap == DMU_BSWAP_NUMFUNCS) ? 
6794 ZFS_HOST_BYTEORDER : !ZFS_HOST_BYTEORDER; 6795 memcpy(localprop.zp_salt, hdr->b_crypt_hdr.b_salt, 6796 ZIO_DATA_SALT_LEN); 6797 memcpy(localprop.zp_iv, hdr->b_crypt_hdr.b_iv, 6798 ZIO_DATA_IV_LEN); 6799 memcpy(localprop.zp_mac, hdr->b_crypt_hdr.b_mac, 6800 ZIO_DATA_MAC_LEN); 6801 if (DMU_OT_IS_ENCRYPTED(localprop.zp_type)) { 6802 localprop.zp_nopwrite = B_FALSE; 6803 localprop.zp_copies = 6804 MIN(localprop.zp_copies, SPA_DVAS_PER_BP - 1); 6805 } 6806 zio_flags |= ZIO_FLAG_RAW; 6807 } else if (ARC_BUF_COMPRESSED(buf)) { 6808 ASSERT3U(HDR_GET_LSIZE(hdr), !=, arc_buf_size(buf)); 6809 localprop.zp_compress = HDR_GET_COMPRESS(hdr); 6810 localprop.zp_complevel = hdr->b_complevel; 6811 zio_flags |= ZIO_FLAG_RAW_COMPRESS; 6812 } 6813 callback = kmem_zalloc(sizeof (arc_write_callback_t), KM_SLEEP); 6814 callback->awcb_ready = ready; 6815 callback->awcb_children_ready = children_ready; 6816 callback->awcb_done = done; 6817 callback->awcb_private = private; 6818 callback->awcb_buf = buf; 6819 6820 /* 6821 * The hdr's b_pabd is now stale, free it now. A new data block 6822 * will be allocated when the zio pipeline calls arc_write_ready(). 6823 */ 6824 if (hdr->b_l1hdr.b_pabd != NULL) { 6825 /* 6826 * If the buf is currently sharing the data block with 6827 * the hdr then we need to break that relationship here. 6828 * The hdr will remain with a NULL data pointer and the 6829 * buf will take sole ownership of the block. 6830 */ 6831 if (arc_buf_is_shared(buf)) { 6832 arc_unshare_buf(hdr, buf); 6833 } else { 6834 arc_hdr_free_abd(hdr, B_FALSE); 6835 } 6836 VERIFY3P(buf->b_data, !=, NULL); 6837 } 6838 6839 if (HDR_HAS_RABD(hdr)) 6840 arc_hdr_free_abd(hdr, B_TRUE); 6841 6842 if (!(zio_flags & ZIO_FLAG_RAW)) 6843 arc_hdr_set_compress(hdr, ZIO_COMPRESS_OFF); 6844 6845 ASSERT(!arc_buf_is_shared(buf)); 6846 ASSERT3P(hdr->b_l1hdr.b_pabd, ==, NULL); 6847 6848 zio = zio_write(pio, spa, txg, bp, 6849 abd_get_from_buf(buf->b_data, HDR_GET_LSIZE(hdr)), 6850 HDR_GET_LSIZE(hdr), arc_buf_size(buf), &localprop, arc_write_ready, 6851 (children_ready != NULL) ? arc_write_children_ready : NULL, 6852 arc_write_done, callback, priority, zio_flags, zb); 6853 6854 return (zio); 6855 } 6856 6857 void 6858 arc_tempreserve_clear(uint64_t reserve) 6859 { 6860 atomic_add_64(&arc_tempreserve, -reserve); 6861 ASSERT((int64_t)arc_tempreserve >= 0); 6862 } 6863 6864 int 6865 arc_tempreserve_space(spa_t *spa, uint64_t reserve, uint64_t txg) 6866 { 6867 int error; 6868 uint64_t anon_size; 6869 6870 if (!arc_no_grow && 6871 reserve > arc_c/4 && 6872 reserve * 4 > (2ULL << SPA_MAXBLOCKSHIFT)) 6873 arc_c = MIN(arc_c_max, reserve * 4); 6874 6875 /* 6876 * Throttle when the calculated memory footprint for the TXG 6877 * exceeds the target ARC size. 6878 */ 6879 if (reserve > arc_c) { 6880 DMU_TX_STAT_BUMP(dmu_tx_memory_reserve); 6881 return (SET_ERROR(ERESTART)); 6882 } 6883 6884 /* 6885 * Don't count loaned bufs as in flight dirty data to prevent long 6886 * network delays from blocking transactions that are ready to be 6887 * assigned to a txg. 6888 */ 6889 6890 /* assert that it has not wrapped around */ 6891 ASSERT3S(atomic_add_64_nv(&arc_loaned_bytes, 0), >=, 0); 6892 6893 anon_size = MAX((int64_t) 6894 (zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_DATA]) + 6895 zfs_refcount_count(&arc_anon->arcs_size[ARC_BUFC_METADATA]) - 6896 arc_loaned_bytes), 0); 6897 6898 /* 6899 * Writes will, almost always, require additional memory allocations 6900 * in order to compress/encrypt/etc the data. 
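 * (For example, a compressed or encrypted write typically has the ZIO
 * pipeline allocate a fresh abd to hold the transformed copy of the block
 * before it is issued to the vdevs; this is a general observation rather
 * than an exact accounting.)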
We therefore need to 6901 * make sure that there is sufficient available memory for this. 6902 */ 6903 error = arc_memory_throttle(spa, reserve, txg); 6904 if (error != 0) 6905 return (error); 6906 6907 /* 6908 * Throttle writes when the amount of dirty data in the cache 6909 * gets too large. We try to keep the cache less than half full 6910 * of dirty blocks so that our sync times don't grow too large. 6911 * 6912 * In the case of one pool being built on another pool, we want 6913 * to make sure we don't end up throttling the lower (backing) 6914 * pool when the upper pool is the majority contributor to dirty 6915 * data. To insure we make forward progress during throttling, we 6916 * also check the current pool's net dirty data and only throttle 6917 * if it exceeds zfs_arc_pool_dirty_percent of the anonymous dirty 6918 * data in the cache. 6919 * 6920 * Note: if two requests come in concurrently, we might let them 6921 * both succeed, when one of them should fail. Not a huge deal. 6922 */ 6923 uint64_t total_dirty = reserve + arc_tempreserve + anon_size; 6924 uint64_t spa_dirty_anon = spa_dirty_data(spa); 6925 uint64_t rarc_c = arc_warm ? arc_c : arc_c_max; 6926 if (total_dirty > rarc_c * zfs_arc_dirty_limit_percent / 100 && 6927 anon_size > rarc_c * zfs_arc_anon_limit_percent / 100 && 6928 spa_dirty_anon > anon_size * zfs_arc_pool_dirty_percent / 100) { 6929 #ifdef ZFS_DEBUG 6930 uint64_t meta_esize = zfs_refcount_count( 6931 &arc_anon->arcs_esize[ARC_BUFC_METADATA]); 6932 uint64_t data_esize = 6933 zfs_refcount_count(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 6934 dprintf("failing, arc_tempreserve=%lluK anon_meta=%lluK " 6935 "anon_data=%lluK tempreserve=%lluK rarc_c=%lluK\n", 6936 (u_longlong_t)arc_tempreserve >> 10, 6937 (u_longlong_t)meta_esize >> 10, 6938 (u_longlong_t)data_esize >> 10, 6939 (u_longlong_t)reserve >> 10, 6940 (u_longlong_t)rarc_c >> 10); 6941 #endif 6942 DMU_TX_STAT_BUMP(dmu_tx_dirty_throttle); 6943 return (SET_ERROR(ERESTART)); 6944 } 6945 atomic_add_64(&arc_tempreserve, reserve); 6946 return (0); 6947 } 6948 6949 static void 6950 arc_kstat_update_state(arc_state_t *state, kstat_named_t *size, 6951 kstat_named_t *data, kstat_named_t *metadata, 6952 kstat_named_t *evict_data, kstat_named_t *evict_metadata) 6953 { 6954 data->value.ui64 = 6955 zfs_refcount_count(&state->arcs_size[ARC_BUFC_DATA]); 6956 metadata->value.ui64 = 6957 zfs_refcount_count(&state->arcs_size[ARC_BUFC_METADATA]); 6958 size->value.ui64 = data->value.ui64 + metadata->value.ui64; 6959 evict_data->value.ui64 = 6960 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_DATA]); 6961 evict_metadata->value.ui64 = 6962 zfs_refcount_count(&state->arcs_esize[ARC_BUFC_METADATA]); 6963 } 6964 6965 static int 6966 arc_kstat_update(kstat_t *ksp, int rw) 6967 { 6968 arc_stats_t *as = ksp->ks_data; 6969 6970 if (rw == KSTAT_WRITE) 6971 return (SET_ERROR(EACCES)); 6972 6973 as->arcstat_hits.value.ui64 = 6974 wmsum_value(&arc_sums.arcstat_hits); 6975 as->arcstat_iohits.value.ui64 = 6976 wmsum_value(&arc_sums.arcstat_iohits); 6977 as->arcstat_misses.value.ui64 = 6978 wmsum_value(&arc_sums.arcstat_misses); 6979 as->arcstat_demand_data_hits.value.ui64 = 6980 wmsum_value(&arc_sums.arcstat_demand_data_hits); 6981 as->arcstat_demand_data_iohits.value.ui64 = 6982 wmsum_value(&arc_sums.arcstat_demand_data_iohits); 6983 as->arcstat_demand_data_misses.value.ui64 = 6984 wmsum_value(&arc_sums.arcstat_demand_data_misses); 6985 as->arcstat_demand_metadata_hits.value.ui64 = 6986 wmsum_value(&arc_sums.arcstat_demand_metadata_hits); 6987 
as->arcstat_demand_metadata_iohits.value.ui64 = 6988 wmsum_value(&arc_sums.arcstat_demand_metadata_iohits); 6989 as->arcstat_demand_metadata_misses.value.ui64 = 6990 wmsum_value(&arc_sums.arcstat_demand_metadata_misses); 6991 as->arcstat_prefetch_data_hits.value.ui64 = 6992 wmsum_value(&arc_sums.arcstat_prefetch_data_hits); 6993 as->arcstat_prefetch_data_iohits.value.ui64 = 6994 wmsum_value(&arc_sums.arcstat_prefetch_data_iohits); 6995 as->arcstat_prefetch_data_misses.value.ui64 = 6996 wmsum_value(&arc_sums.arcstat_prefetch_data_misses); 6997 as->arcstat_prefetch_metadata_hits.value.ui64 = 6998 wmsum_value(&arc_sums.arcstat_prefetch_metadata_hits); 6999 as->arcstat_prefetch_metadata_iohits.value.ui64 = 7000 wmsum_value(&arc_sums.arcstat_prefetch_metadata_iohits); 7001 as->arcstat_prefetch_metadata_misses.value.ui64 = 7002 wmsum_value(&arc_sums.arcstat_prefetch_metadata_misses); 7003 as->arcstat_mru_hits.value.ui64 = 7004 wmsum_value(&arc_sums.arcstat_mru_hits); 7005 as->arcstat_mru_ghost_hits.value.ui64 = 7006 wmsum_value(&arc_sums.arcstat_mru_ghost_hits); 7007 as->arcstat_mfu_hits.value.ui64 = 7008 wmsum_value(&arc_sums.arcstat_mfu_hits); 7009 as->arcstat_mfu_ghost_hits.value.ui64 = 7010 wmsum_value(&arc_sums.arcstat_mfu_ghost_hits); 7011 as->arcstat_uncached_hits.value.ui64 = 7012 wmsum_value(&arc_sums.arcstat_uncached_hits); 7013 as->arcstat_deleted.value.ui64 = 7014 wmsum_value(&arc_sums.arcstat_deleted); 7015 as->arcstat_mutex_miss.value.ui64 = 7016 wmsum_value(&arc_sums.arcstat_mutex_miss); 7017 as->arcstat_access_skip.value.ui64 = 7018 wmsum_value(&arc_sums.arcstat_access_skip); 7019 as->arcstat_evict_skip.value.ui64 = 7020 wmsum_value(&arc_sums.arcstat_evict_skip); 7021 as->arcstat_evict_not_enough.value.ui64 = 7022 wmsum_value(&arc_sums.arcstat_evict_not_enough); 7023 as->arcstat_evict_l2_cached.value.ui64 = 7024 wmsum_value(&arc_sums.arcstat_evict_l2_cached); 7025 as->arcstat_evict_l2_eligible.value.ui64 = 7026 wmsum_value(&arc_sums.arcstat_evict_l2_eligible); 7027 as->arcstat_evict_l2_eligible_mfu.value.ui64 = 7028 wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mfu); 7029 as->arcstat_evict_l2_eligible_mru.value.ui64 = 7030 wmsum_value(&arc_sums.arcstat_evict_l2_eligible_mru); 7031 as->arcstat_evict_l2_ineligible.value.ui64 = 7032 wmsum_value(&arc_sums.arcstat_evict_l2_ineligible); 7033 as->arcstat_evict_l2_skip.value.ui64 = 7034 wmsum_value(&arc_sums.arcstat_evict_l2_skip); 7035 as->arcstat_hash_collisions.value.ui64 = 7036 wmsum_value(&arc_sums.arcstat_hash_collisions); 7037 as->arcstat_hash_chains.value.ui64 = 7038 wmsum_value(&arc_sums.arcstat_hash_chains); 7039 as->arcstat_size.value.ui64 = 7040 aggsum_value(&arc_sums.arcstat_size); 7041 as->arcstat_compressed_size.value.ui64 = 7042 wmsum_value(&arc_sums.arcstat_compressed_size); 7043 as->arcstat_uncompressed_size.value.ui64 = 7044 wmsum_value(&arc_sums.arcstat_uncompressed_size); 7045 as->arcstat_overhead_size.value.ui64 = 7046 wmsum_value(&arc_sums.arcstat_overhead_size); 7047 as->arcstat_hdr_size.value.ui64 = 7048 wmsum_value(&arc_sums.arcstat_hdr_size); 7049 as->arcstat_data_size.value.ui64 = 7050 wmsum_value(&arc_sums.arcstat_data_size); 7051 as->arcstat_metadata_size.value.ui64 = 7052 wmsum_value(&arc_sums.arcstat_metadata_size); 7053 as->arcstat_dbuf_size.value.ui64 = 7054 wmsum_value(&arc_sums.arcstat_dbuf_size); 7055 #if defined(COMPAT_FREEBSD11) 7056 as->arcstat_other_size.value.ui64 = 7057 wmsum_value(&arc_sums.arcstat_bonus_size) + 7058 wmsum_value(&arc_sums.arcstat_dnode_size) + 7059 
wmsum_value(&arc_sums.arcstat_dbuf_size); 7060 #endif 7061 7062 arc_kstat_update_state(arc_anon, 7063 &as->arcstat_anon_size, 7064 &as->arcstat_anon_data, 7065 &as->arcstat_anon_metadata, 7066 &as->arcstat_anon_evictable_data, 7067 &as->arcstat_anon_evictable_metadata); 7068 arc_kstat_update_state(arc_mru, 7069 &as->arcstat_mru_size, 7070 &as->arcstat_mru_data, 7071 &as->arcstat_mru_metadata, 7072 &as->arcstat_mru_evictable_data, 7073 &as->arcstat_mru_evictable_metadata); 7074 arc_kstat_update_state(arc_mru_ghost, 7075 &as->arcstat_mru_ghost_size, 7076 &as->arcstat_mru_ghost_data, 7077 &as->arcstat_mru_ghost_metadata, 7078 &as->arcstat_mru_ghost_evictable_data, 7079 &as->arcstat_mru_ghost_evictable_metadata); 7080 arc_kstat_update_state(arc_mfu, 7081 &as->arcstat_mfu_size, 7082 &as->arcstat_mfu_data, 7083 &as->arcstat_mfu_metadata, 7084 &as->arcstat_mfu_evictable_data, 7085 &as->arcstat_mfu_evictable_metadata); 7086 arc_kstat_update_state(arc_mfu_ghost, 7087 &as->arcstat_mfu_ghost_size, 7088 &as->arcstat_mfu_ghost_data, 7089 &as->arcstat_mfu_ghost_metadata, 7090 &as->arcstat_mfu_ghost_evictable_data, 7091 &as->arcstat_mfu_ghost_evictable_metadata); 7092 arc_kstat_update_state(arc_uncached, 7093 &as->arcstat_uncached_size, 7094 &as->arcstat_uncached_data, 7095 &as->arcstat_uncached_metadata, 7096 &as->arcstat_uncached_evictable_data, 7097 &as->arcstat_uncached_evictable_metadata); 7098 7099 as->arcstat_dnode_size.value.ui64 = 7100 wmsum_value(&arc_sums.arcstat_dnode_size); 7101 as->arcstat_bonus_size.value.ui64 = 7102 wmsum_value(&arc_sums.arcstat_bonus_size); 7103 as->arcstat_l2_hits.value.ui64 = 7104 wmsum_value(&arc_sums.arcstat_l2_hits); 7105 as->arcstat_l2_misses.value.ui64 = 7106 wmsum_value(&arc_sums.arcstat_l2_misses); 7107 as->arcstat_l2_prefetch_asize.value.ui64 = 7108 wmsum_value(&arc_sums.arcstat_l2_prefetch_asize); 7109 as->arcstat_l2_mru_asize.value.ui64 = 7110 wmsum_value(&arc_sums.arcstat_l2_mru_asize); 7111 as->arcstat_l2_mfu_asize.value.ui64 = 7112 wmsum_value(&arc_sums.arcstat_l2_mfu_asize); 7113 as->arcstat_l2_bufc_data_asize.value.ui64 = 7114 wmsum_value(&arc_sums.arcstat_l2_bufc_data_asize); 7115 as->arcstat_l2_bufc_metadata_asize.value.ui64 = 7116 wmsum_value(&arc_sums.arcstat_l2_bufc_metadata_asize); 7117 as->arcstat_l2_feeds.value.ui64 = 7118 wmsum_value(&arc_sums.arcstat_l2_feeds); 7119 as->arcstat_l2_rw_clash.value.ui64 = 7120 wmsum_value(&arc_sums.arcstat_l2_rw_clash); 7121 as->arcstat_l2_read_bytes.value.ui64 = 7122 wmsum_value(&arc_sums.arcstat_l2_read_bytes); 7123 as->arcstat_l2_write_bytes.value.ui64 = 7124 wmsum_value(&arc_sums.arcstat_l2_write_bytes); 7125 as->arcstat_l2_writes_sent.value.ui64 = 7126 wmsum_value(&arc_sums.arcstat_l2_writes_sent); 7127 as->arcstat_l2_writes_done.value.ui64 = 7128 wmsum_value(&arc_sums.arcstat_l2_writes_done); 7129 as->arcstat_l2_writes_error.value.ui64 = 7130 wmsum_value(&arc_sums.arcstat_l2_writes_error); 7131 as->arcstat_l2_writes_lock_retry.value.ui64 = 7132 wmsum_value(&arc_sums.arcstat_l2_writes_lock_retry); 7133 as->arcstat_l2_evict_lock_retry.value.ui64 = 7134 wmsum_value(&arc_sums.arcstat_l2_evict_lock_retry); 7135 as->arcstat_l2_evict_reading.value.ui64 = 7136 wmsum_value(&arc_sums.arcstat_l2_evict_reading); 7137 as->arcstat_l2_evict_l1cached.value.ui64 = 7138 wmsum_value(&arc_sums.arcstat_l2_evict_l1cached); 7139 as->arcstat_l2_free_on_write.value.ui64 = 7140 wmsum_value(&arc_sums.arcstat_l2_free_on_write); 7141 as->arcstat_l2_abort_lowmem.value.ui64 = 7142 wmsum_value(&arc_sums.arcstat_l2_abort_lowmem); 7143 
as->arcstat_l2_cksum_bad.value.ui64 = 7144 wmsum_value(&arc_sums.arcstat_l2_cksum_bad); 7145 as->arcstat_l2_io_error.value.ui64 = 7146 wmsum_value(&arc_sums.arcstat_l2_io_error); 7147 as->arcstat_l2_lsize.value.ui64 = 7148 wmsum_value(&arc_sums.arcstat_l2_lsize); 7149 as->arcstat_l2_psize.value.ui64 = 7150 wmsum_value(&arc_sums.arcstat_l2_psize); 7151 as->arcstat_l2_hdr_size.value.ui64 = 7152 aggsum_value(&arc_sums.arcstat_l2_hdr_size); 7153 as->arcstat_l2_log_blk_writes.value.ui64 = 7154 wmsum_value(&arc_sums.arcstat_l2_log_blk_writes); 7155 as->arcstat_l2_log_blk_asize.value.ui64 = 7156 wmsum_value(&arc_sums.arcstat_l2_log_blk_asize); 7157 as->arcstat_l2_log_blk_count.value.ui64 = 7158 wmsum_value(&arc_sums.arcstat_l2_log_blk_count); 7159 as->arcstat_l2_rebuild_success.value.ui64 = 7160 wmsum_value(&arc_sums.arcstat_l2_rebuild_success); 7161 as->arcstat_l2_rebuild_abort_unsupported.value.ui64 = 7162 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_unsupported); 7163 as->arcstat_l2_rebuild_abort_io_errors.value.ui64 = 7164 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_io_errors); 7165 as->arcstat_l2_rebuild_abort_dh_errors.value.ui64 = 7166 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_dh_errors); 7167 as->arcstat_l2_rebuild_abort_cksum_lb_errors.value.ui64 = 7168 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors); 7169 as->arcstat_l2_rebuild_abort_lowmem.value.ui64 = 7170 wmsum_value(&arc_sums.arcstat_l2_rebuild_abort_lowmem); 7171 as->arcstat_l2_rebuild_size.value.ui64 = 7172 wmsum_value(&arc_sums.arcstat_l2_rebuild_size); 7173 as->arcstat_l2_rebuild_asize.value.ui64 = 7174 wmsum_value(&arc_sums.arcstat_l2_rebuild_asize); 7175 as->arcstat_l2_rebuild_bufs.value.ui64 = 7176 wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs); 7177 as->arcstat_l2_rebuild_bufs_precached.value.ui64 = 7178 wmsum_value(&arc_sums.arcstat_l2_rebuild_bufs_precached); 7179 as->arcstat_l2_rebuild_log_blks.value.ui64 = 7180 wmsum_value(&arc_sums.arcstat_l2_rebuild_log_blks); 7181 as->arcstat_memory_throttle_count.value.ui64 = 7182 wmsum_value(&arc_sums.arcstat_memory_throttle_count); 7183 as->arcstat_memory_direct_count.value.ui64 = 7184 wmsum_value(&arc_sums.arcstat_memory_direct_count); 7185 as->arcstat_memory_indirect_count.value.ui64 = 7186 wmsum_value(&arc_sums.arcstat_memory_indirect_count); 7187 7188 as->arcstat_memory_all_bytes.value.ui64 = 7189 arc_all_memory(); 7190 as->arcstat_memory_free_bytes.value.ui64 = 7191 arc_free_memory(); 7192 as->arcstat_memory_available_bytes.value.i64 = 7193 arc_available_memory(); 7194 7195 as->arcstat_prune.value.ui64 = 7196 wmsum_value(&arc_sums.arcstat_prune); 7197 as->arcstat_meta_used.value.ui64 = 7198 wmsum_value(&arc_sums.arcstat_meta_used); 7199 as->arcstat_async_upgrade_sync.value.ui64 = 7200 wmsum_value(&arc_sums.arcstat_async_upgrade_sync); 7201 as->arcstat_predictive_prefetch.value.ui64 = 7202 wmsum_value(&arc_sums.arcstat_predictive_prefetch); 7203 as->arcstat_demand_hit_predictive_prefetch.value.ui64 = 7204 wmsum_value(&arc_sums.arcstat_demand_hit_predictive_prefetch); 7205 as->arcstat_demand_iohit_predictive_prefetch.value.ui64 = 7206 wmsum_value(&arc_sums.arcstat_demand_iohit_predictive_prefetch); 7207 as->arcstat_prescient_prefetch.value.ui64 = 7208 wmsum_value(&arc_sums.arcstat_prescient_prefetch); 7209 as->arcstat_demand_hit_prescient_prefetch.value.ui64 = 7210 wmsum_value(&arc_sums.arcstat_demand_hit_prescient_prefetch); 7211 as->arcstat_demand_iohit_prescient_prefetch.value.ui64 = 7212 
wmsum_value(&arc_sums.arcstat_demand_iohit_prescient_prefetch); 7213 as->arcstat_raw_size.value.ui64 = 7214 wmsum_value(&arc_sums.arcstat_raw_size); 7215 as->arcstat_cached_only_in_progress.value.ui64 = 7216 wmsum_value(&arc_sums.arcstat_cached_only_in_progress); 7217 as->arcstat_abd_chunk_waste_size.value.ui64 = 7218 wmsum_value(&arc_sums.arcstat_abd_chunk_waste_size); 7219 7220 return (0); 7221 } 7222 7223 /* 7224 * This function *must* return indices evenly distributed between all 7225 * sublists of the multilist. This is needed due to how the ARC eviction 7226 * code is laid out; arc_evict_state() assumes ARC buffers are evenly 7227 * distributed between all sublists and uses this assumption when 7228 * deciding which sublist to evict from and how much to evict from it. 7229 */ 7230 static unsigned int 7231 arc_state_multilist_index_func(multilist_t *ml, void *obj) 7232 { 7233 arc_buf_hdr_t *hdr = obj; 7234 7235 /* 7236 * We rely on b_dva to generate evenly distributed index 7237 * numbers using buf_hash below. So, as an added precaution, 7238 * let's make sure we never add empty buffers to the arc lists. 7239 */ 7240 ASSERT(!HDR_EMPTY(hdr)); 7241 7242 /* 7243 * The assumption here is that the hash value for a given 7244 * arc_buf_hdr_t will remain constant throughout its lifetime 7245 * (i.e. its b_spa, b_dva, and b_birth fields don't change). 7246 * Thus, we don't need to store the header's sublist index 7247 * on insertion, as this index can be recalculated on removal. 7248 * 7249 * Also, the low order bits of the hash value are thought to be 7250 * distributed evenly. Otherwise, when the multilist 7251 * has a power-of-two number of sublists, each sublist's usage 7252 * would not be evenly distributed. In this context a full 64-bit 7253 * division would be a waste of time, so limit it to 32 bits. 7254 */ 7255 return ((unsigned int)buf_hash(hdr->b_spa, &hdr->b_dva, hdr->b_birth) % 7256 multilist_get_num_sublists(ml)); 7257 } 7258 7259 static unsigned int 7260 arc_state_l2c_multilist_index_func(multilist_t *ml, void *obj) 7261 { 7262 panic("Header %p insert into arc_l2c_only %p", obj, ml); 7263 } 7264 7265 #define WARN_IF_TUNING_IGNORED(tuning, value, do_warn) do { \ 7266 if ((do_warn) && (tuning) && ((tuning) != (value))) { \ 7267 cmn_err(CE_WARN, \ 7268 "ignoring tunable %s (using %llu instead)", \ 7269 (#tuning), (u_longlong_t)(value)); \ 7270 } \ 7271 } while (0) 7272 7273 /* 7274 * Called during module initialization and periodically thereafter to 7275 * apply reasonable changes to the exposed performance tunings. Can also be 7276 * called explicitly by param_set_arc_*() functions when ARC tunables are 7277 * updated manually. Non-zero zfs_* values which differ from the currently set 7278 * values will be applied.
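 * For example (illustrative only, based on the range checks below):
 *
 *	zfs_arc_min = 16M  -> ignored: below the 32M floor
 *	                      (2ULL << SPA_MAXBLOCKSHIFT); a warning is
 *	                      logged via WARN_IF_TUNING_IGNORED() when
 *	                      verbose is set.
 *	zfs_arc_min = 4G   -> applied when it does not exceed arc_c_max;
 *	                      arc_c is then raised to at least 4G.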
7279 */ 7280 void 7281 arc_tuning_update(boolean_t verbose) 7282 { 7283 uint64_t allmem = arc_all_memory(); 7284 7285 /* Valid range: 32M - <arc_c_max> */ 7286 if ((zfs_arc_min) && (zfs_arc_min != arc_c_min) && 7287 (zfs_arc_min >= 2ULL << SPA_MAXBLOCKSHIFT) && 7288 (zfs_arc_min <= arc_c_max)) { 7289 arc_c_min = zfs_arc_min; 7290 arc_c = MAX(arc_c, arc_c_min); 7291 } 7292 WARN_IF_TUNING_IGNORED(zfs_arc_min, arc_c_min, verbose); 7293 7294 /* Valid range: 64M - <all physical memory> */ 7295 if ((zfs_arc_max) && (zfs_arc_max != arc_c_max) && 7296 (zfs_arc_max >= MIN_ARC_MAX) && (zfs_arc_max < allmem) && 7297 (zfs_arc_max > arc_c_min)) { 7298 arc_c_max = zfs_arc_max; 7299 arc_c = MIN(arc_c, arc_c_max); 7300 if (arc_dnode_limit > arc_c_max) 7301 arc_dnode_limit = arc_c_max; 7302 } 7303 WARN_IF_TUNING_IGNORED(zfs_arc_max, arc_c_max, verbose); 7304 7305 /* Valid range: 0 - <all physical memory> */ 7306 arc_dnode_limit = zfs_arc_dnode_limit ? zfs_arc_dnode_limit : 7307 MIN(zfs_arc_dnode_limit_percent, 100) * arc_c_max / 100; 7308 WARN_IF_TUNING_IGNORED(zfs_arc_dnode_limit, arc_dnode_limit, verbose); 7309 7310 /* Valid range: 1 - N */ 7311 if (zfs_arc_grow_retry) 7312 arc_grow_retry = zfs_arc_grow_retry; 7313 7314 /* Valid range: 1 - N */ 7315 if (zfs_arc_shrink_shift) { 7316 arc_shrink_shift = zfs_arc_shrink_shift; 7317 arc_no_grow_shift = MIN(arc_no_grow_shift, arc_shrink_shift -1); 7318 } 7319 7320 /* Valid range: 1 - N ms */ 7321 if (zfs_arc_min_prefetch_ms) 7322 arc_min_prefetch_ms = zfs_arc_min_prefetch_ms; 7323 7324 /* Valid range: 1 - N ms */ 7325 if (zfs_arc_min_prescient_prefetch_ms) { 7326 arc_min_prescient_prefetch_ms = 7327 zfs_arc_min_prescient_prefetch_ms; 7328 } 7329 7330 /* Valid range: 0 - 100 */ 7331 if (zfs_arc_lotsfree_percent <= 100) 7332 arc_lotsfree_percent = zfs_arc_lotsfree_percent; 7333 WARN_IF_TUNING_IGNORED(zfs_arc_lotsfree_percent, arc_lotsfree_percent, 7334 verbose); 7335 7336 /* Valid range: 0 - <all physical memory> */ 7337 if ((zfs_arc_sys_free) && (zfs_arc_sys_free != arc_sys_free)) 7338 arc_sys_free = MIN(zfs_arc_sys_free, allmem); 7339 WARN_IF_TUNING_IGNORED(zfs_arc_sys_free, arc_sys_free, verbose); 7340 } 7341 7342 static void 7343 arc_state_multilist_init(multilist_t *ml, 7344 multilist_sublist_index_func_t *index_func, int *maxcountp) 7345 { 7346 multilist_create(ml, sizeof (arc_buf_hdr_t), 7347 offsetof(arc_buf_hdr_t, b_l1hdr.b_arc_node), index_func); 7348 *maxcountp = MAX(*maxcountp, multilist_get_num_sublists(ml)); 7349 } 7350 7351 static void 7352 arc_state_init(void) 7353 { 7354 int num_sublists = 0; 7355 7356 arc_state_multilist_init(&arc_mru->arcs_list[ARC_BUFC_METADATA], 7357 arc_state_multilist_index_func, &num_sublists); 7358 arc_state_multilist_init(&arc_mru->arcs_list[ARC_BUFC_DATA], 7359 arc_state_multilist_index_func, &num_sublists); 7360 arc_state_multilist_init(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA], 7361 arc_state_multilist_index_func, &num_sublists); 7362 arc_state_multilist_init(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA], 7363 arc_state_multilist_index_func, &num_sublists); 7364 arc_state_multilist_init(&arc_mfu->arcs_list[ARC_BUFC_METADATA], 7365 arc_state_multilist_index_func, &num_sublists); 7366 arc_state_multilist_init(&arc_mfu->arcs_list[ARC_BUFC_DATA], 7367 arc_state_multilist_index_func, &num_sublists); 7368 arc_state_multilist_init(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA], 7369 arc_state_multilist_index_func, &num_sublists); 7370 arc_state_multilist_init(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA], 7371 
arc_state_multilist_index_func, &num_sublists); 7372 arc_state_multilist_init(&arc_uncached->arcs_list[ARC_BUFC_METADATA], 7373 arc_state_multilist_index_func, &num_sublists); 7374 arc_state_multilist_init(&arc_uncached->arcs_list[ARC_BUFC_DATA], 7375 arc_state_multilist_index_func, &num_sublists); 7376 7377 /* 7378 * L2 headers should never be on the L2 state list since they don't 7379 * have L1 headers allocated. Special index function asserts that. 7380 */ 7381 arc_state_multilist_init(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA], 7382 arc_state_l2c_multilist_index_func, &num_sublists); 7383 arc_state_multilist_init(&arc_l2c_only->arcs_list[ARC_BUFC_DATA], 7384 arc_state_l2c_multilist_index_func, &num_sublists); 7385 7386 /* 7387 * Keep track of the number of markers needed to reclaim buffers from 7388 * any ARC state. The markers will be pre-allocated so as to minimize 7389 * the number of memory allocations performed by the eviction thread. 7390 */ 7391 arc_state_evict_marker_count = num_sublists; 7392 7393 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7394 zfs_refcount_create(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7395 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 7396 zfs_refcount_create(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 7397 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 7398 zfs_refcount_create(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 7399 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 7400 zfs_refcount_create(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 7401 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 7402 zfs_refcount_create(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 7403 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 7404 zfs_refcount_create(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 7405 zfs_refcount_create(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]); 7406 zfs_refcount_create(&arc_uncached->arcs_esize[ARC_BUFC_DATA]); 7407 7408 zfs_refcount_create(&arc_anon->arcs_size[ARC_BUFC_DATA]); 7409 zfs_refcount_create(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 7410 zfs_refcount_create(&arc_mru->arcs_size[ARC_BUFC_DATA]); 7411 zfs_refcount_create(&arc_mru->arcs_size[ARC_BUFC_METADATA]); 7412 zfs_refcount_create(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]); 7413 zfs_refcount_create(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]); 7414 zfs_refcount_create(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 7415 zfs_refcount_create(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 7416 zfs_refcount_create(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]); 7417 zfs_refcount_create(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]); 7418 zfs_refcount_create(&arc_l2c_only->arcs_size[ARC_BUFC_DATA]); 7419 zfs_refcount_create(&arc_l2c_only->arcs_size[ARC_BUFC_METADATA]); 7420 zfs_refcount_create(&arc_uncached->arcs_size[ARC_BUFC_DATA]); 7421 zfs_refcount_create(&arc_uncached->arcs_size[ARC_BUFC_METADATA]); 7422 7423 wmsum_init(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA], 0); 7424 wmsum_init(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA], 0); 7425 wmsum_init(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA], 0); 7426 wmsum_init(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA], 0); 7427 7428 wmsum_init(&arc_sums.arcstat_hits, 0); 7429 wmsum_init(&arc_sums.arcstat_iohits, 0); 7430 wmsum_init(&arc_sums.arcstat_misses, 0); 7431 wmsum_init(&arc_sums.arcstat_demand_data_hits, 0); 7432 wmsum_init(&arc_sums.arcstat_demand_data_iohits, 0); 7433 wmsum_init(&arc_sums.arcstat_demand_data_misses, 0); 7434 wmsum_init(&arc_sums.arcstat_demand_metadata_hits, 
0); 7435 wmsum_init(&arc_sums.arcstat_demand_metadata_iohits, 0); 7436 wmsum_init(&arc_sums.arcstat_demand_metadata_misses, 0); 7437 wmsum_init(&arc_sums.arcstat_prefetch_data_hits, 0); 7438 wmsum_init(&arc_sums.arcstat_prefetch_data_iohits, 0); 7439 wmsum_init(&arc_sums.arcstat_prefetch_data_misses, 0); 7440 wmsum_init(&arc_sums.arcstat_prefetch_metadata_hits, 0); 7441 wmsum_init(&arc_sums.arcstat_prefetch_metadata_iohits, 0); 7442 wmsum_init(&arc_sums.arcstat_prefetch_metadata_misses, 0); 7443 wmsum_init(&arc_sums.arcstat_mru_hits, 0); 7444 wmsum_init(&arc_sums.arcstat_mru_ghost_hits, 0); 7445 wmsum_init(&arc_sums.arcstat_mfu_hits, 0); 7446 wmsum_init(&arc_sums.arcstat_mfu_ghost_hits, 0); 7447 wmsum_init(&arc_sums.arcstat_uncached_hits, 0); 7448 wmsum_init(&arc_sums.arcstat_deleted, 0); 7449 wmsum_init(&arc_sums.arcstat_mutex_miss, 0); 7450 wmsum_init(&arc_sums.arcstat_access_skip, 0); 7451 wmsum_init(&arc_sums.arcstat_evict_skip, 0); 7452 wmsum_init(&arc_sums.arcstat_evict_not_enough, 0); 7453 wmsum_init(&arc_sums.arcstat_evict_l2_cached, 0); 7454 wmsum_init(&arc_sums.arcstat_evict_l2_eligible, 0); 7455 wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mfu, 0); 7456 wmsum_init(&arc_sums.arcstat_evict_l2_eligible_mru, 0); 7457 wmsum_init(&arc_sums.arcstat_evict_l2_ineligible, 0); 7458 wmsum_init(&arc_sums.arcstat_evict_l2_skip, 0); 7459 wmsum_init(&arc_sums.arcstat_hash_collisions, 0); 7460 wmsum_init(&arc_sums.arcstat_hash_chains, 0); 7461 aggsum_init(&arc_sums.arcstat_size, 0); 7462 wmsum_init(&arc_sums.arcstat_compressed_size, 0); 7463 wmsum_init(&arc_sums.arcstat_uncompressed_size, 0); 7464 wmsum_init(&arc_sums.arcstat_overhead_size, 0); 7465 wmsum_init(&arc_sums.arcstat_hdr_size, 0); 7466 wmsum_init(&arc_sums.arcstat_data_size, 0); 7467 wmsum_init(&arc_sums.arcstat_metadata_size, 0); 7468 wmsum_init(&arc_sums.arcstat_dbuf_size, 0); 7469 wmsum_init(&arc_sums.arcstat_dnode_size, 0); 7470 wmsum_init(&arc_sums.arcstat_bonus_size, 0); 7471 wmsum_init(&arc_sums.arcstat_l2_hits, 0); 7472 wmsum_init(&arc_sums.arcstat_l2_misses, 0); 7473 wmsum_init(&arc_sums.arcstat_l2_prefetch_asize, 0); 7474 wmsum_init(&arc_sums.arcstat_l2_mru_asize, 0); 7475 wmsum_init(&arc_sums.arcstat_l2_mfu_asize, 0); 7476 wmsum_init(&arc_sums.arcstat_l2_bufc_data_asize, 0); 7477 wmsum_init(&arc_sums.arcstat_l2_bufc_metadata_asize, 0); 7478 wmsum_init(&arc_sums.arcstat_l2_feeds, 0); 7479 wmsum_init(&arc_sums.arcstat_l2_rw_clash, 0); 7480 wmsum_init(&arc_sums.arcstat_l2_read_bytes, 0); 7481 wmsum_init(&arc_sums.arcstat_l2_write_bytes, 0); 7482 wmsum_init(&arc_sums.arcstat_l2_writes_sent, 0); 7483 wmsum_init(&arc_sums.arcstat_l2_writes_done, 0); 7484 wmsum_init(&arc_sums.arcstat_l2_writes_error, 0); 7485 wmsum_init(&arc_sums.arcstat_l2_writes_lock_retry, 0); 7486 wmsum_init(&arc_sums.arcstat_l2_evict_lock_retry, 0); 7487 wmsum_init(&arc_sums.arcstat_l2_evict_reading, 0); 7488 wmsum_init(&arc_sums.arcstat_l2_evict_l1cached, 0); 7489 wmsum_init(&arc_sums.arcstat_l2_free_on_write, 0); 7490 wmsum_init(&arc_sums.arcstat_l2_abort_lowmem, 0); 7491 wmsum_init(&arc_sums.arcstat_l2_cksum_bad, 0); 7492 wmsum_init(&arc_sums.arcstat_l2_io_error, 0); 7493 wmsum_init(&arc_sums.arcstat_l2_lsize, 0); 7494 wmsum_init(&arc_sums.arcstat_l2_psize, 0); 7495 aggsum_init(&arc_sums.arcstat_l2_hdr_size, 0); 7496 wmsum_init(&arc_sums.arcstat_l2_log_blk_writes, 0); 7497 wmsum_init(&arc_sums.arcstat_l2_log_blk_asize, 0); 7498 wmsum_init(&arc_sums.arcstat_l2_log_blk_count, 0); 7499 wmsum_init(&arc_sums.arcstat_l2_rebuild_success, 0); 7500 
wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_unsupported, 0); 7501 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_io_errors, 0); 7502 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_dh_errors, 0); 7503 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors, 0); 7504 wmsum_init(&arc_sums.arcstat_l2_rebuild_abort_lowmem, 0); 7505 wmsum_init(&arc_sums.arcstat_l2_rebuild_size, 0); 7506 wmsum_init(&arc_sums.arcstat_l2_rebuild_asize, 0); 7507 wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs, 0); 7508 wmsum_init(&arc_sums.arcstat_l2_rebuild_bufs_precached, 0); 7509 wmsum_init(&arc_sums.arcstat_l2_rebuild_log_blks, 0); 7510 wmsum_init(&arc_sums.arcstat_memory_throttle_count, 0); 7511 wmsum_init(&arc_sums.arcstat_memory_direct_count, 0); 7512 wmsum_init(&arc_sums.arcstat_memory_indirect_count, 0); 7513 wmsum_init(&arc_sums.arcstat_prune, 0); 7514 wmsum_init(&arc_sums.arcstat_meta_used, 0); 7515 wmsum_init(&arc_sums.arcstat_async_upgrade_sync, 0); 7516 wmsum_init(&arc_sums.arcstat_predictive_prefetch, 0); 7517 wmsum_init(&arc_sums.arcstat_demand_hit_predictive_prefetch, 0); 7518 wmsum_init(&arc_sums.arcstat_demand_iohit_predictive_prefetch, 0); 7519 wmsum_init(&arc_sums.arcstat_prescient_prefetch, 0); 7520 wmsum_init(&arc_sums.arcstat_demand_hit_prescient_prefetch, 0); 7521 wmsum_init(&arc_sums.arcstat_demand_iohit_prescient_prefetch, 0); 7522 wmsum_init(&arc_sums.arcstat_raw_size, 0); 7523 wmsum_init(&arc_sums.arcstat_cached_only_in_progress, 0); 7524 wmsum_init(&arc_sums.arcstat_abd_chunk_waste_size, 0); 7525 7526 arc_anon->arcs_state = ARC_STATE_ANON; 7527 arc_mru->arcs_state = ARC_STATE_MRU; 7528 arc_mru_ghost->arcs_state = ARC_STATE_MRU_GHOST; 7529 arc_mfu->arcs_state = ARC_STATE_MFU; 7530 arc_mfu_ghost->arcs_state = ARC_STATE_MFU_GHOST; 7531 arc_l2c_only->arcs_state = ARC_STATE_L2C_ONLY; 7532 arc_uncached->arcs_state = ARC_STATE_UNCACHED; 7533 } 7534 7535 static void 7536 arc_state_fini(void) 7537 { 7538 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_METADATA]); 7539 zfs_refcount_destroy(&arc_anon->arcs_esize[ARC_BUFC_DATA]); 7540 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_METADATA]); 7541 zfs_refcount_destroy(&arc_mru->arcs_esize[ARC_BUFC_DATA]); 7542 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_METADATA]); 7543 zfs_refcount_destroy(&arc_mru_ghost->arcs_esize[ARC_BUFC_DATA]); 7544 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_METADATA]); 7545 zfs_refcount_destroy(&arc_mfu->arcs_esize[ARC_BUFC_DATA]); 7546 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_METADATA]); 7547 zfs_refcount_destroy(&arc_mfu_ghost->arcs_esize[ARC_BUFC_DATA]); 7548 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_METADATA]); 7549 zfs_refcount_destroy(&arc_l2c_only->arcs_esize[ARC_BUFC_DATA]); 7550 zfs_refcount_destroy(&arc_uncached->arcs_esize[ARC_BUFC_METADATA]); 7551 zfs_refcount_destroy(&arc_uncached->arcs_esize[ARC_BUFC_DATA]); 7552 7553 zfs_refcount_destroy(&arc_anon->arcs_size[ARC_BUFC_DATA]); 7554 zfs_refcount_destroy(&arc_anon->arcs_size[ARC_BUFC_METADATA]); 7555 zfs_refcount_destroy(&arc_mru->arcs_size[ARC_BUFC_DATA]); 7556 zfs_refcount_destroy(&arc_mru->arcs_size[ARC_BUFC_METADATA]); 7557 zfs_refcount_destroy(&arc_mru_ghost->arcs_size[ARC_BUFC_DATA]); 7558 zfs_refcount_destroy(&arc_mru_ghost->arcs_size[ARC_BUFC_METADATA]); 7559 zfs_refcount_destroy(&arc_mfu->arcs_size[ARC_BUFC_DATA]); 7560 zfs_refcount_destroy(&arc_mfu->arcs_size[ARC_BUFC_METADATA]); 7561 zfs_refcount_destroy(&arc_mfu_ghost->arcs_size[ARC_BUFC_DATA]); 7562 
zfs_refcount_destroy(&arc_mfu_ghost->arcs_size[ARC_BUFC_METADATA]); 7563 zfs_refcount_destroy(&arc_l2c_only->arcs_size[ARC_BUFC_DATA]); 7564 zfs_refcount_destroy(&arc_l2c_only->arcs_size[ARC_BUFC_METADATA]); 7565 zfs_refcount_destroy(&arc_uncached->arcs_size[ARC_BUFC_DATA]); 7566 zfs_refcount_destroy(&arc_uncached->arcs_size[ARC_BUFC_METADATA]); 7567 7568 multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_METADATA]); 7569 multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_METADATA]); 7570 multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_METADATA]); 7571 multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_METADATA]); 7572 multilist_destroy(&arc_mru->arcs_list[ARC_BUFC_DATA]); 7573 multilist_destroy(&arc_mru_ghost->arcs_list[ARC_BUFC_DATA]); 7574 multilist_destroy(&arc_mfu->arcs_list[ARC_BUFC_DATA]); 7575 multilist_destroy(&arc_mfu_ghost->arcs_list[ARC_BUFC_DATA]); 7576 multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_METADATA]); 7577 multilist_destroy(&arc_l2c_only->arcs_list[ARC_BUFC_DATA]); 7578 multilist_destroy(&arc_uncached->arcs_list[ARC_BUFC_METADATA]); 7579 multilist_destroy(&arc_uncached->arcs_list[ARC_BUFC_DATA]); 7580 7581 wmsum_fini(&arc_mru_ghost->arcs_hits[ARC_BUFC_DATA]); 7582 wmsum_fini(&arc_mru_ghost->arcs_hits[ARC_BUFC_METADATA]); 7583 wmsum_fini(&arc_mfu_ghost->arcs_hits[ARC_BUFC_DATA]); 7584 wmsum_fini(&arc_mfu_ghost->arcs_hits[ARC_BUFC_METADATA]); 7585 7586 wmsum_fini(&arc_sums.arcstat_hits); 7587 wmsum_fini(&arc_sums.arcstat_iohits); 7588 wmsum_fini(&arc_sums.arcstat_misses); 7589 wmsum_fini(&arc_sums.arcstat_demand_data_hits); 7590 wmsum_fini(&arc_sums.arcstat_demand_data_iohits); 7591 wmsum_fini(&arc_sums.arcstat_demand_data_misses); 7592 wmsum_fini(&arc_sums.arcstat_demand_metadata_hits); 7593 wmsum_fini(&arc_sums.arcstat_demand_metadata_iohits); 7594 wmsum_fini(&arc_sums.arcstat_demand_metadata_misses); 7595 wmsum_fini(&arc_sums.arcstat_prefetch_data_hits); 7596 wmsum_fini(&arc_sums.arcstat_prefetch_data_iohits); 7597 wmsum_fini(&arc_sums.arcstat_prefetch_data_misses); 7598 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_hits); 7599 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_iohits); 7600 wmsum_fini(&arc_sums.arcstat_prefetch_metadata_misses); 7601 wmsum_fini(&arc_sums.arcstat_mru_hits); 7602 wmsum_fini(&arc_sums.arcstat_mru_ghost_hits); 7603 wmsum_fini(&arc_sums.arcstat_mfu_hits); 7604 wmsum_fini(&arc_sums.arcstat_mfu_ghost_hits); 7605 wmsum_fini(&arc_sums.arcstat_uncached_hits); 7606 wmsum_fini(&arc_sums.arcstat_deleted); 7607 wmsum_fini(&arc_sums.arcstat_mutex_miss); 7608 wmsum_fini(&arc_sums.arcstat_access_skip); 7609 wmsum_fini(&arc_sums.arcstat_evict_skip); 7610 wmsum_fini(&arc_sums.arcstat_evict_not_enough); 7611 wmsum_fini(&arc_sums.arcstat_evict_l2_cached); 7612 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible); 7613 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mfu); 7614 wmsum_fini(&arc_sums.arcstat_evict_l2_eligible_mru); 7615 wmsum_fini(&arc_sums.arcstat_evict_l2_ineligible); 7616 wmsum_fini(&arc_sums.arcstat_evict_l2_skip); 7617 wmsum_fini(&arc_sums.arcstat_hash_collisions); 7618 wmsum_fini(&arc_sums.arcstat_hash_chains); 7619 aggsum_fini(&arc_sums.arcstat_size); 7620 wmsum_fini(&arc_sums.arcstat_compressed_size); 7621 wmsum_fini(&arc_sums.arcstat_uncompressed_size); 7622 wmsum_fini(&arc_sums.arcstat_overhead_size); 7623 wmsum_fini(&arc_sums.arcstat_hdr_size); 7624 wmsum_fini(&arc_sums.arcstat_data_size); 7625 wmsum_fini(&arc_sums.arcstat_metadata_size); 7626 wmsum_fini(&arc_sums.arcstat_dbuf_size); 7627 wmsum_fini(&arc_sums.arcstat_dnode_size); 
7628 wmsum_fini(&arc_sums.arcstat_bonus_size); 7629 wmsum_fini(&arc_sums.arcstat_l2_hits); 7630 wmsum_fini(&arc_sums.arcstat_l2_misses); 7631 wmsum_fini(&arc_sums.arcstat_l2_prefetch_asize); 7632 wmsum_fini(&arc_sums.arcstat_l2_mru_asize); 7633 wmsum_fini(&arc_sums.arcstat_l2_mfu_asize); 7634 wmsum_fini(&arc_sums.arcstat_l2_bufc_data_asize); 7635 wmsum_fini(&arc_sums.arcstat_l2_bufc_metadata_asize); 7636 wmsum_fini(&arc_sums.arcstat_l2_feeds); 7637 wmsum_fini(&arc_sums.arcstat_l2_rw_clash); 7638 wmsum_fini(&arc_sums.arcstat_l2_read_bytes); 7639 wmsum_fini(&arc_sums.arcstat_l2_write_bytes); 7640 wmsum_fini(&arc_sums.arcstat_l2_writes_sent); 7641 wmsum_fini(&arc_sums.arcstat_l2_writes_done); 7642 wmsum_fini(&arc_sums.arcstat_l2_writes_error); 7643 wmsum_fini(&arc_sums.arcstat_l2_writes_lock_retry); 7644 wmsum_fini(&arc_sums.arcstat_l2_evict_lock_retry); 7645 wmsum_fini(&arc_sums.arcstat_l2_evict_reading); 7646 wmsum_fini(&arc_sums.arcstat_l2_evict_l1cached); 7647 wmsum_fini(&arc_sums.arcstat_l2_free_on_write); 7648 wmsum_fini(&arc_sums.arcstat_l2_abort_lowmem); 7649 wmsum_fini(&arc_sums.arcstat_l2_cksum_bad); 7650 wmsum_fini(&arc_sums.arcstat_l2_io_error); 7651 wmsum_fini(&arc_sums.arcstat_l2_lsize); 7652 wmsum_fini(&arc_sums.arcstat_l2_psize); 7653 aggsum_fini(&arc_sums.arcstat_l2_hdr_size); 7654 wmsum_fini(&arc_sums.arcstat_l2_log_blk_writes); 7655 wmsum_fini(&arc_sums.arcstat_l2_log_blk_asize); 7656 wmsum_fini(&arc_sums.arcstat_l2_log_blk_count); 7657 wmsum_fini(&arc_sums.arcstat_l2_rebuild_success); 7658 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_unsupported); 7659 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_io_errors); 7660 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_dh_errors); 7661 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_cksum_lb_errors); 7662 wmsum_fini(&arc_sums.arcstat_l2_rebuild_abort_lowmem); 7663 wmsum_fini(&arc_sums.arcstat_l2_rebuild_size); 7664 wmsum_fini(&arc_sums.arcstat_l2_rebuild_asize); 7665 wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs); 7666 wmsum_fini(&arc_sums.arcstat_l2_rebuild_bufs_precached); 7667 wmsum_fini(&arc_sums.arcstat_l2_rebuild_log_blks); 7668 wmsum_fini(&arc_sums.arcstat_memory_throttle_count); 7669 wmsum_fini(&arc_sums.arcstat_memory_direct_count); 7670 wmsum_fini(&arc_sums.arcstat_memory_indirect_count); 7671 wmsum_fini(&arc_sums.arcstat_prune); 7672 wmsum_fini(&arc_sums.arcstat_meta_used); 7673 wmsum_fini(&arc_sums.arcstat_async_upgrade_sync); 7674 wmsum_fini(&arc_sums.arcstat_predictive_prefetch); 7675 wmsum_fini(&arc_sums.arcstat_demand_hit_predictive_prefetch); 7676 wmsum_fini(&arc_sums.arcstat_demand_iohit_predictive_prefetch); 7677 wmsum_fini(&arc_sums.arcstat_prescient_prefetch); 7678 wmsum_fini(&arc_sums.arcstat_demand_hit_prescient_prefetch); 7679 wmsum_fini(&arc_sums.arcstat_demand_iohit_prescient_prefetch); 7680 wmsum_fini(&arc_sums.arcstat_raw_size); 7681 wmsum_fini(&arc_sums.arcstat_cached_only_in_progress); 7682 wmsum_fini(&arc_sums.arcstat_abd_chunk_waste_size); 7683 } 7684 7685 uint64_t 7686 arc_target_bytes(void) 7687 { 7688 return (arc_c); 7689 } 7690 7691 void 7692 arc_set_limits(uint64_t allmem) 7693 { 7694 /* Set min cache to 1/32 of all memory, or 32MB, whichever is more. */ 7695 arc_c_min = MAX(allmem / 32, 2ULL << SPA_MAXBLOCKSHIFT); 7696 7697 /* How to set default max varies by platform. 
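 * As a rough illustration: on a 256 GB machine the 1/32 rule above yields
 * an 8 GB arc_c_min, while arc_default_max() typically returns a large
 * fraction of physical memory; the exact default is platform code and may
 * differ between Linux and FreeBSD.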
*/ 7698 arc_c_max = arc_default_max(arc_c_min, allmem); 7699 } 7700 void 7701 arc_init(void) 7702 { 7703 uint64_t percent, allmem = arc_all_memory(); 7704 mutex_init(&arc_evict_lock, NULL, MUTEX_DEFAULT, NULL); 7705 list_create(&arc_evict_waiters, sizeof (arc_evict_waiter_t), 7706 offsetof(arc_evict_waiter_t, aew_node)); 7707 7708 arc_min_prefetch_ms = 1000; 7709 arc_min_prescient_prefetch_ms = 6000; 7710 7711 #if defined(_KERNEL) 7712 arc_lowmem_init(); 7713 #endif 7714 7715 arc_set_limits(allmem); 7716 7717 #ifdef _KERNEL 7718 /* 7719 * If zfs_arc_max is non-zero at init, meaning it was set in the kernel 7720 * environment before the module was loaded, don't block setting the 7721 * maximum because it is less than arc_c_min, instead, reset arc_c_min 7722 * to a lower value. 7723 * zfs_arc_min will be handled by arc_tuning_update(). 7724 */ 7725 if (zfs_arc_max != 0 && zfs_arc_max >= MIN_ARC_MAX && 7726 zfs_arc_max < allmem) { 7727 arc_c_max = zfs_arc_max; 7728 if (arc_c_min >= arc_c_max) { 7729 arc_c_min = MAX(zfs_arc_max / 2, 7730 2ULL << SPA_MAXBLOCKSHIFT); 7731 } 7732 } 7733 #else 7734 /* 7735 * In userland, there's only the memory pressure that we artificially 7736 * create (see arc_available_memory()). Don't let arc_c get too 7737 * small, because it can cause transactions to be larger than 7738 * arc_c, causing arc_tempreserve_space() to fail. 7739 */ 7740 arc_c_min = MAX(arc_c_max / 2, 2ULL << SPA_MAXBLOCKSHIFT); 7741 #endif 7742 7743 arc_c = arc_c_min; 7744 /* 7745 * 32-bit fixed point fractions of metadata from total ARC size, 7746 * MRU data from all data and MRU metadata from all metadata. 7747 */ 7748 arc_meta = (1ULL << 32) / 4; /* Metadata is 25% of arc_c. */ 7749 arc_pd = (1ULL << 32) / 2; /* Data MRU is 50% of data. */ 7750 arc_pm = (1ULL << 32) / 2; /* Metadata MRU is 50% of metadata. */ 7751 7752 percent = MIN(zfs_arc_dnode_limit_percent, 100); 7753 arc_dnode_limit = arc_c_max * percent / 100; 7754 7755 /* Apply user specified tunings */ 7756 arc_tuning_update(B_TRUE); 7757 7758 /* if kmem_flags are set, lets try to use less memory */ 7759 if (kmem_debugging()) 7760 arc_c = arc_c / 2; 7761 if (arc_c < arc_c_min) 7762 arc_c = arc_c_min; 7763 7764 arc_register_hotplug(); 7765 7766 arc_state_init(); 7767 7768 buf_init(); 7769 7770 list_create(&arc_prune_list, sizeof (arc_prune_t), 7771 offsetof(arc_prune_t, p_node)); 7772 mutex_init(&arc_prune_mtx, NULL, MUTEX_DEFAULT, NULL); 7773 7774 arc_prune_taskq = taskq_create("arc_prune", zfs_arc_prune_task_threads, 7775 defclsyspri, 100, INT_MAX, TASKQ_PREPOPULATE | TASKQ_DYNAMIC); 7776 7777 arc_ksp = kstat_create("zfs", 0, "arcstats", "misc", KSTAT_TYPE_NAMED, 7778 sizeof (arc_stats) / sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL); 7779 7780 if (arc_ksp != NULL) { 7781 arc_ksp->ks_data = &arc_stats; 7782 arc_ksp->ks_update = arc_kstat_update; 7783 kstat_install(arc_ksp); 7784 } 7785 7786 arc_state_evict_markers = 7787 arc_state_alloc_markers(arc_state_evict_marker_count); 7788 arc_evict_zthr = zthr_create_timer("arc_evict", 7789 arc_evict_cb_check, arc_evict_cb, NULL, SEC2NSEC(1), defclsyspri); 7790 arc_reap_zthr = zthr_create_timer("arc_reap", 7791 arc_reap_cb_check, arc_reap_cb, NULL, SEC2NSEC(1), minclsyspri); 7792 7793 arc_warm = B_FALSE; 7794 7795 /* 7796 * Calculate maximum amount of dirty data per pool. 7797 * 7798 * If it has been set by a module parameter, take that. 
7799 * Otherwise, use a percentage of physical memory defined by 7800 * zfs_dirty_data_max_percent (default 10%) with a cap at 7801 * zfs_dirty_data_max_max (default 4G or 25% of physical memory). 7802 */ 7803 #ifdef __LP64__ 7804 if (zfs_dirty_data_max_max == 0) 7805 zfs_dirty_data_max_max = MIN(4ULL * 1024 * 1024 * 1024, 7806 allmem * zfs_dirty_data_max_max_percent / 100); 7807 #else 7808 if (zfs_dirty_data_max_max == 0) 7809 zfs_dirty_data_max_max = MIN(1ULL * 1024 * 1024 * 1024, 7810 allmem * zfs_dirty_data_max_max_percent / 100); 7811 #endif 7812 7813 if (zfs_dirty_data_max == 0) { 7814 zfs_dirty_data_max = allmem * 7815 zfs_dirty_data_max_percent / 100; 7816 zfs_dirty_data_max = MIN(zfs_dirty_data_max, 7817 zfs_dirty_data_max_max); 7818 } 7819 7820 if (zfs_wrlog_data_max == 0) { 7821 7822 /* 7823 * dp_wrlog_total is reduced for each txg at the end of 7824 * spa_sync(). However, dp_dirty_total is reduced every time 7825 * a block is written out. Thus under normal operation, 7826 * dp_wrlog_total could grow 2 times as big as 7827 * zfs_dirty_data_max. 7828 */ 7829 zfs_wrlog_data_max = zfs_dirty_data_max * 2; 7830 } 7831 } 7832 7833 void 7834 arc_fini(void) 7835 { 7836 arc_prune_t *p; 7837 7838 #ifdef _KERNEL 7839 arc_lowmem_fini(); 7840 #endif /* _KERNEL */ 7841 7842 /* Use B_TRUE to ensure *all* buffers are evicted */ 7843 arc_flush(NULL, B_TRUE); 7844 7845 if (arc_ksp != NULL) { 7846 kstat_delete(arc_ksp); 7847 arc_ksp = NULL; 7848 } 7849 7850 taskq_wait(arc_prune_taskq); 7851 taskq_destroy(arc_prune_taskq); 7852 7853 mutex_enter(&arc_prune_mtx); 7854 while ((p = list_remove_head(&arc_prune_list)) != NULL) { 7855 zfs_refcount_remove(&p->p_refcnt, &arc_prune_list); 7856 zfs_refcount_destroy(&p->p_refcnt); 7857 kmem_free(p, sizeof (*p)); 7858 } 7859 mutex_exit(&arc_prune_mtx); 7860 7861 list_destroy(&arc_prune_list); 7862 mutex_destroy(&arc_prune_mtx); 7863 7864 (void) zthr_cancel(arc_evict_zthr); 7865 (void) zthr_cancel(arc_reap_zthr); 7866 arc_state_free_markers(arc_state_evict_markers, 7867 arc_state_evict_marker_count); 7868 7869 mutex_destroy(&arc_evict_lock); 7870 list_destroy(&arc_evict_waiters); 7871 7872 /* 7873 * Free any buffers that were tagged for destruction. This needs 7874 * to occur before arc_state_fini() runs and destroys the aggsum 7875 * values which are updated when freeing scatter ABDs. 7876 */ 7877 l2arc_do_free_on_write(); 7878 7879 /* 7880 * buf_fini() must precede arc_state_fini() because buf_fini() may 7881 * trigger the release of kmem magazines, which can call back into 7882 * arc_space_return(), which accesses aggsums freed in arc_state_fini(). 7883 */ 7884 buf_fini(); 7885 arc_state_fini(); 7886 7887 arc_unregister_hotplug(); 7888 7889 /* 7890 * We destroy the zthrs after all the ARC state has been 7891 * torn down so that they cannot receive any 7892 * wakeup() signals after they are destroyed. 7893 */ 7894 zthr_destroy(arc_evict_zthr); 7895 zthr_destroy(arc_reap_zthr); 7896 7897 ASSERT0(arc_loaned_bytes); 7898 } 7899 7900 /* 7901 * Level 2 ARC 7902 * 7903 * The level 2 ARC (L2ARC) is a cache layer in between main memory and disk. 7904 * It uses dedicated storage devices to hold cached data, which are populated 7905 * using large infrequent writes. The main role of this cache is to boost 7906 * the performance of random read workloads. The intended L2ARC devices 7907 * include short-stroked disks, solid state disks, and other media with 7908 * substantially faster read latency than disk.
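 * Because an L2ARC device never holds dirty data (see item 7 below),
 * losing one affects performance only, never pool integrity.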
7909 * 7910 * +-----------------------+ 7911 * | ARC | 7912 * +-----------------------+ 7913 * | ^ ^ 7914 * | | | 7915 * l2arc_feed_thread() arc_read() 7916 * | | | 7917 * | l2arc read | 7918 * V | | 7919 * +---------------+ | 7920 * | L2ARC | | 7921 * +---------------+ | 7922 * | ^ | 7923 * l2arc_write() | | 7924 * | | | 7925 * V | | 7926 * +-------+ +-------+ 7927 * | vdev | | vdev | 7928 * | cache | | cache | 7929 * +-------+ +-------+ 7930 * +=========+ .-----. 7931 * : L2ARC : |-_____-| 7932 * : devices : | Disks | 7933 * +=========+ `-_____-' 7934 * 7935 * Read requests are satisfied from the following sources, in order: 7936 * 7937 * 1) ARC 7938 * 2) vdev cache of L2ARC devices 7939 * 3) L2ARC devices 7940 * 4) vdev cache of disks 7941 * 5) disks 7942 * 7943 * Some L2ARC device types exhibit extremely slow write performance. 7944 * To accommodate for this there are some significant differences between 7945 * the L2ARC and traditional cache design: 7946 * 7947 * 1. There is no eviction path from the ARC to the L2ARC. Evictions from 7948 * the ARC behave as usual, freeing buffers and placing headers on ghost 7949 * lists. The ARC does not send buffers to the L2ARC during eviction as 7950 * this would add inflated write latencies for all ARC memory pressure. 7951 * 7952 * 2. The L2ARC attempts to cache data from the ARC before it is evicted. 7953 * It does this by periodically scanning buffers from the eviction-end of 7954 * the MFU and MRU ARC lists, copying them to the L2ARC devices if they are 7955 * not already there. It scans until a headroom of buffers is satisfied, 7956 * which itself is a buffer for ARC eviction. If a compressible buffer is 7957 * found during scanning and selected for writing to an L2ARC device, we 7958 * temporarily boost scanning headroom during the next scan cycle to make 7959 * sure we adapt to compression effects (which might significantly reduce 7960 * the data volume we write to L2ARC). The thread that does this is 7961 * l2arc_feed_thread(), illustrated below; example sizes are included to 7962 * provide a better sense of ratio than this diagram: 7963 * 7964 * head --> tail 7965 * +---------------------+----------+ 7966 * ARC_mfu |:::::#:::::::::::::::|o#o###o###|-->. # already on L2ARC 7967 * +---------------------+----------+ | o L2ARC eligible 7968 * ARC_mru |:#:::::::::::::::::::|#o#ooo####|-->| : ARC buffer 7969 * +---------------------+----------+ | 7970 * 15.9 Gbytes ^ 32 Mbytes | 7971 * headroom | 7972 * l2arc_feed_thread() 7973 * | 7974 * l2arc write hand <--[oooo]--' 7975 * | 8 Mbyte 7976 * | write max 7977 * V 7978 * +==============================+ 7979 * L2ARC dev |####|#|###|###| |####| ... | 7980 * +==============================+ 7981 * 32 Gbytes 7982 * 7983 * 3. If an ARC buffer is copied to the L2ARC but then hit instead of 7984 * evicted, then the L2ARC has cached a buffer much sooner than it probably 7985 * needed to, potentially wasting L2ARC device bandwidth and storage. It is 7986 * safe to say that this is an uncommon case, since buffers at the end of 7987 * the ARC lists have moved there due to inactivity. 7988 * 7989 * 4. If the ARC evicts faster than the L2ARC can maintain a headroom, 7990 * then the L2ARC simply misses copying some buffers. This serves as a 7991 * pressure valve to prevent heavy read workloads from both stalling the ARC 7992 * with waits and clogging the L2ARC with writes. 
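 * Buffers skipped this way are simply not cached; the only cost is a
 * possible future read from the main pool instead of the L2ARC.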
This also helps prevent 7993 * the potential for the L2ARC to churn if it attempts to cache content too 7994 * quickly, such as during backups of the entire pool. 7995 * 7996 * 5. After system boot and before the ARC has filled main memory, there are 7997 * no evictions from the ARC and so the tails of the ARC_mfu and ARC_mru 7998 * lists can remain mostly static. Instead of searching from tail of these 7999 * lists as pictured, the l2arc_feed_thread() will search from the list heads 8000 * for eligible buffers, greatly increasing its chance of finding them. 8001 * 8002 * The L2ARC device write speed is also boosted during this time so that 8003 * the L2ARC warms up faster. Since there have been no ARC evictions yet, 8004 * there are no L2ARC reads, and no fear of degrading read performance 8005 * through increased writes. 8006 * 8007 * 6. Writes to the L2ARC devices are grouped and sent in-sequence, so that 8008 * the vdev queue can aggregate them into larger and fewer writes. Each 8009 * device is written to in a rotor fashion, sweeping writes through 8010 * available space then repeating. 8011 * 8012 * 7. The L2ARC does not store dirty content. It never needs to flush 8013 * write buffers back to disk based storage. 8014 * 8015 * 8. If an ARC buffer is written (and dirtied) which also exists in the 8016 * L2ARC, the now stale L2ARC buffer is immediately dropped. 8017 * 8018 * The performance of the L2ARC can be tweaked by a number of tunables, which 8019 * may be necessary for different workloads: 8020 * 8021 * l2arc_write_max max write bytes per interval 8022 * l2arc_write_boost extra write bytes during device warmup 8023 * l2arc_noprefetch skip caching prefetched buffers 8024 * l2arc_headroom number of max device writes to precache 8025 * l2arc_headroom_boost when we find compressed buffers during ARC 8026 * scanning, we multiply headroom by this 8027 * percentage factor for the next scan cycle, 8028 * since more compressed buffers are likely to 8029 * be present 8030 * l2arc_feed_secs seconds between L2ARC writing 8031 * 8032 * Tunables may be removed or added as future performance improvements are 8033 * integrated, and also may become zpool properties. 8034 * 8035 * There are three key functions that control how the L2ARC warms up: 8036 * 8037 * l2arc_write_eligible() check if a buffer is eligible to cache 8038 * l2arc_write_size() calculate how much to write 8039 * l2arc_write_interval() calculate sleep delay between writes 8040 * 8041 * These three functions determine what to write, how much, and how quickly 8042 * to send writes. 8043 * 8044 * L2ARC persistence: 8045 * 8046 * When writing buffers to L2ARC, we periodically add some metadata to 8047 * make sure we can pick them up after reboot, thus dramatically reducing 8048 * the impact that any downtime has on the performance of storage systems 8049 * with large caches. 8050 * 8051 * The implementation works fairly simply by integrating the following two 8052 * modifications: 8053 * 8054 * *) When writing to the L2ARC, we occasionally write a "l2arc log block", 8055 * which is an additional piece of metadata which describes what's been 8056 * written. This allows us to rebuild the arc_buf_hdr_t structures of the 8057 * main ARC buffers. There are 2 linked-lists of log blocks headed by 8058 * dh_start_lbps[2]. We alternate which chain we append to, so they are 8059 * time-wise and offset-wise interleaved, but that is an optimization rather 8060 * than for correctness. 
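 * (The motivation for keeping two chains is covered with the
 * implementation diagram further below.)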
The log block also includes a pointer to the 8061 * previous block in its chain. 8062 * 8063 * *) We reserve SPA_MINBLOCKSIZE of space at the start of each L2ARC device 8064 * for our header bookkeeping purposes. This contains a device header, 8065 * which contains our top-level reference structures. We update it each 8066 * time we write a new log block, so that we're able to locate it in the 8067 * L2ARC device. If this write results in an inconsistent device header 8068 * (e.g. due to power failure), we detect this by verifying the header's 8069 * checksum and simply fail to reconstruct the L2ARC after reboot. 8070 * 8071 * Implementation diagram: 8072 * 8073 * +=== L2ARC device (not to scale) ======================================+ 8074 * | ___two newest log block pointers__.__________ | 8075 * | / \dh_start_lbps[1] | 8076 * | / \ \dh_start_lbps[0]| 8077 * |.___/__. V V | 8078 * ||L2 dev|....|lb |bufs |lb |bufs |lb |bufs |lb |bufs |lb |---(empty)---| 8079 * || hdr| ^ /^ /^ / / | 8080 * |+------+ ...--\-------/ \-----/--\------/ / | 8081 * | \--------------/ \--------------/ | 8082 * +======================================================================+ 8083 * 8084 * As can be seen on the diagram, rather than using a simple linked list, 8085 * we use a pair of linked lists with alternating elements. This is a 8086 * performance optimization: the address of the next log block to read is 8087 * only known once the current block has been completely read in (each 8088 * block carries a pointer to its predecessor), so a single chain would 8089 * keep the device's I/O queue only one operation deep and pay a full 8090 * I/O round trip of latency per block. Having two lists 8091 * allows us to fetch two log blocks ahead of where we are currently 8092 * rebuilding L2ARC buffers. 8093 * 8094 * On-device data structures: 8095 * 8096 * L2ARC device header: l2arc_dev_hdr_phys_t 8097 * L2ARC log block: l2arc_log_blk_phys_t 8098 * 8099 * L2ARC reconstruction: 8100 * 8101 * When writing data, we simply write in the standard rotary fashion, 8102 * evicting buffers as we go and writing new data over them (writing 8103 * a new log block every now and then). This obviously means that once we 8104 * loop around the end of the device, we will start cutting into an already 8105 * committed log block (and its referenced data buffers), like so: 8106 * 8107 * current write head__ __old tail 8108 * \ / 8109 * V V 8110 * <--|bufs |lb |bufs |lb | |bufs |lb |bufs |lb |--> 8111 * ^ ^^^^^^^^^___________________________________ 8112 * | \ 8113 * <<nextwrite>> may overwrite this blk and/or its bufs --' 8114 * 8115 * When importing the pool, we detect this situation and use it to stop 8116 * our scanning process (see l2arc_rebuild). 8117 * 8118 * There is one significant caveat to consider when rebuilding ARC contents 8119 * from an L2ARC device: what about invalidated buffers? Given the above 8120 * construction, we cannot go back and amend already-written log blocks to 8121 * remove entries for buffers which were invalidated. Thus, during reconstruction, 8122 * we might be populating the cache with buffers for data that's not on the 8123 * main pool anymore, or may have been overwritten! 8124 * 8125 * As it turns out, this isn't a problem. Every arc_read request includes 8126 * both the DVA and, crucially, the birth TXG of the BP the caller is 8127 * looking for.
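 * A stale L2ARC entry still describes its old (DVA, birth TXG) pair, so a
 * lookup for a block that has since been rewritten simply never matches it.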
So even if the cache were populated by completely rotten 8128 * blocks for data that had been long deleted and/or overwritten, we'll 8129 * never actually return bad data from the cache, since the DVA with the 8130 * birth TXG uniquely identify a block in space and time - once created, 8131 * a block is immutable on disk. The worst thing we have done is wasted 8132 * some time and memory at l2arc rebuild to reconstruct outdated ARC 8133 * entries that will get dropped from the l2arc as it is being updated 8134 * with new blocks. 8135 * 8136 * L2ARC buffers that have been evicted by l2arc_evict() ahead of the write 8137 * hand are not restored. This is done by saving the offset (in bytes) 8138 * l2arc_evict() has evicted to in the L2ARC device header and taking it 8139 * into account when restoring buffers. 8140 */ 8141 8142 static boolean_t 8143 l2arc_write_eligible(uint64_t spa_guid, arc_buf_hdr_t *hdr) 8144 { 8145 /* 8146 * A buffer is *not* eligible for the L2ARC if it: 8147 * 1. belongs to a different spa. 8148 * 2. is already cached on the L2ARC. 8149 * 3. has an I/O in progress (it may be an incomplete read). 8150 * 4. is flagged not eligible (zfs property). 8151 */ 8152 if (hdr->b_spa != spa_guid || HDR_HAS_L2HDR(hdr) || 8153 HDR_IO_IN_PROGRESS(hdr) || !HDR_L2CACHE(hdr)) 8154 return (B_FALSE); 8155 8156 return (B_TRUE); 8157 } 8158 8159 static uint64_t 8160 l2arc_write_size(l2arc_dev_t *dev) 8161 { 8162 uint64_t size; 8163 8164 /* 8165 * Make sure our globals have meaningful values in case the user 8166 * altered them. 8167 */ 8168 size = l2arc_write_max; 8169 if (size == 0) { 8170 cmn_err(CE_NOTE, "Bad value for l2arc_write_max, value must " 8171 "be greater than zero, resetting it to the default (%d)", 8172 L2ARC_WRITE_SIZE); 8173 size = l2arc_write_max = L2ARC_WRITE_SIZE; 8174 } 8175 8176 if (arc_warm == B_FALSE) 8177 size += l2arc_write_boost; 8178 8179 /* We need to add in the worst case scenario of log block overhead. */ 8180 size += l2arc_log_blk_overhead(size, dev); 8181 if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0) { 8182 /* 8183 * Trim ahead of the write size 64MB or (l2arc_trim_ahead/100) 8184 * times the writesize, whichever is greater. 8185 */ 8186 size += MAX(64 * 1024 * 1024, 8187 (size * l2arc_trim_ahead) / 100); 8188 } 8189 8190 /* 8191 * Make sure the write size does not exceed the size of the cache 8192 * device. This is important in l2arc_evict(), otherwise infinite 8193 * iteration can occur. 
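 *
 * Condensed restatement of the computation above (illustrative only; it
 * adds no logic beyond what this function already does):
 *
 *	size = l2arc_write_max;			(reset to L2ARC_WRITE_SIZE if 0)
 *	if (arc_warm == B_FALSE)
 *		size += l2arc_write_boost;	(faster warmup, see item 5 above)
 *	size += l2arc_log_blk_overhead(size, dev);
 *	if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0)
 *		size += MAX(64MB, size * l2arc_trim_ahead / 100);
 *
 * If that total exceeds (l2ad_end - l2ad_start), the tunables are reset to
 * their defaults below and the size is recomputed.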
8194 */ 8195 if (size > dev->l2ad_end - dev->l2ad_start) { 8196 cmn_err(CE_NOTE, "l2arc_write_max or l2arc_write_boost " 8197 "plus the overhead of log blocks (persistent L2ARC, " 8198 "%llu bytes) exceeds the size of the cache device " 8199 "(guid %llu), resetting them to the default (%d)", 8200 (u_longlong_t)l2arc_log_blk_overhead(size, dev), 8201 (u_longlong_t)dev->l2ad_vdev->vdev_guid, L2ARC_WRITE_SIZE); 8202 8203 size = l2arc_write_max = l2arc_write_boost = L2ARC_WRITE_SIZE; 8204 8205 if (l2arc_trim_ahead > 1) { 8206 cmn_err(CE_NOTE, "l2arc_trim_ahead set to 1"); 8207 l2arc_trim_ahead = 1; 8208 } 8209 8210 if (arc_warm == B_FALSE) 8211 size += l2arc_write_boost; 8212 8213 size += l2arc_log_blk_overhead(size, dev); 8214 if (dev->l2ad_vdev->vdev_has_trim && l2arc_trim_ahead > 0) { 8215 size += MAX(64 * 1024 * 1024, 8216 (size * l2arc_trim_ahead) / 100); 8217 } 8218 } 8219 8220 return (size); 8221 8222 } 8223 8224 static clock_t 8225 l2arc_write_interval(clock_t began, uint64_t wanted, uint64_t wrote) 8226 { 8227 clock_t interval, next, now; 8228 8229 /* 8230 * If the ARC lists are busy, increase our write rate; if the 8231 * lists are stale, idle back. This is achieved by checking 8232 * how much we previously wrote - if it was more than half of 8233 * what we wanted, schedule the next write much sooner. 8234 */ 8235 if (l2arc_feed_again && wrote > (wanted / 2)) 8236 interval = (hz * l2arc_feed_min_ms) / 1000; 8237 else 8238 interval = hz * l2arc_feed_secs; 8239 8240 now = ddi_get_lbolt(); 8241 next = MAX(now, MIN(now + interval, began + interval)); 8242 8243 return (next); 8244 } 8245 8246 /* 8247 * Cycle through L2ARC devices. This is how L2ARC load balances. 8248 * If a device is returned, this also returns holding the spa config lock. 8249 */ 8250 static l2arc_dev_t * 8251 l2arc_dev_get_next(void) 8252 { 8253 l2arc_dev_t *first, *next = NULL; 8254 8255 /* 8256 * Lock out the removal of spas (spa_namespace_lock), then removal 8257 * of cache devices (l2arc_dev_mtx). Once a device has been selected, 8258 * both locks will be dropped and a spa config lock held instead. 8259 */ 8260 mutex_enter(&spa_namespace_lock); 8261 mutex_enter(&l2arc_dev_mtx); 8262 8263 /* if there are no vdevs, there is nothing to do */ 8264 if (l2arc_ndev == 0) 8265 goto out; 8266 8267 first = NULL; 8268 next = l2arc_dev_last; 8269 do { 8270 /* loop around the list looking for a non-faulted vdev */ 8271 if (next == NULL) { 8272 next = list_head(l2arc_dev_list); 8273 } else { 8274 next = list_next(l2arc_dev_list, next); 8275 if (next == NULL) 8276 next = list_head(l2arc_dev_list); 8277 } 8278 8279 /* if we have come back to the start, bail out */ 8280 if (first == NULL) 8281 first = next; 8282 else if (next == first) 8283 break; 8284 8285 ASSERT3P(next, !=, NULL); 8286 } while (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || 8287 next->l2ad_trim_all); 8288 8289 /* if we were unable to find any usable vdevs, return NULL */ 8290 if (vdev_is_dead(next->l2ad_vdev) || next->l2ad_rebuild || 8291 next->l2ad_trim_all) 8292 next = NULL; 8293 8294 l2arc_dev_last = next; 8295 8296 out: 8297 mutex_exit(&l2arc_dev_mtx); 8298 8299 /* 8300 * Grab the config lock to prevent the 'next' device from being 8301 * removed while we are writing to it. 8302 */ 8303 if (next != NULL) 8304 spa_config_enter(next->l2ad_spa, SCL_L2ARC, next, RW_READER); 8305 mutex_exit(&spa_namespace_lock); 8306 8307 return (next); 8308 } 8309 8310 /* 8311 * Free buffers that were tagged for destruction. 
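 *
 * These entries are queued by l2arc_write_buffers() (via
 * l2arc_free_abd_on_write()) whenever it must allocate a private copy of a
 * buffer for writing (compression, encryption, or padding to asize).  The
 * copy has to outlive the write zio, so it is only released here, from
 * l2arc_write_done(), once the device write has completed.  Rough sketch of
 * the producer side (see l2arc_free_abd_on_write() for the actual fields;
 * only l2df_abd is shown here):
 *
 *	df = kmem_alloc(sizeof (l2arc_data_free_t), KM_SLEEP);
 *	df->l2df_abd = to_write;		(private copy of the data)
 *	mutex_enter(&l2arc_free_on_write_mtx);
 *	list_insert_head(l2arc_free_on_write, df);
 *	mutex_exit(&l2arc_free_on_write_mtx);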
8312 */ 8313 static void 8314 l2arc_do_free_on_write(void) 8315 { 8316 l2arc_data_free_t *df; 8317 8318 mutex_enter(&l2arc_free_on_write_mtx); 8319 while ((df = list_remove_head(l2arc_free_on_write)) != NULL) { 8320 ASSERT3P(df->l2df_abd, !=, NULL); 8321 abd_free(df->l2df_abd); 8322 kmem_free(df, sizeof (l2arc_data_free_t)); 8323 } 8324 mutex_exit(&l2arc_free_on_write_mtx); 8325 } 8326 8327 /* 8328 * A write to a cache device has completed. Update all headers to allow 8329 * reads from these buffers to begin. 8330 */ 8331 static void 8332 l2arc_write_done(zio_t *zio) 8333 { 8334 l2arc_write_callback_t *cb; 8335 l2arc_lb_abd_buf_t *abd_buf; 8336 l2arc_lb_ptr_buf_t *lb_ptr_buf; 8337 l2arc_dev_t *dev; 8338 l2arc_dev_hdr_phys_t *l2dhdr; 8339 list_t *buflist; 8340 arc_buf_hdr_t *head, *hdr, *hdr_prev; 8341 kmutex_t *hash_lock; 8342 int64_t bytes_dropped = 0; 8343 8344 cb = zio->io_private; 8345 ASSERT3P(cb, !=, NULL); 8346 dev = cb->l2wcb_dev; 8347 l2dhdr = dev->l2ad_dev_hdr; 8348 ASSERT3P(dev, !=, NULL); 8349 head = cb->l2wcb_head; 8350 ASSERT3P(head, !=, NULL); 8351 buflist = &dev->l2ad_buflist; 8352 ASSERT3P(buflist, !=, NULL); 8353 DTRACE_PROBE2(l2arc__iodone, zio_t *, zio, 8354 l2arc_write_callback_t *, cb); 8355 8356 /* 8357 * All writes completed, or an error was hit. 8358 */ 8359 top: 8360 mutex_enter(&dev->l2ad_mtx); 8361 for (hdr = list_prev(buflist, head); hdr; hdr = hdr_prev) { 8362 hdr_prev = list_prev(buflist, hdr); 8363 8364 hash_lock = HDR_LOCK(hdr); 8365 8366 /* 8367 * We cannot use mutex_enter or else we can deadlock 8368 * with l2arc_write_buffers (due to swapping the order 8369 * the hash lock and l2ad_mtx are taken). 8370 */ 8371 if (!mutex_tryenter(hash_lock)) { 8372 /* 8373 * Missed the hash lock. We must retry so we 8374 * don't leave the ARC_FLAG_L2_WRITING bit set. 8375 */ 8376 ARCSTAT_BUMP(arcstat_l2_writes_lock_retry); 8377 8378 /* 8379 * We don't want to rescan the headers we've 8380 * already marked as having been written out, so 8381 * we reinsert the head node so we can pick up 8382 * where we left off. 8383 */ 8384 list_remove(buflist, head); 8385 list_insert_after(buflist, hdr, head); 8386 8387 mutex_exit(&dev->l2ad_mtx); 8388 8389 /* 8390 * We wait for the hash lock to become available 8391 * to try and prevent busy waiting, and increase 8392 * the chance we'll be able to acquire the lock 8393 * the next time around. 8394 */ 8395 mutex_enter(hash_lock); 8396 mutex_exit(hash_lock); 8397 goto top; 8398 } 8399 8400 /* 8401 * We could not have been moved into the arc_l2c_only 8402 * state while in-flight due to our ARC_FLAG_L2_WRITING 8403 * bit being set. Let's just ensure that's being enforced. 8404 */ 8405 ASSERT(HDR_HAS_L1HDR(hdr)); 8406 8407 /* 8408 * Skipped - drop L2ARC entry and mark the header as no 8409 * longer L2 eligibile. 8410 */ 8411 if (zio->io_error != 0) { 8412 /* 8413 * Error - drop L2ARC entry. 8414 */ 8415 list_remove(buflist, hdr); 8416 arc_hdr_clear_flags(hdr, ARC_FLAG_HAS_L2HDR); 8417 8418 uint64_t psize = HDR_GET_PSIZE(hdr); 8419 l2arc_hdr_arcstats_decrement(hdr); 8420 8421 bytes_dropped += 8422 vdev_psize_to_asize(dev->l2ad_vdev, psize); 8423 (void) zfs_refcount_remove_many(&dev->l2ad_alloc, 8424 arc_hdr_size(hdr), hdr); 8425 } 8426 8427 /* 8428 * Allow ARC to begin reads and ghost list evictions to 8429 * this L2ARC entry. 8430 */ 8431 arc_hdr_clear_flags(hdr, ARC_FLAG_L2_WRITING); 8432 8433 mutex_exit(hash_lock); 8434 } 8435 8436 /* 8437 * Free the allocated abd buffers for writing the log blocks. 
8438 * If the zio failed, reclaim the allocated space and remove the 8439 * pointers to these log blocks from the log block pointer list 8440 * of the L2ARC device. 8441 */ 8442 while ((abd_buf = list_remove_tail(&cb->l2wcb_abd_list)) != NULL) { 8443 abd_free(abd_buf->abd); 8444 zio_buf_free(abd_buf, sizeof (*abd_buf)); 8445 if (zio->io_error != 0) { 8446 lb_ptr_buf = list_remove_head(&dev->l2ad_lbptr_list); 8447 /* 8448 * L2BLK_GET_PSIZE returns aligned size for log 8449 * blocks. 8450 */ 8451 uint64_t asize = 8452 L2BLK_GET_PSIZE((lb_ptr_buf->lb_ptr)->lbp_prop); 8453 bytes_dropped += asize; 8454 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 8455 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 8456 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 8457 lb_ptr_buf); 8458 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 8459 kmem_free(lb_ptr_buf->lb_ptr, 8460 sizeof (l2arc_log_blkptr_t)); 8461 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 8462 } 8463 } 8464 list_destroy(&cb->l2wcb_abd_list); 8465 8466 if (zio->io_error != 0) { 8467 ARCSTAT_BUMP(arcstat_l2_writes_error); 8468 8469 /* 8470 * Restore the lbps array in the header to its previous state. 8471 * If the list of log block pointers is empty, zero out the 8472 * log block pointers in the device header. 8473 */ 8474 lb_ptr_buf = list_head(&dev->l2ad_lbptr_list); 8475 for (int i = 0; i < 2; i++) { 8476 if (lb_ptr_buf == NULL) { 8477 /* 8478 * If the list is empty, zero out the device 8479 * header. Otherwise zero out the second log 8480 * block pointer in the header. 8481 */ 8482 if (i == 0) { 8483 memset(l2dhdr, 0, 8484 dev->l2ad_dev_hdr_asize); 8485 } else { 8486 memset(&l2dhdr->dh_start_lbps[i], 0, 8487 sizeof (l2arc_log_blkptr_t)); 8488 } 8489 break; 8490 } 8491 memcpy(&l2dhdr->dh_start_lbps[i], lb_ptr_buf->lb_ptr, 8492 sizeof (l2arc_log_blkptr_t)); 8493 lb_ptr_buf = list_next(&dev->l2ad_lbptr_list, 8494 lb_ptr_buf); 8495 } 8496 } 8497 8498 ARCSTAT_BUMP(arcstat_l2_writes_done); 8499 list_remove(buflist, head); 8500 ASSERT(!HDR_HAS_L1HDR(head)); 8501 kmem_cache_free(hdr_l2only_cache, head); 8502 mutex_exit(&dev->l2ad_mtx); 8503 8504 ASSERT(dev->l2ad_vdev != NULL); 8505 vdev_space_update(dev->l2ad_vdev, -bytes_dropped, 0, 0); 8506 8507 l2arc_do_free_on_write(); 8508 8509 kmem_free(cb, sizeof (l2arc_write_callback_t)); 8510 } 8511 8512 static int 8513 l2arc_untransform(zio_t *zio, l2arc_read_callback_t *cb) 8514 { 8515 int ret; 8516 spa_t *spa = zio->io_spa; 8517 arc_buf_hdr_t *hdr = cb->l2rcb_hdr; 8518 blkptr_t *bp = zio->io_bp; 8519 uint8_t salt[ZIO_DATA_SALT_LEN]; 8520 uint8_t iv[ZIO_DATA_IV_LEN]; 8521 uint8_t mac[ZIO_DATA_MAC_LEN]; 8522 boolean_t no_crypt = B_FALSE; 8523 8524 /* 8525 * ZIL data is never written to the L2ARC, so we don't need 8526 * special handling for its unique MAC storage. 8527 */ 8528 ASSERT3U(BP_GET_TYPE(bp), !=, DMU_OT_INTENT_LOG); 8529 ASSERT(MUTEX_HELD(HDR_LOCK(hdr))); 8530 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 8531 8532 /* 8533 * If the data was encrypted, decrypt it now. Note that 8534 * we must check the bp here and not the hdr, since the 8535 * hdr does not have its encryption parameters updated 8536 * until arc_read_done().
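 *
 * Overall flow of l2arc_untransform(), sketched (this mirrors the code
 * below; error paths are omitted):
 *
 *	if (BP_IS_ENCRYPTED(bp)) {
 *		decode salt/iv/mac from the bp;
 *		decrypt b_pabd into a scratch abd;
 *		if decryption actually occurred, swap the scratch abd
 *		into b_pabd (and zio->io_abd);
 *	}
 *	if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF &&
 *	    !HDR_COMPRESSION_ENABLED(hdr)) {
 *		decompress b_pabd into a scratch abd;
 *		swap it into b_pabd and set zio->io_size to the lsize;
 *	}
 *
 * If either step fails, l2arc_read_done() reissues the read to the main
 * pool.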
8537 */ 8538 if (BP_IS_ENCRYPTED(bp)) { 8539 abd_t *eabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 8540 ARC_HDR_USE_RESERVE); 8541 8542 zio_crypt_decode_params_bp(bp, salt, iv); 8543 zio_crypt_decode_mac_bp(bp, mac); 8544 8545 ret = spa_do_crypt_abd(B_FALSE, spa, &cb->l2rcb_zb, 8546 BP_GET_TYPE(bp), BP_GET_DEDUP(bp), BP_SHOULD_BYTESWAP(bp), 8547 salt, iv, mac, HDR_GET_PSIZE(hdr), eabd, 8548 hdr->b_l1hdr.b_pabd, &no_crypt); 8549 if (ret != 0) { 8550 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 8551 goto error; 8552 } 8553 8554 /* 8555 * If we actually performed decryption, replace b_pabd 8556 * with the decrypted data. Otherwise we can just throw 8557 * our decryption buffer away. 8558 */ 8559 if (!no_crypt) { 8560 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 8561 arc_hdr_size(hdr), hdr); 8562 hdr->b_l1hdr.b_pabd = eabd; 8563 zio->io_abd = eabd; 8564 } else { 8565 arc_free_data_abd(hdr, eabd, arc_hdr_size(hdr), hdr); 8566 } 8567 } 8568 8569 /* 8570 * If the L2ARC block was compressed, but ARC compression 8571 * is disabled we decompress the data into a new buffer and 8572 * replace the existing data. 8573 */ 8574 if (HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 8575 !HDR_COMPRESSION_ENABLED(hdr)) { 8576 abd_t *cabd = arc_get_data_abd(hdr, arc_hdr_size(hdr), hdr, 8577 ARC_HDR_USE_RESERVE); 8578 void *tmp = abd_borrow_buf(cabd, arc_hdr_size(hdr)); 8579 8580 ret = zio_decompress_data(HDR_GET_COMPRESS(hdr), 8581 hdr->b_l1hdr.b_pabd, tmp, HDR_GET_PSIZE(hdr), 8582 HDR_GET_LSIZE(hdr), &hdr->b_complevel); 8583 if (ret != 0) { 8584 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 8585 arc_free_data_abd(hdr, cabd, arc_hdr_size(hdr), hdr); 8586 goto error; 8587 } 8588 8589 abd_return_buf_copy(cabd, tmp, arc_hdr_size(hdr)); 8590 arc_free_data_abd(hdr, hdr->b_l1hdr.b_pabd, 8591 arc_hdr_size(hdr), hdr); 8592 hdr->b_l1hdr.b_pabd = cabd; 8593 zio->io_abd = cabd; 8594 zio->io_size = HDR_GET_LSIZE(hdr); 8595 } 8596 8597 return (0); 8598 8599 error: 8600 return (ret); 8601 } 8602 8603 8604 /* 8605 * A read to a cache device completed. Validate buffer contents before 8606 * handing over to the regular ARC routines. 8607 */ 8608 static void 8609 l2arc_read_done(zio_t *zio) 8610 { 8611 int tfm_error = 0; 8612 l2arc_read_callback_t *cb = zio->io_private; 8613 arc_buf_hdr_t *hdr; 8614 kmutex_t *hash_lock; 8615 boolean_t valid_cksum; 8616 boolean_t using_rdata = (BP_IS_ENCRYPTED(&cb->l2rcb_bp) && 8617 (cb->l2rcb_flags & ZIO_FLAG_RAW_ENCRYPT)); 8618 8619 ASSERT3P(zio->io_vd, !=, NULL); 8620 ASSERT(zio->io_flags & ZIO_FLAG_DONT_PROPAGATE); 8621 8622 spa_config_exit(zio->io_spa, SCL_L2ARC, zio->io_vd); 8623 8624 ASSERT3P(cb, !=, NULL); 8625 hdr = cb->l2rcb_hdr; 8626 ASSERT3P(hdr, !=, NULL); 8627 8628 hash_lock = HDR_LOCK(hdr); 8629 mutex_enter(hash_lock); 8630 ASSERT3P(hash_lock, ==, HDR_LOCK(hdr)); 8631 8632 /* 8633 * If the data was read into a temporary buffer, 8634 * move it and free the buffer. 
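 *
 * The temporary buffer (cb->l2rcb_abd) is used when the amount of data read
 * from the cache device is larger than arc_hdr_size(hdr).  In that case the
 * payload is copied into b_pabd (or b_rabd for raw encrypted reads), the
 * temporary buffer is freed, and the zio is repointed at the real ARC
 * buffer, roughly:
 *
 *	abd_copy(hdr->b_l1hdr.b_pabd, cb->l2rcb_abd, arc_hdr_size(hdr));
 *	abd_free(cb->l2rcb_abd);
 *	zio->io_size = zio->io_orig_size = arc_hdr_size(hdr);
 *	zio->io_abd = zio->io_orig_abd = hdr->b_l1hdr.b_pabd;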
8635 */ 8636 if (cb->l2rcb_abd != NULL) { 8637 ASSERT3U(arc_hdr_size(hdr), <, zio->io_size); 8638 if (zio->io_error == 0) { 8639 if (using_rdata) { 8640 abd_copy(hdr->b_crypt_hdr.b_rabd, 8641 cb->l2rcb_abd, arc_hdr_size(hdr)); 8642 } else { 8643 abd_copy(hdr->b_l1hdr.b_pabd, 8644 cb->l2rcb_abd, arc_hdr_size(hdr)); 8645 } 8646 } 8647 8648 /* 8649 * The following must be done regardless of whether 8650 * there was an error: 8651 * - free the temporary buffer 8652 * - point zio to the real ARC buffer 8653 * - set zio size accordingly 8654 * These are required because zio is either re-used for 8655 * an I/O of the block in the case of the error 8656 * or the zio is passed to arc_read_done() and it 8657 * needs real data. 8658 */ 8659 abd_free(cb->l2rcb_abd); 8660 zio->io_size = zio->io_orig_size = arc_hdr_size(hdr); 8661 8662 if (using_rdata) { 8663 ASSERT(HDR_HAS_RABD(hdr)); 8664 zio->io_abd = zio->io_orig_abd = 8665 hdr->b_crypt_hdr.b_rabd; 8666 } else { 8667 ASSERT3P(hdr->b_l1hdr.b_pabd, !=, NULL); 8668 zio->io_abd = zio->io_orig_abd = hdr->b_l1hdr.b_pabd; 8669 } 8670 } 8671 8672 ASSERT3P(zio->io_abd, !=, NULL); 8673 8674 /* 8675 * Check this survived the L2ARC journey. 8676 */ 8677 ASSERT(zio->io_abd == hdr->b_l1hdr.b_pabd || 8678 (HDR_HAS_RABD(hdr) && zio->io_abd == hdr->b_crypt_hdr.b_rabd)); 8679 zio->io_bp_copy = cb->l2rcb_bp; /* XXX fix in L2ARC 2.0 */ 8680 zio->io_bp = &zio->io_bp_copy; /* XXX fix in L2ARC 2.0 */ 8681 zio->io_prop.zp_complevel = hdr->b_complevel; 8682 8683 valid_cksum = arc_cksum_is_equal(hdr, zio); 8684 8685 /* 8686 * b_rabd will always match the data as it exists on disk if it is 8687 * being used. Therefore if we are reading into b_rabd we do not 8688 * attempt to untransform the data. 8689 */ 8690 if (valid_cksum && !using_rdata) 8691 tfm_error = l2arc_untransform(zio, cb); 8692 8693 if (valid_cksum && tfm_error == 0 && zio->io_error == 0 && 8694 !HDR_L2_EVICTED(hdr)) { 8695 mutex_exit(hash_lock); 8696 zio->io_private = hdr; 8697 arc_read_done(zio); 8698 } else { 8699 /* 8700 * Buffer didn't survive caching. Increment stats and 8701 * reissue to the original storage device. 8702 */ 8703 if (zio->io_error != 0) { 8704 ARCSTAT_BUMP(arcstat_l2_io_error); 8705 } else { 8706 zio->io_error = SET_ERROR(EIO); 8707 } 8708 if (!valid_cksum || tfm_error != 0) 8709 ARCSTAT_BUMP(arcstat_l2_cksum_bad); 8710 8711 /* 8712 * If there's no waiter, issue an async i/o to the primary 8713 * storage now. If there *is* a waiter, the caller must 8714 * issue the i/o in a context where it's OK to block. 8715 */ 8716 if (zio->io_waiter == NULL) { 8717 zio_t *pio = zio_unique_parent(zio); 8718 void *abd = (using_rdata) ? 8719 hdr->b_crypt_hdr.b_rabd : hdr->b_l1hdr.b_pabd; 8720 8721 ASSERT(!pio || pio->io_child_type == ZIO_CHILD_LOGICAL); 8722 8723 zio = zio_read(pio, zio->io_spa, zio->io_bp, 8724 abd, zio->io_size, arc_read_done, 8725 hdr, zio->io_priority, cb->l2rcb_flags, 8726 &cb->l2rcb_zb); 8727 8728 /* 8729 * Original ZIO will be freed, so we need to update 8730 * ARC header with the new ZIO pointer to be used 8731 * by zio_change_priority() in arc_read(). 8732 */ 8733 for (struct arc_callback *acb = hdr->b_l1hdr.b_acb; 8734 acb != NULL; acb = acb->acb_next) 8735 acb->acb_zio_head = zio; 8736 8737 mutex_exit(hash_lock); 8738 zio_nowait(zio); 8739 } else { 8740 mutex_exit(hash_lock); 8741 } 8742 } 8743 8744 kmem_free(cb, sizeof (l2arc_read_callback_t)); 8745 } 8746 8747 /* 8748 * This is the list priority from which the L2ARC will search for pages to 8749 * cache. 
This is used within loops (0..3) to cycle through lists in the 8750 * desired order. This order can have a significant effect on cache 8751 * performance. 8752 * 8753 * Currently the metadata lists are hit first, MFU then MRU, followed by 8754 * the data lists. This function returns a locked list, and also returns 8755 * the lock pointer. 8756 */ 8757 static multilist_sublist_t * 8758 l2arc_sublist_lock(int list_num) 8759 { 8760 multilist_t *ml = NULL; 8761 unsigned int idx; 8762 8763 ASSERT(list_num >= 0 && list_num < L2ARC_FEED_TYPES); 8764 8765 switch (list_num) { 8766 case 0: 8767 ml = &arc_mfu->arcs_list[ARC_BUFC_METADATA]; 8768 break; 8769 case 1: 8770 ml = &arc_mru->arcs_list[ARC_BUFC_METADATA]; 8771 break; 8772 case 2: 8773 ml = &arc_mfu->arcs_list[ARC_BUFC_DATA]; 8774 break; 8775 case 3: 8776 ml = &arc_mru->arcs_list[ARC_BUFC_DATA]; 8777 break; 8778 default: 8779 return (NULL); 8780 } 8781 8782 /* 8783 * Return a randomly-selected sublist. This is acceptable 8784 * because the caller feeds only a little bit of data for each 8785 * call (8MB). Subsequent calls will result in different 8786 * sublists being selected. 8787 */ 8788 idx = multilist_get_random_index(ml); 8789 return (multilist_sublist_lock(ml, idx)); 8790 } 8791 8792 /* 8793 * Calculates the maximum overhead of L2ARC metadata log blocks for a given 8794 * L2ARC write size. l2arc_evict and l2arc_write_size need to include this 8795 * overhead in processing to make sure there is enough headroom available 8796 * when writing buffers. 8797 */ 8798 static inline uint64_t 8799 l2arc_log_blk_overhead(uint64_t write_sz, l2arc_dev_t *dev) 8800 { 8801 if (dev->l2ad_log_entries == 0) { 8802 return (0); 8803 } else { 8804 uint64_t log_entries = write_sz >> SPA_MINBLOCKSHIFT; 8805 8806 uint64_t log_blocks = (log_entries + 8807 dev->l2ad_log_entries - 1) / 8808 dev->l2ad_log_entries; 8809 8810 return (vdev_psize_to_asize(dev->l2ad_vdev, 8811 sizeof (l2arc_log_blk_phys_t)) * log_blocks); 8812 } 8813 } 8814 8815 /* 8816 * Evict buffers from the device write hand to the distance specified in 8817 * bytes. This distance may span populated buffers, it may span nothing. 8818 * This is clearing a region on the L2ARC device ready for writing. 8819 * If the 'all' boolean is set, every buffer is evicted. 8820 */ 8821 static void 8822 l2arc_evict(l2arc_dev_t *dev, uint64_t distance, boolean_t all) 8823 { 8824 list_t *buflist; 8825 arc_buf_hdr_t *hdr, *hdr_prev; 8826 kmutex_t *hash_lock; 8827 uint64_t taddr; 8828 l2arc_lb_ptr_buf_t *lb_ptr_buf, *lb_ptr_buf_prev; 8829 vdev_t *vd = dev->l2ad_vdev; 8830 boolean_t rerun; 8831 8832 buflist = &dev->l2ad_buflist; 8833 8834 top: 8835 rerun = B_FALSE; 8836 if (dev->l2ad_hand + distance > dev->l2ad_end) { 8837 /* 8838 * When there is no space to accommodate upcoming writes, 8839 * evict to the end. Then bump the write and evict hands 8840 * to the start and iterate. This iteration does not 8841 * happen indefinitely as we make sure in 8842 * l2arc_write_size() that when the write hand is reset, 8843 * the write size does not exceed the end of the device. 8844 */ 8845 rerun = B_TRUE; 8846 taddr = dev->l2ad_end; 8847 } else { 8848 taddr = dev->l2ad_hand + distance; 8849 } 8850 DTRACE_PROBE4(l2arc__evict, l2arc_dev_t *, dev, list_t *, buflist, 8851 uint64_t, taddr, boolean_t, all); 8852 8853 if (!all) { 8854 /* 8855 * This check has to be placed after deciding whether to 8856 * iterate (rerun). 8857 */ 8858 if (dev->l2ad_first) { 8859 /* 8860 * This is the first sweep through the device. 
There is 8861 * nothing to evict. We have already trimmed the 8862 * whole device. 8863 */ 8864 goto out; 8865 } else { 8866 /* 8867 * Trim the space to be evicted. 8868 */ 8869 if (vd->vdev_has_trim && dev->l2ad_evict < taddr && 8870 l2arc_trim_ahead > 0) { 8871 /* 8872 * We have to drop the spa_config lock because 8873 * vdev_trim_range() will acquire it. 8874 * l2ad_evict already accounts for the label 8875 * size. To prevent vdev_trim_ranges() from 8876 * adding it again, we subtract it from 8877 * l2ad_evict. 8878 */ 8879 spa_config_exit(dev->l2ad_spa, SCL_L2ARC, dev); 8880 vdev_trim_simple(vd, 8881 dev->l2ad_evict - VDEV_LABEL_START_SIZE, 8882 taddr - dev->l2ad_evict); 8883 spa_config_enter(dev->l2ad_spa, SCL_L2ARC, dev, 8884 RW_READER); 8885 } 8886 8887 /* 8888 * When rebuilding L2ARC, we retrieve the evict hand 8889 * from the header of the device. Of note, l2arc_evict() 8890 * does not actually delete buffers from the cache 8891 * device, but trimming may do so depending on the 8892 * hardware implementation. Thus keeping track of the 8893 * evict hand is useful. 8894 */ 8895 dev->l2ad_evict = MAX(dev->l2ad_evict, taddr); 8896 } 8897 } 8898 8899 retry: 8900 mutex_enter(&dev->l2ad_mtx); 8901 /* 8902 * We have to account for evicted log blocks. Run vdev_space_update() 8903 * on log blocks whose offset (in bytes) is before the evicted offset 8904 * (in bytes) by searching in the list of pointers to log blocks 8905 * present in the L2ARC device. 8906 */ 8907 for (lb_ptr_buf = list_tail(&dev->l2ad_lbptr_list); lb_ptr_buf; 8908 lb_ptr_buf = lb_ptr_buf_prev) { 8909 8910 lb_ptr_buf_prev = list_prev(&dev->l2ad_lbptr_list, lb_ptr_buf); 8911 8912 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 8913 uint64_t asize = L2BLK_GET_PSIZE( 8914 (lb_ptr_buf->lb_ptr)->lbp_prop); 8915 8916 /* 8917 * We don't worry about log blocks left behind (i.e. 8918 * lbp_payload_start < l2ad_hand) because l2arc_write_buffers() 8919 * will never write more than l2arc_evict() evicts. 8920 */ 8921 if (!all && l2arc_log_blkptr_valid(dev, lb_ptr_buf->lb_ptr)) { 8922 break; 8923 } else { 8924 vdev_space_update(vd, -asize, 0, 0); 8925 ARCSTAT_INCR(arcstat_l2_log_blk_asize, -asize); 8926 ARCSTAT_BUMPDOWN(arcstat_l2_log_blk_count); 8927 zfs_refcount_remove_many(&dev->l2ad_lb_asize, asize, 8928 lb_ptr_buf); 8929 zfs_refcount_remove(&dev->l2ad_lb_count, lb_ptr_buf); 8930 list_remove(&dev->l2ad_lbptr_list, lb_ptr_buf); 8931 kmem_free(lb_ptr_buf->lb_ptr, 8932 sizeof (l2arc_log_blkptr_t)); 8933 kmem_free(lb_ptr_buf, sizeof (l2arc_lb_ptr_buf_t)); 8934 } 8935 } 8936 8937 for (hdr = list_tail(buflist); hdr; hdr = hdr_prev) { 8938 hdr_prev = list_prev(buflist, hdr); 8939 8940 ASSERT(!HDR_EMPTY(hdr)); 8941 hash_lock = HDR_LOCK(hdr); 8942 8943 /* 8944 * We cannot use mutex_enter or else we can deadlock 8945 * with l2arc_write_buffers (due to swapping the order 8946 * the hash lock and l2ad_mtx are taken). 8947 */ 8948 if (!mutex_tryenter(hash_lock)) { 8949 /* 8950 * Missed the hash lock. Retry. 8951 */ 8952 ARCSTAT_BUMP(arcstat_l2_evict_lock_retry); 8953 mutex_exit(&dev->l2ad_mtx); 8954 mutex_enter(hash_lock); 8955 mutex_exit(hash_lock); 8956 goto retry; 8957 } 8958 8959 /* 8960 * A header can't be on this list if it doesn't have an L2 header. 8961 */ 8962 ASSERT(HDR_HAS_L2HDR(hdr)); 8963 8964 /* Ensure this header has finished being written.
*/ 8965 ASSERT(!HDR_L2_WRITING(hdr)); 8966 ASSERT(!HDR_L2_WRITE_HEAD(hdr)); 8967 8968 if (!all && (hdr->b_l2hdr.b_daddr >= dev->l2ad_evict || 8969 hdr->b_l2hdr.b_daddr < dev->l2ad_hand)) { 8970 /* 8971 * We've evicted to the target address, 8972 * or the end of the device. 8973 */ 8974 mutex_exit(hash_lock); 8975 break; 8976 } 8977 8978 if (!HDR_HAS_L1HDR(hdr)) { 8979 ASSERT(!HDR_L2_READING(hdr)); 8980 /* 8981 * This doesn't exist in the ARC. Destroy. 8982 * arc_hdr_destroy() will call list_remove() 8983 * and decrement arcstat_l2_lsize. 8984 */ 8985 arc_change_state(arc_anon, hdr); 8986 arc_hdr_destroy(hdr); 8987 } else { 8988 ASSERT(hdr->b_l1hdr.b_state != arc_l2c_only); 8989 ARCSTAT_BUMP(arcstat_l2_evict_l1cached); 8990 /* 8991 * Invalidate issued or about to be issued 8992 * reads, since we may be about to write 8993 * over this location. 8994 */ 8995 if (HDR_L2_READING(hdr)) { 8996 ARCSTAT_BUMP(arcstat_l2_evict_reading); 8997 arc_hdr_set_flags(hdr, ARC_FLAG_L2_EVICTED); 8998 } 8999 9000 arc_hdr_l2hdr_destroy(hdr); 9001 } 9002 mutex_exit(hash_lock); 9003 } 9004 mutex_exit(&dev->l2ad_mtx); 9005 9006 out: 9007 /* 9008 * We need to check if we evict all buffers, otherwise we may iterate 9009 * unnecessarily. 9010 */ 9011 if (!all && rerun) { 9012 /* 9013 * Bump device hand to the device start if it is approaching the 9014 * end. l2arc_evict() has already evicted ahead for this case. 9015 */ 9016 dev->l2ad_hand = dev->l2ad_start; 9017 dev->l2ad_evict = dev->l2ad_start; 9018 dev->l2ad_first = B_FALSE; 9019 goto top; 9020 } 9021 9022 if (!all) { 9023 /* 9024 * In case of cache device removal (all) the following 9025 * assertions may be violated without functional consequences 9026 * as the device is about to be removed. 9027 */ 9028 ASSERT3U(dev->l2ad_hand + distance, <, dev->l2ad_end); 9029 if (!dev->l2ad_first) 9030 ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict); 9031 } 9032 } 9033 9034 /* 9035 * Handle any abd transforms that might be required for writing to the L2ARC. 9036 * If successful, this function will always return an abd with the data 9037 * transformed as it is on disk in a new abd of asize bytes. 9038 */ 9039 static int 9040 l2arc_apply_transforms(spa_t *spa, arc_buf_hdr_t *hdr, uint64_t asize, 9041 abd_t **abd_out) 9042 { 9043 int ret; 9044 void *tmp = NULL; 9045 abd_t *cabd = NULL, *eabd = NULL, *to_write = hdr->b_l1hdr.b_pabd; 9046 enum zio_compress compress = HDR_GET_COMPRESS(hdr); 9047 uint64_t psize = HDR_GET_PSIZE(hdr); 9048 uint64_t size = arc_hdr_size(hdr); 9049 boolean_t ismd = HDR_ISTYPE_METADATA(hdr); 9050 boolean_t bswap = (hdr->b_l1hdr.b_byteswap != DMU_BSWAP_NUMFUNCS); 9051 dsl_crypto_key_t *dck = NULL; 9052 uint8_t mac[ZIO_DATA_MAC_LEN] = { 0 }; 9053 boolean_t no_crypt = B_FALSE; 9054 9055 ASSERT((HDR_GET_COMPRESS(hdr) != ZIO_COMPRESS_OFF && 9056 !HDR_COMPRESSION_ENABLED(hdr)) || 9057 HDR_ENCRYPTED(hdr) || HDR_SHARED_DATA(hdr) || psize != asize); 9058 ASSERT3U(psize, <=, asize); 9059 9060 /* 9061 * If this data simply needs its own buffer, we simply allocate it 9062 * and copy the data. This may be done to eliminate a dependency on a 9063 * shared buffer or to reallocate the buffer to match asize. 
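 *
 * Both of these cases use the same copy-and-pad pattern (sketch; the
 * branches below differ only in the source abd):
 *
 *	to_write = abd_alloc_for_io(asize, ismd);
 *	abd_copy(to_write, source_abd, psize);
 *	if (psize != asize)
 *		abd_zero_off(to_write, psize, asize - psize);
 *
 * i.e. the payload is copied and the tail of the asize-sized allocation is
 * zero-filled so that exactly asize bytes are written to the device.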
9064 */ 9065 if (HDR_HAS_RABD(hdr) && asize != psize) { 9066 ASSERT3U(asize, >=, psize); 9067 to_write = abd_alloc_for_io(asize, ismd); 9068 abd_copy(to_write, hdr->b_crypt_hdr.b_rabd, psize); 9069 if (psize != asize) 9070 abd_zero_off(to_write, psize, asize - psize); 9071 goto out; 9072 } 9073 9074 if ((compress == ZIO_COMPRESS_OFF || HDR_COMPRESSION_ENABLED(hdr)) && 9075 !HDR_ENCRYPTED(hdr)) { 9076 ASSERT3U(size, ==, psize); 9077 to_write = abd_alloc_for_io(asize, ismd); 9078 abd_copy(to_write, hdr->b_l1hdr.b_pabd, size); 9079 if (size != asize) 9080 abd_zero_off(to_write, size, asize - size); 9081 goto out; 9082 } 9083 9084 if (compress != ZIO_COMPRESS_OFF && !HDR_COMPRESSION_ENABLED(hdr)) { 9085 /* 9086 * In some cases, we can wind up with size > asize, so 9087 * we need to opt for the larger allocation option here. 9088 * 9089 * (We also need abd_return_buf_copy in all cases because 9090 * it's an ASSERT() to modify the buffer before returning it 9091 * with arc_return_buf(), and all the compressors 9092 * write things before deciding to fail compression in nearly 9093 * every case.) 9094 */ 9095 uint64_t bufsize = MAX(size, asize); 9096 cabd = abd_alloc_for_io(bufsize, ismd); 9097 tmp = abd_borrow_buf(cabd, bufsize); 9098 9099 psize = zio_compress_data(compress, to_write, &tmp, size, 9100 hdr->b_complevel); 9101 9102 if (psize >= asize) { 9103 psize = HDR_GET_PSIZE(hdr); 9104 abd_return_buf_copy(cabd, tmp, bufsize); 9105 HDR_SET_COMPRESS(hdr, ZIO_COMPRESS_OFF); 9106 to_write = cabd; 9107 abd_copy(to_write, hdr->b_l1hdr.b_pabd, psize); 9108 if (psize != asize) 9109 abd_zero_off(to_write, psize, asize - psize); 9110 goto encrypt; 9111 } 9112 ASSERT3U(psize, <=, HDR_GET_PSIZE(hdr)); 9113 if (psize < asize) 9114 memset((char *)tmp + psize, 0, bufsize - psize); 9115 psize = HDR_GET_PSIZE(hdr); 9116 abd_return_buf_copy(cabd, tmp, bufsize); 9117 to_write = cabd; 9118 } 9119 9120 encrypt: 9121 if (HDR_ENCRYPTED(hdr)) { 9122 eabd = abd_alloc_for_io(asize, ismd); 9123 9124 /* 9125 * If the dataset was disowned before the buffer 9126 * made it to this point, the key to re-encrypt 9127 * it won't be available. In this case we simply 9128 * won't write the buffer to the L2ARC. 
9129 */ 9130 ret = spa_keystore_lookup_key(spa, hdr->b_crypt_hdr.b_dsobj, 9131 FTAG, &dck); 9132 if (ret != 0) 9133 goto error; 9134 9135 ret = zio_do_crypt_abd(B_TRUE, &dck->dck_key, 9136 hdr->b_crypt_hdr.b_ot, bswap, hdr->b_crypt_hdr.b_salt, 9137 hdr->b_crypt_hdr.b_iv, mac, psize, to_write, eabd, 9138 &no_crypt); 9139 if (ret != 0) 9140 goto error; 9141 9142 if (no_crypt) 9143 abd_copy(eabd, to_write, psize); 9144 9145 if (psize != asize) 9146 abd_zero_off(eabd, psize, asize - psize); 9147 9148 /* assert that the MAC we got here matches the one we saved */ 9149 ASSERT0(memcmp(mac, hdr->b_crypt_hdr.b_mac, ZIO_DATA_MAC_LEN)); 9150 spa_keystore_dsl_key_rele(spa, dck, FTAG); 9151 9152 if (to_write == cabd) 9153 abd_free(cabd); 9154 9155 to_write = eabd; 9156 } 9157 9158 out: 9159 ASSERT3P(to_write, !=, hdr->b_l1hdr.b_pabd); 9160 *abd_out = to_write; 9161 return (0); 9162 9163 error: 9164 if (dck != NULL) 9165 spa_keystore_dsl_key_rele(spa, dck, FTAG); 9166 if (cabd != NULL) 9167 abd_free(cabd); 9168 if (eabd != NULL) 9169 abd_free(eabd); 9170 9171 *abd_out = NULL; 9172 return (ret); 9173 } 9174 9175 static void 9176 l2arc_blk_fetch_done(zio_t *zio) 9177 { 9178 l2arc_read_callback_t *cb; 9179 9180 cb = zio->io_private; 9181 if (cb->l2rcb_abd != NULL) 9182 abd_free(cb->l2rcb_abd); 9183 kmem_free(cb, sizeof (l2arc_read_callback_t)); 9184 } 9185 9186 /* 9187 * Find and write ARC buffers to the L2ARC device. 9188 * 9189 * An ARC_FLAG_L2_WRITING flag is set so that the L2ARC buffers are not valid 9190 * for reading until they have completed writing. 9191 * The headroom_boost is an in-out parameter used to maintain headroom boost 9192 * state between calls to this function. 9193 * 9194 * Returns the number of bytes actually written (which may be smaller than 9195 * the delta by which the device hand has changed due to alignment and the 9196 * writing of log blocks). 9197 */ 9198 static uint64_t 9199 l2arc_write_buffers(spa_t *spa, l2arc_dev_t *dev, uint64_t target_sz) 9200 { 9201 arc_buf_hdr_t *hdr, *hdr_prev, *head; 9202 uint64_t write_asize, write_psize, write_lsize, headroom; 9203 boolean_t full; 9204 l2arc_write_callback_t *cb = NULL; 9205 zio_t *pio, *wzio; 9206 uint64_t guid = spa_load_guid(spa); 9207 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9208 9209 ASSERT3P(dev->l2ad_vdev, !=, NULL); 9210 9211 pio = NULL; 9212 write_lsize = write_asize = write_psize = 0; 9213 full = B_FALSE; 9214 head = kmem_cache_alloc(hdr_l2only_cache, KM_PUSHPAGE); 9215 arc_hdr_set_flags(head, ARC_FLAG_L2_WRITE_HEAD | ARC_FLAG_HAS_L2HDR); 9216 9217 /* 9218 * Copy buffers for L2ARC writing. 9219 */ 9220 for (int pass = 0; pass < L2ARC_FEED_TYPES; pass++) { 9221 /* 9222 * If pass == 1 or 3, we cache MRU metadata and data 9223 * respectively. 9224 */ 9225 if (l2arc_mfuonly) { 9226 if (pass == 1 || pass == 3) 9227 continue; 9228 } 9229 9230 multilist_sublist_t *mls = l2arc_sublist_lock(pass); 9231 uint64_t passed_sz = 0; 9232 9233 VERIFY3P(mls, !=, NULL); 9234 9235 /* 9236 * L2ARC fast warmup. 9237 * 9238 * Until the ARC is warm and starts to evict, read from the 9239 * head of the ARC lists rather than the tail. 
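 *
 * The scan depth for each pass is computed just below, restated here for
 * clarity:
 *
 *	headroom = target_sz * l2arc_headroom;
 *	if (zfs_compressed_arc_enabled)
 *		headroom = (headroom * l2arc_headroom_boost) / 100;
 *
 * The selected sublist is then walked from the head (cold ARC) or from the
 * tail (warm ARC).  The pass ends when the cumulative lsize scanned exceeds
 * headroom (unless l2arc_headroom is 0, which disables this limit), or when
 * adding another buffer would overflow target_sz.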
9240 */ 9241 if (arc_warm == B_FALSE) 9242 hdr = multilist_sublist_head(mls); 9243 else 9244 hdr = multilist_sublist_tail(mls); 9245 9246 headroom = target_sz * l2arc_headroom; 9247 if (zfs_compressed_arc_enabled) 9248 headroom = (headroom * l2arc_headroom_boost) / 100; 9249 9250 for (; hdr; hdr = hdr_prev) { 9251 kmutex_t *hash_lock; 9252 abd_t *to_write = NULL; 9253 9254 if (arc_warm == B_FALSE) 9255 hdr_prev = multilist_sublist_next(mls, hdr); 9256 else 9257 hdr_prev = multilist_sublist_prev(mls, hdr); 9258 9259 hash_lock = HDR_LOCK(hdr); 9260 if (!mutex_tryenter(hash_lock)) { 9261 /* 9262 * Skip this buffer rather than waiting. 9263 */ 9264 continue; 9265 } 9266 9267 passed_sz += HDR_GET_LSIZE(hdr); 9268 if (l2arc_headroom != 0 && passed_sz > headroom) { 9269 /* 9270 * Searched too far. 9271 */ 9272 mutex_exit(hash_lock); 9273 break; 9274 } 9275 9276 if (!l2arc_write_eligible(guid, hdr)) { 9277 mutex_exit(hash_lock); 9278 continue; 9279 } 9280 9281 ASSERT(HDR_HAS_L1HDR(hdr)); 9282 9283 ASSERT3U(HDR_GET_PSIZE(hdr), >, 0); 9284 ASSERT3U(arc_hdr_size(hdr), >, 0); 9285 ASSERT(hdr->b_l1hdr.b_pabd != NULL || 9286 HDR_HAS_RABD(hdr)); 9287 uint64_t psize = HDR_GET_PSIZE(hdr); 9288 uint64_t asize = vdev_psize_to_asize(dev->l2ad_vdev, 9289 psize); 9290 9291 /* 9292 * If the allocated size of this buffer plus the max 9293 * size for the pending log block exceeds the evicted 9294 * target size, terminate writing buffers for this run. 9295 */ 9296 if (write_asize + asize + 9297 sizeof (l2arc_log_blk_phys_t) > target_sz) { 9298 full = B_TRUE; 9299 mutex_exit(hash_lock); 9300 break; 9301 } 9302 9303 /* 9304 * We rely on the L1 portion of the header below, so 9305 * it's invalid for this header to have been evicted out 9306 * of the ghost cache, prior to being written out. The 9307 * ARC_FLAG_L2_WRITING bit ensures this won't happen. 9308 */ 9309 arc_hdr_set_flags(hdr, ARC_FLAG_L2_WRITING); 9310 9311 /* 9312 * If this header has b_rabd, we can use this since it 9313 * must always match the data exactly as it exists on 9314 * disk. Otherwise, the L2ARC can normally use the 9315 * hdr's data, but if we're sharing data between the 9316 * hdr and one of its bufs, L2ARC needs its own copy of 9317 * the data so that the ZIO below can't race with the 9318 * buf consumer. To ensure that this copy will be 9319 * available for the lifetime of the ZIO and be cleaned 9320 * up afterwards, we add it to the l2arc_free_on_write 9321 * queue. If we need to apply any transforms to the 9322 * data (compression, encryption) we will also need the 9323 * extra buffer. 9324 */ 9325 if (HDR_HAS_RABD(hdr) && psize == asize) { 9326 to_write = hdr->b_crypt_hdr.b_rabd; 9327 } else if ((HDR_COMPRESSION_ENABLED(hdr) || 9328 HDR_GET_COMPRESS(hdr) == ZIO_COMPRESS_OFF) && 9329 !HDR_ENCRYPTED(hdr) && !HDR_SHARED_DATA(hdr) && 9330 psize == asize) { 9331 to_write = hdr->b_l1hdr.b_pabd; 9332 } else { 9333 int ret; 9334 arc_buf_contents_t type = arc_buf_type(hdr); 9335 9336 ret = l2arc_apply_transforms(spa, hdr, asize, 9337 &to_write); 9338 if (ret != 0) { 9339 arc_hdr_clear_flags(hdr, 9340 ARC_FLAG_L2_WRITING); 9341 mutex_exit(hash_lock); 9342 continue; 9343 } 9344 9345 l2arc_free_abd_on_write(to_write, asize, type); 9346 } 9347 9348 if (pio == NULL) { 9349 /* 9350 * Insert a dummy header on the buflist so 9351 * l2arc_write_done() can find where the 9352 * write buffers begin without searching. 
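 *
 * The dummy header is a sentinel: it carries no data, only the
 * ARC_FLAG_L2_WRITE_HEAD and ARC_FLAG_HAS_L2HDR flags, and sits at the head
 * of l2ad_buflist ahead of every header issued by this write.  On
 * completion, l2arc_write_done() walks backwards from it, e.g.:
 *
 *	for (hdr = list_prev(buflist, head); hdr;
 *	    hdr = list_prev(buflist, hdr))
 *		... each hdr visited was queued by this write ...
 *
 * so only the headers belonging to this write are revisited.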
9353 */ 9354 mutex_enter(&dev->l2ad_mtx); 9355 list_insert_head(&dev->l2ad_buflist, head); 9356 mutex_exit(&dev->l2ad_mtx); 9357 9358 cb = kmem_alloc( 9359 sizeof (l2arc_write_callback_t), KM_SLEEP); 9360 cb->l2wcb_dev = dev; 9361 cb->l2wcb_head = head; 9362 /* 9363 * Create a list to save allocated abd buffers 9364 * for l2arc_log_blk_commit(). 9365 */ 9366 list_create(&cb->l2wcb_abd_list, 9367 sizeof (l2arc_lb_abd_buf_t), 9368 offsetof(l2arc_lb_abd_buf_t, node)); 9369 pio = zio_root(spa, l2arc_write_done, cb, 9370 ZIO_FLAG_CANFAIL); 9371 } 9372 9373 hdr->b_l2hdr.b_dev = dev; 9374 hdr->b_l2hdr.b_hits = 0; 9375 9376 hdr->b_l2hdr.b_daddr = dev->l2ad_hand; 9377 hdr->b_l2hdr.b_arcs_state = 9378 hdr->b_l1hdr.b_state->arcs_state; 9379 arc_hdr_set_flags(hdr, ARC_FLAG_HAS_L2HDR); 9380 9381 mutex_enter(&dev->l2ad_mtx); 9382 list_insert_head(&dev->l2ad_buflist, hdr); 9383 mutex_exit(&dev->l2ad_mtx); 9384 9385 (void) zfs_refcount_add_many(&dev->l2ad_alloc, 9386 arc_hdr_size(hdr), hdr); 9387 9388 wzio = zio_write_phys(pio, dev->l2ad_vdev, 9389 hdr->b_l2hdr.b_daddr, asize, to_write, 9390 ZIO_CHECKSUM_OFF, NULL, hdr, 9391 ZIO_PRIORITY_ASYNC_WRITE, 9392 ZIO_FLAG_CANFAIL, B_FALSE); 9393 9394 write_lsize += HDR_GET_LSIZE(hdr); 9395 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, 9396 zio_t *, wzio); 9397 9398 write_psize += psize; 9399 write_asize += asize; 9400 dev->l2ad_hand += asize; 9401 l2arc_hdr_arcstats_increment(hdr); 9402 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 9403 9404 mutex_exit(hash_lock); 9405 9406 /* 9407 * Append buf info to current log and commit if full. 9408 * arcstat_l2_{size,asize} kstats are updated 9409 * internally. 9410 */ 9411 if (l2arc_log_blk_insert(dev, hdr)) { 9412 /* 9413 * l2ad_hand will be adjusted in 9414 * l2arc_log_blk_commit(). 9415 */ 9416 write_asize += 9417 l2arc_log_blk_commit(dev, pio, cb); 9418 } 9419 9420 zio_nowait(wzio); 9421 } 9422 9423 multilist_sublist_unlock(mls); 9424 9425 if (full == B_TRUE) 9426 break; 9427 } 9428 9429 /* No buffers selected for writing? */ 9430 if (pio == NULL) { 9431 ASSERT0(write_lsize); 9432 ASSERT(!HDR_HAS_L1HDR(head)); 9433 kmem_cache_free(hdr_l2only_cache, head); 9434 9435 /* 9436 * Although we did not write any buffers l2ad_evict may 9437 * have advanced. 9438 */ 9439 if (dev->l2ad_evict != l2dhdr->dh_evict) 9440 l2arc_dev_hdr_update(dev); 9441 9442 return (0); 9443 } 9444 9445 if (!dev->l2ad_first) 9446 ASSERT3U(dev->l2ad_hand, <=, dev->l2ad_evict); 9447 9448 ASSERT3U(write_asize, <=, target_sz); 9449 ARCSTAT_BUMP(arcstat_l2_writes_sent); 9450 ARCSTAT_INCR(arcstat_l2_write_bytes, write_psize); 9451 9452 dev->l2ad_writing = B_TRUE; 9453 (void) zio_wait(pio); 9454 dev->l2ad_writing = B_FALSE; 9455 9456 /* 9457 * Update the device header after the zio completes as 9458 * l2arc_write_done() may have updated the memory holding the log block 9459 * pointers in the device header. 9460 */ 9461 l2arc_dev_hdr_update(dev); 9462 9463 return (write_asize); 9464 } 9465 9466 static boolean_t 9467 l2arc_hdr_limit_reached(void) 9468 { 9469 int64_t s = aggsum_upper_bound(&arc_sums.arcstat_l2_hdr_size); 9470 9471 return (arc_reclaim_needed() || 9472 (s > (arc_warm ? arc_c : arc_c_max) * l2arc_meta_percent / 100)); 9473 } 9474 9475 /* 9476 * This thread feeds the L2ARC at regular intervals. This is the beating 9477 * heart of the L2ARC. 
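 *
 * One iteration of the feed loop, condensed (locking, CPR handling and
 * error paths are omitted; see the function below):
 *
 *	cv_timedwait_idle(&l2arc_feed_thr_cv, ..., next);
 *	dev = l2arc_dev_get_next();		(round-robin device selection)
 *	if (dev == NULL || !spa_writeable(spa) || l2arc_hdr_limit_reached())
 *		continue;
 *	size = l2arc_write_size(dev);
 *	l2arc_evict(dev, size, B_FALSE);	(clear space ahead of the hand)
 *	wrote = l2arc_write_buffers(spa, dev, size);
 *	next = l2arc_write_interval(begin, size, wrote);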
9478 */ 9479 static __attribute__((noreturn)) void 9480 l2arc_feed_thread(void *unused) 9481 { 9482 (void) unused; 9483 callb_cpr_t cpr; 9484 l2arc_dev_t *dev; 9485 spa_t *spa; 9486 uint64_t size, wrote; 9487 clock_t begin, next = ddi_get_lbolt(); 9488 fstrans_cookie_t cookie; 9489 9490 CALLB_CPR_INIT(&cpr, &l2arc_feed_thr_lock, callb_generic_cpr, FTAG); 9491 9492 mutex_enter(&l2arc_feed_thr_lock); 9493 9494 cookie = spl_fstrans_mark(); 9495 while (l2arc_thread_exit == 0) { 9496 CALLB_CPR_SAFE_BEGIN(&cpr); 9497 (void) cv_timedwait_idle(&l2arc_feed_thr_cv, 9498 &l2arc_feed_thr_lock, next); 9499 CALLB_CPR_SAFE_END(&cpr, &l2arc_feed_thr_lock); 9500 next = ddi_get_lbolt() + hz; 9501 9502 /* 9503 * Quick check for L2ARC devices. 9504 */ 9505 mutex_enter(&l2arc_dev_mtx); 9506 if (l2arc_ndev == 0) { 9507 mutex_exit(&l2arc_dev_mtx); 9508 continue; 9509 } 9510 mutex_exit(&l2arc_dev_mtx); 9511 begin = ddi_get_lbolt(); 9512 9513 /* 9514 * This selects the next l2arc device to write to, and in 9515 * doing so the next spa to feed from: dev->l2ad_spa. This 9516 * will return NULL if there are now no l2arc devices or if 9517 * they are all faulted. 9518 * 9519 * If a device is returned, its spa's config lock is also 9520 * held to prevent device removal. l2arc_dev_get_next() 9521 * will grab and release l2arc_dev_mtx. 9522 */ 9523 if ((dev = l2arc_dev_get_next()) == NULL) 9524 continue; 9525 9526 spa = dev->l2ad_spa; 9527 ASSERT3P(spa, !=, NULL); 9528 9529 /* 9530 * If the pool is read-only then force the feed thread to 9531 * sleep a little longer. 9532 */ 9533 if (!spa_writeable(spa)) { 9534 next = ddi_get_lbolt() + 5 * l2arc_feed_secs * hz; 9535 spa_config_exit(spa, SCL_L2ARC, dev); 9536 continue; 9537 } 9538 9539 /* 9540 * Avoid contributing to memory pressure. 9541 */ 9542 if (l2arc_hdr_limit_reached()) { 9543 ARCSTAT_BUMP(arcstat_l2_abort_lowmem); 9544 spa_config_exit(spa, SCL_L2ARC, dev); 9545 continue; 9546 } 9547 9548 ARCSTAT_BUMP(arcstat_l2_feeds); 9549 9550 size = l2arc_write_size(dev); 9551 9552 /* 9553 * Evict L2ARC buffers that will be overwritten. 9554 */ 9555 l2arc_evict(dev, size, B_FALSE); 9556 9557 /* 9558 * Write ARC buffers. 9559 */ 9560 wrote = l2arc_write_buffers(spa, dev, size); 9561 9562 /* 9563 * Calculate interval between writes. 9564 */ 9565 next = l2arc_write_interval(begin, size, wrote); 9566 spa_config_exit(spa, SCL_L2ARC, dev); 9567 } 9568 spl_fstrans_unmark(cookie); 9569 9570 l2arc_thread_exit = 0; 9571 cv_broadcast(&l2arc_feed_thr_cv); 9572 CALLB_CPR_EXIT(&cpr); /* drops l2arc_feed_thr_lock */ 9573 thread_exit(); 9574 } 9575 9576 boolean_t 9577 l2arc_vdev_present(vdev_t *vd) 9578 { 9579 return (l2arc_vdev_get(vd) != NULL); 9580 } 9581 9582 /* 9583 * Returns the l2arc_dev_t associated with a particular vdev_t or NULL if 9584 * the vdev_t isn't an L2ARC device. 9585 */ 9586 l2arc_dev_t * 9587 l2arc_vdev_get(vdev_t *vd) 9588 { 9589 l2arc_dev_t *dev; 9590 9591 mutex_enter(&l2arc_dev_mtx); 9592 for (dev = list_head(l2arc_dev_list); dev != NULL; 9593 dev = list_next(l2arc_dev_list, dev)) { 9594 if (dev->l2ad_vdev == vd) 9595 break; 9596 } 9597 mutex_exit(&l2arc_dev_mtx); 9598 9599 return (dev); 9600 } 9601 9602 static void 9603 l2arc_rebuild_dev(l2arc_dev_t *dev, boolean_t reopen) 9604 { 9605 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9606 uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 9607 spa_t *spa = dev->l2ad_spa; 9608 9609 /* 9610 * The L2ARC has to hold at least the payload of one log block for 9611 * them to be restored (persistent L2ARC). 
The payload of a log block 9612 * depends on the amount of its log entries. We always write log blocks 9613 * with 1022 entries. How many of them are committed or restored depends 9614 * on the size of the L2ARC device. Thus the maximum payload of 9615 * one log block is 1022 * SPA_MAXBLOCKSIZE = 16GB. If the L2ARC device 9616 * is less than that, we reduce the amount of committed and restored 9617 * log entries per block so as to enable persistence. 9618 */ 9619 if (dev->l2ad_end < l2arc_rebuild_blocks_min_l2size) { 9620 dev->l2ad_log_entries = 0; 9621 } else { 9622 dev->l2ad_log_entries = MIN((dev->l2ad_end - 9623 dev->l2ad_start) >> SPA_MAXBLOCKSHIFT, 9624 L2ARC_LOG_BLK_MAX_ENTRIES); 9625 } 9626 9627 /* 9628 * Read the device header, if an error is returned do not rebuild L2ARC. 9629 */ 9630 if (l2arc_dev_hdr_read(dev) == 0 && dev->l2ad_log_entries > 0) { 9631 /* 9632 * If we are onlining a cache device (vdev_reopen) that was 9633 * still present (l2arc_vdev_present()) and rebuild is enabled, 9634 * we should evict all ARC buffers and pointers to log blocks 9635 * and reclaim their space before restoring its contents to 9636 * L2ARC. 9637 */ 9638 if (reopen) { 9639 if (!l2arc_rebuild_enabled) { 9640 return; 9641 } else { 9642 l2arc_evict(dev, 0, B_TRUE); 9643 /* start a new log block */ 9644 dev->l2ad_log_ent_idx = 0; 9645 dev->l2ad_log_blk_payload_asize = 0; 9646 dev->l2ad_log_blk_payload_start = 0; 9647 } 9648 } 9649 /* 9650 * Just mark the device as pending for a rebuild. We won't 9651 * be starting a rebuild in line here as it would block pool 9652 * import. Instead spa_load_impl will hand that off to an 9653 * async task which will call l2arc_spa_rebuild_start. 9654 */ 9655 dev->l2ad_rebuild = B_TRUE; 9656 } else if (spa_writeable(spa)) { 9657 /* 9658 * In this case TRIM the whole device if l2arc_trim_ahead > 0, 9659 * otherwise create a new header. We zero out the memory holding 9660 * the header to reset dh_start_lbps. If we TRIM the whole 9661 * device the new header will be written by 9662 * vdev_trim_l2arc_thread() at the end of the TRIM to update the 9663 * trim_state in the header too. When reading the header, if 9664 * trim_state is not VDEV_TRIM_COMPLETE and l2arc_trim_ahead > 0 9665 * we opt to TRIM the whole device again. 9666 */ 9667 if (l2arc_trim_ahead > 0) { 9668 dev->l2ad_trim_all = B_TRUE; 9669 } else { 9670 memset(l2dhdr, 0, l2dhdr_asize); 9671 l2arc_dev_hdr_update(dev); 9672 } 9673 } 9674 } 9675 9676 /* 9677 * Add a vdev for use by the L2ARC. By this point the spa has already 9678 * validated the vdev and opened it. 9679 */ 9680 void 9681 l2arc_add_vdev(spa_t *spa, vdev_t *vd) 9682 { 9683 l2arc_dev_t *adddev; 9684 uint64_t l2dhdr_asize; 9685 9686 ASSERT(!l2arc_vdev_present(vd)); 9687 9688 /* 9689 * Create a new l2arc device entry. 
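 *
 * The address space of the cache device ends up laid out as follows (set up
 * by the code just below, not to scale):
 *
 *	[ vdev labels ][ device header ][ l2ad_start ........ l2ad_end ]
 *	  VDEV_LABEL_    l2ad_dev_hdr_asize =      rotor space for buffers
 *	  START_SIZE     MAX(sizeof (dev hdr),     and log blocks; l2ad_hand
 *	                 1 << vdev_ashift)         and l2ad_evict both start
 *	                                           at l2ad_start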
9690 */ 9691 adddev = vmem_zalloc(sizeof (l2arc_dev_t), KM_SLEEP); 9692 adddev->l2ad_spa = spa; 9693 adddev->l2ad_vdev = vd; 9694 /* leave extra size for an l2arc device header */ 9695 l2dhdr_asize = adddev->l2ad_dev_hdr_asize = 9696 MAX(sizeof (*adddev->l2ad_dev_hdr), 1 << vd->vdev_ashift); 9697 adddev->l2ad_start = VDEV_LABEL_START_SIZE + l2dhdr_asize; 9698 adddev->l2ad_end = VDEV_LABEL_START_SIZE + vdev_get_min_asize(vd); 9699 ASSERT3U(adddev->l2ad_start, <, adddev->l2ad_end); 9700 adddev->l2ad_hand = adddev->l2ad_start; 9701 adddev->l2ad_evict = adddev->l2ad_start; 9702 adddev->l2ad_first = B_TRUE; 9703 adddev->l2ad_writing = B_FALSE; 9704 adddev->l2ad_trim_all = B_FALSE; 9705 list_link_init(&adddev->l2ad_node); 9706 adddev->l2ad_dev_hdr = kmem_zalloc(l2dhdr_asize, KM_SLEEP); 9707 9708 mutex_init(&adddev->l2ad_mtx, NULL, MUTEX_DEFAULT, NULL); 9709 /* 9710 * This is a list of all ARC buffers that are still valid on the 9711 * device. 9712 */ 9713 list_create(&adddev->l2ad_buflist, sizeof (arc_buf_hdr_t), 9714 offsetof(arc_buf_hdr_t, b_l2hdr.b_l2node)); 9715 9716 /* 9717 * This is a list of pointers to log blocks that are still present 9718 * on the device. 9719 */ 9720 list_create(&adddev->l2ad_lbptr_list, sizeof (l2arc_lb_ptr_buf_t), 9721 offsetof(l2arc_lb_ptr_buf_t, node)); 9722 9723 vdev_space_update(vd, 0, 0, adddev->l2ad_end - adddev->l2ad_hand); 9724 zfs_refcount_create(&adddev->l2ad_alloc); 9725 zfs_refcount_create(&adddev->l2ad_lb_asize); 9726 zfs_refcount_create(&adddev->l2ad_lb_count); 9727 9728 /* 9729 * Decide if dev is eligible for L2ARC rebuild or whole device 9730 * trimming. This has to happen before the device is added in the 9731 * cache device list and l2arc_dev_mtx is released. Otherwise 9732 * l2arc_feed_thread() might already start writing on the 9733 * device. 9734 */ 9735 l2arc_rebuild_dev(adddev, B_FALSE); 9736 9737 /* 9738 * Add device to global list 9739 */ 9740 mutex_enter(&l2arc_dev_mtx); 9741 list_insert_head(l2arc_dev_list, adddev); 9742 atomic_inc_64(&l2arc_ndev); 9743 mutex_exit(&l2arc_dev_mtx); 9744 } 9745 9746 /* 9747 * Decide if a vdev is eligible for L2ARC rebuild, called from vdev_reopen() 9748 * in case of onlining a cache device. 9749 */ 9750 void 9751 l2arc_rebuild_vdev(vdev_t *vd, boolean_t reopen) 9752 { 9753 l2arc_dev_t *dev = NULL; 9754 9755 dev = l2arc_vdev_get(vd); 9756 ASSERT3P(dev, !=, NULL); 9757 9758 /* 9759 * In contrast to l2arc_add_vdev() we do not have to worry about 9760 * l2arc_feed_thread() invalidating previous content when onlining a 9761 * cache device. The device parameters (l2ad*) are not cleared when 9762 * offlining the device and writing new buffers will not invalidate 9763 * all previous content. In worst case only buffers that have not had 9764 * their log block written to the device will be lost. 9765 * When onlining the cache device (ie offline->online without exporting 9766 * the pool in between) this happens: 9767 * vdev_reopen() -> vdev_open() -> l2arc_rebuild_vdev() 9768 * | | 9769 * vdev_is_dead() = B_FALSE l2ad_rebuild = B_TRUE 9770 * During the time where vdev_is_dead = B_FALSE and until l2ad_rebuild 9771 * is set to B_TRUE we might write additional buffers to the device. 9772 */ 9773 l2arc_rebuild_dev(dev, reopen); 9774 } 9775 9776 /* 9777 * Remove a vdev from the L2ARC. 
9778 */ 9779 void 9780 l2arc_remove_vdev(vdev_t *vd) 9781 { 9782 l2arc_dev_t *remdev = NULL; 9783 9784 /* 9785 * Find the device by vdev 9786 */ 9787 remdev = l2arc_vdev_get(vd); 9788 ASSERT3P(remdev, !=, NULL); 9789 9790 /* 9791 * Cancel any ongoing or scheduled rebuild. 9792 */ 9793 mutex_enter(&l2arc_rebuild_thr_lock); 9794 if (remdev->l2ad_rebuild_began == B_TRUE) { 9795 remdev->l2ad_rebuild_cancel = B_TRUE; 9796 while (remdev->l2ad_rebuild == B_TRUE) 9797 cv_wait(&l2arc_rebuild_thr_cv, &l2arc_rebuild_thr_lock); 9798 } 9799 mutex_exit(&l2arc_rebuild_thr_lock); 9800 9801 /* 9802 * Remove device from global list 9803 */ 9804 mutex_enter(&l2arc_dev_mtx); 9805 list_remove(l2arc_dev_list, remdev); 9806 l2arc_dev_last = NULL; /* may have been invalidated */ 9807 atomic_dec_64(&l2arc_ndev); 9808 mutex_exit(&l2arc_dev_mtx); 9809 9810 /* 9811 * Clear all buflists and ARC references. L2ARC device flush. 9812 */ 9813 l2arc_evict(remdev, 0, B_TRUE); 9814 list_destroy(&remdev->l2ad_buflist); 9815 ASSERT(list_is_empty(&remdev->l2ad_lbptr_list)); 9816 list_destroy(&remdev->l2ad_lbptr_list); 9817 mutex_destroy(&remdev->l2ad_mtx); 9818 zfs_refcount_destroy(&remdev->l2ad_alloc); 9819 zfs_refcount_destroy(&remdev->l2ad_lb_asize); 9820 zfs_refcount_destroy(&remdev->l2ad_lb_count); 9821 kmem_free(remdev->l2ad_dev_hdr, remdev->l2ad_dev_hdr_asize); 9822 vmem_free(remdev, sizeof (l2arc_dev_t)); 9823 } 9824 9825 void 9826 l2arc_init(void) 9827 { 9828 l2arc_thread_exit = 0; 9829 l2arc_ndev = 0; 9830 9831 mutex_init(&l2arc_feed_thr_lock, NULL, MUTEX_DEFAULT, NULL); 9832 cv_init(&l2arc_feed_thr_cv, NULL, CV_DEFAULT, NULL); 9833 mutex_init(&l2arc_rebuild_thr_lock, NULL, MUTEX_DEFAULT, NULL); 9834 cv_init(&l2arc_rebuild_thr_cv, NULL, CV_DEFAULT, NULL); 9835 mutex_init(&l2arc_dev_mtx, NULL, MUTEX_DEFAULT, NULL); 9836 mutex_init(&l2arc_free_on_write_mtx, NULL, MUTEX_DEFAULT, NULL); 9837 9838 l2arc_dev_list = &L2ARC_dev_list; 9839 l2arc_free_on_write = &L2ARC_free_on_write; 9840 list_create(l2arc_dev_list, sizeof (l2arc_dev_t), 9841 offsetof(l2arc_dev_t, l2ad_node)); 9842 list_create(l2arc_free_on_write, sizeof (l2arc_data_free_t), 9843 offsetof(l2arc_data_free_t, l2df_list_node)); 9844 } 9845 9846 void 9847 l2arc_fini(void) 9848 { 9849 mutex_destroy(&l2arc_feed_thr_lock); 9850 cv_destroy(&l2arc_feed_thr_cv); 9851 mutex_destroy(&l2arc_rebuild_thr_lock); 9852 cv_destroy(&l2arc_rebuild_thr_cv); 9853 mutex_destroy(&l2arc_dev_mtx); 9854 mutex_destroy(&l2arc_free_on_write_mtx); 9855 9856 list_destroy(l2arc_dev_list); 9857 list_destroy(l2arc_free_on_write); 9858 } 9859 9860 void 9861 l2arc_start(void) 9862 { 9863 if (!(spa_mode_global & SPA_MODE_WRITE)) 9864 return; 9865 9866 (void) thread_create(NULL, 0, l2arc_feed_thread, NULL, 0, &p0, 9867 TS_RUN, defclsyspri); 9868 } 9869 9870 void 9871 l2arc_stop(void) 9872 { 9873 if (!(spa_mode_global & SPA_MODE_WRITE)) 9874 return; 9875 9876 mutex_enter(&l2arc_feed_thr_lock); 9877 cv_signal(&l2arc_feed_thr_cv); /* kick thread out of startup */ 9878 l2arc_thread_exit = 1; 9879 while (l2arc_thread_exit != 0) 9880 cv_wait(&l2arc_feed_thr_cv, &l2arc_feed_thr_lock); 9881 mutex_exit(&l2arc_feed_thr_lock); 9882 } 9883 9884 /* 9885 * Punches out rebuild threads for the L2ARC devices in a spa. This should 9886 * be called after pool import from the spa async thread, since starting 9887 * these threads directly from spa_import() will make them part of the 9888 * "zpool import" context and delay process exit (and thus pool import). 
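 *
 * Per cache vdev, the work amounts to (sketch of the loop below):
 *
 *	dev = l2arc_vdev_get(sav_vdevs[i]);	(NULL if the vdev is UNAVAIL)
 *	if (dev != NULL && dev->l2ad_rebuild && !dev->l2ad_rebuild_cancel) {
 *		dev->l2ad_rebuild_began = B_TRUE;
 *		thread_create(..., l2arc_dev_rebuild_thread, dev, ...);
 *	}
 *
 * The rebuild thread runs l2arc_rebuild() and clears l2ad_rebuild and
 * l2ad_rebuild_began when it finishes.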
9889 */ 9890 void 9891 l2arc_spa_rebuild_start(spa_t *spa) 9892 { 9893 ASSERT(MUTEX_HELD(&spa_namespace_lock)); 9894 9895 /* 9896 * Locate the spa's l2arc devices and kick off rebuild threads. 9897 */ 9898 for (int i = 0; i < spa->spa_l2cache.sav_count; i++) { 9899 l2arc_dev_t *dev = 9900 l2arc_vdev_get(spa->spa_l2cache.sav_vdevs[i]); 9901 if (dev == NULL) { 9902 /* Don't attempt a rebuild if the vdev is UNAVAIL */ 9903 continue; 9904 } 9905 mutex_enter(&l2arc_rebuild_thr_lock); 9906 if (dev->l2ad_rebuild && !dev->l2ad_rebuild_cancel) { 9907 dev->l2ad_rebuild_began = B_TRUE; 9908 (void) thread_create(NULL, 0, l2arc_dev_rebuild_thread, 9909 dev, 0, &p0, TS_RUN, minclsyspri); 9910 } 9911 mutex_exit(&l2arc_rebuild_thr_lock); 9912 } 9913 } 9914 9915 /* 9916 * Main entry point for L2ARC rebuilding. 9917 */ 9918 static __attribute__((noreturn)) void 9919 l2arc_dev_rebuild_thread(void *arg) 9920 { 9921 l2arc_dev_t *dev = arg; 9922 9923 VERIFY(!dev->l2ad_rebuild_cancel); 9924 VERIFY(dev->l2ad_rebuild); 9925 (void) l2arc_rebuild(dev); 9926 mutex_enter(&l2arc_rebuild_thr_lock); 9927 dev->l2ad_rebuild_began = B_FALSE; 9928 dev->l2ad_rebuild = B_FALSE; 9929 mutex_exit(&l2arc_rebuild_thr_lock); 9930 9931 thread_exit(); 9932 } 9933 9934 /* 9935 * This function implements the actual L2ARC metadata rebuild. It: 9936 * starts reading the log block chain and restores each block's contents 9937 * to memory (reconstructing arc_buf_hdr_t's). 9938 * 9939 * Operation stops under any of the following conditions: 9940 * 9941 * 1) We reach the end of the log block chain. 9942 * 2) We encounter *any* error condition (cksum errors, io errors) 9943 */ 9944 static int 9945 l2arc_rebuild(l2arc_dev_t *dev) 9946 { 9947 vdev_t *vd = dev->l2ad_vdev; 9948 spa_t *spa = vd->vdev_spa; 9949 int err = 0; 9950 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 9951 l2arc_log_blk_phys_t *this_lb, *next_lb; 9952 zio_t *this_io = NULL, *next_io = NULL; 9953 l2arc_log_blkptr_t lbps[2]; 9954 l2arc_lb_ptr_buf_t *lb_ptr_buf; 9955 boolean_t lock_held; 9956 9957 this_lb = vmem_zalloc(sizeof (*this_lb), KM_SLEEP); 9958 next_lb = vmem_zalloc(sizeof (*next_lb), KM_SLEEP); 9959 9960 /* 9961 * We prevent device removal while issuing reads to the device, 9962 * then during the rebuilding phases we drop this lock again so 9963 * that a spa_unload or device remove can be initiated - this is 9964 * safe, because the spa will signal us to stop before removing 9965 * our device and wait for us to stop. 9966 */ 9967 spa_config_enter(spa, SCL_L2ARC, vd, RW_READER); 9968 lock_held = B_TRUE; 9969 9970 /* 9971 * Retrieve the persistent L2ARC device state. 9972 * L2BLK_GET_PSIZE returns aligned size for log blocks. 9973 */ 9974 dev->l2ad_evict = MAX(l2dhdr->dh_evict, dev->l2ad_start); 9975 dev->l2ad_hand = MAX(l2dhdr->dh_start_lbps[0].lbp_daddr + 9976 L2BLK_GET_PSIZE((&l2dhdr->dh_start_lbps[0])->lbp_prop), 9977 dev->l2ad_start); 9978 dev->l2ad_first = !!(l2dhdr->dh_flags & L2ARC_DEV_HDR_EVICT_FIRST); 9979 9980 vd->vdev_trim_action_time = l2dhdr->dh_trim_action_time; 9981 vd->vdev_trim_state = l2dhdr->dh_trim_state; 9982 9983 /* 9984 * In case the zfs module parameter l2arc_rebuild_enabled is false 9985 * we do not start the rebuild process. 
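 *
 * The rebuild loop that follows, condensed (memory pressure checks,
 * cancellation handling and locking are omitted):
 *
 *	lbps[0..1] = l2dhdr->dh_start_lbps[0..1];
 *	while (l2arc_log_blkptr_valid(dev, &lbps[0])) {
 *		read lbps[0] into this_lb (prefetching lbps[1]);
 *		l2arc_log_blk_restore(dev, this_lb, asize);
 *		record lbps[0] on l2ad_lbptr_list;
 *		lbps[0] = lbps[1];
 *		lbps[1] = this_lb->lb_prev_lbp;		(walk back in time)
 *	}
 *
 * Restoration stops at the first invalid pointer or, on a device that has
 * already wrapped, once the chain crosses the evict hand (see the overlap
 * check further down).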
9986 */ 9987 if (!l2arc_rebuild_enabled) 9988 goto out; 9989 9990 /* Prepare the rebuild process */ 9991 memcpy(lbps, l2dhdr->dh_start_lbps, sizeof (lbps)); 9992 9993 /* Start the rebuild process */ 9994 for (;;) { 9995 if (!l2arc_log_blkptr_valid(dev, &lbps[0])) 9996 break; 9997 9998 if ((err = l2arc_log_blk_read(dev, &lbps[0], &lbps[1], 9999 this_lb, next_lb, this_io, &next_io)) != 0) 10000 goto out; 10001 10002 /* 10003 * Our memory pressure valve. If the system is running low 10004 * on memory, rather than swamping memory with new ARC buf 10005 * hdrs, we opt not to rebuild the L2ARC. At this point, 10006 * however, we have already set up our L2ARC dev to chain in 10007 * new metadata log blocks, so the user may choose to offline/ 10008 * online the L2ARC dev at a later time (or re-import the pool) 10009 * to reconstruct it (when there's less memory pressure). 10010 */ 10011 if (l2arc_hdr_limit_reached()) { 10012 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_lowmem); 10013 cmn_err(CE_NOTE, "System running low on memory, " 10014 "aborting L2ARC rebuild."); 10015 err = SET_ERROR(ENOMEM); 10016 goto out; 10017 } 10018 10019 spa_config_exit(spa, SCL_L2ARC, vd); 10020 lock_held = B_FALSE; 10021 10022 /* 10023 * Now that we know that the next_lb checks out alright, we 10024 * can start reconstruction from this log block. 10025 * L2BLK_GET_PSIZE returns aligned size for log blocks. 10026 */ 10027 uint64_t asize = L2BLK_GET_PSIZE((&lbps[0])->lbp_prop); 10028 l2arc_log_blk_restore(dev, this_lb, asize); 10029 10030 /* 10031 * log block restored, include its pointer in the list of 10032 * pointers to log blocks present in the L2ARC device. 10033 */ 10034 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 10035 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), 10036 KM_SLEEP); 10037 memcpy(lb_ptr_buf->lb_ptr, &lbps[0], 10038 sizeof (l2arc_log_blkptr_t)); 10039 mutex_enter(&dev->l2ad_mtx); 10040 list_insert_tail(&dev->l2ad_lbptr_list, lb_ptr_buf); 10041 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 10042 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 10043 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 10044 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 10045 mutex_exit(&dev->l2ad_mtx); 10046 vdev_space_update(vd, asize, 0, 0); 10047 10048 /* 10049 * Protection against loops of log blocks: 10050 * 10051 * l2ad_hand l2ad_evict 10052 * V V 10053 * l2ad_start |=======================================| l2ad_end 10054 * -----|||----|||---|||----||| 10055 * (3) (2) (1) (0) 10056 * ---|||---|||----|||---||| 10057 * (7) (6) (5) (4) 10058 * 10059 * In this situation the pointer of log block (4) passes 10060 * l2arc_log_blkptr_valid() but the log block should not be 10061 * restored as it is overwritten by the payload of log block 10062 * (0). Only log blocks (0)-(3) should be restored. We check 10063 * whether l2ad_evict lies in between the payload starting 10064 * offset of the next log block (lbps[1].lbp_payload_start) 10065 * and the payload starting offset of the present log block 10066 * (lbps[0].lbp_payload_start). If true and this isn't the 10067 * first pass, we are looping from the beginning and we should 10068 * stop. 
		 */
		if (l2arc_range_check_overlap(lbps[1].lbp_payload_start,
		    lbps[0].lbp_payload_start, dev->l2ad_evict) &&
		    !dev->l2ad_first)
			goto out;

		kpreempt(KPREEMPT_SYNC);
		for (;;) {
			mutex_enter(&l2arc_rebuild_thr_lock);
			if (dev->l2ad_rebuild_cancel) {
				dev->l2ad_rebuild = B_FALSE;
				cv_signal(&l2arc_rebuild_thr_cv);
				mutex_exit(&l2arc_rebuild_thr_lock);
				err = SET_ERROR(ECANCELED);
				goto out;
			}
			mutex_exit(&l2arc_rebuild_thr_lock);
			if (spa_config_tryenter(spa, SCL_L2ARC, vd,
			    RW_READER)) {
				lock_held = B_TRUE;
				break;
			}
			/*
			 * The L2ARC config lock is held by somebody as
			 * writer, possibly because they are trying to remove
			 * us. They'll likely want us to shut down, so after a
			 * little delay we check l2ad_rebuild_cancel and retry
			 * the lock.
			 */
			delay(1);
		}

		/*
		 * Continue with the next log block.
		 */
		lbps[0] = lbps[1];
		lbps[1] = this_lb->lb_prev_lbp;
		PTR_SWAP(this_lb, next_lb);
		this_io = next_io;
		next_io = NULL;
	}

	if (this_io != NULL)
		l2arc_log_blk_fetch_abort(this_io);
out:
	if (next_io != NULL)
		l2arc_log_blk_fetch_abort(next_io);
	vmem_free(this_lb, sizeof (*this_lb));
	vmem_free(next_lb, sizeof (*next_lb));

	if (!l2arc_rebuild_enabled) {
		spa_history_log_internal(spa, "L2ARC rebuild", NULL,
		    "disabled");
	} else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) > 0) {
		ARCSTAT_BUMP(arcstat_l2_rebuild_success);
		spa_history_log_internal(spa, "L2ARC rebuild", NULL,
		    "successful, restored %llu blocks",
		    (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count));
	} else if (err == 0 && zfs_refcount_count(&dev->l2ad_lb_count) == 0) {
		/*
		 * No error but also nothing restored, meaning the lbps array
		 * in the device header points to invalid/non-present log
		 * blocks. Reset the header.
		 */
		spa_history_log_internal(spa, "L2ARC rebuild", NULL,
		    "no valid log blocks");
		memset(l2dhdr, 0, dev->l2ad_dev_hdr_asize);
		l2arc_dev_hdr_update(dev);
	} else if (err == ECANCELED) {
		/*
		 * In case the rebuild was canceled do not log to spa history
		 * log as the pool may be in the process of being removed.
		 */
		zfs_dbgmsg("L2ARC rebuild aborted, restored %llu blocks",
		    (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count));
	} else if (err != 0) {
		spa_history_log_internal(spa, "L2ARC rebuild", NULL,
		    "aborted, restored %llu blocks",
		    (u_longlong_t)zfs_refcount_count(&dev->l2ad_lb_count));
	}

	if (lock_held)
		spa_config_exit(spa, SCL_L2ARC, vd);

	return (err);
}

/*
 * Attempts to read the device header on the provided L2ARC device and writes
 * it to the in-core device header (dev->l2ad_dev_hdr). On success, this
 * function returns 0, otherwise the appropriate error code is returned.
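 *
 * (The on-disk header is kept at the very start of the device's usable
 * space, just past the leading vdev labels, which is why the read below
 * targets offset VDEV_LABEL_START_SIZE.)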
 */
static int
l2arc_dev_hdr_read(l2arc_dev_t *dev)
{
	int err;
	uint64_t guid;
	l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr;
	const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize;
	abd_t *abd;

	guid = spa_guid(dev->l2ad_vdev->vdev_spa);

	abd = abd_get_from_buf(l2dhdr, l2dhdr_asize);

	err = zio_wait(zio_read_phys(NULL, dev->l2ad_vdev,
	    VDEV_LABEL_START_SIZE, l2dhdr_asize, abd,
	    ZIO_CHECKSUM_LABEL, NULL, NULL, ZIO_PRIORITY_SYNC_READ,
	    ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY |
	    ZIO_FLAG_SPECULATIVE, B_FALSE));

	abd_free(abd);

	if (err != 0) {
		ARCSTAT_BUMP(arcstat_l2_rebuild_abort_dh_errors);
		zfs_dbgmsg("L2ARC IO error (%d) while reading device header, "
		    "vdev guid: %llu", err,
		    (u_longlong_t)dev->l2ad_vdev->vdev_guid);
		return (err);
	}

	if (l2dhdr->dh_magic == BSWAP_64(L2ARC_DEV_HDR_MAGIC))
		byteswap_uint64_array(l2dhdr, sizeof (*l2dhdr));

	if (l2dhdr->dh_magic != L2ARC_DEV_HDR_MAGIC ||
	    l2dhdr->dh_spa_guid != guid ||
	    l2dhdr->dh_vdev_guid != dev->l2ad_vdev->vdev_guid ||
	    l2dhdr->dh_version != L2ARC_PERSISTENT_VERSION ||
	    l2dhdr->dh_log_entries != dev->l2ad_log_entries ||
	    l2dhdr->dh_end != dev->l2ad_end ||
	    !l2arc_range_check_overlap(dev->l2ad_start, dev->l2ad_end,
	    l2dhdr->dh_evict) ||
	    (l2dhdr->dh_trim_state != VDEV_TRIM_COMPLETE &&
	    l2arc_trim_ahead > 0)) {
		/*
		 * Attempt to rebuild a device containing no actual dev hdr
		 * or containing a header from some other pool or from another
		 * version of persistent L2ARC.
		 */
		ARCSTAT_BUMP(arcstat_l2_rebuild_abort_unsupported);
		return (SET_ERROR(ENOTSUP));
	}

	return (0);
}

/*
 * Reads L2ARC log blocks from storage and validates their contents.
 *
 * This function implements a simple fetcher to make sure that while
 * we're processing one buffer the L2ARC is already fetching the next
 * one in the chain.
 *
 * The arguments this_lbp and next_lbp point to the current and next log
 * block address in the block chain. Similarly, this_lb and next_lb hold the
 * l2arc_log_blk_phys_t's of the current and next L2ARC blk.
 *
 * The `this_io' and `next_io' arguments are used for block fetching.
 * When issuing the first blk IO during rebuild, you should pass NULL for
 * `this_io'. This function will then issue a sync IO to read the block and
 * also issue an async IO to fetch the next block in the block chain. The
 * fetched IO is returned in `next_io'. On subsequent calls to this
 * function, pass the value returned in `next_io' from the previous call
 * as `this_io' and a fresh `next_io' pointer to hold the next fetch IO.
 * Prior to the call, you should initialize your `next_io' pointer to be
 * NULL. If no fetch IO was issued, the pointer is left set at NULL.
 *
 * On success, this function returns 0, otherwise it returns an appropriate
 * error code. On error the fetching IO is aborted and cleared before
 * returning from this function. Therefore, if we return `success', the
 * caller can assume that we have taken care of cleanup of fetch IOs.
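 *
 * A minimal caller sketch of this protocol (illustrative only; it mirrors
 * the loop in l2arc_rebuild(), with error handling and the restore step
 * elided):
 *
 *	zio_t *this_io = NULL, *next_io = NULL;
 *	for (;;) {
 *		if (!l2arc_log_blkptr_valid(dev, &lbps[0]))
 *			break;
 *		if (l2arc_log_blk_read(dev, &lbps[0], &lbps[1],
 *		    this_lb, next_lb, this_io, &next_io) != 0)
 *			break;
 *		... restore this_lb ...
 *		lbps[0] = lbps[1];
 *		lbps[1] = this_lb->lb_prev_lbp;
 *		PTR_SWAP(this_lb, next_lb);
 *		this_io = next_io;
 *		next_io = NULL;
 *	}
 *	if (this_io != NULL)
 *		l2arc_log_blk_fetch_abort(this_io);
 *	if (next_io != NULL)
 *		l2arc_log_blk_fetch_abort(next_io);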
10240 */ 10241 static int 10242 l2arc_log_blk_read(l2arc_dev_t *dev, 10243 const l2arc_log_blkptr_t *this_lbp, const l2arc_log_blkptr_t *next_lbp, 10244 l2arc_log_blk_phys_t *this_lb, l2arc_log_blk_phys_t *next_lb, 10245 zio_t *this_io, zio_t **next_io) 10246 { 10247 int err = 0; 10248 zio_cksum_t cksum; 10249 abd_t *abd = NULL; 10250 uint64_t asize; 10251 10252 ASSERT(this_lbp != NULL && next_lbp != NULL); 10253 ASSERT(this_lb != NULL && next_lb != NULL); 10254 ASSERT(next_io != NULL && *next_io == NULL); 10255 ASSERT(l2arc_log_blkptr_valid(dev, this_lbp)); 10256 10257 /* 10258 * Check to see if we have issued the IO for this log block in a 10259 * previous run. If not, this is the first call, so issue it now. 10260 */ 10261 if (this_io == NULL) { 10262 this_io = l2arc_log_blk_fetch(dev->l2ad_vdev, this_lbp, 10263 this_lb); 10264 } 10265 10266 /* 10267 * Peek to see if we can start issuing the next IO immediately. 10268 */ 10269 if (l2arc_log_blkptr_valid(dev, next_lbp)) { 10270 /* 10271 * Start issuing IO for the next log block early - this 10272 * should help keep the L2ARC device busy while we 10273 * decompress and restore this log block. 10274 */ 10275 *next_io = l2arc_log_blk_fetch(dev->l2ad_vdev, next_lbp, 10276 next_lb); 10277 } 10278 10279 /* Wait for the IO to read this log block to complete */ 10280 if ((err = zio_wait(this_io)) != 0) { 10281 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_io_errors); 10282 zfs_dbgmsg("L2ARC IO error (%d) while reading log block, " 10283 "offset: %llu, vdev guid: %llu", err, 10284 (u_longlong_t)this_lbp->lbp_daddr, 10285 (u_longlong_t)dev->l2ad_vdev->vdev_guid); 10286 goto cleanup; 10287 } 10288 10289 /* 10290 * Make sure the buffer checks out. 10291 * L2BLK_GET_PSIZE returns aligned size for log blocks. 10292 */ 10293 asize = L2BLK_GET_PSIZE((this_lbp)->lbp_prop); 10294 fletcher_4_native(this_lb, asize, NULL, &cksum); 10295 if (!ZIO_CHECKSUM_EQUAL(cksum, this_lbp->lbp_cksum)) { 10296 ARCSTAT_BUMP(arcstat_l2_rebuild_abort_cksum_lb_errors); 10297 zfs_dbgmsg("L2ARC log block cksum failed, offset: %llu, " 10298 "vdev guid: %llu, l2ad_hand: %llu, l2ad_evict: %llu", 10299 (u_longlong_t)this_lbp->lbp_daddr, 10300 (u_longlong_t)dev->l2ad_vdev->vdev_guid, 10301 (u_longlong_t)dev->l2ad_hand, 10302 (u_longlong_t)dev->l2ad_evict); 10303 err = SET_ERROR(ECKSUM); 10304 goto cleanup; 10305 } 10306 10307 /* Now we can take our time decoding this buffer */ 10308 switch (L2BLK_GET_COMPRESS((this_lbp)->lbp_prop)) { 10309 case ZIO_COMPRESS_OFF: 10310 break; 10311 case ZIO_COMPRESS_LZ4: 10312 abd = abd_alloc_for_io(asize, B_TRUE); 10313 abd_copy_from_buf_off(abd, this_lb, 0, asize); 10314 if ((err = zio_decompress_data( 10315 L2BLK_GET_COMPRESS((this_lbp)->lbp_prop), 10316 abd, this_lb, asize, sizeof (*this_lb), NULL)) != 0) { 10317 err = SET_ERROR(EINVAL); 10318 goto cleanup; 10319 } 10320 break; 10321 default: 10322 err = SET_ERROR(EINVAL); 10323 goto cleanup; 10324 } 10325 if (this_lb->lb_magic == BSWAP_64(L2ARC_LOG_BLK_MAGIC)) 10326 byteswap_uint64_array(this_lb, sizeof (*this_lb)); 10327 if (this_lb->lb_magic != L2ARC_LOG_BLK_MAGIC) { 10328 err = SET_ERROR(EINVAL); 10329 goto cleanup; 10330 } 10331 cleanup: 10332 /* Abort an in-flight fetch I/O in case of error */ 10333 if (err != 0 && *next_io != NULL) { 10334 l2arc_log_blk_fetch_abort(*next_io); 10335 *next_io = NULL; 10336 } 10337 if (abd != NULL) 10338 abd_free(abd); 10339 return (err); 10340 } 10341 10342 /* 10343 * Restores the payload of a log block to ARC. 
This creates empty ARC hdr 10344 * entries which only contain an l2arc hdr, essentially restoring the 10345 * buffers to their L2ARC evicted state. This function also updates space 10346 * usage on the L2ARC vdev to make sure it tracks restored buffers. 10347 */ 10348 static void 10349 l2arc_log_blk_restore(l2arc_dev_t *dev, const l2arc_log_blk_phys_t *lb, 10350 uint64_t lb_asize) 10351 { 10352 uint64_t size = 0, asize = 0; 10353 uint64_t log_entries = dev->l2ad_log_entries; 10354 10355 /* 10356 * Usually arc_adapt() is called only for data, not headers, but 10357 * since we may allocate significant amount of memory here, let ARC 10358 * grow its arc_c. 10359 */ 10360 arc_adapt(log_entries * HDR_L2ONLY_SIZE); 10361 10362 for (int i = log_entries - 1; i >= 0; i--) { 10363 /* 10364 * Restore goes in the reverse temporal direction to preserve 10365 * correct temporal ordering of buffers in the l2ad_buflist. 10366 * l2arc_hdr_restore also does a list_insert_tail instead of 10367 * list_insert_head on the l2ad_buflist: 10368 * 10369 * LIST l2ad_buflist LIST 10370 * HEAD <------ (time) ------ TAIL 10371 * direction +-----+-----+-----+-----+-----+ direction 10372 * of l2arc <== | buf | buf | buf | buf | buf | ===> of rebuild 10373 * fill +-----+-----+-----+-----+-----+ 10374 * ^ ^ 10375 * | | 10376 * | | 10377 * l2arc_feed_thread l2arc_rebuild 10378 * will place new bufs here restores bufs here 10379 * 10380 * During l2arc_rebuild() the device is not used by 10381 * l2arc_feed_thread() as dev->l2ad_rebuild is set to true. 10382 */ 10383 size += L2BLK_GET_LSIZE((&lb->lb_entries[i])->le_prop); 10384 asize += vdev_psize_to_asize(dev->l2ad_vdev, 10385 L2BLK_GET_PSIZE((&lb->lb_entries[i])->le_prop)); 10386 l2arc_hdr_restore(&lb->lb_entries[i], dev); 10387 } 10388 10389 /* 10390 * Record rebuild stats: 10391 * size Logical size of restored buffers in the L2ARC 10392 * asize Aligned size of restored buffers in the L2ARC 10393 */ 10394 ARCSTAT_INCR(arcstat_l2_rebuild_size, size); 10395 ARCSTAT_INCR(arcstat_l2_rebuild_asize, asize); 10396 ARCSTAT_INCR(arcstat_l2_rebuild_bufs, log_entries); 10397 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, lb_asize); 10398 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, asize / lb_asize); 10399 ARCSTAT_BUMP(arcstat_l2_rebuild_log_blks); 10400 } 10401 10402 /* 10403 * Restores a single ARC buf hdr from a log entry. The ARC buffer is put 10404 * into a state indicating that it has been evicted to L2ARC. 10405 */ 10406 static void 10407 l2arc_hdr_restore(const l2arc_log_ent_phys_t *le, l2arc_dev_t *dev) 10408 { 10409 arc_buf_hdr_t *hdr, *exists; 10410 kmutex_t *hash_lock; 10411 arc_buf_contents_t type = L2BLK_GET_TYPE((le)->le_prop); 10412 uint64_t asize; 10413 10414 /* 10415 * Do all the allocation before grabbing any locks, this lets us 10416 * sleep if memory is full and we don't have to deal with failed 10417 * allocations. 10418 */ 10419 hdr = arc_buf_alloc_l2only(L2BLK_GET_LSIZE((le)->le_prop), type, 10420 dev, le->le_dva, le->le_daddr, 10421 L2BLK_GET_PSIZE((le)->le_prop), le->le_birth, 10422 L2BLK_GET_COMPRESS((le)->le_prop), le->le_complevel, 10423 L2BLK_GET_PROTECTED((le)->le_prop), 10424 L2BLK_GET_PREFETCH((le)->le_prop), 10425 L2BLK_GET_STATE((le)->le_prop)); 10426 asize = vdev_psize_to_asize(dev->l2ad_vdev, 10427 L2BLK_GET_PSIZE((le)->le_prop)); 10428 10429 /* 10430 * vdev_space_update() has to be called before arc_hdr_destroy() to 10431 * avoid underflow since the latter also calls vdev_space_update(). 
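	 * (In the already-cached case below, arc_hdr_destroy() drops the
	 * space this header accounts on the vdev again, so adding the space
	 * here first keeps that accounting from underflowing.)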
	 */
	l2arc_hdr_arcstats_increment(hdr);
	vdev_space_update(dev->l2ad_vdev, asize, 0, 0);

	mutex_enter(&dev->l2ad_mtx);
	list_insert_tail(&dev->l2ad_buflist, hdr);
	(void) zfs_refcount_add_many(&dev->l2ad_alloc, arc_hdr_size(hdr), hdr);
	mutex_exit(&dev->l2ad_mtx);

	exists = buf_hash_insert(hdr, &hash_lock);
	if (exists) {
		/* Buffer was already cached, no need to restore it. */
		arc_hdr_destroy(hdr);
		/*
		 * If the buffer is already cached, check whether it has
		 * L2ARC metadata. If not, fill it in and update the flag.
		 * This is important in case of onlining a cache device, since
		 * we previously evicted all L2ARC metadata from ARC.
		 */
		if (!HDR_HAS_L2HDR(exists)) {
			arc_hdr_set_flags(exists, ARC_FLAG_HAS_L2HDR);
			exists->b_l2hdr.b_dev = dev;
			exists->b_l2hdr.b_daddr = le->le_daddr;
			exists->b_l2hdr.b_arcs_state =
			    L2BLK_GET_STATE((le)->le_prop);
			mutex_enter(&dev->l2ad_mtx);
			list_insert_tail(&dev->l2ad_buflist, exists);
			(void) zfs_refcount_add_many(&dev->l2ad_alloc,
			    arc_hdr_size(exists), exists);
			mutex_exit(&dev->l2ad_mtx);
			l2arc_hdr_arcstats_increment(exists);
			vdev_space_update(dev->l2ad_vdev, asize, 0, 0);
		}
		ARCSTAT_BUMP(arcstat_l2_rebuild_bufs_precached);
	}

	mutex_exit(hash_lock);
}

/*
 * Starts an asynchronous read IO to read a log block. This is used in log
 * block reconstruction to start reading the next block before we are done
 * decoding and reconstructing the current block, to keep the l2arc device
 * nice and hot with read IO to process.
 * The returned zio will contain a newly allocated memory buffer for the IO
 * data, which should then be freed by the caller once the zio is no longer
 * needed (i.e. due to it having completed). If you wish to abort this
 * zio, you should do so using l2arc_log_blk_fetch_abort, which takes
 * care of disposing of the allocated buffers correctly.
 */
static zio_t *
l2arc_log_blk_fetch(vdev_t *vd, const l2arc_log_blkptr_t *lbp,
    l2arc_log_blk_phys_t *lb)
{
	uint32_t asize;
	zio_t *pio;
	l2arc_read_callback_t *cb;

	/* L2BLK_GET_PSIZE returns aligned size for log blocks */
	asize = L2BLK_GET_PSIZE((lbp)->lbp_prop);
	ASSERT(asize <= sizeof (l2arc_log_blk_phys_t));

	cb = kmem_zalloc(sizeof (l2arc_read_callback_t), KM_SLEEP);
	cb->l2rcb_abd = abd_get_from_buf(lb, asize);
	pio = zio_root(vd->vdev_spa, l2arc_blk_fetch_done, cb,
	    ZIO_FLAG_CANFAIL | ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY);
	(void) zio_nowait(zio_read_phys(pio, vd, lbp->lbp_daddr, asize,
	    cb->l2rcb_abd, ZIO_CHECKSUM_OFF, NULL, NULL,
	    ZIO_PRIORITY_ASYNC_READ, ZIO_FLAG_CANFAIL |
	    ZIO_FLAG_DONT_PROPAGATE | ZIO_FLAG_DONT_RETRY, B_FALSE));

	return (pio);
}

/*
 * Aborts a zio returned from l2arc_log_blk_fetch and frees the data
 * buffers allocated for it.
 */
static void
l2arc_log_blk_fetch_abort(zio_t *zio)
{
	(void) zio_wait(zio);
}

/*
 * Creates a zio to update the device header on an l2arc device.
10518 */ 10519 void 10520 l2arc_dev_hdr_update(l2arc_dev_t *dev) 10521 { 10522 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10523 const uint64_t l2dhdr_asize = dev->l2ad_dev_hdr_asize; 10524 abd_t *abd; 10525 int err; 10526 10527 VERIFY(spa_config_held(dev->l2ad_spa, SCL_STATE_ALL, RW_READER)); 10528 10529 l2dhdr->dh_magic = L2ARC_DEV_HDR_MAGIC; 10530 l2dhdr->dh_version = L2ARC_PERSISTENT_VERSION; 10531 l2dhdr->dh_spa_guid = spa_guid(dev->l2ad_vdev->vdev_spa); 10532 l2dhdr->dh_vdev_guid = dev->l2ad_vdev->vdev_guid; 10533 l2dhdr->dh_log_entries = dev->l2ad_log_entries; 10534 l2dhdr->dh_evict = dev->l2ad_evict; 10535 l2dhdr->dh_start = dev->l2ad_start; 10536 l2dhdr->dh_end = dev->l2ad_end; 10537 l2dhdr->dh_lb_asize = zfs_refcount_count(&dev->l2ad_lb_asize); 10538 l2dhdr->dh_lb_count = zfs_refcount_count(&dev->l2ad_lb_count); 10539 l2dhdr->dh_flags = 0; 10540 l2dhdr->dh_trim_action_time = dev->l2ad_vdev->vdev_trim_action_time; 10541 l2dhdr->dh_trim_state = dev->l2ad_vdev->vdev_trim_state; 10542 if (dev->l2ad_first) 10543 l2dhdr->dh_flags |= L2ARC_DEV_HDR_EVICT_FIRST; 10544 10545 abd = abd_get_from_buf(l2dhdr, l2dhdr_asize); 10546 10547 err = zio_wait(zio_write_phys(NULL, dev->l2ad_vdev, 10548 VDEV_LABEL_START_SIZE, l2dhdr_asize, abd, ZIO_CHECKSUM_LABEL, NULL, 10549 NULL, ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE)); 10550 10551 abd_free(abd); 10552 10553 if (err != 0) { 10554 zfs_dbgmsg("L2ARC IO error (%d) while writing device header, " 10555 "vdev guid: %llu", err, 10556 (u_longlong_t)dev->l2ad_vdev->vdev_guid); 10557 } 10558 } 10559 10560 /* 10561 * Commits a log block to the L2ARC device. This routine is invoked from 10562 * l2arc_write_buffers when the log block fills up. 10563 * This function allocates some memory to temporarily hold the serialized 10564 * buffer to be written. This is then released in l2arc_write_done. 10565 */ 10566 static uint64_t 10567 l2arc_log_blk_commit(l2arc_dev_t *dev, zio_t *pio, l2arc_write_callback_t *cb) 10568 { 10569 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 10570 l2arc_dev_hdr_phys_t *l2dhdr = dev->l2ad_dev_hdr; 10571 uint64_t psize, asize; 10572 zio_t *wzio; 10573 l2arc_lb_abd_buf_t *abd_buf; 10574 uint8_t *tmpbuf = NULL; 10575 l2arc_lb_ptr_buf_t *lb_ptr_buf; 10576 10577 VERIFY3S(dev->l2ad_log_ent_idx, ==, dev->l2ad_log_entries); 10578 10579 abd_buf = zio_buf_alloc(sizeof (*abd_buf)); 10580 abd_buf->abd = abd_get_from_buf(lb, sizeof (*lb)); 10581 lb_ptr_buf = kmem_zalloc(sizeof (l2arc_lb_ptr_buf_t), KM_SLEEP); 10582 lb_ptr_buf->lb_ptr = kmem_zalloc(sizeof (l2arc_log_blkptr_t), KM_SLEEP); 10583 10584 /* link the buffer into the block chain */ 10585 lb->lb_prev_lbp = l2dhdr->dh_start_lbps[1]; 10586 lb->lb_magic = L2ARC_LOG_BLK_MAGIC; 10587 10588 /* 10589 * l2arc_log_blk_commit() may be called multiple times during a single 10590 * l2arc_write_buffers() call. Save the allocated abd buffers in a list 10591 * so we can free them in l2arc_write_done() later on. 10592 */ 10593 list_insert_tail(&cb->l2wcb_abd_list, abd_buf); 10594 10595 /* try to compress the buffer */ 10596 psize = zio_compress_data(ZIO_COMPRESS_LZ4, 10597 abd_buf->abd, (void **) &tmpbuf, sizeof (*lb), 0); 10598 10599 /* a log block is never entirely zero */ 10600 ASSERT(psize != 0); 10601 asize = vdev_psize_to_asize(dev->l2ad_vdev, psize); 10602 ASSERT(asize <= sizeof (*lb)); 10603 10604 /* 10605 * Update the start log block pointer in the device header to point 10606 * to the log block we're about to write. 
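	 * dh_start_lbps[0] tracks the most recently committed log block and
	 * dh_start_lbps[1] the one committed before it. Since lb_prev_lbp
	 * above was linked to the old dh_start_lbps[1], each log block
	 * effectively points two blocks back; together with the two slots
	 * kept in the header, this is what lets l2arc_rebuild() always know
	 * the next two block addresses and keep one fetch in flight.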
10607 */ 10608 l2dhdr->dh_start_lbps[1] = l2dhdr->dh_start_lbps[0]; 10609 l2dhdr->dh_start_lbps[0].lbp_daddr = dev->l2ad_hand; 10610 l2dhdr->dh_start_lbps[0].lbp_payload_asize = 10611 dev->l2ad_log_blk_payload_asize; 10612 l2dhdr->dh_start_lbps[0].lbp_payload_start = 10613 dev->l2ad_log_blk_payload_start; 10614 L2BLK_SET_LSIZE( 10615 (&l2dhdr->dh_start_lbps[0])->lbp_prop, sizeof (*lb)); 10616 L2BLK_SET_PSIZE( 10617 (&l2dhdr->dh_start_lbps[0])->lbp_prop, asize); 10618 L2BLK_SET_CHECKSUM( 10619 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10620 ZIO_CHECKSUM_FLETCHER_4); 10621 if (asize < sizeof (*lb)) { 10622 /* compression succeeded */ 10623 memset(tmpbuf + psize, 0, asize - psize); 10624 L2BLK_SET_COMPRESS( 10625 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10626 ZIO_COMPRESS_LZ4); 10627 } else { 10628 /* compression failed */ 10629 memcpy(tmpbuf, lb, sizeof (*lb)); 10630 L2BLK_SET_COMPRESS( 10631 (&l2dhdr->dh_start_lbps[0])->lbp_prop, 10632 ZIO_COMPRESS_OFF); 10633 } 10634 10635 /* checksum what we're about to write */ 10636 fletcher_4_native(tmpbuf, asize, NULL, 10637 &l2dhdr->dh_start_lbps[0].lbp_cksum); 10638 10639 abd_free(abd_buf->abd); 10640 10641 /* perform the write itself */ 10642 abd_buf->abd = abd_get_from_buf(tmpbuf, sizeof (*lb)); 10643 abd_take_ownership_of_buf(abd_buf->abd, B_TRUE); 10644 wzio = zio_write_phys(pio, dev->l2ad_vdev, dev->l2ad_hand, 10645 asize, abd_buf->abd, ZIO_CHECKSUM_OFF, NULL, NULL, 10646 ZIO_PRIORITY_ASYNC_WRITE, ZIO_FLAG_CANFAIL, B_FALSE); 10647 DTRACE_PROBE2(l2arc__write, vdev_t *, dev->l2ad_vdev, zio_t *, wzio); 10648 (void) zio_nowait(wzio); 10649 10650 dev->l2ad_hand += asize; 10651 /* 10652 * Include the committed log block's pointer in the list of pointers 10653 * to log blocks present in the L2ARC device. 10654 */ 10655 memcpy(lb_ptr_buf->lb_ptr, &l2dhdr->dh_start_lbps[0], 10656 sizeof (l2arc_log_blkptr_t)); 10657 mutex_enter(&dev->l2ad_mtx); 10658 list_insert_head(&dev->l2ad_lbptr_list, lb_ptr_buf); 10659 ARCSTAT_INCR(arcstat_l2_log_blk_asize, asize); 10660 ARCSTAT_BUMP(arcstat_l2_log_blk_count); 10661 zfs_refcount_add_many(&dev->l2ad_lb_asize, asize, lb_ptr_buf); 10662 zfs_refcount_add(&dev->l2ad_lb_count, lb_ptr_buf); 10663 mutex_exit(&dev->l2ad_mtx); 10664 vdev_space_update(dev->l2ad_vdev, asize, 0, 0); 10665 10666 /* bump the kstats */ 10667 ARCSTAT_INCR(arcstat_l2_write_bytes, asize); 10668 ARCSTAT_BUMP(arcstat_l2_log_blk_writes); 10669 ARCSTAT_F_AVG(arcstat_l2_log_blk_avg_asize, asize); 10670 ARCSTAT_F_AVG(arcstat_l2_data_to_meta_ratio, 10671 dev->l2ad_log_blk_payload_asize / asize); 10672 10673 /* start a new log block */ 10674 dev->l2ad_log_ent_idx = 0; 10675 dev->l2ad_log_blk_payload_asize = 0; 10676 dev->l2ad_log_blk_payload_start = 0; 10677 10678 return (asize); 10679 } 10680 10681 /* 10682 * Validates an L2ARC log block address to make sure that it can be read 10683 * from the provided L2ARC device. 
10684 */ 10685 boolean_t 10686 l2arc_log_blkptr_valid(l2arc_dev_t *dev, const l2arc_log_blkptr_t *lbp) 10687 { 10688 /* L2BLK_GET_PSIZE returns aligned size for log blocks */ 10689 uint64_t asize = L2BLK_GET_PSIZE((lbp)->lbp_prop); 10690 uint64_t end = lbp->lbp_daddr + asize - 1; 10691 uint64_t start = lbp->lbp_payload_start; 10692 boolean_t evicted = B_FALSE; 10693 10694 /* 10695 * A log block is valid if all of the following conditions are true: 10696 * - it fits entirely (including its payload) between l2ad_start and 10697 * l2ad_end 10698 * - it has a valid size 10699 * - neither the log block itself nor part of its payload was evicted 10700 * by l2arc_evict(): 10701 * 10702 * l2ad_hand l2ad_evict 10703 * | | lbp_daddr 10704 * | start | | end 10705 * | | | | | 10706 * V V V V V 10707 * l2ad_start ============================================ l2ad_end 10708 * --------------------------|||| 10709 * ^ ^ 10710 * | log block 10711 * payload 10712 */ 10713 10714 evicted = 10715 l2arc_range_check_overlap(start, end, dev->l2ad_hand) || 10716 l2arc_range_check_overlap(start, end, dev->l2ad_evict) || 10717 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, start) || 10718 l2arc_range_check_overlap(dev->l2ad_hand, dev->l2ad_evict, end); 10719 10720 return (start >= dev->l2ad_start && end <= dev->l2ad_end && 10721 asize > 0 && asize <= sizeof (l2arc_log_blk_phys_t) && 10722 (!evicted || dev->l2ad_first)); 10723 } 10724 10725 /* 10726 * Inserts ARC buffer header `hdr' into the current L2ARC log block on 10727 * the device. The buffer being inserted must be present in L2ARC. 10728 * Returns B_TRUE if the L2ARC log block is full and needs to be committed 10729 * to L2ARC, or B_FALSE if it still has room for more ARC buffers. 10730 */ 10731 static boolean_t 10732 l2arc_log_blk_insert(l2arc_dev_t *dev, const arc_buf_hdr_t *hdr) 10733 { 10734 l2arc_log_blk_phys_t *lb = &dev->l2ad_log_blk; 10735 l2arc_log_ent_phys_t *le; 10736 10737 if (dev->l2ad_log_entries == 0) 10738 return (B_FALSE); 10739 10740 int index = dev->l2ad_log_ent_idx++; 10741 10742 ASSERT3S(index, <, dev->l2ad_log_entries); 10743 ASSERT(HDR_HAS_L2HDR(hdr)); 10744 10745 le = &lb->lb_entries[index]; 10746 memset(le, 0, sizeof (*le)); 10747 le->le_dva = hdr->b_dva; 10748 le->le_birth = hdr->b_birth; 10749 le->le_daddr = hdr->b_l2hdr.b_daddr; 10750 if (index == 0) 10751 dev->l2ad_log_blk_payload_start = le->le_daddr; 10752 L2BLK_SET_LSIZE((le)->le_prop, HDR_GET_LSIZE(hdr)); 10753 L2BLK_SET_PSIZE((le)->le_prop, HDR_GET_PSIZE(hdr)); 10754 L2BLK_SET_COMPRESS((le)->le_prop, HDR_GET_COMPRESS(hdr)); 10755 le->le_complevel = hdr->b_complevel; 10756 L2BLK_SET_TYPE((le)->le_prop, hdr->b_type); 10757 L2BLK_SET_PROTECTED((le)->le_prop, !!(HDR_PROTECTED(hdr))); 10758 L2BLK_SET_PREFETCH((le)->le_prop, !!(HDR_PREFETCH(hdr))); 10759 L2BLK_SET_STATE((le)->le_prop, hdr->b_l1hdr.b_state->arcs_state); 10760 10761 dev->l2ad_log_blk_payload_asize += vdev_psize_to_asize(dev->l2ad_vdev, 10762 HDR_GET_PSIZE(hdr)); 10763 10764 return (dev->l2ad_log_ent_idx == dev->l2ad_log_entries); 10765 } 10766 10767 /* 10768 * Checks whether a given L2ARC device address sits in a time-sequential 10769 * range. The trick here is that the L2ARC is a rotary buffer, so we can't 10770 * just do a range comparison, we need to handle the situation in which the 10771 * range wraps around the end of the L2ARC device. Arguments: 10772 * bottom -- Lower end of the range to check (written to earlier). 10773 * top -- Upper end of the range to check (written to later). 
10774 * check -- The address for which we want to determine if it sits in 10775 * between the top and bottom. 10776 * 10777 * The 3-way conditional below represents the following cases: 10778 * 10779 * bottom < top : Sequentially ordered case: 10780 * <check>--------+-------------------+ 10781 * | (overlap here?) | 10782 * L2ARC dev V V 10783 * |---------------<bottom>============<top>--------------| 10784 * 10785 * bottom > top: Looped-around case: 10786 * <check>--------+------------------+ 10787 * | (overlap here?) | 10788 * L2ARC dev V V 10789 * |===============<top>---------------<bottom>===========| 10790 * ^ ^ 10791 * | (or here?) | 10792 * +---------------+---------<check> 10793 * 10794 * top == bottom : Just a single address comparison. 10795 */ 10796 boolean_t 10797 l2arc_range_check_overlap(uint64_t bottom, uint64_t top, uint64_t check) 10798 { 10799 if (bottom < top) 10800 return (bottom <= check && check <= top); 10801 else if (bottom > top) 10802 return (check <= top || bottom <= check); 10803 else 10804 return (check == top); 10805 } 10806 10807 EXPORT_SYMBOL(arc_buf_size); 10808 EXPORT_SYMBOL(arc_write); 10809 EXPORT_SYMBOL(arc_read); 10810 EXPORT_SYMBOL(arc_buf_info); 10811 EXPORT_SYMBOL(arc_getbuf_func); 10812 EXPORT_SYMBOL(arc_add_prune_callback); 10813 EXPORT_SYMBOL(arc_remove_prune_callback); 10814 10815 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min, param_set_arc_min, 10816 spl_param_get_u64, ZMOD_RW, "Minimum ARC size in bytes"); 10817 10818 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, max, param_set_arc_max, 10819 spl_param_get_u64, ZMOD_RW, "Maximum ARC size in bytes"); 10820 10821 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, meta_balance, UINT, ZMOD_RW, 10822 "Balance between metadata and data on ghost hits."); 10823 10824 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, grow_retry, param_set_arc_int, 10825 param_get_uint, ZMOD_RW, "Seconds before growing ARC size"); 10826 10827 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, shrink_shift, param_set_arc_int, 10828 param_get_uint, ZMOD_RW, "log2(fraction of ARC to reclaim)"); 10829 10830 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, pc_percent, UINT, ZMOD_RW, 10831 "Percent of pagecache to reclaim ARC to"); 10832 10833 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, average_blocksize, UINT, ZMOD_RD, 10834 "Target average block size"); 10835 10836 ZFS_MODULE_PARAM(zfs, zfs_, compressed_arc_enabled, INT, ZMOD_RW, 10837 "Disable compressed ARC buffers"); 10838 10839 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prefetch_ms, param_set_arc_int, 10840 param_get_uint, ZMOD_RW, "Min life of prefetch block in ms"); 10841 10842 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, min_prescient_prefetch_ms, 10843 param_set_arc_int, param_get_uint, ZMOD_RW, 10844 "Min life of prescient prefetched block in ms"); 10845 10846 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_max, U64, ZMOD_RW, 10847 "Max write bytes per interval"); 10848 10849 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, write_boost, U64, ZMOD_RW, 10850 "Extra write bytes during device warmup"); 10851 10852 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom, U64, ZMOD_RW, 10853 "Number of max device writes to precache"); 10854 10855 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, headroom_boost, U64, ZMOD_RW, 10856 "Compressed l2arc_headroom multiplier"); 10857 10858 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, trim_ahead, U64, ZMOD_RW, 10859 "TRIM ahead L2ARC write size multiplier"); 10860 10861 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_secs, U64, ZMOD_RW, 10862 "Seconds between L2ARC writing"); 10863 10864 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_min_ms, U64, ZMOD_RW, 
10865 "Min feed interval in milliseconds"); 10866 10867 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, noprefetch, INT, ZMOD_RW, 10868 "Skip caching prefetched buffers"); 10869 10870 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, feed_again, INT, ZMOD_RW, 10871 "Turbo L2ARC warmup"); 10872 10873 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, norw, INT, ZMOD_RW, 10874 "No reads during writes"); 10875 10876 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, meta_percent, UINT, ZMOD_RW, 10877 "Percent of ARC size allowed for L2ARC-only headers"); 10878 10879 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_enabled, INT, ZMOD_RW, 10880 "Rebuild the L2ARC when importing a pool"); 10881 10882 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, rebuild_blocks_min_l2size, U64, ZMOD_RW, 10883 "Min size in bytes to write rebuild log blocks in L2ARC"); 10884 10885 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, mfuonly, INT, ZMOD_RW, 10886 "Cache only MFU data from ARC into L2ARC"); 10887 10888 ZFS_MODULE_PARAM(zfs_l2arc, l2arc_, exclude_special, INT, ZMOD_RW, 10889 "Exclude dbufs on special vdevs from being cached to L2ARC if set."); 10890 10891 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, lotsfree_percent, param_set_arc_int, 10892 param_get_uint, ZMOD_RW, "System free memory I/O throttle in bytes"); 10893 10894 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, sys_free, param_set_arc_u64, 10895 spl_param_get_u64, ZMOD_RW, "System free memory target size in bytes"); 10896 10897 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit, param_set_arc_u64, 10898 spl_param_get_u64, ZMOD_RW, "Minimum bytes of dnodes in ARC"); 10899 10900 ZFS_MODULE_PARAM_CALL(zfs_arc, zfs_arc_, dnode_limit_percent, 10901 param_set_arc_int, param_get_uint, ZMOD_RW, 10902 "Percent of ARC meta buffers for dnodes"); 10903 10904 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, dnode_reduce_percent, UINT, ZMOD_RW, 10905 "Percentage of excess dnodes to try to unpin"); 10906 10907 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, eviction_pct, UINT, ZMOD_RW, 10908 "When full, ARC allocation waits for eviction of this % of alloc size"); 10909 10910 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, evict_batch_limit, UINT, ZMOD_RW, 10911 "The number of headers to evict per sublist before moving to the next"); 10912 10913 ZFS_MODULE_PARAM(zfs_arc, zfs_arc_, prune_task_threads, INT, ZMOD_RW, 10914 "Number of arc_prune threads"); 10915